Newt Global - Dallas, TX
posted 2 months ago
We are seeking an experienced AWS Data Engineer with a strong background in Snowflake and banking-domain experience to join our team on a W2 contract basis.

Responsibilities:
- Design and develop data pipelines for data ingestion and transformation using Scala or Python, with a strong emphasis on Spark programming, particularly PySpark.
- Build pipelines with Apache Spark on core AWS services.
- Lead a team of data engineers and analysts.

Required qualifications:
- 10+ years of total IT experience, including at least 5 years with Hadoop and big data technologies.
- Advanced knowledge of the Hadoop ecosystem, with hands-on experience in HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, and Solr.
- Hands-on experience with Python and PySpark, including basic machine learning libraries.
- Strong Snowflake experience.
- System-level understanding of data structures, algorithms, and distributed storage and compute.
- Team management experience, as you will be leading a team of data engineers and analysts.
- Strong interpersonal and teamwork skills, a can-do attitude, and a track record of solving complex business problems.
- Bachelor's degree or equivalent.

Nice to have:
- Exposure to containerization technologies such as Docker and Kubernetes.
- Familiarity with DevOps practices, including source control, continuous integration, and deployments.