As a Data Engineer, you will play a crucial role in interpreting business needs and selecting appropriate technologies to implement data governance for shared and/or master data sets. You will collaborate with key business stakeholders, IT experts, and subject-matter experts to plan and deliver optimal data solutions, and you will create, maintain, and optimize data pipelines as workloads move from development to production, ensuring seamless data flow for specific use cases. You will also perform technical and non-technical analyses of project issues, ensuring that technical implementations meet quality assurance metrics, and you will analyze data and systems architecture, create designs, and implement information systems solutions.

To succeed in this position, you will need an active Top Secret/SCI clearance and a Bachelor's degree or equivalent practical experience. A strong understanding of cloud architectures and enabling tools, particularly AWS Cloud (GovCloud/C2S), is essential. Familiarity with Amazon Web Services (AWS) managed services and working knowledge of software platforms and services such as Docker, Kubernetes, JMS/SQS, SNS, Kafka, AWS Lambda, NiFi, and Airflow will be beneficial. Proficiency with languages and technologies including JavaScript, Elasticsearch, JSON, SQL, and XML is required, as is experience with datastores such as MongoDB/DynamoDB, PostgreSQL, S3, Redshift, JDBC/ODBC, and Redis. You should also be comfortable working in Linux/Unix server environments and have experience with Agile development methodology.

In this role, you will define and communicate a clear product vision for our client's software products, aligned with user needs and business objectives. You will create and manage product roadmaps that reflect both innovation and growth strategies, partnering with a government product owner and a product team of 7-8 FTEs.

You will also develop and design data pipelines to support an end-to-end solution, maintain artifacts related to ETL processes, and integrate data pipelines with AWS cloud services to extract meaningful insights. You will manage production data across multiple datasets, ensuring fault tolerance and redundancy, and provide Tier 3 technical support for deployed applications and dataflows. Collaborating with the data engineering team to design and launch new features will be key, as will coordinating and documenting dataflows and capabilities. Occasionally, you may be required to support off-hours deployments, including evenings or weekends.
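To give a purely illustrative sense of the pipeline work described above, the sketch below shows a minimal Apache Airflow DAG that pulls a raw object from S3 and hands it to a placeholder load step. The bucket name, object key, DAG ID, and task names are assumptions made for this example, not details of the client's actual environment, and the code assumes Airflow 2.4+ (for the `schedule` argument) with boto3 available.

```python
# Illustrative sketch only: a minimal Airflow DAG of the S3-to-datastore
# shape described in the posting. Bucket, key, and task names are
# hypothetical examples, not the client's real configuration.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_from_s3(**context):
    """Pull a raw JSON object from an assumed landing bucket."""
    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket="example-landing-bucket", Key="raw/events.json")
    return obj["Body"].read().decode("utf-8")


def load_to_datastore(**context):
    """Placeholder load step; a real pipeline would write to PostgreSQL or Redshift."""
    payload = context["ti"].xcom_pull(task_ids="extract_from_s3")
    print(f"Would load {len(payload)} bytes into the analytics datastore")


with DAG(
    dag_id="example_etl_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_from_s3", python_callable=extract_from_s3)
    load = PythonOperator(task_id="load_to_datastore", python_callable=load_to_datastore)

    extract >> load  # extract must finish before the load step runs
```

In practice, the load step would write to one of the datastores listed above (for example PostgreSQL or Redshift) rather than printing, and the DAG would carry the monitoring and fault-tolerance concerns the role describes.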