Steampunk - McLean, VA

posted 4 months ago

Full-time - Senior
McLean, VA

About the position

In today's rapidly evolving technology landscape, an organization's data has never been more important to achieving its mission and business goals. Our data exploitation experts work with clients to create and execute a comprehensive data strategy using the best technology and techniques for the challenge at hand. At Steampunk, our goal is to build and execute a data strategy for our clients that coordinates data collection and generation, aligns the organization and its data assets in support of the mission, and ultimately realizes mission goals as effectively as possible.

For our clients, data is a strategic asset. They are looking to become facts-based, data-driven, customer-focused organizations, and to help realize this goal they are leveraging visual analytics platforms to analyze, visualize, and share information. At Steampunk, you will design and develop solutions to high-impact, complex data problems, working with the best data practitioners around. Our data exploitation approach is tightly integrated with Human-Centered Design and DevSecOps.

We are looking for a seasoned Data Solution Architect to work with our team and our clients to develop enterprise-grade data platforms, services, pipelines, data models, visualizations, and more! The Data Solution Architect must be a technologist with excellent communication and customer service skills and a passion for data and problem-solving. This role spans the full spectrum of data capabilities, from data vision and strategy through data science. Responsibilities include designing greenfield data solution stacks in the cloud or on premises, using the latest data services, products, technologies, and industry best practices, and architecting the migration of legacy data environments to ensure performance and reliability.
Data Architecture contributions include assessing and understanding data sources, data models and schemas, and data workflows. Data Engineering contributions include assessing, understanding, and designing ETL jobs, data pipelines, and workflows. BI and Data Visualization contributions include assessing and designing reports, selecting BI tools, creating dynamic dashboards, and setting up the data pipelines that feed dashboards and reports. Data Science contributions include assessing and designing machine learning and AI applications, designing MLOps pipelines, and supporting data scientists. You will also field technical inquiries concerning customization, integration, enterprise architecture, and the general features and functionality of data products.

Experience crafting data lakehouse solutions in the cloud (preferably AWS; alternatively Azure or GCP) is essential, including relational databases, data warehouses, data lakes, and distributed data systems. A broad understanding of the data exploitation lifecycle and its capabilities is a key requirement, along with support for an Agile software development lifecycle. You will contribute to the growth of our Data Exploitation Practice!

Responsibilities

  • Design and develop enterprise-grade data platforms, services, pipelines, data models, and visualizations.
  • Lead and manage development and architecture teams.
  • Interface regularly with key customer stakeholders to provide necessary guidance.
  • Architect migration of legacy data environments ensuring performance and reliability.
  • Assess and understand data sources, data models, schemas, and workflows.
  • Design ETL jobs, data pipelines, and workflows for data engineering.
  • Select BI tools, create dynamic dashboards, and set up data pipelines in support of reports.
  • Design machine learning and AI applications, including MLOps pipelines.
  • Address technical inquiries regarding customization, integration, and enterprise architecture.
  • Craft data lakehouse solutions in the cloud, including relational databases and data warehouses.

Requirements

  • 15 years of industry experience with a Bachelor's Degree, or 12 years of industry experience with a Master's Degree.
  • Ability to hold a position of public trust with the US government.
  • Experience coding commercial software and a passion for solving complex problems.
  • 5+ years of direct experience with data solutions, including big data tools such as Hadoop, Spark, and Kafka.
  • Experience with relational SQL databases such as Postgres, MySQL, MS SQL Server, and Oracle, and NoSQL databases such as MongoDB.
  • Experience with data pipeline and workflow management tools such as Airflow and NiFi.
  • Proficiency in AWS cloud services such as EC2, EMR, RDS, Redshift, Glue, SageMaker, or equivalents in Azure and GCP.
  • Experience with data streaming systems such as Apache Storm and Spark Streaming.
  • Familiarity with data science languages and platforms such as R, Python, and Databricks.
  • Advanced working SQL knowledge and experience with relational databases.

Nice-to-haves

  • Experience with DBOps and MLOps frameworks.
  • Exposure to DevOps practices.
  • Experience manipulating structured and unstructured data for analysis.
  • Experience constructing complex database queries to analyze results.

Benefits

  • Employee ownership structure promoting investment in employees.
  • Opportunities for professional growth and development.
  • A focus on shared accountability in solving mission challenges.