Aledade - Austin, TX

posted about 1 month ago

Full-time - Senior
Austin, TX
Professional, Scientific, and Technical Services

About the position

As a Staff Software Engineer, you will take us beyond traditional monolithic SQL engines and batch pipelines. You will build the next generation of distributed data storage and processing systems: systems that can scale indefinitely and surpass traditional query performance, while keeping the interfaces to that data simple, expressive, and cleanly abstracted. These interfaces will support a broad array of data consumers, ranging from our web application to business analytics and artificial intelligence.

In this position, you will be responsible for identifying and developing scalable, performant solutions. You will work across disciplines to shape product strategy and execution, ensuring that the foundations of code architecture and quality are robust. Mentoring and coaching engineers will be a key part of your role, as you set and uphold the standard for engineering processes that support high-quality engineering practices. Your contributions will be critical in driving the success of our data systems and enhancing the overall performance of our applications.

Responsibilities

  • Identify and develop scalable and performant solutions.
  • Work across disciplines to shape product strategy and execution.
  • Develop the foundations of code architecture and quality.
  • Mentor and coach engineers.
  • Set and uphold the standard for engineering processes to support high-quality engineering.

Requirements

  • BS/BTech (or higher) in Computer Science, Engineering, or a related field required.
  • 8+ years of production-level experience as an engineer building highly scalable systems.
  • 4+ years of experience acting as a trusted technical decision-maker in a team setting, solving for short-term and long-term business value.
  • 4+ years of experience working with SQL or other database querying languages on large multi-table data sets.
  • Experience architecting, developing, and deploying large-scale distributed systems.
  • Experience with cloud technologies, e.g., AWS, Azure, GCP.
  • Experience building continuous integration and continuous delivery (CI/CD) pipelines.
  • Strong familiarity with server-side languages (e.g., Java, Python, Scala, C#, C++, Go).

Nice-to-haves

  • Deep understanding of one or more technologies such as Apache Spark, SQL, and Python for data analysis, manipulation, and processing.
  • Familiarity with data technologies and architectures (e.g., event-based architecture, distributed computing, in-memory data processing).
  • Experience working with SQL and NoSQL databases (e.g., MySQL, PostgreSQL, Cassandra, MongoDB), focusing on high-performance querying and optimization for analytical workloads.
  • Experience in designing, deploying, and managing data warehouses (e.g., Snowflake, Amazon Redshift, etc.) for analytics and business intelligence applications.
  • Knowledge of data partitioning, sharding, and indexing strategies to ensure optimal performance in high-load environments.
  • Proficiency in designing data models that support analytical requirements, ensuring efficient data retrieval and storage.
  • Knowledge of data pipeline architecture, including ETL/ELT processes, batch, and real-time data processing.
  • Skilled in optimizing data pipelines for scalability and performance, ensuring efficient data ingestion, storage, and retrieval.
  • Ability to use caching strategies and indexing techniques to reduce query and processing times.
  • Knowledge of tools for data pipeline creation and orchestration such as Apache Airflow, AWS Glue, and Apache Kafka.
  • Knowledge of data security principles and ensuring compliance with regulations (e.g., GDPR, HIPAA) through proper data governance practices.