Design, build, and maintain trusted, scalable, and well-documented data models and pipelines using modern analytics engineering tools. This role ensures that analysts and business leaders can extract insights efficiently and reliably from clean data layers.
RESPONSIBILITIES
- Data Modeling & Transformation
  - Develop and maintain dbt models with standardized naming, structure, and robust documentation (an illustrative sketch follows this group).
  - Create analytics-ready data marts by transforming raw data into clean, structured datasets.
  - Write modular, well-tested, and efficient SQL to power the modeling layers.
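
For illustration, a minimal sketch of the kind of staging model this role would own in dbt. The `raw.orders` source and every column name are hypothetical; a real model would follow the team's naming and documentation standards.

```sql
-- models/staging/stg_orders.sql
-- Illustrative dbt staging model: rename and reshape raw columns into
-- a clean, analytics-ready form. Source and column names are hypothetical.
with source as (

    select * from {{ source('raw', 'orders') }}

),

renamed as (

    select
        id          as order_id,
        customer_id,
        status      as order_status,
        created_at  as ordered_at,
        amount      as order_amount
    from source

)

select * from renamed
```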
- Data Pipeline Development
  - Build reliable and scalable data pipelines for ingestion, transformation, and validation.
  - Monitor and troubleshoot pipeline performance, latency, and failures (see the freshness-check sketch below).
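
One common form of latency monitoring is a freshness check against a landing table. The sketch below uses BigQuery syntax; `raw.orders`, the `_loaded_at` column, and the six-hour threshold are all assumptions for illustration.

```sql
-- Hypothetical freshness check (BigQuery syntax): returns a row only
-- when the newest record in raw.orders is older than the assumed
-- six-hour arrival window, i.e. only when the pipeline is late.
select
    max(_loaded_at) as last_loaded_at,
    timestamp_diff(current_timestamp(), max(_loaded_at), hour) as hours_since_last_load
from raw.orders
having timestamp_diff(current_timestamp(), max(_loaded_at), hour) > 6
```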
- Cross-functional Collaboration
  - Work closely with analysts and business stakeholders to translate data requirements into technical solutions.
  - Ensure data logic aligns with business definitions and operational context.
- Data Quality & Documentation
  - Implement data validation checks and tests to ensure accuracy and trustworthiness (an illustrative dbt test follows this group).
  - Maintain clear documentation of data models, transformation logic, and data lineage.
  - Use Git for version control and follow collaboration best practices.
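
Validation checks in dbt can be written as singular tests: SQL files that select the rows violating a rule, so the test passes only when the query returns nothing. The model and column names below are hypothetical, carried over from the earlier sketch.

```sql
-- tests/assert_order_amounts_non_negative.sql
-- Illustrative dbt singular test: selects violating rows, so zero
-- returned rows means the test passes. Names are hypothetical.
select
    order_id,
    order_amount
from {{ ref('stg_orders') }}
where order_amount < 0
```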
- Infrastructure Support
  - Support maintenance of data infrastructure in cloud data warehouses (BigQuery, Redshift, Snowflake).
  - Adapt to evolving infrastructure, tools, and processes in a fast-paced environment.
REQUIREMENTS
- Bachelor’s degree in Engineering, Computer Science, or a related technical field.
- 1–2 years of experience in analytics engineering, data engineering, or equivalent data-focused roles.
- Proficiency in writing complex SQL queries (an illustrative example follows this list).
- Experience with dbt or similar data modeling tools.
- Familiarity with Git (branching, commits, pull requests).
- Exposure to cloud data warehouses (e.g., BigQuery, Redshift, Snowflake).
- Understanding of data validation, quality testing, and documentation practices.
- Familiarity with cloud platforms (e.g., GCP, AWS) is a plus.
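
As a rough benchmark for the SQL proficiency expected, here is a sketch of one common pattern: deduplicating a table with a window function, keeping the latest record per key. The `raw.order_events` table and its columns are hypothetical.

```sql
-- Illustrative window-function query: keep only the most recent
-- event per order. Table and column names are hypothetical.
with ranked as (

    select
        *,
        row_number() over (
            partition by order_id
            order by updated_at desc
        ) as row_num
    from raw.order_events

)

select * from ranked
where row_num = 1
```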