Senior Data Engineer

Posted 1 day ago

About the Role

Lead the design and implementation of our data infrastructure and pipelines as a Senior Data Engineer. You will be a key technical leader, driving architectural decisions, mentoring the team, and delivering scalable data solutions that inform critical business decisions.

Key Responsibilities

  • Architecture & Technical Leadership: Design, architect, and lead technical decisions for scalable, reliable, and cost-effective data infrastructure and pipelines. Define and champion data engineering best practices and drive architecture evolution.
  • Data Pipeline & Platform Development: Build and maintain complex, enterprise-scale data pipelines (processing terabytes). Implement robust ELT/ETL frameworks using dbt and Databricks, develop automated data quality and monitoring, optimize performance, and implement data governance/lineage.
  • Advanced Data Engineering: Write complex, performant SQL and PySpark for transformations. Build real-time and batch solutions, design data models for analytics/ML, create reusable frameworks, and ensure data security/compliance.
  • Leadership & Collaboration: Lead cross-functional projects. Partner with stakeholders to translate requirements into technical solutions. Drive incident response. Contribute to hiring and team building.
  • Strategy & Innovation: Influence the data engineering roadmap and technical strategy. Identify opportunities for automation and optimization. Champion data quality and reliability; stay current with industry trends.

Required Qualifications

  • Experience: 6-8+ years in data/software engineering, including 3+ years building production data pipelines at scale. Extensive hands-on experience with dbt. Proven track record leading technical projects and mentoring engineers.
  • Technical Expertise: Expert proficiency in SQL, data modeling, and database optimization. Strong Python/Scala skills. Expertise with cloud data platforms (AWS/Azure/GCP) and services. Experience with orchestration (Airflow, Databricks Workflows) and streaming (Kafka). Strong understanding of data warehouse/lakehouse architectures and data quality frameworks.
  • Architecture & Design: Proven ability to design scalable, fault-tolerant data systems. Strong understanding of distributed systems, microservices/event-driven design, and data security/governance.
  • Leadership & Communication: Excellent communication and collaboration skills to influence technical and non-technical audiences. Experience leading technical designs/reviews. Ability to drive complex solutions in fast-paced environments.

Preferred Qualifications

Bachelor's/Master's in CS/Engineering. Knowledge of ML pipelines/MLOps, open-source contributions, experience with BI tools (Looker/Tableau), and knowledge of columnar storage (Parquet, Iceberg). Experience in regulated industries.

Good to have

  • Strong experience with Databricks, Spark, and distributed data processing (PySpark, Delta Lake). 
  • Proficiency with CI/CD and infrastructure-as-code (Terraform). 
  • Experience with data mesh architecture.

Job Summary

  • Location: Bangalore
  • Job type: Full Time, Permanent
  • Experience: 6 - 8 years
