Plain Concepts

Databricks Engineer

Location: Remote, BR
Work mode: Remote
Category: IT
IT category: Data engineer
Published: April 21, 2026
Last verified: May 7, 2026

We’re looking for a hands-on Databricks Engineer to help design, build, and scale a modern data platform running on Apache Spark and Delta Lake. This role sits at the intersection of data engineering, platform architecture, and performance optimization. You’ll work closely with data scientists, analysts, and backend teams to ensure reliable, high-performance data pipelines and well-governed datasets.

Responsibilities

  • Design and implement end-to-end data pipelines using Databricks (Jobs, Workflows, Delta Live Tables)
  • Build and maintain scalable ETL/ELT processes leveraging Apache Spark (PySpark / Scala)
  • Develop data models using Delta Lake, including schema design, partitioning strategies, Z-ordering, and optimization techniques
  • Manage and optimize Databricks clusters (autoscaling, spot instances, instance pools, cluster policies)
  • Implement CI/CD pipelines for Databricks deployments (e.g., using Databricks Repos, Terraform, Azure DevOps / GitHub Actions)
  • Work with structured and semi-structured data (JSON, Parquet, Avro) at scale
  • Ensure data quality and reliability through validation frameworks, unit/integration testing, and monitoring
  • Implement data governance practices (Unity Catalog, access controls, lineage tracking, auditing)
  • Troubleshoot performance issues (job failures, skew, shuffle bottlenecks, memory pressure) and optimize Spark workloads
  • Integrate Databricks with cloud-native services (AWS S3, Azure Data Lake Storage, GCP BigQuery)
  • Collaborate with data consumers to define SLAs, data contracts, and service interfaces
