[Job - 29349] Senior Data Developer (AWS), Brazil

Workplace: On-site
Seniority: Senior
Category: IT
IT Category: Data Engineer
Language: English
Posted: May 15, 2026
Last verified: May 15, 2026

Where this role is available

Location: Brazil, BR

We are tech transformation specialists, uniting human expertise with AI to create scalable tech solutions. With over 8,000 CI&Ters around the world, we’ve built partnerships with more than 1,000 clients during our 30 years of history. Artificial Intelligence is our reality.

CI&T is expanding its data development capabilities to support a greenfield platform initiative for a leading client in the agribusiness industry. This new product is being built from the ground up to deliver AI-powered agronomic analysis with georeferenced map visualizations, and the quality of its data foundation will determine everything that follows.

This role sits at the core of that foundation. As a Senior Data Developer, you will work alongside the client's technical leadership to architect and build the data ecosystem that will power intelligent agronomic insights. Your work will directly enable AI applications and geospatial visualizations to function on reliable, well-structured data, making this position both technically demanding and strategically critical. If you thrive in ambiguous, high-ownership environments where you shape the data architecture rather than inherit it, this is your role.

Responsibilities

• Design and build end-to-end data pipelines across the RAW, Silver, and Gold layers of the Medallion Architecture, ensuring reliability, performance, and maintainability at each stage (sketched in the example after this list)
• Architect data ingestion, transformation, standardization, and serving processes, structuring data flows from diverse and heterogeneous sources into a coherent analytical foundation
• Model data for analytical consumption following Data Warehouse best practices, including Star Schema design and dimensional modeling suited for business intelligence and AI-readiness
• Identify, evaluate, and consolidate new data sources relevant to agronomic business objectives, proactively engaging stakeholders to understand, obtain, and validate data availability and quality
• Interact with business stakeholders and client leadership to translate domain requirements into data architecture decisions, challenging assumptions and proposing solutions grounded in technical evidence
• Manipulate, optimize, and serve data in multiple formats, including Parquet, CSV, and geospatial datasets, tailored to the consumption needs of downstream AI applications and map-based visualizations
• Manage and configure cloud infrastructure end-to-end, including storage, compute, access control, serverless functions, data cataloging, and event-driven processing on AWS
• Own deployment and CI/CD practices for data pipelines, including repository management, branching strategy, test gates, and automated deploy workflows via GitLab
• Support the creation of the data layer that will feed AI/ML applications, ensuring data quality, structure, and availability meet the requirements of machine learning workflows, without directly developing the models themselves
• Operate as a proactive technical partner in a greenfield environment: question, propose, experiment, and iterate with the team rather than execute in isolation
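
To ground the pipeline responsibilities above, here is a minimal PySpark sketch of a RAW-to-Silver step that writes year/month/day-partitioned Parquet, in the spirit of the Medallion Architecture named in this listing. It is an illustrative sketch only: the S3 paths, column names, and cleaning rules are hypothetical assumptions, not details of the client's actual platform.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw-to-silver").getOrCreate()

# RAW layer: land source files as-is (header CSVs here for brevity;
# the bucket and dataset names are hypothetical).
raw = spark.read.option("header", True).csv("s3://example-raw/field-sensors/")

# Silver layer: standardize types, drop unusable records, deduplicate.
silver = (
    raw.withColumn("reading_date", F.to_date("reading_date", "yyyy-MM-dd"))
       .withColumn("value", F.col("value").cast("double"))
       .filter(F.col("value").isNotNull())
       .dropDuplicates(["sensor_id", "reading_date"])
)

# Partition by year/month/day so downstream reads stay selective.
(silver
    .withColumn("year", F.year("reading_date"))
    .withColumn("month", F.month("reading_date"))
    .withColumn("day", F.dayofmonth("reading_date"))
    .write.mode("overwrite")
    .partitionBy("year", "month", "day")
    .parquet("s3://example-silver/field-sensors/"))

A Gold step would then typically aggregate Silver data into dimensional (Star Schema) tables for the BI and AI consumption described in the modeling responsibilities above.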

Requirements

• English proficiency at B2 level or above: able to explain technical flows, engage in discussions, ask clarifying questions, and collaborate effectively with international stakeholders (accent is not a barrier; communication clarity is)
• Solid hands-on experience with AWS, covering the full infrastructure spectrum: S3, IAM (permissions and security configuration), Redshift, Lambda (serverless use cases), and Glue (including Glue Catalog for metadata management); ability to evaluate trade-offs between services for different pipeline scenarios
• Experience with Terraform or equivalent Infrastructure-as-Code (IaC) tooling, applied recurrently in real data engineering projects, not just theoretical knowledge
• Proficiency with GitLab for source control, CI/CD pipeline configuration, deployment workflows, and test gate management; specifically GitLab, not just generic Git experience
• Strong proficiency in SQL, including complex query writing, analytical transformations, and performance tuning for data warehouse environments
• Strong proficiency in PySpark, applied to large-scale distributed data processing, including partitioning strategies (e.g., by day/month/year), volume handling (tens to hundreds of GB), and performance optimization
• Experience with Databricks, used in the context of data engineering pipelines and lakehouse architectures, including migration and deployment scenarios
• Analytical data modeling expertise, with solid knowledge of Star Schema and dimensional modeling applied to data warehousing and business intelligence environments
• Hands-on experience with the Medallion Architecture (RAW / Silver / Gold layers), including manipulation and optimization of Parquet and CSV files
• Experience integrating and consolidating data from multiple heterogeneous sources, ensuring consistency, traceability, and analytical readiness
• Mindset suited for greenfield projects: proactive, solution-oriented, comfortable with ambiguity, and able to contribute to architectural decisions, not just execute predefined tasks

Nice to Have

• Familiarity with SnapLogic or equivalent low-code/no-code ETL orchestration platforms (e.g., Pentaho, Airflow, Alteryx); SnapLogic is the current standard at the client, with a migration underway, and hands-on experience with block/flow-based ETL logic is a differentiator
• Experience with geospatial data processing and analytical environments focused on map-based and geographic visualization
• Knowledge of DuckDB for in-process analytical queries (see the sketch after this list)
• Background in data projects applied to agribusiness or precision agriculture
• Exposure to predictive modeling workflows (e.g., gradient boosting, ensemble methods, or similar) as a data provider to ML pipelines, not as a model developer

#LI-JP3
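
To illustrate the DuckDB item above, the following minimal sketch runs an in-process analytical query directly over the partitioned Parquet layout from the earlier example; the path and column names remain hypothetical.

import duckdb

con = duckdb.connect()  # in-memory, in-process analytical database

# DuckDB can scan Parquet files (including Hive-style year/month/day
# partitions) directly, with no ingestion step.
result = con.execute("""
    SELECT sensor_id, avg(value) AS avg_value
    FROM read_parquet('silver/field-sensors/**/*.parquet',
                      hive_partitioning = true)
    WHERE year = 2025
    GROUP BY sensor_id
    ORDER BY avg_value DESC
""").fetchdf()

print(result.head())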
