The Senior Data Engineer will design and build scalable ETL/ELT solutions using Databricks and Azure. Responsibilities include optimizing data pipelines, implementing Lakehouse architecture, enforcing data quality checks, and providing technical guidance to the team.
Overview
We are looking for a Senior Data Engineer with strong expertise in Databricks, PySpark, Delta Lake, and cloud-based data pipelines. The ideal candidate will design and build scalable ETL/ELT solutions, implement Lakehouse/Medallion architectures, and integrate data from multiple internal and external systems. This role requires strong technical leadership and hands-on architecture experience.
Key Responsibilities
- Design, build, and optimize data ingestion and transformation pipelines using Databricks, PySpark, and Python.
- Implement Delta Lake and Medallion architecture for scalable enterprise data platforms.
- Develop ingestion frameworks for data from SFTP, REST APIs, SharePoint/Graph API, AWS, and Azure sources.
- Automate workflows using Databricks Workflows, ADF, Azure Functions, and CI/CD pipelines.
- Optimize Spark jobs for performance, reliability, and cost efficiency.
- Implement data validation, quality checks, and monitoring with automated alerts and retries.
- Design secure and governed datasets using Unity Catalog and cloud security best practices.
- Collaborate with analysts, business users, and cross-functional teams to deliver curated datasets for reporting and analytics.
- Provide technical leadership and guidance to junior team members.
Required Skills
- 5–8+ years of experience in Data Engineering.
- Strong hands-on experience with Databricks, PySpark, Delta Lake, SQL, Python.
- Experience with Azure Data Lake, ADF, Azure Functions, or AWS equivalents (S3, Lambda).
- Experience integrating data from APIs, SFTP servers, vendor data providers, and cloud storage.
- Knowledge of ETL/ELT concepts, Lakehouse/Medallion architecture, and distributed processing.
- Strong experience with Git, Azure DevOps CI/CD, and YAML pipelines.
- Ability to optimize Spark workloads (partitioning, caching, Z-ordering, performance tuning).
Good to Have
- Exposure to Oil & Gas or trading analytics (SPARTA, KPLER, IIR, OPEC).
- Knowledge of Power BI or data visualization concepts.
- Familiarity with Terraform, Scala, or PostgreSQL.
- Experience with SharePoint development or .NET (optional).
Top Skills
ADF
Azure Data Lake
Azure DevOps CI/CD
Azure Functions
Databricks
Delta Lake
Git
PySpark
Python
SQL
YAML
Unison Consulting Singapore Office
1 Changi Business Park Crescent, Plaza 8 #03-06 Tower A, Singapore 486025
Unison Consulting Singapore Office
#12-00, 63 Market Street, Bank of Singapore Center, Singapore 048942