
BTSE Group

(Sr.) Data Engineer

In-Office
2 Locations
Mid level
Design and maintain scalable data platforms on AWS and Databricks, optimize data processing, and collaborate across teams to deliver reliable analytics.
About BTSE:
DGTL SG is a specialized service provider dedicated to delivering a full spectrum of front-office and back-office support solutions, each of which is tailored to the unique needs of global financial technology firms. DGTL SG has been engaged by BTSE Group to staff several key positions, enabling the delivery of cutting-edge technology and tailored solutions that meet the evolving demands of the fintech industry in a competitive global market.

BTSE Group is a leading global fintech and blockchain company that is committed to building innovative technology and infrastructure. BTSE empowers businesses and corporate clients with the advanced tools they need to excel in a rapidly evolving and competitive market. BTSE has pioneered numerous trading technologies that have been widely adopted across the industry, setting new benchmarks for innovation, performance, and security in fintech. BTSE’s diverse business lines serve both retail (B2C) customers and institutional (B2B) clients, enabling them to launch, operate, and scale fintech businesses. BTSE is seeking ambitious, motivated professionals to join our B2C and B2B teams.

About the opportunity:
The Data Architecture team is responsible for designing and implementing scalable data platforms and systems on AWS to support enterprise data warehousing and pipeline infrastructure. As a Data Engineer on this team, you will design, build, and maintain a robust data platform that enables reliable data integration, processing, and analytics across the organization.

Responsibilities:

  • Design, build, and maintain scalable data platforms and systems on AWS and Databricks to support analytics, reporting, and data processing workloads.
  • Work closely with the infrastructure team to build and operate the foundational components, such as compute, storage, and networking, that support data pipeline execution at scale.
  • Develop and maintain a reusable data job framework to enable efficient and scalable orchestration of data pipelines using PySpark and Databricks Workflows (one possible shape is sketched after this list).
  • Optimize performance of distributed data processing systems, including Spark tuning and resource configuration, to ensure high efficiency and reliability.
  • Define and implement monitoring, alerting, and observability for the data platform infrastructure to maintain system health and support proactive issue resolution.
  • Collaborate cross-functionally with data engineers, analysts, and DevOps teams to deliver governed, high-quality data with strong platform-level reliability.
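
For illustration only, and not part of the role description: a minimal sketch of what a reusable PySpark job shell of the kind described above might look like. Every name here (JobConfig, run_job, the S3 path, the table names) is a hypothetical assumption, not BTSE's actual framework.

```python
# Hypothetical reusable job shell: the shared code handles read/write, and
# each pipeline supplies only a transform. All names, paths, and tables
# below are illustrative assumptions.
from dataclasses import dataclass

from pyspark.sql import DataFrame, SparkSession


@dataclass
class JobConfig:
    source_path: str    # input location, e.g. raw files on S3 (assumed)
    target_table: str   # table the job publishes to (assumed)
    partition_col: str  # column used to partition the output


def transform(df: DataFrame) -> DataFrame:
    # Job-specific logic lives here; this placeholder just deduplicates.
    return df.dropDuplicates()


def run_job(spark: SparkSession, cfg: JobConfig) -> None:
    # Read -> transform -> write, so individual jobs never touch I/O code.
    df = spark.read.parquet(cfg.source_path)
    (
        transform(df)
        .write.mode("overwrite")
        .partitionBy(cfg.partition_col)
        .saveAsTable(cfg.target_table)
    )


if __name__ == "__main__":
    spark = SparkSession.builder.appName("example-data-job").getOrCreate()
    run_job(spark, JobConfig("s3://example-bucket/raw/events/",
                             "analytics.events", "event_date"))
```

In a setup like this, an orchestrator such as Databricks Workflows or Apache Airflow would invoke run_job once per pipeline with a different JobConfig, which is one way a single framework can drive many pipelines.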

Requirements:

  • 3+ years of experience in data engineering or a related field, with hands-on experience in building cloud-based data systems (preferably AWS).
  • Strong proficiency in PySpark, SQL, and Python for large-scale data processing and performance tuning (a flavor of which is sketched after this list).
  • Hands-on experience with Databricks and orchestration tools such as Databricks Workflows or Apache Airflow, with a proven track record of designing reusable frameworks to run and manage data workflows.
  • Familiarity with CI/CD practices and version control systems like GitLab.
  • Working knowledge of AWS services commonly used in data platforms, such as AWS Glue, PostgreSQL (RDS or Aurora), and ElastiCache for Redis.
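
As a flavor of the performance tuning mentioned above, here is an illustrative set of Spark session settings; the specific values are assumptions that depend entirely on workload and cluster size, not recommendations for any particular deployment.

```python
# Common Spark tuning knobs; every value here is a hypothetical starting
# point for a mid-sized batch workload, not a prescription.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuned-batch-job")
    # Shuffle parallelism sized to the cluster (assumed value).
    .config("spark.sql.shuffle.partitions", "400")
    # Adaptive Query Execution coalesces small partitions and mitigates skew.
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.adaptive.skewJoin.enabled", "true")
    # Broadcast joins for small dimension tables (assumed 64 MB threshold).
    .config("spark.sql.autoBroadcastJoinThreshold", str(64 * 1024 * 1024))
    .getOrCreate()
)
```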

Nice to Haves:

  • Knowledge of data privacy, security, and compliance standards (e.g., GDPR, HIPAA).
  • Experience with real-time or streaming data platforms (e.g., Flink, Kafka, Kinesis) and OLAP databases (e.g., StarRocks, ClickHouse).
  • Exposure to containerization and orchestration tools (e.g., Docker, Kubernetes).

Perks & Benefits:

  • Competitive total compensation package
  • Various team-building programs and company events
  • Comprehensive healthcare schemes for employees and dependants
  • And many more! Apply and let us tell you more!

Top Skills

AWS Glue
AWS
Databricks
ElastiCache for Redis
GitLab
PostgreSQL
PySpark
Python
SQL
