Responsible for developing algorithms and back-end data pipelines, optimizing performance, troubleshooting, and ensuring high data quality in cargo tracking.
At Kpler, we are dedicated to helping our clients navigate complex markets with ease. By simplifying global trade information and providing valuable insights, we empower organisations to make informed decisions in commodities, energy, and maritime sectors.
Since our founding in 2014, we have focused on delivering top-tier intelligence through user-friendly platforms. Our team of over 700 experts from 35+ countries works tirelessly to transform intricate data into actionable strategies, ensuring our clients stay ahead in a dynamic market landscape. Join us to leverage cutting-edge innovation for impactful results and experience unparalleled support on your journey to success.
The Cargo Models team is responsible for building highly accurate cargo tracking models to provide live trade insights for the global maritime transport industry. Using a range of advanced Operations Research and Data Science techniques, we process and transform live data and integrate it into several core data pipelines. Our team directly manages and ensures the quality of the data provided to clients, making us one of the key parts of Kpler.
Responsibilities:
- Working alongside data engineers, data scientists, and product managers, take ownership of developing and implementing our core algorithms and back-end data pipelines, based on project requirements and design specifications.
- Deliver the highest cargo-tracking data quality to clients and internal users by adding new features and reviewing pipelines and their integrations with our datastores.
- Help to optimise system performance, maintain features, troubleshoot issues, and ensure high availability.
- Demonstrate strong analytical and debugging skills with a proactive approach to learning.
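As a purely hypothetical illustration of the data-quality work described above (not Kpler's actual code; the record fields and validation rules are assumptions), a pipeline step of this kind might filter out invalid or duplicate vessel position reports before they reach downstream datastores:

```python
from dataclasses import dataclass
from typing import Iterable, List


@dataclass(frozen=True)
class Position:
    """A single vessel position report (hypothetical schema)."""
    vessel_id: str
    ts: int       # Unix timestamp of the report
    lat: float    # latitude in decimal degrees
    lon: float    # longitude in decimal degrees


def clean_positions(raw: Iterable[Position]) -> List[Position]:
    """Drop reports with out-of-range coordinates and exact duplicates,
    returning the remainder ordered by vessel and time."""
    seen = set()
    out = []
    for p in raw:
        # Reject physically impossible coordinates.
        if not (-90.0 <= p.lat <= 90.0 and -180.0 <= p.lon <= 180.0):
            continue
        # Reject duplicate reports for the same vessel at the same instant.
        key = (p.vessel_id, p.ts)
        if key in seen:
            continue
        seen.add(key)
        out.append(p)
    return sorted(out, key=lambda p: (p.vessel_id, p.ts))


reports = [
    Position("IMO9000001", 1700000000, 1.25, 103.85),
    Position("IMO9000001", 1700000000, 1.25, 103.85),  # duplicate
    Position("IMO9000002", 1700000060, 95.0, 103.85),  # invalid latitude
]
cleaned = clean_positions(reports)
```

In a production pipeline the input would typically arrive from a streaming source such as Kafka rather than an in-memory list, but the validation logic would follow the same shape.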
Skills and Experience:
- Hands-on experience with Python; experience with Flask or SQLAlchemy is a plus.
- Solid SQL skills for querying and managing relational databases.
- Knowledge of streaming and big data technologies (such as Kafka or Spark).
- Comfortable working with Git, code reviews, and Agile methodologies.
- Understanding of containerisation and orchestration tools (e.g., Docker, Kubernetes).
- Eagerness to learn new languages and technologies.
Nice to have:
- Experience with AWS (or another cloud provider) and infrastructure-as-code using Terraform.
- Experience with Scala or other JVM languages.
- Exposure to Elasticsearch.
We are a dynamic company dedicated to nurturing connections and innovating solutions to tackle market challenges head-on. If you thrive on customer satisfaction and turning ideas into reality, then you’ve found your ideal destination. Are you ready to embark on this exciting journey with us?
We make things happen
We act decisively and with purpose, going the extra mile.
We build together
We foster relationships and develop creative solutions to address market challenges.
We are here to help
We are accessible and supportive to colleagues and clients with a friendly approach.
Our People Pledge
Don’t meet every single requirement? Research shows that women and people of color are less likely than others to apply if they feel like they don’t match 100% of the job requirements. Don’t let the confidence gap stand in your way, we’d love to hear from you! We understand that experience comes in many different forms and are dedicated to adding new perspectives to the team.
Kpler is committed to providing a fair, inclusive and diverse work-environment. We believe that different perspectives lead to better ideas, and better ideas allow us to better understand the needs and interests of our diverse, global community. We welcome people of different backgrounds, experiences, abilities and perspectives and are an equal opportunity employer.
By applying, I confirm that I have read and accept the Staff Privacy Notice
Top Skills
Agile Methodologies
AWS
Docker
Elasticsearch
Flask
Git
Kafka
Kubernetes
Python
Scala
Spark
SQL
SQLAlchemy
Terraform