Design, build, and maintain data processing systems for analytics and AI applications. Develop data pipelines, manage data storage, and mentor junior engineers.
Joining Razer will place you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make an impact worldwide while collaborating with a team located across five continents. Razer is also a great place to work, providing you the unique, gamer-centric #LifeAtRazer experience that will accelerate your growth, both personally and professionally.
Job Responsibilities
This role designs, builds, and maintains high-volume big data processing systems that enable the organization to collect, manage, and convert raw data into usable information for data scientists and business analysts, and that enable the use of Artificial Intelligence (AI) capabilities. He/she is responsible for developing and maintaining data pipelines, data storage systems, and cloud infrastructure, while working closely with data scientists, data analysts, and internal stakeholders to utilize data for analytics and AI.
Essential Duties and Responsibilities
- Own and improve the team's data stack to enhance its data processing capabilities.
- Design, develop, and maintain data systems and pipelines that enable the organization to store, process, and analyze large volumes of data. This involves developing data pipelines, designing data storage systems, and ensuring that data is integrated effectively to support AI applications.
- Manage data lakes and data warehouses by populating and operationalizing them. This involves creating and managing table schemas, views, and materialized views, as well as applying tokenization and vectorization techniques for Gen AI.
- Monitor and troubleshoot data workflows, resolving failures promptly and rerunning failed jobs to ensure data completeness (a minimal orchestration sketch follows this list).
- Leverage modern build tools to enhance automation, data quality, testing, and deployment of data pipelines.
- Design and build AI-powered and GenAI applications collaboratively with data scientists, data analysts, product managers, and business users.
- Develop and implement cloud infrastructure that is in line with the company's security policies and practices, as well as cost-optimization practices.
- Scope and manage projects in collaboration with data scientists, data analysts, and business users, understanding the data needs of stakeholders across the organization and implementing appropriate solutions.
- Mentor interns and junior engineers on the team.
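To give a flavor of the pipeline and rerun duties above, here is a minimal sketch of an Airflow DAG with automatic retries and an idempotent daily task. This is an illustration only, assuming Airflow 2.4+; the DAG id, task names, and partition handling are hypothetical, not a description of Razer's actual stack.

```python
# Hypothetical sketch: a daily batch DAG with retries, illustrating the
# "monitor, troubleshoot, and rerun failed jobs" duty described above.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_raw_events(ds, **context):
    # `ds` is the logical date; reading by date partition keeps reruns idempotent.
    print(f"extracting raw events for partition {ds}")


def load_to_warehouse(ds, **context):
    # Overwrite the partition for `ds` so a rerun produces the same result.
    print(f"loading curated events for partition {ds}")


with DAG(
    dag_id="daily_events_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={
        "retries": 2,  # failed tasks are retried automatically
        "retry_delay": timedelta(minutes=5),
    },
) as dag:
    extract = PythonOperator(task_id="extract_raw_events", python_callable=extract_raw_events)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load
```

Because each task keys its reads and writes off the logical date, a failed run can simply be cleared and rerun without duplicating data.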
Qualifications and Skills
- Degree in computer science, engineering, or mathematics, or equivalent experience.
- 5+ years of relevant professional experience.
- Ability to scope projects and effectively lead and mentor more junior colleagues.
- Ability to write clean, maintainable, scalable, and robust code in Python, SQL, and Java.
- Proven experience in building and maintaining production pipelines for advanced analytics use cases.
- Experience with cloud data services such as AWS Redshift, Snowflake, BigQuery, or Azure Data Lake.
- Experience using Infrastructure as Code tools such as Terraform, containerization tools like Docker, and container orchestration platforms like Kubernetes.
- Experience using orchestration tools like Airflow, distributed computing frameworks like Spark or Dask, and data transformation tools like Data Build Tool (dbt).
- Experience with CI/CD pipelines for data engineering workflows.
- Experience with various data processing techniques (streaming, batch, event-based) and with managing and optimizing data storage (data lakes, data warehouses, vector data stores, and SQL and NoSQL databases) is essential (a brief batch-processing sketch follows this list).
- Excellent problem-solving and analytical skills, with an understanding of AI technologies and their applications.
- Excellent written and verbal communication skills for coordinating across teams.
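As a companion to the batch-processing experience listed above, here is a minimal PySpark sketch of a daily batch job. It is an illustration under stated assumptions: the bucket paths, table layout, and column names are hypothetical.

```python
# Hypothetical sketch: a PySpark batch job that deduplicates one day's raw
# events and rewrites the curated partition for that day.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_events_batch").getOrCreate()

# Read a single day's raw partition (path and schema are assumptions).
raw = spark.read.parquet("s3://example-lake/raw/events/dt=2024-01-01/")

curated = (
    raw.dropDuplicates(["event_id"])                    # keeps reruns idempotent
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("event_ts").isNotNull())           # drop unparseable rows
)

# Overwrite the same partition so a rerun produces identical output.
curated.write.mode("overwrite").parquet(
    "s3://example-lake/curated/events/dt=2024-01-01/"
)
```

Writing with partition-level overwrite is one common way to make batch pipelines safe to rerun after a failure, matching the data-completeness duty described earlier.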
Are you game?
Top Skills
Airflow
AWS Redshift
Azure Data Lake
BigQuery
Dask
Data Build Tool (dbt)
Docker
Java
Kubernetes
Python
Snowflake
Spark
SQL
Terraform
Razer Singapore Office
1 One-north Cres, Singapore, 138538