Joining Razer will place you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make a global impact while working with a team spread across five continents. Razer is also a great place to work, providing you the unique, gamer-centric #LifeAtRazer experience that will accelerate your growth, both personally and professionally.
Job Responsibilities
This role will focus on designing and developing advanced Reinforcement Learning (RL) driven solutions for game developers and players. The ideal candidate will have deep expertise in RL algorithms, agent-based modeling, and scalable training frameworks to create intelligent gameplay agents and adaptive in-game behaviors. The role involves working closely with AI engineers, game developers, and software engineers to build cutting-edge AI capabilities that enable exploration and rapid iteration in production environments.
Essential Duties and Responsibilities
Design and deploy reinforcement learning (RL) agents in the gaming domain to support in-house AI services.
Research, prototype, and evaluate RL agents with different policies and learning methodologies.
Develop reward functions, learning strategies, and training environments aligned with gameplay objectives.
Optimize agent performance through hyperparameter tuning, reward shaping, and model architecture refinement.
Generalize RL agent solutions to scale across various game engines and games spanning multiple genres.
Incorporate state, event, and perception signals into RL observation spaces.
Evaluate agents using task success metrics, stability, sample efficiency, and convergence behavior.
Analyze agent failures and debug training instabilities or policy collapse.
Collaborate with cross-functional teams (engineers, developers, researchers) to integrate RL gameplay agents seamlessly into games.
Document training pipelines, assumptions, and evaluation results.
Qualifications
Proficiency in Python; experience in compiled languages such as C++ or Rust is a plus.
Hands-on experience with PyTorch, MLX, TensorFlow, or similar machine learning frameworks.
Solid understanding of policy-based, value-based, and actor-critic methods.
Experience with reward design and exploration strategies.
Familiarity with simulation environments or gaming frameworks (e.g., OpenAI Gym, Unity, Unreal Engine).
Experience designing state representations, observation spaces, and action abstractions.
Strong analytical and problem-solving skills.
Excellent written and verbal communication skills across technical and non-technical teams.
Preferred
Experience with multi-agent RL, self-play, or adversarial training.
Exposure to offline RL, imitation learning, or hybrid approaches.
Experience with messaging and communication technologies such as RabbitMQ, gRPC, and REST APIs for service integration.
Familiarity with experiment tracking, reproducibility, and evaluation tooling.
Education & Experience
Master’s or PhD in a relevant field (Computer Science, AI, Machine Learning, etc.).
2+ years of applied experience in reinforcement learning (academic or industry).
Travel Requirements
This role is based in the Singapore office, with occasional travel (up to one trip per year) for conferences, research collaborations, or business meetings.
Razer is proud to be an Equal Opportunity Employer. We believe that diverse teams drive better ideas, better products, and a stronger culture. We are committed to providing an inclusive, respectful, and fair workplace for every employee across all the countries we operate in. We do not discriminate on the basis of race, ethnicity, colour, nationality, ancestry, religion, age, sex, sexual orientation, gender identity or expression, disability, marital status, or any other characteristic protected under local laws. Where needed, we provide reasonable accommodations - including for disability or religious practices - to ensure every team member can perform and contribute at their best.
Are you game?
Razer Singapore Office
1 One-north Cres, Singapore, 138538


