
Menlo Research

DevOps Engineer

In-Office
Singapore, SGP
Senior level

About Menlo

Menlo Research is an Applied R&D lab building Asimov, an open-source humanoid robot platform, and the full software stack that powers it. Our mission is to make humanoid labor economically viable -- turning software into physical labor at scale. We build across the full stack: hardware architecture, locomotion, autonomy, simulation, and infrastructure. We move fast, ship to real robots, and open-source everything we can. If you want your work to matter beyond a paper or a demo, this is the place.

The Role

As a DevOps Engineer, you will own and evolve the platform that everything at Menlo runs on -- from inference serving to training rigs to the agentic coding infrastructure that powers day-to-day engineering. You will work deep in the stack across Kubernetes, networking, and, where it matters, bare metal, and help set the technical direction for how Menlo Cloud scales.

What You'll Do

  • Operate and evolve our Kubernetes platform across multiple clusters and environments (Prod, Dev, hybrid on-prem and public cloud), covering control plane operations, node lifecycle, upgrades, and autoscaling at every layer (Cluster Autoscaler, HPA, KEDA).
  • Architect and manage hybrid cloud infrastructure spanning on-premises and public clouds (GCP, AWS), including workload placement, cross-cloud networking, and unified resource management.
  • Own the CI/CD and GitOps experience end-to-end: container build pipelines, image optimization, and progressive delivery via ArgoCD / FluxCD.
  • Own the observability stack as a single pane of glass across all clusters: Grafana, Mimir, Tempo, Loki, Pyroscope, OnCall, Prometheus -- and help push toward agent-assisted SRE workflows.
  • Manage and improve our inference platform: vLLM serving and AIBrix for multi-model orchestration and autoscaling across a fleet of NVIDIA GPUs.
  • Operate platform services: Kafka, Redis, PostgreSQL, OpenSearch.
  • Manage identity and access via Keycloak integrated with Google Workspace; harden SSO, RBAC, and secrets management across the platform.
  • Harden network security across private load balancers, firewalls, and VPC segmentation; design and maintain hub-and-spoke / multi-AZ topologies.
  • Support training infrastructure: self-service VM provisioning, RunPod burst capacity, Weights & Biases integration.
  • Drive infrastructure reliability, cost efficiency, and capacity planning as the platform scales.
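To make the autoscaling responsibilities above concrete: the layers (HPA, Cluster Autoscaler, KEDA) all reduce to variants of the same reconciliation loop. As a rough illustration, the stock HPA computes a desired replica count from the ratio of observed to target metric value. The helper below is a simplified sketch of that documented formula, not the controller's actual code; the tolerance band and bounds are illustrative defaults:

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float,
                         min_replicas: int = 1,
                         max_replicas: int = 10,
                         tolerance: float = 0.1) -> int:
    """Simplified sketch of the HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to [min, max]; no change inside the tolerance band."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # within tolerance: leave replicas alone
    desired = math.ceil(current_replicas * ratio)
    return max(min_replicas, min(max_replicas, desired))

# e.g. 4 pods averaging 180m CPU against a 100m target -> scale to 8
print(hpa_desired_replicas(4, 180.0, 100.0))  # 8
```

KEDA and the Cluster Autoscaler layer different signals (event-source lag, unschedulable pods) on top of the same idea, which is why operating them together is mostly about making sure their loops do not fight each other.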

What We're Looking For

  • Kubernetes -- deep, hands-on. Strong production experience with Kubernetes, fluent in workloads and controllers, networking (Services, Ingress, CNI basics), storage (PV/PVC, CSI), RBAC, and the autoscaling story end-to-end (HPA, VPA, Cluster Autoscaler, KEDA). Cloud-managed Kubernetes (GKE, EKS, AKS) is fine; on-premises / self-managed Kubernetes (kubeadm, Cluster API, k3s, etc.) is a strong plus.
  • Networking -- design-level, not just operator-level. You have designed real network topologies at some point in your career -- hub-and-spoke, multi-AZ / multi-VPC, or an equivalent enterprise pattern -- and can defend the tradeoffs. Comfortable with VPCs, firewalls, load balancers, private cluster architecture, DNS, and routing. On-premises networking experience (VLANs, BGP, L2/L3 fabrics, pfSense / Fortinet / Palo Alto / Cisco) is a strong plus.
  • CI/CD and Docker -- concepts over tooling. You can build and optimize Dockerfiles (multi-stage builds, layer caching, small/secure base images) and have owned full CI/CD pipelines end-to-end. Tooling is flexible -- GitHub Actions, GitLab CI, Azure Pipelines, Jenkins, Argo Workflows, etc. -- but you should be able to clearly articulate the full lifecycle of a typical pipeline, and explain how CI/CD changes when the deployment target is Kubernetes (ArgoCD / FluxCD, GitOps patterns, progressive delivery).
  • Observability -- you have built this before. You have stood up a full observability stack from scratch and operated it in production -- metrics, logs, traces, alerting, on-call. Familiarity with the Grafana stack (Grafana, Mimir, Tempo, Loki, Pyroscope, OnCall, Prometheus) is a strong plus. Bonus points if you have experimented with agent-assisted SRE workflows or LLM-driven incident triage.
  • SSO and identity. When you bring a new tool into the platform, your instinct is to wire it into a central IdP rather than leave it on local accounts. Comfortable with OpenID Connect, SAML, and traditional directory services (LDAP / Active Directory), and you have integrated tools with an IdP like Keycloak, Okta, Azure AD, or equivalent.
  • Linux and automation fundamentals. Strong Linux proficiency (RHEL/Ubuntu or equivalent) including basic performance and networking debugging. Comfort with infrastructure-as-code (Terraform / Terragrunt / Pulumi or equivalent) and configuration management.
  • Ownership mindset. Comfortable operating in a high-ownership environment where you make architecture decisions, push them to production, and own the outcomes.
  • Optional but valuable: hands-on experience operating any of Kafka, Redis, PostgreSQL, OpenSearch -- at production scale, including HA, backup/restore, and upgrade planning.
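On the "design-level, not just operator-level" networking point: one habit we value is sanity-checking an addressing plan before any Terraform runs. The sketch below uses Python's standard-library ipaddress module to carve a hub-and-spoke plan out of a supernet and verify the ranges are disjoint; the supernet, prefix sizes, and spoke count are hypothetical, not Menlo's actual topology:

```python
import ipaddress

# Hypothetical plan: carve one /16 supernet into a hub /20 plus spoke /20s,
# and verify the allocations are non-overlapping before provisioning.
supernet = ipaddress.ip_network("10.40.0.0/16")
subnets = list(supernet.subnets(new_prefix=20))  # 16 x /20 blocks

hub, *spokes = subnets[:5]  # one hub VPC + four spoke VPCs
print(f"hub:    {hub}")
for i, spoke in enumerate(spokes, 1):
    print(f"spoke{i}: {spoke}")

# Overlap check: every pair of allocated ranges must be disjoint.
allocated = [hub, *spokes]
assert all(a == b or not a.overlaps(b)
           for a in allocated for b in allocated)
```

Being able to defend a plan like this (why /20s, why that many spokes, what happens when a spoke outgrows its block) is exactly the tradeoff conversation the role involves.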

Bonus points for:

  • Experience with OpenStack in production: Nova, Neutron, Cinder, Trove, Horizon, and CLI administration.
  • Experience with KVM virtualization and storage backends like Ceph or Rook-Ceph on Kubernetes.
  • Familiarity with vLLM internals: PagedAttention, continuous batching, tensor parallelism.
  • Background in AI/ML infrastructure or GPU cluster operations at scale.
  • Experience with KEDA or event-driven autoscaling patterns in production.
  • Prior open-source contributions to Kubernetes, OpenStack, or adjacent projects.
  • Kernel-level Linux debugging and performance tuning.
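Several of the bonus areas are easier to discuss concretely in an interview than to list. For example, the payoff of continuous batching in vLLM-style serving can be shown with a toy decode loop that refills freed batch slots every step instead of waiting for the whole batch to drain. This is a deliberately simplified model (one token per request per step, no memory accounting), not vLLM's actual scheduler:

```python
from collections import deque

def decode_steps(request_lengths, max_batch, continuous=True):
    """Toy decode loop: each step, every in-flight request emits one token.
    With continuous batching, freed slots are refilled immediately;
    with static batching, the batch drains fully before refilling."""
    pending = deque(request_lengths)
    active = []  # remaining tokens per in-flight request
    steps = 0
    while pending or active:
        if continuous or not active:
            # Admit new requests into any free batch slots.
            while pending and len(active) < max_batch:
                active.append(pending.popleft())
        steps += 1
        # Every active request emits one token; finished ones leave.
        active = [r - 1 for r in active if r > 1]
    return steps

# Mixed request lengths: continuous batching needs fewer decode steps,
# because short requests stop riding along with the long one.
lens = [8, 2, 2, 2, 2]
print(decode_steps(lens, max_batch=2, continuous=True))   # 8
print(decode_steps(lens, max_batch=2, continuous=False))  # 12
```

The gap widens as request-length variance grows, which is the intuition behind continuous batching (and, together with PagedAttention's memory management, behind vLLM's throughput).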

Why Join Menlo?

Most infrastructure teams manage someone else's cloud. At Menlo, you own the metal. Menlo Cloud is a first-class investment built from the ground up, and it sits at the center of everything we do, from coding agents to humanoid robots. You will have genuine ownership over a platform that is technically ambitious, cost-conscious by design, and critical to the mission. If you want to build infrastructure that actually matters and have the autonomy to do it right, this is the place.
