Spatial Support is hiring a Senior Backend & LLM Orchestration Lead to own the AI backend behind our 3D product experiences. You’ll level up our Python/FastAPI services and streaming-first LLM pipelines (LangChain/LangGraph or custom) to deliver sub-second, scene-aware answers. You’ll design secure, multi-tenant APIs and websockets, build on GCP (Cloud Run/Functions, Pub/Sub), and drive observability, CI/CD, and IaC. Expect async and GPU-heavy workloads, PostgreSQL + Redis, and tight collaboration with 3D/ML to expose object recognition and spatial reasoning via clean APIs. You bring 5+ years in production backends, strong auth/security chops, and a knack for debugging distributed systems. Bonus: RAG/agents, 3D/CAD exposure, Terraform/GitHub Actions. Remote-first, APAC-friendly. Help set the standard for instant, reliable, context-rich AI.
About Spatial Support
We’re building the AI backend that makes complex 3D products feel simple. Our platform fuses LLMs with scene-aware 3D context so customers can ask questions and get instant, accurate answers—like chatting with a product expert who “sees” the CAD.
The Role
As our Senior Backend & LLM Orchestration Lead, you’ll own the backend that powers this experience. Your first 90 days: assess and harden a Python/FastAPI orchestration pipeline (LangGraph/LangChain), push streaming-first responses toward sub-second latency, and enforce an auth-first, multi-tenant architecture on GCP. Your work is the foundation for our 2026 roadmap—near real-time, context-rich support for complex products.
What you’ll do
Design, ship, and maintain Python/FastAPI services that orchestrate multi-step LLM workflows and retrieve scene/3D context.
Optimize latency & throughput: async pipelines, streaming tokens, efficient GPU/compute usage.
Enforce auth-first design: integrate with our Clerk + GCP Cloud Functions auth layer; secure every API and websocket.
Own GCP operations: Cloud Run/Functions, Pub/Sub, Postgres, Redis; CI/CD; choose managed vs custom pragmatically.
Establish observability: structured logging, tracing, SLOs/alerts; build latency/error/throughput dashboards.
Partner with 3D/ML engineers to productize new models (object recognition, scene-aware logic) behind stable APIs.
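To give a flavor of the streaming-first shape these responsibilities describe, here is a minimal sketch using plain asyncio and a stand-in token generator rather than our actual FastAPI/LLM stack; every name here is illustrative, not part of our codebase.

```python
import asyncio
from typing import AsyncIterator

async def fake_llm_tokens(prompt: str) -> AsyncIterator[str]:
    # Stand-in for a real LLM client's token stream (illustrative only).
    for token in ["The", " hinge", " is", " part", " 42."]:
        await asyncio.sleep(0)  # yield control, as a network read would
        yield token

async def stream_answer(prompt: str, scene_context: dict) -> list[str]:
    # Orchestration step: prepend retrieved 3D scene context, then relay
    # tokens to the caller as they arrive instead of waiting for the full
    # answer (the core of sub-second perceived latency).
    enriched = f"[scene: {scene_context.get('object', 'unknown')}] {prompt}"
    chunks: list[str] = []
    async for token in fake_llm_tokens(enriched):
        chunks.append(token)  # in production: a websocket/SSE send per token
    return chunks

chunks = asyncio.run(stream_answer("What is this part?", {"object": "hinge"}))
print("".join(chunks))
```

The point is the shape, not the stand-in model: context retrieval happens before the stream opens, and nothing downstream ever buffers the whole answer.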
What you’ll bring
5+ years building backend APIs/services in Python (FastAPI/Flask), with clean REST/streaming designs.
Real experience integrating ML/LLM or data-heavy backends; LangGraph/LangChain (or custom orchestration) is a plus.
Security fundamentals: OAuth2/JWT, token handling, RBAC/ABAC, multi-tenant safety.
Cloud production chops (preferably GCP): Docker, CI/CD, IaC familiarity.
Postgres + Redis mastery: modeling, query tuning, caching strategies for high traffic.
Debugging distributed systems with logs/traces/profilers in async environments.
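On the security fundamentals above: in production you’d lean on Clerk and a vetted JWT library rather than hand-rolled crypto, but as an illustrative sketch of the HS256 signing and verification mechanics that underpin OAuth2/JWT flows (all names and the secret are made up):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding for each segment.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: bytes) -> str:
    # header.payload is the signing input; the HMAC-SHA256 tag is appended.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_hs256(token: str, secret: bytes):
    # Returns the claims dict on success, None on any failure.
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):  # constant-time comparison
        return None
    padded = body + "=" * (-len(body) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

secret = b"demo-secret"
token = sign_hs256({"sub": "user-1", "tenant": "acme", "role": "viewer"}, secret)
claims = verify_hs256(token, secret)
```

Multi-tenant safety then reduces to never trusting a `tenant` claim that didn’t survive this verification step.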
Nice to have
Production LLM agent workflows (RAG, tool-use, decision trees).
3D/graphics or spatial data exposure (CAD/BIM, gaming engines, AR/VR).
DevOps automation (Terraform, GitHub Actions), Prometheus/Grafana/Stackdriver.
Mentoring/setting standards; OSS contributions in Python/AI orchestration.
You’ll stand out with
Open-source impact (libs, frameworks, notable PRs).
Systems with standout reliability/latency (e.g., absorbing 10× traffic with minimal p95 regression).
Public talks/blogs on scalable AI systems.
Fast but careful shipping record (concept → prod in weeks, iterated with users).
Why it matters
If the backend lags, the 3D+AI magic breaks. You’ll fortify the pipeline that takes us from ~65% to ~85%+ automation accuracy by mid-2026, delivering trustworthy, scene-aware answers that feel instantaneous.
Work setup & benefits
Remote-first across APAC; ≥4h daily overlap with GMT+8.
USD $90k–$125k + equity; flexible hours; high autonomy and ownership.
Application & interview process
45-min chat (role/context)
Take-home (≈5 focused hours): minimal FastAPI service with auth + streaming LLM over provided context
Technical deep dive / systems interview
Final conversation (values, offer)
Spatial Support Singapore Office
1 Marina Bay, Singapore