Job Description:
We are hiring a core partner to co-own our next-gen anomaly detection & attribution (ADA) platform that powers real-time monitoring, automated diagnosis, and decision support across multiple games/regions. You will lead end-to-end work—architecture, development, maintenance, and analytical impact—while advancing our LLM-Agent capabilities to turn noisy signals into clear, actionable narratives for business teams.
What You’ll Do
● Own the ADA platform lifecycle: design, implement, and maintain robust pipelines for T+1/near-real-time anomaly detection, multi-baseline benchmarking (global/region/country), and multi-source attribution (holidays, versions, events, migrations, user behavior).
● Advance detection & inference: productionize change-point/outlier methods, time-series features, causal/ablation checks, and automated “storyline” generation that explains what happened, why, and what to do next.
● Build AI Agents around the system: design tool-use and reasoning flows (ReAct/LangGraph or similar) that enable conversational drill-downs for ops/PMs.
● Productize insights: ship dashboards/alerts (email/Chat/WeCom) and concise decision memos; iterate with stakeholders in publishing, ops, marketing, and analytics.
● Collaborate: partner with DS/Eng/PM to scope, roadmap, and ship; document architecture, APIs, and runbooks.
Job Requirements:
Must-Have Qualifications:
● Technical core:
○ Strong Python engineering (clean code, testing, packaging); solid SQL for large analytical workloads.
○ Hands-on experience with time-series/anomaly detection (change-point, robust stats, seasonality/holiday adjustment, multivariate signals) and attribution logic.
○ Practical exposure to LLM application patterns (tool calling, function calling, retrieval/RAG, Agent planning), and at least one framework/API (OpenAI/Claude/DeepSeek, LangChain/LangGraph, etc.).
● Systems/product mindset: ability to translate business pain points into measurable detection/attribution logic and ship reliable features on short cycles.
● Ownership & reliability: you build guardrails, monitors, and docs; you debug in production and prevent regressions.
Nice-to-Have / Preferred:
● Model eval & prompt engineering: rubric design, offline eval sets, golden tasks, prompt/test versioning, data flywheels.
● Causal & experimentation: diff-in-diff, CUPED, synthetic controls, online A/B testing at scale.
● Dashboards & alerts: Superset/Tableau/Looker/Metabase; alerting via Slack/WeCom/Email with noise-reduction heuristics.
● Gaming analytics domain: retention funnels, reactivation, event/version rollout attribution, fraud/smurf detection.
● Infra & ops: Docker/K8s, CI/CD, IaC; cloud stacks (GCP/AWS/Azure); cost/perf tuning.