Scope
- This is a full-stack AI engineering role with a strong focus on building intelligent, multi-agent applications that power supply chain automation at scale. The ideal candidate brings hands-on experience with LLM orchestration frameworks, Retrieval-Augmented Generation (RAG) pipelines, context graphs, and the Azure AI ecosystem. You will collaborate with product engineers, data scientists, and platform architects to design and ship production-grade GenAI capabilities across Blue Yonder's supply chain platform.
What You’ll Do
- Design, build, and maintain LLM-powered application components, adhering to clean code principles, Blue Yonder engineering standards, and effective test coverage.
- Develop and optimize Retrieval-Augmented Generation (RAG) pipelines, including chunking strategies, vector store management, embedding tuning, and retrieval relevance evaluation.
- Build and maintain multi-agent orchestration workflows using LangChain, LangGraph, or equivalent agentic frameworks, including inter-agent communication, tool use, memory, and state management.
- Implement context graph solutions to enable richer semantic understanding and cross-entity reasoning within supply chain domains.
- Design and integrate Azure AI services, including Azure OpenAI, Azure AI Search (formerly Cognitive Search), Azure Machine Learning, and related Azure PaaS components.
- Participate actively in team ceremonies: backlog grooming, sprint planning, daily stand-ups, and retrospectives.
- Contribute to prompt engineering best practices, LLM evaluation frameworks, and guardrail strategies for hallucination mitigation.
- Understand how changes to GenAI components affect downstream agents, dependent services, and end-customer experiences.
- Incorporate information security, responsible AI, and data privacy considerations into all development work.
- Identify production issues with LLM agents and escalate them to the team with relevant context (traces, prompts, token usage).
- Autonomously plan and perform routine changes to agent configurations, prompt templates, and model version upgrades.
- Independently resolve incidents within a defined set of GenAI service functions, including model fallback and latency issues.
- Handle service requests related to agent onboarding, tool integration, and knowledge base updates.
What We Are Looking For
- 2–5 years of software engineering experience with at least 1–2 years focused on GenAI / LLM application development.
- Strong proficiency in Python and/or TypeScript/JavaScript for building AI-powered backend and frontend services.
- Hands-on experience with LangChain and/or LangGraph for agentic orchestration, tool use, and memory management.
- Solid understanding of RAG architecture: document ingestion, chunking, embedding models, vector databases (e.g., Azure AI Search, Weaviate, Pinecone), and reranking.
- Experience building or working with multi-agent systems, including agent routing, orchestrator/worker patterns, and inter-agent handoffs.
- Working knowledge of the Azure AI ecosystem: Azure OpenAI Service, Azure AI Search, Azure Functions, Azure Container Apps, and related services.
- Familiarity with context graphs, knowledge graphs, or graph-based retrieval for enhancing LLM reasoning.
- Experience with prompt engineering, system prompt design, few-shot prompting, and chain-of-thought techniques.
- Understanding of LLM evaluation strategies: hallucination detection, faithfulness scoring, and output quality metrics.
Our Values
If you want to know the heart of a company, take a look at its values. Ours unite us. They are what drive our success, and the success of our customers. Does your heart beat like ours? Find out here: Core Values
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.