Scalable Inference Architectures for Compound AI Systems: A Production Deployment Study

Key Takeaways

  • The paper is a production deployment study of a modular, platform-agnostic inference architecture developed at Salesforce to support compound AI use cases, including Agentforce (autonomous AI agents) and ApexGuru (AI-powered code analysis).
  • Modern enterprise AI applications increasingly rely on compound AI systems - architectures that compose multiple models, retrievers, and tools to accomplish complex tasks.
  • Deploying such systems in production demands inference infrastructure that can efficiently serve concurrent, heterogeneous model invocations while maintaining cost-effectiveness and low latency.
  • The system integrates serverless execution, dynamic autoscaling, and MLOps pipelines to deliver consistent low-latency inference across multi-component agent workflows.
  • Production results demonstrate over 50% reduction in tail latency (P95), up to 3.9x throughput improvement, and 30–40% cost savings compared to prior static deployments.
Paper Abstract

Modern enterprise AI applications increasingly rely on compound AI systems - architectures that compose multiple models, retrievers, and tools to accomplish complex tasks. Deploying such systems in production demands inference infrastructure that can efficiently serve concurrent, heterogeneous model invocations while maintaining cost-effectiveness and low latency. This paper presents a production deployment study of a modular, platform-agnostic inference architecture developed at Salesforce to support compound AI use cases including Agentforce (autonomous AI agents) and ApexGuru (AI-powered code analysis). The system integrates serverless execution, dynamic autoscaling, and MLOps pipelines to deliver consistent low-latency inference across multi-component agent workflows. We report production results demonstrating over 50% reduction in tail latency (P95), up to 3.9x throughput improvement, and 30 to 40% cost savings compared to prior static deployments. We further present a novel analysis of compound-system-specific challenges including multi-model fan-out overhead, cascading cold-start propagation, and heterogeneous scaling dynamics that emerge uniquely when serving agentic workloads. Through detailed case studies and operational lessons, we illustrate how the architecture enables compound AI systems to scale model invocations in parallel, handle bursty multi-agent workloads, and support rapid model iteration - capabilities essential for operationalizing agentic AI at enterprise scale.

Scalable Inference Architectures for Compound AI Systems: A Production Deployment Study
Modern enterprise AI applications are shifting from using single, standalone models to "compound AI systems"—architectures that combine multiple models, data retrievers, and external tools to perform complex tasks. While this approach improves performance, it creates significant infrastructure challenges, such as managing concurrent model calls, handling bursty traffic, and minimizing latency across multi-step workflows. This paper details a modular, platform-agnostic inference architecture developed at Salesforce to support these complex systems, providing a blueprint for scaling agentic AI in production environments.

Addressing Compound System Challenges

Traditional model-serving infrastructure is often designed for single-model workloads, making it ill-suited for the unique demands of compound AI. The authors identify three specific hurdles: "fan-out" overhead, where one user request triggers multiple simultaneous model calls; "cascading cold starts," where a delay in one component stalls the entire pipeline; and heterogeneous scaling, where different models in the same system require different amounts of resources. The proposed architecture addresses these by decoupling model hosting from orchestration, allowing each component to scale independently based on its specific traffic patterns rather than relying on a one-size-fits-all approach.
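The fan-out problem described above is easiest to see in code. The sketch below is illustrative only (the paper does not publish its implementation): `call_component` is a hypothetical stand-in for a retriever or model endpoint, and the latencies are made up. It shows why issuing component calls concurrently keeps a request's critical path near the slowest call rather than the sum of all calls.

```python
import asyncio
import time

# Hypothetical stand-in for the retriever/model endpoints a compound
# request fans out to; the sleep simulates network + inference latency.
async def call_component(name: str, latency_s: float) -> str:
    await asyncio.sleep(latency_s)
    return f"{name}:ok"

async def fan_out(components: dict[str, float]) -> list[str]:
    # Issue all component calls concurrently, so the request's critical
    # path is max(latencies) rather than their sum.
    tasks = [call_component(n, lat) for n, lat in components.items()]
    return await asyncio.gather(*tasks)

components = {"retriever": 0.05, "ranker": 0.08, "llm": 0.10}
start = time.perf_counter()
results = asyncio.run(fan_out(components))
elapsed = time.perf_counter() - start
print(results)
print(elapsed)  # roughly 0.10s, not the ~0.23s a sequential chain would take
```

A sequential orchestrator would pay the summed latency on every request, which is why fan-out overhead dominates tail latency once pipelines grow past a couple of stages.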

How the Architecture Works

The system utilizes a tiered design that includes a central "Prediction Service" to manage requests and an orchestration engine (the Atlas Reasoning Engine) to coordinate tasks. To maintain efficiency, the platform supports two deployment patterns: lightweight serverless functions for simple tasks and persistent microservices for heavier workloads. A key innovation is the "coordinated pre-warming" strategy, which proactively prepares downstream models in a pipeline when an initial request is received. This significantly reduces the time users spend waiting for models to initialize after periods of inactivity.
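The coordinated pre-warming idea can be sketched as follows. This is a minimal toy model, not the paper's implementation: the component names, the `WARM` set, and the sleep durations are all illustrative assumptions. The point is that warming downstream stages in the background while the first stage runs hides their cold starts from the request's critical path.

```python
import asyncio

WARM = set()  # components whose instances are already initialized

async def warm_up(component: str) -> None:
    # Simulated cold start: model load, container boot, etc.
    if component not in WARM:
        await asyncio.sleep(0.1)
        WARM.add(component)

async def invoke(component: str, payload: str) -> str:
    await warm_up(component)   # no-op if the component is already warm
    await asyncio.sleep(0.02)  # steady-state inference cost
    return f"{component}({payload})"

async def handle_request(payload: str) -> str:
    # Coordinated pre-warming: as soon as the request enters the pipeline,
    # start warming the *downstream* stages in the background while the
    # first stage runs, so later stages never block on their own cold starts.
    prewarm = [asyncio.create_task(warm_up(c)) for c in ("ranker", "llm")]
    retrieved = await invoke("retriever", payload)
    await asyncio.gather(*prewarm)  # typically already finished by now
    ranked = await invoke("ranker", retrieved)
    return await invoke("llm", ranked)

result = asyncio.run(handle_request("q"))
print(result)  # llm(ranker(retriever(q)))
```

Without pre-warming, each stage's 0.1s cold start would land on the critical path in sequence, which is exactly the cascading cold-start propagation the paper identifies.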

Key Performance Results

The deployment of this architecture at Salesforce has led to substantial operational improvements. Compared to previous static infrastructure, the system achieved over a 50% reduction in tail latency (P95) and up to 3.9x higher throughput. By moving to an autoscaling, pay-per-use model, the team also realized 30–40% cost savings by eliminating the need to pay for idle GPU resources. Furthermore, the architecture proved resilient under stress, maintaining stable performance even when traffic spiked by 10x, whereas older systems would have faced significant latency degradation or failure.

Operational Lessons for Practitioners

The study highlights that managing compound AI systems requires more than just optimizing individual models; it requires a focus on the infrastructure layer that connects them. The authors emphasize the importance of independent scaling for each model component, which prevents resource contention and ensures that high-demand models get the resources they need without over-provisioning others. Additionally, the authors note that while serverless architectures are highly cost-effective for variable workloads, teams should consider reserving dedicated capacity for consistently heavy, high-volume tasks to balance performance and cost.
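The serverless-versus-dedicated trade-off described above reduces to a per-component break-even calculation. The sketch below is a back-of-the-envelope cost model with invented rates (the paper reports no pricing); it only illustrates the decision rule: pay-per-use wins for bursty, low-volume components, while a dedicated instance wins once traffic is consistently heavy.

```python
# Illustrative rates only -- not from the paper.
SERVERLESS_COST_PER_CALL = 0.002  # pay-per-use, no idle cost
DEDICATED_COST_PER_HOUR = 4.0     # always-on GPU instance
CALLS_SERVED_PER_HOUR = 3600      # capacity of one dedicated instance

def cheaper_tier(sustained_calls_per_hour: float) -> str:
    """Pick the cheaper hosting tier for a component's sustained traffic."""
    serverless_cost = sustained_calls_per_hour * SERVERLESS_COST_PER_CALL
    # Ceiling division: number of dedicated instances needed (at least 1).
    instances = max(1, -(-int(sustained_calls_per_hour) // CALLS_SERVED_PER_HOUR))
    dedicated_cost = instances * DEDICATED_COST_PER_HOUR
    return "serverless" if serverless_cost < dedicated_cost else "dedicated"

print(cheaper_tier(500))   # bursty, low volume  -> serverless
print(cheaper_tier(3000))  # consistently heavy  -> dedicated
```

Because each component in a compound system sees different traffic, this decision should be made per component, which is the independent-scaling point the authors emphasize.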
