C3 AI transforms untapped, intuitional enterprise knowledge into dynamic, reliable, and scalable agentic systems.

By Sina Pakazad, Vice President, Data Science, C3 AI; Ali Panahi, Lead Data Scientist, C3 AI; Ian Wu, Senior Data Scientist, C3 AI; Yang Song, Senior Data Scientist, C3 AI; John Abelt, Lead Product Manager, Gen AI, C3 AI; Romain Juban, Vice President, Data Science, C3 AI; and Henrik Ohlsson, Chief Data Scientist, Data Science, C3 AI

Many enterprise business processes stand to gain significantly from the implementation of agent-based and multi-agent workflows. While agentic workflows have found considerable traction in consumer applications, their adoption within enterprises has progressed more gradually due to a confluence of unique challenges. C3 AI’s enterprise-grade approach to creating agentic workflows, built on years of experience with generative and agentic AI systems, addresses these barriers head-on by delivering reliable, scalable, and secure agent-based automation tailored for real-world demands.

The Barriers to Enterprise-Scale Agentic Workflows

A primary hurdle in deploying agentic AI in the enterprise lies in the fact that numerous enterprise processes are deeply rooted in tacit institutional knowledge — knowledge that has historically proven difficult and time-consuming to extract and operationalize. Furthermore, enterprises operate with extensive private datasets spanning diverse modalities, alongside proprietary tools largely unseen by large language models (LLMs) during their training. This discrepancy poses challenges to ensuring the reliability and robustness of AI-driven workflows. Unlike consumer applications, where a degree of unpredictability might be acceptable, enterprises demand a high degree of reliability and repeatability. Consequently, achieving enterprise-readiness for agentic workflows necessitates greater time and effort, which can slow down deployment and broader adoption.

Moreover, the efficacy of even the most meticulously designed agentic workflows is intrinsically linked to the quality of the underlying tooling. If the tools an agent depends on are unreliable, overly complex, or lack essential functionality, the workflow will inevitably falter, irrespective of its design. This creates a significant challenge: enterprise development teams must concurrently maintain, refine, and tailor agentic workflows while also developing the novel tools these workflows require for business operations. Faced with potentially understaffed teams and protracted development cycles, this dual focus often leads to delays and compromises in quality across both domains.

Introducing STAFF: C3 AI’s Framework for Multi-Agent Automation

To directly address these challenges, we have developed STAFF (specification to tiny agent fine-tuning framework), an intelligent suite of agents engineered to automate the end-to-end creation and deployment of multi-agent workflows specifically for enterprise environments.

STAFF empowers business and process owners to articulate their requirements directly in natural language through a process specification. This eliminates the need for direct developer intervention in the initial stages and significantly simplifies the capture and formalization of crucial institutional knowledge.

Inside the Process Extraction Agent

At the core of STAFF lies a specialized process extraction agent. This intelligent agent iteratively engages with users, extracting a comprehensive process description and a clear articulation of requirements in natural language. This agent goes beyond mere information gathering; it intelligently translates the extracted information into a structured workflow while simultaneously enforcing essential logical and structural constraints. These embedded constraints are paramount in guaranteeing the robustness and reliability of the workflow upon deployment. By automating this critical extraction and structuring process, STAFF substantially reduces the time and effort traditionally associated with operationalizing complex business processes.

Once the workflow blueprint is generated, STAFF leverages a team of code agents to automatically translate these defined requirements into a fully functional multi-agent system. This translation process begins with decomposition of the generated workflow into logical modules, followed by the generation of code in a similarly modular fashion. This approach not only enhances the overall quality and resilience of the resulting system but also facilitates better adherence to established best practices in software development. This structured methodology also significantly improves the efficiency of the coding agents, thereby minimizing the reliance on dedicated development teams for routine implementation.
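As a rough illustration of this modular pattern, the sketch below generates code one module at a time and then assembles the results. The helper `call_code_agent` and the module fields are hypothetical stand-ins used for exposition, not part of STAFF or the C3 Agentic AI Platform.

```python
# Minimal sketch of modular code generation. The helper `call_code_agent` and the
# module fields are hypothetical stand-ins, not part of STAFF or the C3 platform.

def call_code_agent(prompt: str) -> str:
    """Placeholder for an LLM-backed coding agent call."""
    return f"# generated code for: {prompt}\n"

def generate_workflow_code(modules: list[dict]) -> str:
    """Generate code module by module, then assemble the overall workflow."""
    generated = []
    for module in modules:
        prompt = (
            f"Implement module '{module['name']}' with inputs {module['inputs']} "
            f"and outputs {module['outputs']}. Requirements: {module['requirements']}"
        )
        generated.append(call_code_agent(prompt))
    # A final assembly step wires the per-module code into one workflow.
    return "\n".join(generated)

modules = [
    {"name": "fetch_data", "inputs": ["query"], "outputs": ["records"],
     "requirements": "Use the approved enterprise data API."},
    {"name": "summarize", "inputs": ["records"], "outputs": ["summary"],
     "requirements": "Produce a one-paragraph summary for review."},
]
print(generate_workflow_code(modules))
```

Generating each module against an explicit input/output contract is what keeps the pieces independently testable and reusable.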

Key Enablers Behind STAFF

Three key advancements lay the groundwork for STAFF’s powerful workflow creation capabilities:

  1. Advanced Reasoning and Coding with LLMs and Agents: This enables the dynamic generation and iterative refinement of sophisticated workflows.
  2. Breakthroughs in Agentic Frameworks: These advancements facilitate improved coordination and collaboration among multiple agents to effectively execute complex, multi-step processes.
  3. Next-Generation C3 Agentic AI Platform Capabilities: This provides a robust suite of tools for the comprehensive management, real-time monitoring, and intelligent optimization of agentic workflows, including granular dependency tracking and seamless orchestration.

As these foundational areas continue to advance, STAFF is architected to leverage those advancements directly, ensuring its capabilities continuously improve over time.

By seamlessly automating both workflow generation and subsequent execution, STAFF significantly alleviates the operational burden on enterprise development teams. This allows these valuable teams to strategically refocus their efforts on building essential, higher-level tools that drive greater efficiency, broader automation, and deeper integration across critical business processes.

Beyond the direct automation of predefined workflows, STAFF possesses the remarkable capability to create entirely new agents and tools (within reasonable constraints) to overcome internal roadblocks. When STAFF identifies limitations in its available resources to achieve a specific objective, it can autonomously generate new agents or specialized tools to address these challenges, thereby expanding the ecosystem of readily available tooling. Subject to user validation, these newly created tools can also be made accessible for others within the organization to leverage, fostering a more adaptive and continuously evolving enterprise environment.

Furthermore, we believe STAFF represents a significant step towards a future where static applications evolve into dynamic, user-centric platforms. These dynamic applications can be intuitively tailored by users themselves to precisely match their specific processes and individual usage patterns. By empowering users with the tooling provided by the application, they gain the flexibility to refine, extend, and adapt their workflows, ensuring that automation remains tightly aligned with real-world business operations and evolves in tandem with the ever-changing needs of the enterprise.

Adapting and Scaling

While STAFF-generated workflows often leverage the power of large or commercially available LLMs, we recognize that enterprise constraints such as cost considerations, latency requirements, stringent security protocols, or the necessity for offline deployment may necessitate a more tailored approach. To effectively address these critical requirements, STAFF incorporates the tiny agent fine-tuning framework (TAFF). TAFF enables enterprises to distill the core reasoning capabilities of a complex workflow into a smaller, more efficient, and fine-tuned LLM.

The crucial training data for this smaller, specialized model is intelligently mined from existing enterprise data and the rich history of interactions with STAFF during workflow creation and subsequent refinement. Complementing this, STAFF employs advanced synthetic data generation techniques and sophisticated distillation recipes to further enhance the performance and efficiency of the distilled model. This comprehensive approach ensures that the resulting distilled model retains the essential reasoning capabilities required for the specific task while adhering strictly to critical enterprise operational requirements.
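To give a hedged sense of what mining this training data could involve, the sketch below converts logged workflow interactions into prompt/completion pairs suitable for fine-tuning a smaller model. The log format, field names, and output file are illustrative assumptions rather than the actual TAFF pipeline.

```python
import json

# Illustrative only: the log format, field names, and output file are assumptions,
# not the actual TAFF data pipeline.
interaction_log = [
    {"step": "triage_ticket", "context": "Customer reports a login failure.",
     "large_model_output": "Route to identity team; severity: medium."},
    {"step": "triage_ticket", "context": "Invoice total does not match the PO.",
     "large_model_output": "Route to finance operations; severity: low."},
]

def build_finetuning_examples(log: list[dict]) -> list[dict]:
    """Turn logged large-model decisions into prompt/completion pairs
    that a smaller model can be fine-tuned on."""
    examples = []
    for record in log:
        examples.append({
            "prompt": f"Step: {record['step']}\nContext: {record['context']}\nDecision:",
            "completion": " " + record["large_model_output"],
        })
    return examples

# Write the distillation data in the JSONL format most fine-tuning tooling expects.
with open("taff_finetune_examples.jsonl", "w") as f:
    for example in build_finetuning_examples(interaction_log):
        f.write(json.dumps(example) + "\n")
```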

Empowering the Enterprise

Ultimately, STAFF empowers business process owners by placing control directly in their hands. This significantly reduces the traditional reliance on development teams for every automation initiative and effectively eliminates key bottlenecks that often hinder the widespread adoption of automation. By decentralizing the workflow creation process, development teams can strategically refocus their expertise on advancing core enterprise tooling, accelerating the overall pace of automation, and enabling deeper, more seamless integration across disparate business processes.

The inherently modular design of both STAFF and the workflows it generates further supports the implementation of robust built-in approval flows, comprehensive automated auditing, and clear interpretability — providing business owners with the confidence to autonomously deploy these generated workflows and enabling others to reliably reuse them as valuable tools within their own operational domains.

This shift from centralized development to empowered ownership sets the stage for the next critical phase: defining the workflow itself.

Figure 1: The Process Extraction Agent uses organizational metadata—including user roles, permissions, tools, and data access—to collaboratively extract workflow structures conversationally through natural language.

 
The first step in creating an agentic workflow for a given process is extracting a clear definition of that process. This is the responsibility of the process extraction agent within the STAFF suite. By interacting with users through natural language, this agent systematically captures the structure of the workflow. To achieve this, it leverages organizational metadata, which includes key details such as:

  • User roles and permissions
  • Data availability and access rights
  • Available tools or APIs (which may be static code-based or powered by other agents)
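As a rough illustration, this contextual metadata can be pictured as a structured object handed to the agent; the schema below is an assumption made for exposition, not the platform’s actual representation.

```python
# Illustrative sketch of the organizational metadata the agent draws on.
# The schema is an assumption made for exposition, not the platform's actual format.
organizational_metadata = {
    "user": {
        "role": "maintenance_planner",
        "permissions": ["read:sensor_data", "create:work_order"],
    },
    "data_sources": {
        "sensor_readings": {"access": "granted", "modality": "timeseries"},
        "maintenance_history": {"access": "granted", "modality": "tabular"},
    },
    "tools": [
        {"name": "detect_anomalies", "kind": "static_api"},
        {"name": "draft_work_order", "kind": "agent_backed"},
    ],
}
```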

Using this contextual information, the agent translates the user’s process description — covering logical process steps, objectives, and dependencies — into an initial workflow draft. This draft is structured as a directed graph, where:

  • Each node represents a step in the process.
  • Each edge defines interactions between these steps.
  • The workflow may include cycles, accommodating iterative or feedback-driven processes.

Each step in the workflow utilizes an LLM, which can assume different roles such as:

  1. Generator: Producing initial outputs or refinements for a given step
  2. Critic: Reviewing and providing actionable feedback
  3. Tool-user: Leveraging enterprise APIs and tools to complete tasks
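Putting the graph structure and step roles together, a minimal illustrative sketch of how such a workflow might be represented is shown below. The class and field names are assumptions for exposition, not STAFF’s internal data model.

```python
from dataclasses import dataclass, field

# Illustrative sketch only; class and field names are assumptions, not STAFF's internal model.

@dataclass
class Step:
    name: str           # node in the workflow graph
    role: str           # "generator", "critic", or "tool_user"
    instructions: str   # natural-language prompt for the LLM backing this step

@dataclass
class Workflow:
    steps: dict[str, Step] = field(default_factory=dict)
    edges: list[tuple[str, str]] = field(default_factory=list)  # (source, target) interactions

    def add_step(self, step: Step) -> None:
        self.steps[step.name] = step

    def add_edge(self, source: str, target: str) -> None:
        # Cycles are allowed: feedback-driven processes naturally introduce them.
        self.edges.append((source, target))

# Example: a generator/critic loop, which intentionally forms a cycle.
wf = Workflow()
wf.add_step(Step("draft_summary", "generator", "Draft a summary of the incident."))
wf.add_step(Step("review_summary", "critic", "Check the draft for accuracy and policy compliance."))
wf.add_edge("draft_summary", "review_summary")
wf.add_edge("review_summary", "draft_summary")  # feedback edge
```

Keeping the workflow as an explicit graph is what later allows STAFF to reason about its structure, for example when checking for cycles during decomposition.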

To ensure accuracy, the process extraction agent not only adheres to the user’s intent but also enforces logical and structural constraints that govern workflow execution. Additionally, it can identify gaps in tooling that prevent STAFF from creating the workflow for the underlying process. If such gaps exist, the agent informs the user that the workflow cannot be fully generated and highlights the specific missing tools or functionalities.
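A simplified sketch of this kind of gap check is shown below: it compares the tools each step references against the tools known from the organizational metadata and reports anything missing. The function and field names are illustrative assumptions, not STAFF internals.

```python
# Simplified, illustrative gap check; names are assumptions, not STAFF internals.

def find_tooling_gaps(workflow_steps: list[dict], available_tools: set[str]) -> dict[str, list[str]]:
    """Return, per step, any referenced tools that are not available."""
    gaps = {}
    for step in workflow_steps:
        missing = [tool for tool in step.get("tools", []) if tool not in available_tools]
        if missing:
            gaps[step["name"]] = missing
    return gaps

steps = [
    {"name": "detect_anomaly", "tools": ["detect_anomalies"]},
    {"name": "open_work_order", "tools": ["draft_work_order", "erp_connector"]},
]
gaps = find_tooling_gaps(steps, available_tools={"detect_anomalies", "draft_work_order"})
if gaps:
    # This is the point where the agent would tell the user the workflow cannot be
    # fully generated and highlight the specific missing tools.
    print(f"Missing tools: {gaps}")
```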

Once the initial workflow is generated, users can refine it through natural language interactions, dynamically modifying the structure as needed. Figure 1 provides a high-level overview of this process, while Figure 2 illustrates how user interactions can iteratively improve the workflow.

Figure 2: The user can provide feedback to the Process Extraction Agent through a chat interface to better align the extracted structure with user requirements.

 
This iterative interaction results in a finalized workflow that not only defines its structure but also captures key execution details for each step, including:

  • Inter-step interactions: Input/output signatures and the nature of their dependencies
  • Tools and APIs: The tools/APIs to be used within the step and the requirements governing their usage
  • LLM instructions and personas: The prompts and behaviors of LLM-powered steps
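For a flavor of what such a finalized step specification could contain, here is a purely illustrative example; the fields and values are assumptions rather than the exact format STAFF produces.

```python
# Purely illustrative example of one step's execution details; the fields and
# values are assumptions, not the exact format STAFF produces.
step_spec = {
    "name": "generate_work_order",
    "inputs": {"anomaly_report": "json"},       # inter-step input signature
    "outputs": {"work_order_draft": "json"},    # inter-step output signature
    "depends_on": ["detect_anomaly"],
    "tools": [
        {"name": "draft_work_order",
         "usage_requirements": "only for assets the user has permission to modify"},
    ],
    "llm": {
        "persona": "maintenance planning assistant",
        "instructions": "Draft a work order from the anomaly report; flag safety-critical items.",
    },
}
```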

At this point in the process, the user has compiled the necessary information that enables automatic code generation to bring the workflow to life.

Building the System: Workflow Decomposition and Code Generation

With STAFF, we take a thoughtful and intentional approach to code generation, guided by two key objectives:

  • Creating high-quality, reusable atomic workflows and tools to enhance organizational efficiency and consistency.
  • Enabling modular code generation to improve performance and understandability, and to provide greater opportunities for explainability and auditing.

Figure 3: Once the workflow structure is finalized, it is decomposed into logical components, code is generated for each component, and the code for the overall workflow is then assembled.

 
To achieve these goals, we begin by decomposing the workflow based on its structure. If the workflow forms a directed acyclic graph (DAG), the available tooling is likely sufficient. Cycles in the graph, however, may indicate gaps that call for new agents, which STAFF can create. In such cases, we identify the strongly connected components that could correspond to these missing elements.
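For intuition, the sketch below uses the open-source networkx library (assumed to be available) to check whether a workflow graph is a DAG and, if it is not, to surface its strongly connected components as candidates for new agents. This is a simplification of the decomposition STAFF performs, not its implementation.

```python
import networkx as nx  # widely used open-source graph library, assumed available

# Illustrative decomposition sketch, not the STAFF implementation.
workflow = nx.DiGraph()
workflow.add_edges_from([
    ("ingest_data", "draft_summary"),
    ("draft_summary", "review_summary"),
    ("review_summary", "draft_summary"),   # feedback loop introduces a cycle
    ("review_summary", "publish_report"),
])

if nx.is_directed_acyclic_graph(workflow):
    print("Workflow is already a DAG; existing tooling may be sufficient.")
else:
    # Strongly connected components with more than one node correspond to cycles,
    # i.e., candidate sub-workflows that may need to become dedicated agents.
    candidates = [c for c in nx.strongly_connected_components(workflow) if len(c) > 1]
    print(f"Candidate components for new agents: {candidates}")

    # Collapsing each component into a single node restores a DAG.
    condensed = nx.condensation(workflow)
    assert nx.is_directed_acyclic_graph(condensed)
```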

Once these components are identified, we prompt the user to determine their reusability. If the user confirms and has the necessary permissions, we generate the corresponding code. The user can then interact with and test the newly generated tools/agents, providing direct feedback in natural language to the coding agent. Once finalized, the code is either committed to the repository or prepared for code review. Upon merging, the tools are versioned and persisted, making them easily discoverable and reusable. This process helps track lineage and manage dependencies across agents and multi-agent workflows.

After integrating these new components, we revise the workflow graph, ensuring it now forms a DAG. At this stage, the final framework code is generated, allowing further user interaction and iterative refinement. Once the process is complete, the workflow is ready for merging and deployment. Figure 3 provides a high-level overview of this process.

STAFF in Action

Figure 4: STAFF can be effortlessly integrated into conversational sandbox environments, enabling business process owners to rapidly prototype workflows and automate their processes.

 
The STAFF suite of agents can be accessed seamlessly via natural language, enabling business users to rapidly prototype and automate workflows. Figure 4 demonstrates this capability within a simple sandbox chat interface. In this example, the user guides STAFF to build a marketing campaign workflow—iteratively refining its structure, generating executable code, and testing it through conversation. The user continues to collaborate with STAFF agents to enhance both the workflow logic and the underlying code.

STAFF is now available as a general generative AI capability within C3 Generative AI, leveraging the platform’s robust tooling, data access, and security infrastructure. This integration empowers business process owners to drive end-to-end automation at scale within their critical and highly contextual processes.

Figure 5: Integrated within C3 Generative AI, STAFF allows users to design workflows with full awareness of the underlying data model, data, and available tools.

 
Figure 5 illustrates STAFF in action, guiding the creation of a repeatable and approved process for anomaly detection and work order generation. Here, the user takes a step-by-step approach to constructing the workflow, with STAFF continuously aware of the underlying data model, data, and available tools—ensuring the generated workflow is both executable and testable in real-time.

Looking Ahead

As enterprises seek to scale automation and enhance operational agility, the need for robust, intelligent, and interpretable agentic workflows has never been greater. In this blog, we introduced STAFF, a suite of AI agents that empower business owners within enterprises, accelerate development cycles, and turn process knowledge into deployable, adaptable systems. STAFF represents a major leap forward in automating process extraction, code generation and distillation, and workflow deployment, dramatically reducing the barriers to adopting AI-driven workflows. In the next post in this series, we discuss our distillation process for unlocking greater efficiency and flexibility in the orchestration of such workflows, even in fully air-gapped environments.

Learn more about C3 AI’s recent advancements that make it easier for enterprises to adopt and scale agentic AI.

 

About the Authors

Henrik Ohlsson is the Vice President and Chief Data Scientist at C3 AI. Before joining C3 AI, he held academic positions at the University of California, Berkeley, the University of Cambridge, and Linköping University. With over 70 published papers and 30 issued patents, he is a recognized leader in the field of artificial intelligence, with a broad interest in AI and its industrial applications. He is also a member of the World Economic Forum, where he contributes to discussions on the global impact of AI and emerging technologies.

Sina Khoshfetrat Pakazad is the Vice President of Data Science at C3 AI, where he leads research and development in Generative AI, machine learning, and optimization. He holds a Ph.D. in Automatic Control from Linköping University and an M.Sc. in Systems, Control, and Mechatronics from Chalmers University of Technology. With experience at Ericsson, Waymo, and C3 AI, he has contributed to AI-driven solutions across healthcare, finance, automotive, robotics, aerospace, telecommunications, supply chain optimization, and process industries. His recent research has been published in leading venues such as ICLR and EMNLP, focusing on multimodal data generation, instruction-following and decoding from large language models, and distributed optimization. Beyond this, he has co-invented patents on enterprise AI architectures and predictive modeling for manufacturing processes, reflecting his impact on both theoretical advancements and real-world AI applications.

Ali Panahi is a Lead Data Scientist in Generative AI R&D at C3 AI, specializing in machine learning and deep learning systems. With over nine years of industry experience, he has led the design, development, and deployment of scalable, distributed AI platforms, notably including a scalable, low-latency inference platform serving large-scale generative AI applications. Ali earned his Ph.D. in Computer Science from Virginia Commonwealth University, focusing on improving the space efficiency of deep neural networks. His research on efficient transformer architectures and embedding techniques has been featured at leading AI conferences such as NeurIPS and ICLR, and he is a co-inventor on several patents related to machine learning model administration and pipeline optimization. Before joining C3 AI, Ali was a founding software engineer at Breezio, developing a real-time collaboration web application.

Ian Wu is a Senior Data Scientist at C3 AI and researcher on the C3 AI GenAI team. He is interested in developing novel methods for generating synthetic data, especially for instruction-following and reasoning, as well as methods for enabling language models to self-improve. Ian holds a Master’s degree in Machine Learning from UCL and an undergraduate degree in Natural Sciences from the University of Cambridge.

Yang Song received the Ph.D. degree in Electronic and Information Engineering from The Hong Kong Polytechnic University (Hong Kong, China) in 2014, specializing in space-time signal processing. From 2014 to 2016, he was a Post-Doctoral Research Associate at Universität Paderborn (Germany), where he worked on structure-revealing data fusion for neuroscience applications. He then joined Nanyang Technological University (Singapore) as a Senior Research Fellow, leading research in SLAM, robust deep learning and graph neural networks until 2022. Currently, he is a Senior Research Scientist at C3 AI, driving innovations in AI and large-scale data analytics.

John Abelt is the Lead Product Manager for C3 Generative AI. John has been with C3 AI for 7 years, and previously led ML Products for the C3 AI Platform. John holds a Master’s in Computer Science from University of Illinois at Urbana-Champaign and a Bachelor of Science in Systems Engineering from University of Virginia.

Romain Juban is a Vice President of Data Science at C3 AI, where he leads the data science development and implementation of Generative AI applications. His work focuses on leveraging cutting-edge technologies such as large language models, embedders, retrieval augmented generation, agentic frameworks, and fine-tuning. He received his Master’s Degree in Civil and Environmental Engineering from Stanford University and his Bachelor’s Degree in Mathematics and Computer Science from Ecole Polytechnique, France.