Preparing the AI-First Enterprise Workforce for the Age of Agents

As AI agents become more capable, how should the roles and responsibilities of AI-first enterprise employees evolve? Which skills will matter at each level? How will technical and business teams collaborate to leverage agents and agent-based systems effectively? As agent capabilities advance, employees will move from executing tasks to supervising workflows, orchestrating agent collectives, and setting mission-level strategy. Preparing for this transformation requires a clear view of how roles evolve across the six levels of AI agents, and of how responsibilities will shift among AI specialists, application users, and supporting teams.

Introduction

In previous posts, I outlined a spectrum of AI agents for the AI-first enterprise, describing the capabilities of agents at each level and how they impact the workforce. I also highlighted level-specific design patterns that AI-first enterprises can use when deploying agent-based systems. Yet an AI-first enterprise is not defined only by how advanced its software and robotic agents are. Its success depends equally on the humans who develop, interact with, guide, and orchestrate these agents.

This raises an urgent question: as AI agents become more capable, how should the roles and responsibilities of AI-first enterprise employees evolve? Which skills will matter at each level, and how will technical and business teams collaborate to leverage agents and agent-based systems effectively? As agent capabilities advance, employees will move from executing tasks to supervising workflows, orchestrating agent collectives, and setting mission-level strategy. Preparing for this transformation has two requirements: first, a clear view of how roles evolve across the previously defined agent levels; second, an understanding of how responsibilities will shift among AI specialists, application users, and supporting teams.

Evolving Roles and System Architectures

At the center of this transition are four archetypes.

  • The model developer creates AI models from scratch. Such models include large, medium, or small language models (xLMs), vision-language-action models (VLAMs), and more traditional classifiers. This developer functions as a system architect at the model level.
  • The model refiner adapts existing models using methods such as fine-tuning or RAG, acting as an adaptation-focused system architect.
  • The agent developer creates agents that incorporate or call models while adding reasoning, planning, memory, and orchestration logic. This developer functions as the agent-focused architect.
  • The application user interacts directly with agents or with business applications that embed them. The user performs tasks, validates outputs, and orchestrates multi-agent workflows.

Other enterprise personnel, from data scientists and engineers to IT staff and business domain experts, also play critical roles. Their responsibilities evolve alongside these archetypes to ensure data quality, infrastructure reliability, governance, and business alignment. The table below summarizes how these roles transform across the six levels of AI agents.

Employee Roles by Agent Level

| Agent Level | Model Developer | Model Refiner | Agent Developer | User | Supporting Roles |
|---|---|---|---|---|---|
| L1 | Minimal involvement | Minimal involvement | Develops rules and adds them to simple agents | Operator: invokes agent, monitors outcomes | Data engineers maintain pipelines; IT monitors uptime |
| L2 | Selects foundation models and builds LLMs | Fine-tunes selected models to company needs | Implements intent and manages dialog | Guide: guides agent behavior with prompts, validates outputs | Data scientists curate examples; business experts provide prompts and guardrails |
| L3 | Develops proprietary xLMs and components to augment model context | Tunes developed models for correct retrieval and dynamic context | Implements memory heuristics and context management for the models used | Supervisor: oversees multi-turn workflows, escalates edge cases | Data engineers maintain data pipelines for xLM development and dynamic context |
| L4 | Builds xLMs and xMMs specific to agent diversity | Adjusts models to match agent capabilities | Incorporates orchestration protocols into first- and third-party agents, boosts observability | Orchestrator: coordinates multiple agents, sets objectives/constraints | IT runs MLOps for distributed heterogeneous systems; domain experts define constraints |
| L5 | Designs safe self-learning pipelines and architectures | Curates self-learning datasets, monitors agent model drift | Implements reflective subsystems and exploration policies | Mission Architect: defines goals and learning objectives, oversees agent evolution | IT manages the continuous-learning infrastructure, agent ops, and bottleneck resolution |
| L6 | Ensures dynamic updating of multiple diverse, proprietary xLMs and xMMs | Oversees emergent behaviors, ensures ethics/compliance | Designs socio-technical ecosystems for collaboration of first- and third-party L5 agents | Mission Strategist: sets mission, monitors agent performance, authorizes interventions | IT ensures systemic resilience and governance; business units ensure continuous strategic alignment |

Three Observations

First, as agents advance, the user evolves from application operator to mission strategist. This transformation profoundly reshapes the work of business-unit personnel. Simply knowing how to operate applications such as Workday or Salesforce — even as they are augmented with AI — will no longer be enough. Instead, business employees must be able to define missions for multi-agent systems, monitor performance, adjust objectives dynamically, and reconfigure the agent mix as conditions change. This role resembles that of a military strategist, setting goals, constraints, and tactics while leaving execution to the system.

Second, over time, the model developer’s role changes, driven partly by enterprises gaining experience with foundation models and partly by plateauing model performance. In some companies, developers will build proprietary small or medium language and multimodal models (xLMs, xMMs). In others, the developer’s responsibilities will converge with those of the refiner and the agent developer. Some AI-first enterprises may even merge these roles entirely. Regardless of structure, explicit accountability must be defined:

  1. Who owns the models?
  2. Who owns the adaptation pipeline?
  3. Who is accountable for agent orchestration?
  4. Who authorizes the promotion of agents from testing to production?

Third, with the introduction of Level 4 agents, enterprises begin deploying multi-agent systems that combine first- and third-party agents. These agents incorporate proprietary and third-party models of varying types and sizes. Agent developers, business users, and IT teams must manage behaviors and interactions among heterogeneous agents. As a result, governance becomes essential. Multi-agent systems, especially those mixing internal and external components, introduce new risks: misaligned objectives, emergent behaviors, supply-chain vulnerabilities, and reasoning opacity that complicates root-cause analysis. Enterprises should establish an Agent Governance Framework that addresses model health, fine-tuning quality, agent orchestration, and mission alignment, and that includes deployment gates such as safe-fail modes, continuous observability and explainability, periodic red-team reviews of emergent behaviors, and escalation paths.
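To make the idea of deployment gates concrete, here is a minimal sketch of a promotion check an Agent Governance Framework might run before moving an agent from testing to production. All names and checklist items are hypothetical, chosen to mirror the dimensions above; a real framework would define its own evidence and thresholds.

```python
from dataclasses import dataclass

@dataclass
class GateReport:
    """Illustrative evidence an agent presents before promotion to production."""
    model_health_ok: bool        # model drift and eval metrics within bounds
    fine_tune_quality_ok: bool   # adaptation pipeline passed regression evals
    observability_enabled: bool  # tracing and explainability hooks are live
    red_team_reviewed: bool      # emergent behaviors reviewed this cycle
    safe_fail_mode: bool         # agent degrades gracefully on failure
    escalation_owner: str        # named owner for incident escalation

def may_promote(report: GateReport) -> bool:
    """Promotion gate: every check must pass and an escalation owner must be named."""
    return all((
        report.model_health_ok,
        report.fine_tune_quality_ok,
        report.observability_enabled,
        report.red_team_reviewed,
        report.safe_fail_mode,
        bool(report.escalation_owner.strip()),
    ))
```

The point is not the specific checks but that promotion becomes an explicit, auditable decision rather than an informal handoff.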

An Example

Imagine a large retailer piloting AI agent-based systems. With Level 2 agents, customer-service personnel guide conversational systems to resolve problems more quickly, while website search becomes easier and more effective through natural-language interaction. Inside the enterprise, software engineers use the same class of agents to boost their programming productivity. In the warehouse, semi-autonomous robots are supervised as they fulfill orders, shelve new arrivals, and re-stock returns. In the store, robots perform routine jobs such as cleaning, removing hazards, and providing security.

With Level 3 agents, customer support, software development, supply chain management, vendor disputes, and other business functions are handled largely autonomously, with only limited supervision. In the warehouse, robotic agents need little direction, though supervisors step in for multi-step cases where customer history or logistical constraints must be carefully considered.

At Level 4, software agents for logistics, fraud detection, and supplier communication coordinate with robotic agents that handle the physical movement of goods across warehouses. The Orchestrator user configures the agent set, defines objectives, such as minimizing return cycle time, and sets constraints on refund thresholds or supplier credits.
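An Orchestrator's configuration for this returns scenario might look like the following sketch. The agent names, objective string, and constraint keys are all hypothetical illustrations of the objectives and thresholds mentioned above, not a real product API.

```python
from dataclasses import dataclass, field

@dataclass
class MissionConfig:
    """Hypothetical orchestration spec an Orchestrator might author."""
    agents: list[str]                 # the agent set to coordinate
    objective: str                    # mission-level objective
    constraints: dict = field(default_factory=dict)

returns_mission = MissionConfig(
    agents=["logistics", "fraud_detection", "supplier_comms", "warehouse_robots"],
    objective="minimize_return_cycle_time",
    constraints={
        "max_auto_refund_usd": 200,       # refunds above this escalate to a human
        "max_supplier_credit_usd": 5000,  # cap on automatic supplier credits
    },
)
```

Expressing the mission declaratively lets the same constraints be versioned, reviewed, and enforced by governance tooling rather than living in individual prompts.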

At Level 5, software agents analyze return patterns to refine fraud detection, while robotic agents adjust handling strategies based on wear-and-tear data. The Mission Architect defines reward signals, such as balancing customer satisfaction, cost control, and sustainability goals, and ensures safe learning loops. Supervisors and Orchestrators provide the evaluation data that feed these loops, while the Architect validates that learning improves mission outcomes without introducing bias or instability.
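A reward signal of the kind the Architect defines could, in its simplest form, be a weighted combination of the three goals. The function below is a toy sketch under assumed conventions: all inputs are normalized to [0, 1], cost is inverted so lower cost scores higher, and the weights reflect one hypothetical Architect's priorities.

```python
def mission_reward(satisfaction: float, cost: float, sustainability: float,
                   weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Toy scalar reward balancing satisfaction, cost control, and sustainability.

    Inputs are assumed normalized to [0, 1]; cost is inverted so that
    lower cost yields higher reward. Weights sum to 1 and are a design choice.
    """
    w_sat, w_cost, w_sus = weights
    return w_sat * satisfaction + w_cost * (1.0 - cost) + w_sus * sustainability
```

In practice, reward shaping for self-learning agents is far subtler than a weighted sum, but even this form makes the Architect's trade-offs explicit and auditable.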

At Level 6, software and robotic agents operate as an emergent collective. They discover strategies that the enterprise did not pre-program. For example, the system may reassign warehouse robots to customer-service tasks during peak returns season or propose new supplier arrangements to reduce product defects. The Mission Strategist interprets these emergent behaviors, aligns them with corporate strategy, and decides whether to authorize systemic changes.

Business Leaders' Action Plan

  1. Map current roles against the six-level spectrum and identify gaps in orchestration, governance, and learning pipelines.
  2. Assign clear ownership for model goals, adaptation protocols, and agent orchestration responsibilities.
  3. Launch pilots that progress from Level 2 to Level 4, with named Guides and Orchestrators for each stage and governance gates at every transition.
  4. Build a training curriculum for each archetype, including practical rotations such as orchestration shadowing, model-refinement labs, and safety playbooks.
  5. Instrument processes for end-to-end observability and governance.
  6. Create an Agent Governance Board with legal, security, business, and technical representatives, and define clear rules for third-party agent usage.

Conclusion

This post does not attempt to predict a single, uniform future. Each AI-first enterprise will adopt a different mix of software and robotic agents, and will progress through the spectrum of agent levels at its own pace. Those choices will reflect differences in risk tolerance, domain complexity, and competitive context. What matters is that they are intentional, the result of deliberate planning rather than hasty reactions to competitive pressure.

As the table makes clear, the structural shifts in the workforce are unavoidable: application users evolve into mission designers, technical roles move steadily toward system architecture, and support functions advance from maintenance to governance. Enterprises that understand these shifts, prepare their people for orchestration and stewardship, and align governance accordingly will be positioned to turn agent capability into sustainable strategic advantage rather than operational surprise.
