HAAS: A Policy-Aware Framework for Adaptive Task Allocation Between Humans and Artificial Intelligence Systems

Key Takeaways

  • Deciding how to distribute work between humans and AI systems is a central challenge in organisational design.
  • Most approaches treat this as a binary choice, yet the operational reality is richer: humans and AI routinely share tasks or take complementary roles depending on context, fatigue, and the stakes involved.
  • Governing that distribution -- balancing efficiency, oversight, and human capability -- remains an open problem.
  • This paper presents Human-AI Adaptive Symbiosis (HAAS), an implemented framework for adaptive task allocation in software engineering and manufacturing.
Paper Abstract

Deciding how to distribute work between humans and AI systems is a central challenge in organisational design. Most approaches treat this as a binary choice, yet the operational reality is richer: humans and AI routinely share tasks or take complementary roles depending on context, fatigue, and the stakes involved. Governing that distribution -- balancing efficiency, oversight, and human capability -- remains an open problem. This paper presents Human-AI Adaptive Symbiosis (HAAS), an implemented framework for adaptive task allocation in software engineering and manufacturing. HAAS combines two coupled components: a rule-based expert system that enforces governance constraints before any learning occurs, and a contextual-bandit learner that selects among feasible collaboration modes from outcome feedback. Task-agent fit is represented through five auditable cognitive dimensions and a five-mode autonomy spectrum -- from human-only to fully autonomous -- embedded in a reproducible benchmark spanning both domains. Three empirical findings emerge. First, governance is not a binary switch but a tunable design variable: tighter constraints predictably convert autonomous AI assignments into supervised collaborations, with domain-specific costs and benefits. Second, in manufacturing, stronger governance can improve operational performance and reduce fatigue simultaneously -- a workload-buffering effect that contradicts the usual framing of governance as pure overhead. Third, no single governance setting dominates across all contexts; moderate governance becomes increasingly competitive as the learner accumulates experience within the governed action space. Together, these findings position HAAS as a pre-deployment workbench for comparing and inspecting human--AI allocation policies before organisational commitment.

Deciding how to divide work between humans and AI is a major challenge in modern organizations. While many people view this as a simple choice—either a human does the job or an AI does—the reality is much more complex, involving varying levels of fatigue, trust, and the need for human oversight. The paper "HAAS: A Policy-Aware Framework for Adaptive Task Allocation Between Humans and Artificial Intelligence Systems" introduces a new framework called Human-AI Adaptive Symbiosis (HAAS) to help organizations manage this distribution more effectively. HAAS provides a structured way to test and implement different collaboration strategies, ensuring that efficiency and human capability are balanced before any work begins.

How the Framework Works

HAAS functions as a three-layer system designed to make task allocation both logical and auditable. First, it evaluates every subtask using five cognitive dimensions: repetitiveness, technical depth, creativity, ambiguity, and human interaction. This creates a "score" that determines how well-suited a task is for AI versus a human.
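The scoring idea above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the five dimension names come from the paper, but the weighting and the squashing rule are assumptions made here for clarity.

```python
# Hypothetical sketch of the five-dimension task profile. Dimension names
# follow the paper; the scoring rule itself is an illustrative assumption.
from dataclasses import dataclass


@dataclass
class TaskProfile:
    repetitiveness: float     # 0..1, higher tends to favor AI
    technical_depth: float    # 0..1, higher tends to favor AI
    creativity: float         # 0..1, higher tends to favor a human
    ambiguity: float          # 0..1, higher tends to favor a human
    human_interaction: float  # 0..1, higher tends to favor a human


def ai_suitability(t: TaskProfile) -> float:
    """Toy score in [0, 1]: AI-leaning dimensions add, human-leaning subtract."""
    favors_ai = (t.repetitiveness + t.technical_depth) / 2
    favors_human = (t.creativity + t.ambiguity + t.human_interaction) / 3
    raw = favors_ai - favors_human            # in [-1, 1]
    return max(0.0, min(1.0, 0.5 + raw / 2))  # squash to [0, 1]


# A repetitive, low-ambiguity task scores higher than a creative, ambiguous one.
routine = TaskProfile(0.9, 0.6, 0.1, 0.2, 0.1)
creative = TaskProfile(0.1, 0.3, 0.9, 0.8, 0.9)
```

Any monotone combination of the five dimensions would serve the same purpose; the point is that the score is auditable, since each dimension's contribution can be read off directly.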
The second layer, a "PolicyEngine," acts as a set of guardrails, enforcing organizational rules, such as safety requirements or mandatory human validation, before any work is assigned. Once these rules have filtered the feasible options, the third layer, a contextual-bandit learner, selects the best way to complete the task from five collaboration modes. These modes range from "Human-Only" and "Copilot" through "Supervised" to "Fully Autonomous," allowing for a nuanced approach rather than a binary "on/off" switch for automation.
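The two-part engine can be sketched as a rule filter followed by a simple bandit. This is a hedged approximation: the mode names follow the paper where given ("ai_led" is a placeholder for the unnamed fifth mode), the guardrail rules are invented examples, and the epsilon-greedy learner stands in for whatever contextual-bandit algorithm HAAS actually uses.

```python
# Sketch of the two-part engine: guardrails filter the action space first,
# then a learner picks among what remains. Rules and learner are assumptions.
import random

MODES = ["human_only", "copilot", "supervised", "ai_led", "fully_autonomous"]
# "ai_led" is a placeholder name; the paper names only four of the five modes.


def policy_filter(task: dict) -> list:
    """Governance applied before any learning: infeasible modes are removed."""
    feasible = list(MODES)
    if task.get("safety_critical") or task.get("requires_human_validation"):
        feasible.remove("fully_autonomous")  # example guardrail, not from the paper
    return feasible


class EpsilonGreedyBandit:
    """Stand-in for the contextual-bandit learner: learns mode values from rewards."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = {m: 0 for m in MODES}
        self.values = {m: 0.0 for m in MODES}

    def select(self, feasible: list) -> str:
        if random.random() < self.epsilon:
            return random.choice(feasible)        # explore
        return max(feasible, key=lambda m: self.values[m])  # exploit

    def update(self, mode: str, reward: float) -> None:
        self.counts[mode] += 1
        n = self.counts[mode]
        self.values[mode] += (reward - self.values[mode]) / n  # running mean
```

The key structural point survives the simplification: the learner can only ever choose within the governed action space, so tightening the rules predictably shifts assignments from autonomous modes toward supervised ones.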

Key Findings

The researchers tested HAAS across software engineering and manufacturing domains and discovered three significant insights:

  • Governance is a Design Tool: Governance is not just a hurdle or an overhead cost; it is a flexible design variable. By tightening or loosening rules, organizations can predictably shift tasks from autonomous AI execution to supervised collaboration, allowing them to balance costs and benefits based on their specific needs.

  • The Workload-Buffering Effect: In manufacturing, the study found that stronger governance can actually improve performance while simultaneously reducing human fatigue. This contradicts the common assumption that governance always slows down operations.

  • No Single Strategy Wins Everywhere: The researchers found that no single governance setting is best in every situation. However, as the system gains experience, moderate governance becomes increasingly competitive, suggesting that the best approach is one tuned to the specific context of the work.

A Workbench for Organizations

Ultimately, HAAS serves as a pre-deployment "workbench." It allows organizations to simulate and compare different allocation policies in a controlled environment before committing to them in real-world workflows. By embedding human factors like fatigue, trust, and the potential for skill erosion directly into the system’s feedback loop, HAAS helps leaders design human-AI teams that are not only efficient but also sustainable and aligned with safety and regulatory standards.
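The workbench idea can be illustrated with a toy simulation. Everything here is made up for illustration: the numeric dynamics are invented, and this is not the paper's benchmark; it only shows the shape of the comparison an organization would run, i.e. estimating both performance and human-factor outcomes per governance setting before deployment.

```python
# Toy illustration of the pre-deployment "workbench" idea: simulate each
# governance setting, then compare performance and fatigue. All numbers
# below are invented for illustration and carry no empirical weight.
import random


def run_episode(strict_governance: bool, rng: random.Random):
    """Return (performance, fatigue) for one simulated task under a setting."""
    if strict_governance:
        # Supervised collaboration: steadier output, buffered human workload.
        performance = 0.80 + rng.uniform(-0.05, 0.05)
        fatigue = 0.30 + rng.uniform(-0.05, 0.05)
    else:
        # Loose governance: more autonomy, more variance, more human fatigue.
        performance = 0.85 + rng.uniform(-0.15, 0.15)
        fatigue = 0.60 + rng.uniform(-0.10, 0.10)
    return performance, fatigue


def compare(n: int = 1000) -> dict:
    """Average (performance, fatigue) per setting over n simulated tasks."""
    rng = random.Random(0)
    results = {}
    for strict in (True, False):
        perf, fat = zip(*(run_episode(strict, rng) for _ in range(n)))
        results[strict] = (sum(perf) / n, sum(fat) / n)
    return results
```

Because fatigue is part of the episode's return, it feeds the same feedback loop as performance, which is how human factors stay visible in the comparison rather than being treated as externalities.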
