Analysis & Commentary
The Co-Performance Imperative
When AI Is No Longer a Tool but a Co-Performer — Redesigning Performance Management for the Human-AI Era

Random Thoughts  ·  Research & Practice  ·  May 2026  ·  SSRN Working Paper
This post is based on the author’s working paper: The Co-Performance Imperative: Redesigning Performance Management for the Human-AI Era. Full paper, citations, and references available at https://ssrn.com/abstract=6703358
The Problem
Every Performance Management System Was Built for a World That No Longer Exists

Spend any time advising organizations on AI adoption and a pattern becomes impossible to ignore. Executives invest millions in AI deployment. Productivity metrics improve. And yet, something is quietly going wrong beneath the surface — something the dashboards do not capture and the appraisal cycle does not measure.

The problem is not the AI. The problem is that organizations are running twenty-first-century human-AI partnerships through twentieth-century performance management systems — systems designed when the only performer was a human being working alone.

From Taylor’s scientific management to Drucker’s Management by Objectives to today’s OKR frameworks, every performance management system ever built rests on the same unspoken assumption: the performing agent is always human, always singular, always accountable in a volitional sense. AI violates this assumption not incrementally but structurally — and it does so across every dimension simultaneously.

“We are measuring the wrong unit, rewarding the wrong behaviors, and routing learning to the wrong parties — and the cost of this misalignment compounds silently with every passing quarter.”

The result is not merely inefficiency. It is a slow, invisible accumulation of organizational fragility — one that will not announce itself until a critical AI system fails, regresses, or is withdrawn, and the organization discovers that the human skills needed to cope have quietly atrophied.

What Breaks and Why
Six of the Seven Fractures Where AI Breaks Performance Management Irreparably

These are not hypothetical concerns. They are structural failure points that manifest daily in organizations deploying AI at scale. Four of the six shown here — the Appraisal Asymmetry, the Skill Atrophy Shadow, the Organizational Sociology Inversion, and the Capability Paradox — are original constructs formally named in the full paper for the first time and described in the definition boxes below.

01
Attribution Fracture
When humans and AI co-produce an outcome, causal attribution becomes epistemically indeterminate — not just practically difficult.
02
Asymmetric Agency
AI’s performance directly affects the human’s appraisal, yet the AI cannot receive feedback or bear any organizational consequence.
03
Appraisal Asymmetry ★
Only the human bears the psychological, financial, and career costs of shared failure — producing four predictable behavioral distortions.
04
Skill Atrophy Shadow ★
AI competence growth masks human competence decline. The organization appears stronger on every metric while becoming more fragile.
05
Organizational Sociology Inversion ★
AI teams now possess the knowledge that dictates domain expert workflows — reversing the historic power relationship between IT and the business.
06
Capability Paradox ★
Maximum organizational confidence in AI aligns precisely with maximum organizational vulnerability to AI failure.

★ Original construct or phenomenon introduced in this paper.

Original Construct: The Appraisal Asymmetry
The structural emotional imbalance in human-AI dyads where only the human participant bears the psychological, financial, and career costs of shared failure — producing over-caution, blame externalization, learned helplessness, and excessive deference as predictable rational responses.

Original Phenomenon: The Skill Atrophy Shadow
The latent organizational vulnerability created when AI competence growth masks the decline of independent human competency — such that measured performance improves while the organization’s resilience quietly erodes.

Original Phenomenon: The Organizational Sociology Inversion
The power reversal in AI-integrated organizations in which AI teams possess the knowledge and decision authority that domain experts lack — creating accountability paradoxes where managers must appraise employees whose work depends on systems the managers themselves cannot evaluate.

Original Phenomenon: The Capability Paradox
The phenomenon whereby the more reliable an AI becomes, the less humans are expected to supervise it — yet the less capable they become of supervising it when it eventually, and inevitably, fails. Maximum confidence and maximum vulnerability arrive together.
The Central Contribution
The Co-Performance Dyad: Appraising the Right Unit

The first and most fundamental intervention is deceptively simple: stop appraising the individual and start appraising the human-AI pair. The Co-Performance Dyad is the minimal unit of performance analysis in AI-integrated organizations — the smallest unit that captures the actual causal structure of work.

A Co-Performance Dyad exists when task outcomes are materially different depending on which specific human-AI pair performs the work — when the human cannot achieve equivalent results by substituting a different AI without significant recalibration, and when the AI cannot achieve equivalent results with a different human without significant adaptation.

“The individual was never a metaphysical fact of organizational life — it was an administrative convenience. That convenience has reached its expiration date.”

Three dyad types carry distinct appraisal implications. In an Augmentation Dyad, AI extends human reach. In a Decision Dyad, attribution is genuinely shared. In an Autonomous Dyad, accountability is AI-primary for execution and human-primary for oversight quality. Crucially, dyad type is not fixed by job role — the same professional may operate across all three within a single working day.


Four New Metrics for the Dyad
Dyadic Surplus
Output quality above what either the human or the AI could achieve independently. The primary value creation metric of co-performance.

Calibrated Trust Index
How accurately the human’s reliance on AI tracks the AI’s actual performance. Both over-reliance and under-reliance represent calibration failure.

Handoff Efficiency
The smoothness of work transfer between human and AI — measured by rework frequency, override accuracy, and resolution speed.

Adaptation Velocity
How quickly the pair recovers to baseline performance after a failure, a task change, or an AI system update.
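The paper names these metrics without prescribing formulas, but their intent can be made concrete. The Python sketch below is purely illustrative: every function name, signature, and scoring convention is an assumption of this post, not taken from the paper, and all quality and reliance scores are assumed to lie in [0, 1].

```python
# Illustrative sketch only. The paper names these dyad metrics but does
# not prescribe formulas; all names and conventions here are assumptions.

def dyadic_surplus(dyad_quality, human_solo, ai_solo):
    """Output quality above the better of the two solo baselines --
    the primary value-creation metric of co-performance."""
    return dyad_quality - max(human_solo, ai_solo)

def calibrated_trust_index(reliance, ai_accuracy):
    """1 minus the mean absolute gap between how much the human relied on
    the AI and how well the AI actually performed, per task.
    1.0 = perfect calibration; over- and under-reliance both lower it."""
    gaps = [abs(r - a) for r, a in zip(reliance, ai_accuracy)]
    return 1.0 - sum(gaps) / len(gaps)

def handoff_efficiency(handoffs, reworks):
    """Share of human-AI handoffs that did not require rework."""
    return 1.0 - reworks / handoffs

def adaptation_velocity(cycles_to_baseline):
    """Inverse of the review cycles the pair needs to return to baseline
    after a failure or AI update (fewer cycles = faster recovery)."""
    return 1.0 / cycles_to_baseline

# A dyad scoring 0.92 where the stronger solo baseline is 0.80
# creates a surplus of 0.12 over working alone.
print(round(dyadic_surplus(0.92, 0.75, 0.80), 2))
print(round(calibrated_trust_index([0.9, 0.2, 0.6], [0.8, 0.4, 0.6]), 2))
```

Note the design choice in `dyadic_surplus`: comparing against the *better* solo baseline means a positive surplus certifies genuine complementarity, not just the stronger partner carrying the pair.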
The Structural Injustice
The Appraisal Asymmetry: When Only One Partner Pays

Of all the fractures AI creates in performance management, the Appraisal Asymmetry is the most consequential for individual careers and organizational culture. When a human-AI team underperforms, only one member absorbs the consequences. The AI is quietly retrained with no organizational weight attached.


This structural injustice generates four predictable behavioral responses: Over-caution — under-utilizing AI to protect individual records; Blame externalization — attributing failure entirely to the AI; Learned helplessness — motivation decay when skill cannot fully prevent AI-generated errors; and Excessive deference — accepting AI outputs without critical evaluation as a hedge against personal accountability.

These are not character flaws. They are rational responses to a structurally irrational appraisal environment. The Appraisal Asymmetry amplifies existing workplace inequalities, placing the heaviest burden on those with the least organizational capital to defend themselves.

The Solution
The Dynamic Accountability Gradient: Responsibility That Moves

Fixed accountability rules are both systematically distorted by organizational politics and systematically unfair. What is needed is a framework that moves accountability in proportion to context.

The Dynamic Accountability Gradient (DAG)
Accountability for a performance outcome is a function of four variables: (D) dyad type, (F) the task’s position on the human-AI capability frontier, (M) the AI system’s demonstrated maturity level, and (C) the human’s calibration history with this AI for this task type.

The DAG operates through four named archetypes: Human-Primacy (failure on the human-dominant side), AI-Primacy (hallucination, model drift, data bias the human could not detect), Shared (neither alone explains the outcome), and Escalation (root cause is an organizational design failure). Critically, the gradient determines where the learning response goes — not where blame goes.
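One way to make the gradient concrete is a small scoring function. The paper defines the DAG qualitatively; the base shares, weights, thresholds, and linear form in this Python sketch are invented for illustration only, and the Escalation archetype is handled as a separate flag because an organizational root cause cannot be inferred from the four dyad variables alone.

```python
# Minimal illustrative sketch of the Dynamic Accountability Gradient.
# The paper defines the DAG qualitatively; the base shares, weights, and
# thresholds below are invented assumptions, not taken from the paper.

# Starting human share of the learning response, by dyad type (D).
DYAD_BASE = {"augmentation": 0.7, "decision": 0.5, "autonomous": 0.3}

def accountability_gradient(dyad_type, frontier_human_side, ai_maturity,
                            human_calibration, org_failure=False):
    """Return (human_share, archetype) for routing the learning response.

    frontier_human_side (F), ai_maturity (M), and human_calibration (C)
    are all in [0, 1]. Higher F (task on the human-dominant side of the
    frontier), higher M (well-proven AI), and higher C (well-calibrated
    human) each shift the learning response toward the human partner.
    """
    if org_failure:  # root cause is an organizational design failure
        return 0.0, "Escalation"
    share = (DYAD_BASE[dyad_type]
             + 0.20 * (frontier_human_side - 0.5)
             + 0.15 * (ai_maturity - 0.5)
             + 0.15 * (human_calibration - 0.5))
    share = min(1.0, max(0.0, share))
    if share >= 0.65:
        return share, "Human-Primacy"
    if share <= 0.35:
        return share, "AI-Primacy"
    return share, "Shared"

# A well-calibrated human using a mature AI on a human-dominant task:
# the learning response routes primarily to the human side.
print(accountability_gradient("augmentation", 0.9, 0.8, 0.9))
```

Consistent with the paper's framing, the returned share routes the *learning response*, not blame: it tells the organization whether the next intervention is human development, AI retraining, or a redesign of the dyad itself.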


Post-Failure Diagnostic — Five Questions
Question 1: Was the critical failure on the human-dominant, AI-dominant, or genuinely co-dependent side of the capability frontier?
Question 2: What was the AI system’s documented performance history on this task type at the time of the failure?
Question 3: What was the human’s demonstrated calibration record with this AI for this task?
Question 4: Were organizational oversight protocols adequate in terms of time, information, and authority?
Question 5: Were time pressure, resource constraints, or psychological factors in play that plausibly prevented effective oversight?
What the Full Framework Covers
Beyond the Three Core Constructs

Learning Reciprocity proposes that every significant performance event should trigger a simultaneous inquiry into both human and AI learning needs as a coupled question. The AI Learning Plan mirrors the human Individual Development Plan, and the ALNI Process provides the structured method for identifying what the AI needs to learn next.

The Adaptive Appraisal Framework addresses the temporal problem: PM systems must have version numbers. As AI capability crosses defined thresholds, the appraisal criteria must automatically trigger a Review-of-Reviews, ensuring the system that measures performance keeps pace with the work it measures.

The AI Performance Partnership Function fills the governance vacuum. HR owns human performance but lacks AI literacy. The AI team owns model performance but lacks PM authority. A new standing organizational function is required to manage dyadic performance as an integrated system. The paper also proposes concrete redesigns across all five pillars of PM and identifies seven empirical research priorities.

“The choice is not whether to integrate AI. It is whether organizations have the institutional courage to manage the performance of the partnership they have already created.”
Working Paper — Available Now on SSRN
The Co-Performance Imperative: Redesigning Performance Management for the Human-AI Era
A Unified Framework for Appraisal, Accountability, and Learning When AI Is No Longer a Tool but a Co-Performer
Read the Full Paper on SSRN →
D Souza, Lucas (2026)  ·  SSRN Working Paper  ·  https://ssrn.com/abstract=6703358
