The fundamental lesson of the original GitHub Copilot (i.e. completions) is that AI tooling is an endless sequence of divergence and re-convergence between the human and the AI. This is the “Co” in “Copilot”.
In most AI literature, an agent is fundamentally regarded as autonomous: able to make its own decisions and take its own actions – perhaps asking for human input, but essentially in charge. In the world of Copilot we must discard this thinking, and instead embrace a variation of the concept suited to working with humans: coagents.
What is a coagent? A coagent is a controllable, cooperative unwinding of the steps and decision points of an agent or AI-infused iterative workflow.
That is, if an autonomous agent makes sequential internal progress A → B → C, the corresponding coagent is one that can:
- propose step A to the human
- receive confirmation of A and/or adjustment to take step A’
- propose step B’ to the human, based on taking A or A’
- receive confirmation of B’ and/or adjustment to take step B”
- propose step C” to the human, based on taking B’ or B”
- receive confirmation of C” and/or adjustment to take step C”’
etc.
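The propose/confirm/adjust rounds above can be sketched as a small loop. This is a minimal illustration, not an actual Copilot API: the `Proposer` and `Reviewer` callables, and the representation of a step as a plain string, are all hypothetical simplifications. The key property is that each new proposal is conditioned on the steps actually taken – A or A′, B′ or B″ – not on the agent's original plan.

```python
from typing import Callable, Optional

# Hypothetical types for illustration: a "step" is a string describing a
# proposed action. The proposer derives the next step from the history of
# steps actually taken; the reviewer returns None to accept a proposal,
# or an adjusted step to take instead.
Proposer = Callable[[list[str]], Optional[str]]
Reviewer = Callable[[str], Optional[str]]

def coagent_loop(propose_next: Proposer, review: Reviewer) -> list[str]:
    """Unwind an agent's A -> B -> C progression into propose/confirm rounds."""
    taken: list[str] = []
    while (proposal := propose_next(taken)) is not None:
        adjustment = review(proposal)            # human confirms or adjusts
        taken.append(adjustment or proposal)     # take A or A', B' or B'', ...
    return taken

# Usage with a scripted plan and a reviewer that adjusts one step:
plan = ["analyze repo", "draft patch", "run tests"]

def propose_next(taken: list[str]) -> Optional[str]:
    return plan[len(taken)] if len(taken) < len(plan) else None

def review(step: str) -> Optional[str]:
    return "draft minimal patch" if step == "draft patch" else None

result = coagent_loop(propose_next, review)
# -> ["analyze repo", "draft minimal patch", "run tests"]
```

In a real coagent the proposer would be the model conditioned on the adjusted history, so a single human correction reshapes every downstream proposal rather than being overwritten by the original plan.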
The coagent can also propose selecting between multiple courses of action, questions to answer, topics to learn, and options to choose between. A coagent stays aligned with the human; a human stays aligned with the coagent. A coagent is an agent designed to have a human-in-the-loop, an agent pulled apart into its individual steps and offerings.
This applies to all human-meaningful, human-controllable steps and decision points in the operation of an agent, and to the coordination logic that selects and coordinates agents. It can also mean unwinding most or all “loops” or search strategies in agents – and determining the points where human control, observation, correction and alignment are needed. It also means humans can go back, refine choices and outputs, and work through the ramifications of doing so.
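Going back is simple to state precisely: refining an earlier choice invalidates everything the agent proposed after it, because those proposals were conditioned on the old step. A one-function sketch (the step strings are hypothetical, for illustration only):

```python
def rewind(taken: list[str], step_index: int, refined: str) -> list[str]:
    """Replace the step at `step_index` with a refined choice.

    Steps before the refined decision point are kept; steps after it are
    dropped, since downstream proposals were conditioned on the old step
    and must be re-proposed from the refined prefix.
    """
    return taken[:step_index] + [refined]

history = ["extract build steps", "edit Dockerfile", "run build"]
history = rewind(history, 1, "edit Makefile")
# -> ["extract build steps", "edit Makefile"]; the agent re-proposes from here
```

The point of the sketch is that “going back” is not an undo of text edits but a truncation of the decision history, after which the coagent resumes proposing.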
This is the thinking we applied to the design of Copilot Workspace. For example, the validation portion of Copilot Workspace extracts the build steps for a repository in order to show them to the user and give the user control over them. Most steps of Copilot Workspace itself are like this – at the tips, some steps are autonomous agents, but the overall flow is best characterised as a coagent, or co-agentic.
In the world of Copilot we should be talking primarily about coagents. This applies to any extensibility model for AI-infused tools as well.