Towards Semi-automatic Agentic Performance Engineering

(This blog post is written in a personal tone, but relates to our work at GitHub Next and may be moved to https://githubnext.com in future. A huge thank you to Peli de Halleux, Joe Zhou, Eddie Aftandilian, Russell Horton, Idan Gazit and many others at GitHub Next, and to the GitHub platform leadership of Mario Rodriguez. I'm … Continue reading Towards Semi-automatic Agentic Performance Engineering

What Kind of Programming is Natural Language Programming?

In previous posts I've written about Natural Language Programming, Dijkstra's Ghost - the End of The Symbolic Supremacy, and Ephemeral Editable Specifications (aka Extract, Edit, Apply). These touched on Natural Language Programming and the role of Specifications in AI-native programming. Today I'd like to step back and address an underlying question: what … Continue reading What Kind of Programming is Natural Language Programming?

On Natural Language Programming

Dijkstra's Ghost and the End of The Symbolic Supremacy. I recently found myself arguing with the ghost of Edsger Dijkstra on LinkedIn. This is not a comfortable position for a computer scientist to find themself in. More specifically, I was triggered by this LinkedIn post, which quoted Dijkstra's 1978 paper "On the foolishness of natural … Continue reading On Natural Language Programming

On Continuous AI for Test Improvement

Ever since we started working on "task-oriented programming" (aka vibe coding) in 2023, our group at GitHub Next have been throwing around ideas related to "continuous" tasks in software repositories: Continuous Code Cleanup, Continuous Documentation, and so on. This finally bubbled up as the Continuous AI project, locating it within the tradition of Continuous … Continue reading On Continuous AI for Test Improvement

GitHub Agentic Workflows

I'm excited to share our latest research demonstrator from GitHub Next: "GitHub Agentic Workflows - Natural Language Programming for GitHub Actions" (https://githubnext.com/projects/agentic-workflows/). Agentic Workflows focuses on expressing repository‑level behaviors in natural language and running them on GitHub. Agentic Workflows is not a product and not even a technical preview; it's a vehicle for exploring the agentic design space, … Continue reading GitHub Agentic Workflows

Extract, Edit, Apply – a design pattern for AI

Sharing a write-up of one of our investigations at GitHub Next: Extract, Edit, Apply. Spec-oriented programming is usually seen as "Spec-first", with a compilation step to turn specs into code: Specs are permanent, and Code is ephemeral. This has many obvious problems, including the instability of LLM code-generation under otherwise small or unimportant changes to … Continue reading Extract, Edit, Apply – a design pattern for AI
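
To make the pattern concrete, here is a minimal sketch of its shape in F#. This is not the GitHub Next implementation: Code and Spec are plain strings and the model-backed steps are hypothetical stubs, but it shows the key inversion: code stays permanent while the spec is extracted, edited, and applied on demand.

// A minimal sketch of the pattern's shape, not the GitHub Next implementation.
type Code = string
type Spec = string

// Hypothetical stand-in: ask a model to summarise the code as an editable spec.
let extract (code: Code) : Spec =
    sprintf "Spec describing %d characters of code" code.Length

// Hypothetical stand-in: ask a model to change the code to satisfy the edited spec.
let apply (editedSpec: Spec) (code: Code) : Code =
    code + sprintf "\n// updated to satisfy: %s" editedSpec

// Code is the permanent artifact; the spec exists only for the duration of the edit.
let extractEditApply (edit: Spec -> Spec) (code: Code) : Code =
    let spec = extract code          // Extract an ephemeral spec from the code
    let editedSpec = edit spec       // Edit the spec (a human or agent step)
    apply editedSpec code            // Apply the edited spec back to the code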

Copilot Workspace and the birth of Task-Oriented Programming

In 2023 we at GitHub Next invented an early form of task-oriented programming in a system called Copilot Workspace. Copilot Workspace was the world's first implementation of human-guided, task-oriented software development. It was the first interactive, structured AI-for-Code experience with the Task --> Specification --> Plan --> Code pathway. It had flaws, which I'll mention … Continue reading Copilot Workspace and the birth of Task-Oriented Programming
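
For readers who never saw it, the pathway can be pictured as a typed pipeline. The stage names below are Copilot Workspace's; the record fields and stub functions are made up purely to illustrate the shape, with each intermediate artifact being something the user could inspect and edit.

// Illustrative only: the stage names come from Copilot Workspace; the types and
// stubs are hypothetical, just to show the shape of the pipeline.
type Task = { Title: string }
type Specification = { CurrentBehavior: string list; ProposedBehavior: string list }
type Plan = { FilesToChange: string list }
type CodeChange = { Diff: string }

let specify (task: Task) : Specification =
    { CurrentBehavior = []; ProposedBehavior = [ task.Title ] }

let plan (spec: Specification) : Plan =
    { FilesToChange = spec.ProposedBehavior |> List.map (fun _ -> "src/Example.fs") }

let implement (p: Plan) : CodeChange =
    { Diff = p.FilesToChange |> String.concat "\n" }

// Task --> Specification --> Plan --> Code, with a human able to edit each stage.
let pathway (task: Task) : CodeChange =
    task |> specify |> plan |> implement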

Origins of Copilot Workspace

Originally published at https://github.com/githubnext/copilot-workspace-user-manual/blob/main/origins.md, April 29, 2024. At GitHub Next we work in phases: ideation, build, ship, learn. Every phase is about learning. In May 2023, after launching Copilot-X, our ideation around the SpecLang project led to new explorations of how to incorporate natural language — and user edits to natural language — into the … Continue reading Origins of Copilot Workspace

Invited talk to Queens College Computer Science Student Society

Invited talk to Queens College Computer Science Student Society, 14 May 2023, by Don Syme. Invitation by Conall Moss, Lochlann Baker, Dan WendonBlixrud, Andy Zhou. A long, long time ago / I wrote in assembler / and those opcodes used to make me smile / I wrote my hello world program in 16kb of RAM / No function call no do or … Continue reading Invited talk to Queens College Computer Science Student Society

Augmenting GPT-4 with Calculational Code

GPT-4 and other LLMs (Large Language Models) are driving a tidal wave of innovation in applied AI. However, used without augmentation, they have very limited calculational capabilities and make mistakes when calculating with numbers. In this project, we describe a simple, general technique to address this, apply it to some widely reported real-world failures of GPT-4-based … Continue reading Augmenting GPT-4 with Calculational Code
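
One way to read the idea, sketched minimally below in F#: rather than asking the model for a final number, ask it for an arithmetic expression (or a small program), execute that with real code, and splice the result into the answer. The model call here is a hypothetical stub, and .NET's DataTable.Compute merely stands in for a real evaluator.

open System.Data

// Hypothetical stub for a GPT-4 call: the model is prompted to reply with an
// arithmetic expression rather than a computed number.
let askModelForExpression (question: string) : string =
    // e.g. for "What is 17.5% of 2,348,912?" the model might reply:
    "2348912 * 0.175"

// The arithmetic is performed by real code, not by the model.
let evaluate (expression: string) : obj =
    (new DataTable()).Compute(expression, null)

let answer (question: string) : string =
    let expr = askModelForExpression question
    sprintf "%s = %O" expr (evaluate expr)

printfn "%s" (answer "What is 17.5% of 2,348,912?")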

The Max-Abstraction Impulse, and Everything Else Wrong with Type-Level Genericity

These were my comments on RFC-1124 from F# 7.0, Interfaces With Static Abstract Methods, in the "Drawbacks" section. It forms an essay on everything wrong with this particular form of Statically Constrained Genericity, and many of the things wrong with all the other forms. Drawbacks: This feature sits uncomfortably in F#. Its addition to the … Continue reading The Max-Abstraction Impulse, and Everything Else Wrong with Type-Level Genericity
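
For readers who haven't met the feature, this is roughly what it looks like in F# 7 (the names below are invented for illustration): an interface declaring a static abstract member, a type implementing it, and a generic function constrained over it.

// Illustrative names only; roughly the shape of the feature under discussion.
type IAdditive<'T> =
    static abstract Add: 'T * 'T -> 'T

type Vector2(x: float, y: float) =
    member _.X = x
    member _.Y = y
    interface IAdditive<Vector2> with
        static member Add(a: Vector2, b: Vector2) = Vector2(a.X + b.X, a.Y + b.Y)

// Generic code constrained over the static abstract member.
let sumBy<'T when 'T :> IAdditive<'T>> (zero: 'T) (xs: 'T list) : 'T =
    xs |> List.fold (fun acc x -> 'T.Add(acc, x)) zero

let total = sumBy (Vector2(0.0, 0.0)) [ Vector2(1.0, 2.0); Vector2(3.0, 4.0) ]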

My Position on Type Classes

This is the most thumbed-up suggestion in fslang-suggestions and is over 7 years old. Is there any hope this will ever happen? (From https://github.com/fsharp/fslang-suggestions/issues/243#issuecomment-916079347.) My position is pretty clear. I'll recap it here. The utility of type classes for the kind of "functions + data" coding we aim to support in F#, in the context … Continue reading My Position on Type Classes