Dijkstra’s Ghost and the End of The Symbolic Supremacy.
I recently found myself arguing with the ghost of Edsger Dijkstra on LinkedIn. This is not a comfortable position for a computer scientist to find themself in.
More specifically, I was triggered by this LinkedIn post, which quoted Dijkstra’s 1978 paper “On the foolishness of natural language programming” – de-contextualising it to claim clear application to “the current moment”. The aim was to trigger, and triggered I was! Yay for social media! It’s 4AM and I’m arguing with Dijkstra’s Ghost!
In the paper, Dijkstra made several arguments – including appealing to the beauty and power of symbolic mathematics – to form the basis for a somewhat forceful position that symbolic encodings of information, following clear mathematical rules – and especially for anything resembling a program – are fundamentally better than other representations.
Now, the correct historical context of that paper was helpfully posted by the ever-wise Stephen Channell, and if you care for history the paper should be read with that in mind. However, the paper was deliberately being de-contextualised, and I found that interesting too – de-contextualised like a holy text, the holy text of a belief system. And that interests me.
The belief system I’m referring to is what I’ll call “The Symbolic Supremacy”. This is a common and widespread belief system about programming, closely associated with logic, formalism, functional programming and other fields I’ve been involved with in my career, but to be honest most computer scientists in most faculties are adherents to some extent. To a first approximation, the tenets of this belief system are:
1. Programs should be expressed in symbolic notation following logical mathematical rules.
2. Programs must have a precise interpretation. The more precise, the better. Precision is the essence of programming.
3. Anything claiming to implement “natural language programming” is fundamentally invalid, wrong, foolish and/or deceptive.
There are many schisms and sub-belief systems within The Symbolic Supremacy: some emphasise a particular logic, some emphasise precision, some emphasise a type theory or a programming language or tool. But all of them essentially share the tenets above when it comes to programming. In this context, the paper above is being referred to as a kind of shared gospel for this belief system, and one thing they can all agree on is that natural language programming is a fool’s errand.
However, there is a problem with this. As of August 2025, there is Serious Trouble in the kingdom of The Symbolic Supremacy. In truth, there has always been serious trouble – for example, pesky imprecise constraint programming – but until recently The Symbolic Supremacy has at least been defensible as a system to guide adherents in their beliefs about what programming should aspire to be: precise, unambiguous and logical. But trouble there is, and that trouble is, of course, the return of the old demon, the one slain so decisively by Dijkstra in 1978: natural language programming.
First, let’s be clear: a program is instructions given to a machine to achieve results.
The situation today, repeatedly evidenced millions of times by people using LLM-based systems, is that well-written (and even badly written!) natural language is now sufficient to act as instructions for repeatedly guiding computers to achieve human-relevant tasks – for programming – sufficiently often to be very useful indeed. The execution costs are high, and only certain kinds of problems are suited to this, but natural language programming (that is, repeatedly giving instructions to a machine in natural language) is now routine enough that its existence is simply undeniable. There is now no logical or rational basis for continuing to hold to the tenets of The Symbolic Supremacy in an absolute way – as a set of values and virtues that apply to all programming all the time.
For me, this has become more real through our project GitHub Agentic Workflows, my first chance to be a joint “language designer” for a programming technology that includes natural language sections. Now, agentic workflows are absolutely programs – they are instructions given to machines, which run repeatedly. For example, the Daily Test Coverage Improver I blogged about here (see sample outputs here). Today we give programs to computers such as “test the code in this repository looking for different bugs each day, open issues on the bugs you find, write tests to improve the coverage”. This is a program – I give instructions like these to a machine and I repeatedly get outputs and something useful happens. Yet this program has no symbolic representation, and no mathematical rules, and forcing the programmer to adopt these would not improve it at all.
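To make that concrete, here is a rough sketch of what such a workflow can look like: a markdown file whose YAML frontmatter carries the precise, symbolic parts (trigger, permissions) and whose body is the natural language program itself. The field names below are illustrative rather than the exact GitHub Agentic Workflows schema.

```markdown
---
# Precise part: when to run and what the agent is allowed to touch.
# (Field names here are illustrative, not the exact gh-aw schema.)
on:
  schedule:
    - cron: "0 6 * * *"   # once a day
permissions:
  contents: read
  issues: write
---

# Daily Test Coverage Improver

Test the code in this repository looking for different bugs each day,
open issues on the bugs you find, and write tests to improve the coverage.
```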
Faced with this, a devout follower of The Symbolic Supremacy will resort to an appeal to precision – tenet (2). Programming is not programming unless it’s precise! The more precise the better! That’s the whole point of programming! However, precision of interpretation is not a necessary characteristic of all programming (that is, giving instructions to a machine). Precision of interpretation is simply a useful characteristic of most programming. As always, the quantifier words matter, and that is where most of the argument happens.
Indeed the ambiguity of natural language programs is often the point – the program is actually more useful because it is ambiguous, as it allows the natural language interpreter (e.g. Claude Code or Codex) flexibility in how the program is interpreted in a variety of contexts. If we go back to Agentic Workflows, the phrase “test the code in this repository looking for different bugs each day, open issues on the bugs you find, write tests to improve the coverage” is a program, yet it is both imprecise and very useful. Natural language programming is more a form of constraint programming with non-determinism than precise, repeatable programming.
So equating all programming (giving instructions to machines to achieve something useful) with precise programming (giving instructions to machines to achieve something precise) is a fundamental mistake for which there is no longer any rational basis.
Now, for those who fear a descent into the abyss of ubiquitous ambiguity – let’s be clear that this kind of “deliberately ambiguous natural language programming” rests on a bedrock of non-ambiguous, precise programming. It can also absolutely be combined with precise steps, for example by combining agentic steps with GitHub Action steps (e.g. to set up coverage runs in GitHub Agentic Workflows) – a win-win between the precise and the productively-ambiguous. You also see this in a hybrid system such as GenAIScript. It is possible to get the best of both worlds, in combination.
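As a sketch of that combination – assuming, for illustration, that the precise setup can be expressed as conventional GitHub Actions steps alongside the natural language section (the exact mechanism in GitHub Agentic Workflows may differ) – it might look something like this:

```markdown
---
on:
  schedule:
    - cron: "0 6 * * *"
permissions:
  contents: write
  pull-requests: write
# Precise, deterministic setup as ordinary GitHub Actions steps.
# (Whether these live in the frontmatter or a separate job is an assumption here.)
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-dotnet@v4
  - run: dotnet test --collect:"XPlat Code Coverage"
---

Read the coverage report produced by the setup steps above, pick the
least-covered area of the code, add tests for it, and open a pull request.
```

The deterministic steps make the coverage data precise and reproducible, while the natural language part stays free to decide what to do with it: the win-win described above.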
Dijkstra’s comments should be interpreted in his historical context (see above). However it is deeply mistaken – and indeed foolish – to teach them to our students as if they are gospel today – that is, to pretend that deliberately-ambiguous, non-symbolic, non-mathematical, imprecise natural language programming is not real, is unimportant or is somehow wrong. Natural language programming has a crucial place in the future careers of our students and we must, for their sake, embrace it to some degree.
Dijkstra was a believer in The Symbolic Supremacy if ever there was one. Like the rest of us, he had no real inkling that computers would one day be able to interpret ambiguous natural language programs as they now can, and no real inkling that imprecise programs could be as useful as they are (something only slowly dawning on the world even today).
What should replace The Symbolic Supremacy? My suggestion is this: The Clarity Supremacy. That is, convey to our students a deep belief that, amongst all the incredible diversity of programming, we are right to seek clarity of intent with respect to the operation of the incredible machines we get to command. This is a belief system we might all be able to align on, whether programming in COBOL, or F#, or natural language, or writing a paper, or writing a blog.
Dijkstra has long been one of the secular saints of The Symbolic Supremacy, which excludes ambiguity, natural language and even human concerns from the pure world of programming. Held absolutely, this position is incorrect and, for the sake of our students, must be moderated.
Another way to think about this: an ambiguous natural language statement may have many different precise meanings, and an LLM will most likely pick the most probable one. This frees people from having to spell out their precise intent in every possible detail. A short, general intent can be refined into a long, probable, precise one.
I certainly agree with everything you wrote here. I’m curious whether you would (still) agree that the most important programs are the ones at the bottom of the stack — they’re used by everything else, and if they don’t work correctly (and have a precise definition of “correctly”), then everything else falls apart.
At Microsoft I worked within the “Developer Division”. The tools DevDiv developed, like Visual Studio, were external products — profit centers. This meant that the needs of external customers outweighed those of internal developers. Usually, though, the internal developers were working on more complicated systems than the external customers. So the needs of the “average user” were addressed more aggressively than those of the “power users”. There are more average users, but the code the power users work on is used by more people. Things that would have helped with the correctness of the complicated code (e.g., program verification) were consistently deprioritized in favor of features that increased productivity in developing “semi-important” code — code where the consequences of bugs are not that severe.
I see natural language programming as an extension of that trend. You’d use it for things that don’t need to be very precise. I still hope, though, that we someday give some more help to the bottom-of-the-stack programmers. Rust, I guess, could be viewed as a step in this direction, but just a step.
Agreed 100%. The impetus of ‘natural language’ programming in 1978 is not natural language in 2025. Aside from the obvious technology scale and scope differences, there’s nowhere on this Earth where a symbolic representation is being removed and a natural language representation is being put directly in its place.
The human interaction layer we enjoy with code-aware LLMs today is just at a much, much higher level of abstraction than the symbolic representation that is *still* emitted to express a program. The symbolic representation is being inferred from a huge volume of supervised and unsupervised methods that amount to “semantic compression” of human knowledge about that symbolic representation. The best we could say today is that any software language used as the output of an extemporaneous instruction or goal now represents an “intermediate representation” – but it’s no less critical today than it was in 1978.
The “Tower of Babel” is so much higher it’s hard to see the silicon substrate, again, very unlike 1978. I recently argued that if John von Neumann were alive in this era he would have abandoned the architectural pattern named after him years ago. I imagine that Dijkstra would have grown similarly as the technology landscape (and its ability to support higher tiers of abstraction) has expanded.
Exactly (though I’m not sure Dijkstra would have grown – we all have our limits).
The replies on the original LinkedIn thread are a stream of people mistaking precise programming for programming. Again and again, often quite dogmatically, rudely, assertively. Computer scientists’ minds have become very closed about what “programming” can actually mean. In universities this has the potential to be tragic, condemning a generation of students to be taught by people who believe dogmatically in a rigid form of precise programming as a virtue, without being able to square that with the reality of where much of practical programming is headed.
Examples:
> The idea that natural language could be a substitute for structure, and not a complement to it, feels even more dangerous today with LLMs everywhere
“Dangerous”. Yes, they actually think natural language programming is inherently “dangerous” (because, well, not precise and presumably not trustworthy).
> All that people should need to look at is an average online post, such as any forum like Reddit or even StackOverflow, in which misunderstandings occur with frightening frequency. If “natural language” was error-free and precise enough for programming, those misunderstandings would not occur.
Yes, but it’s not about precision. You may not like it, but you can usefully instruct machines imprecisely.
> Natural language was imprecise back then, it still is today.
Again, the OP mistaking precise programming for programming.
> Of course it does. I don’t know how many times i’ve commented about the issues with the ambiguity of language and english in particular.
Again, mistaking precise programming for programming.
> ‘The nonsense of which is not obvious’ – nailed it.
The cult in action.
> Yes, this is one of the most fundamental ideas in our field. It is demonstrated to most computing students (even in high school) almost on ‘day one’.
Great, let’s teach the next generation to be blind to the very power that will define their working lives.
> As computing professionals we are not required to believe in this dogmatically. We are required to understand it keenly so that we can explain how it’s true and relate it to our work.
Ahh… Now we must “understand it and explain how it’s true”.
> Every time someone posts that prompting LLM chatbots is “just another layer of abstraction on top of our programming languages” it’s an opportunity to paraphrase Dijkstra without writing as Dijkstra would have :}
This is a more interesting comment. It’s true some people do think of programming in terms of “layers of abstraction” and that NL is “just a higher-level language”. But that doesn’t get to how NL programming is really a kind of constraint programming or constraint engineering. It’s also commonly a negotiated form of programming – the machine can participate in the negotiation about its task. So it’s not just higher-level – it’s also different in kind.
> All of the coping mechanisms around LLMs (“prompt engineering”, “context engineering” etc) are attempts to defy this and prove it wrong by example. Unfortunately the counterexamples remain innumerable and have their own additional coping mechanisms: “you need to provide it a 5000 word prompt to steer it, try this…”
Sure, if you’re trying to do precise programming with NL you’re probably not going to succeed.
> The same is true of music score. Why not write music in Natural language? Nope, musicians use music score because it is more precise.
LOL, conductors also use waving a stick to program their orchestra.
> Now just think about the fact that most of our laws are written using natural language…
A more interesting comment. Laws must navigate many ambiguous situations. And CS movements to mathematize laws have failed. But there are interesting middle grounds like Oracle Intelligent Advisor.
> One can see the wisdom of Dijkstra pretty early when one graduates from arithmetic to even the simplest of abstract algebras, to say nothing of higher branches of mathematics.
Again, the mistake of thinking that just because mathematics works well for some things, a _new_ kind of programming can’t exist for _different_ things.
> On legal writings, well, they come closest to precision of mathematics and logic, but as we know, a lot of nonsense comes out of even legal writings (probably expected, as humans are way too complex to describe in the formal language of mathematics
OMG he understands that “humans are way too complex” but no, he can’t quite make the leap to understand that maybe, just maybe, machines can now also be instructed in the very ways that humans use to navigate that complexity – deliberately ambiguous natural language and every other form of human communication.
I have MUCH to say about music and math, as I studied (and received a paid grant) to unpack the Schillinger System of Music Composition. (Joseph Schillinger was both a cellist and a physicist, born in Kiev.) Setting that aside for the moment, people do not understand the RIOTS that were caused by the “Mannheim Crescendo” when it was introduced. There was no notational form for it when it was invented. Today, all of the things that are communicated by a modern conductor are well established “on the page”, and the conductor role is itself a performative indicator more than a necessary guide (speaking from experience as a film and TV composer who conducted ensembles for soundtracks).
“Back in the day” the conductors used to have a giant staff with a metal end to bang on the podium to keep everyone at tempo. There was no time for gesturing the level of crescendo, let alone a language to express it on the page. You can even see it in the percussion orchestrations from that period being very “light” in early ensemble scores, because WHY BOTHER when the main guy up front is banging on like a giant metronome? But again – that’s another example of context that’s missing “from the page”. I could go on and on about the Basso Continuo form, and how the notational system told the improvising bass player everything they needed to know to both be inventive and stay on the beam. They still use it today (notionally) in lead charts for jazz. “Nashville numbers” in session player parlance is even closer.
The Schillinger System is what gave rise to what is now the Berklee School of Music, and yet his system never gained the notoriety that many who know of it think it deserved. His tutelage of composers gave us George Gershwin (among many others), and the derivative work of Schillinger gave us Nicolas Slonimsky’s “Thesaurus of Scales and Melodic Patterns”, which was a straight-up rip-off of Schillinger’s “Kaleidophone” that ended up inspiring the likes of Leonard Bernstein, Charlie Parker and Chick Corea.
My point: all of them were moved by the result of Schillinger’s work, but none of them stared at the math.
Very interesting stuff. When I first started writing engineering specifications, I was always too direct for fear that I wouldn’t get the outcome that I envisioned, but it was a missed opportunity as I restricted the creativity of the implementation team. Since then I’ve been trying to strike the right balance, and it’s been fun applying this to LLMs.