I’ve been writing Python for a while now, but it wasn’t until last week — halfway through a video about assembly language — that I realized I had no idea what was actually underneath the code I write every day. I knew the words. High-level. Low-level. Object-oriented. Compiled. I’d been using them for years the way most people use “horsepower” — as a vague measure of something I’d never bothered to trace back to its origin. When I finally did, the mental model I came away with reframed everything I thought I knew about software.
The entire history of programming languages turns out to be one long project of selective amnesia. Every major milestone amounts to someone saying: I don’t want to think about that part anymore.
The ladder
At the very bottom, there’s machine code — raw binary, ones and zeros, representing specific voltages on specific wires inside a specific chip. Assembly language, which appeared in the 1950s, replaced those binary patterns with human-readable mnemonics like MOV and ADD, but it was still a one-to-one translation. Every instruction mapped to exactly one operation on a particular processor. You were thinking in the machine’s native tongue, managing individual registers, counting memory addresses by hand.
Then came the procedural languages — Fortran in 1957, C in 1972 — and with them the first genuine cognitive leap. For the first time, you could write x = a + b and a compiler would figure out which registers to use, how to shuttle data around, how to optimize the whole thing for your specific hardware. You stopped reasoning about the CPU and started reasoning about the problem itself. The mental model was sequential: you write a series of instructions that execute top to bottom, calling reusable functions along the way, and those functions operate on data that sits separately in memory. Data here, functions there. You pass one into the other.
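That separation is easy to sketch in Python, even though Python itself came much later. The names below (a payroll computation) are invented for illustration; what matters is only the shape: data in one place, functions in another.

```python
# Procedural style: data sits in plain structures,
# and free functions operate on it from the outside.

def gross_pay(hours: float, rate: float) -> float:
    """Compute pay before deductions."""
    return hours * rate

def apply_tax(amount: float, tax_rate: float) -> float:
    """Deduct a flat tax from an amount."""
    return amount * (1 - tax_rate)

# Data here...
employee = {"name": "Ada", "hours": 40.0, "rate": 50.0}

# ...functions there. You pass one into the other, top to bottom.
pay = gross_pay(employee["hours"], employee["rate"])
net = apply_tax(pay, 0.25)
print(net)  # 1500.0
```

Nothing stops any other function in the program from reaching into that dictionary and mutating it, which is exactly the fragility the next paragraph describes.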
This works beautifully until your program gets large. And then it becomes a nightmare, because tracking which functions are allowed to touch which data — and what happens when they do it in the wrong order — grows combinatorially. The code becomes fragile in ways that are hard to see and harder to fix.
That’s the problem object-oriented programming solved. The insight, when you strip away the jargon, is deceptively simple: bundle the data and the functions that operate on it into a single unit. A Patient object doesn’t just store a name and date of birth — it also carries methods like scheduleAppointment() and calculateCopay(). The data and the behavior travel together, and you can hide the internal details so outside code can’t reach in and break things. That’s it. That’s what “object-oriented” means. The machine doesn’t care — it all compiles down to the same binary. The distinction is entirely about how you, the programmer, organize your thinking.
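Here is a minimal Python sketch of that bundling, using the Patient example from above. The method bodies are invented placeholders (the copay rule in particular is made up); the point is that data and behavior live in one unit, with internals marked as private by convention.

```python
class Patient:
    """Data and the behavior that operates on it travel together."""

    def __init__(self, name: str, birth_date: str):
        # Leading underscores signal internal details:
        # outside code shouldn't reach in and touch these directly.
        self._name = name
        self._birth_date = birth_date
        self._appointments: list[str] = []

    def schedule_appointment(self, date: str) -> None:
        """Behavior sits next to the data it mutates."""
        self._appointments.append(date)

    def calculate_copay(self, visit_cost: float, coverage: float = 0.8) -> float:
        """Placeholder rule: the patient pays whatever insurance doesn't cover."""
        return round(visit_cost * (1 - coverage), 2)

p = Patient("Ada Lovelace", "1815-12-10")
p.schedule_appointment("2025-03-01")
print(p.calculate_copay(200.0))  # 40.0 with the default 80% coverage
```

Compare this with the procedural payroll sketch: no outside function can quietly corrupt the appointment list, because the only sanctioned way to change it is through the object’s own methods.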
Languages like Python and JavaScript pushed even further: forget memory management, forget type declarations, forget the compilation step entirely. Just describe the logic. And now, with large language models, we’ve reached something like the logical ceiling — describing what we want in natural language and getting working software back. Each layer is an act of forgetting what the layer beneath it forced you to remember.
Nothing is built from scratch
Here’s what I didn’t expect: almost no language is built from the ground up. They sit on top of each other like geological strata. Python’s interpreter is written in C. Java’s virtual machine was built in C and C++. JavaScript’s V8 engine is C++. Rust’s compiler was originally written in OCaml, then rewritten in Rust itself — a rite of passage the community calls “self-hosting.” When you write Python, your code is interpreted by a program written in C, which was compiled by a toolchain that traces back through earlier C compilers all the way to assembly. It’s turtles all the way down, and they all eventually hit the same bedrock: the instruction set etched into silicon.
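You can actually see one of those strata from inside a running program. Python’s standard library will tell you which implementation is interpreting your code, and for most people that answer is CPython: a C program executing your Python.

```python
import platform
import sys

# The reference interpreter is CPython, written in C. Alternative
# implementations exist (PyPy, GraalPy, ...), so the answer can vary.
print(platform.python_implementation())  # e.g. "CPython"
print(sys.implementation.name)           # e.g. "cpython"
```

Running this under PyPy would print a different name, which is itself a nice reminder that “Python” names a language, while the stratum underneath it is a separate, swappable artifact.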
But languages also inherit ideas, and the intellectual lineage is often completely separate from the mechanical one. Java was built with C and C++, but its design borrowed Smalltalk’s object philosophy, Lisp’s garbage collection, and C’s curly-brace syntax. Python descends mechanically through C to assembly, but its conceptual DNA comes from ABC, Modula-3, and Lisp — languages most programmers have never heard of. The language that shaped how you think and the language that makes it physically run are often strangers to each other.
The cracks in the foundation
There’s a punchline to all of this, and it’s unsettling. C, the procedural language from 1972 that sits beneath nearly everything — Linux, Windows, Python, the database engines, the browser engines — is still producing the same category of bug it has enabled for over fifty years. Microsoft reports that roughly 70% of the security vulnerabilities they assign a CVE each year are memory safety issues: buffer overflows, use-after-free errors, dangling pointers. Google reports the same proportion for Chrome. The NSA has called memory safety vulnerabilities “the most readily exploitable category of software flaws.” In 2024, the White House formally urged the industry to transition toward memory-safe languages. National security agencies telling programmers which languages to write in — that’s how serious the problem has become.
This is precisely why Rust exists. Its compiler enforces memory safety at compile time through an ownership model that tracks which part of the code is responsible for each piece of data. If the compiler can’t prove your code is safe, it refuses to build. The tradeoff is a notoriously demanding development experience — the compiler will reject some code that would in fact run correctly, because it cannot prove that it is safe. But after half a century of evidence that human vigilance alone doesn’t solve the problem at scale, the case for mechanical enforcement is difficult to argue against.
The irony is almost poetic: the very quality that made C revolutionary — direct, unguarded access to memory, total control, no safety net — is the same quality that makes it a perpetual source of vulnerabilities. Every abstraction layer since has been, in some sense, an attempt to save programmers from the consequences of that freedom.
The direction
Each abstraction layer has enabled more categories of software than the last. Assembly gave us a handful — military computation, cryptography, scientific modeling. Procedural languages unlocked airline reservations, banking systems, hospital records, and payroll. Object-oriented programming and the graphical user interface made possible the ERP, the spreadsheet, the content management system. The web created e-commerce, search, and social networks. Mobile unlocked ride-sharing, real-time messaging, and the gig economy.
AI-assisted code generation may be the first layer where the number of possible software categories becomes effectively limitless — because the barrier to creating a new one is converging on zero.
And underneath all of it, there’s still C, still converting human logic into something a chip can execute. The geology doesn’t disappear because we’ve built taller structures on top of it. It just becomes easier to forget it’s there — which, if you think about it, was the whole point of every layer all along.