In 1973, engineers at General Dynamics rolled out a prototype for a new fighter jet called the YF-16. It looked like a standard combat plane: cropped-delta wings, a massive engine, a sleek cockpit. But it harbored a deeply counterintuitive secret. It was intentionally designed to be impossible to fly.
Prior to the F-16, a pilot was physically connected to the airplane. When a pilot pulled the stick back, mechanical cables physically moved the control surfaces. But the F-16 was built with relaxed static stability. It was deliberately unstable, so aerodynamically twitchy that a human pilot could never react fast enough to keep it in the air. Flown by hand, it would crash in seconds.
The Computer Intermediary
So, the engineers did something radical. They severed the physical connection. They placed a computer between the pilot and the wings. When the pilot pulled the stick, he was no longer moving the plane. He was sending a request to a computer. The computer made the dozens of micro-adjustments per second required to keep the jet aloft.
The pilot was no longer a craftsman of the sky. He was a manager of a highly complex system.
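The fly-by-wire idea can be sketched as a simple feedback loop. This is a toy model, not the actual F-16 control law: the pilot's stick sets a desired attitude, and the computer, not the pilot, applies the small corrections, many times per second.

```python
# Toy fly-by-wire sketch (illustrative only; real flight computers run far
# more sophisticated control laws). The stick sets a *desired* pitch; the
# computer decides the actual corrections.

def stabilize(desired, actual, gain=0.8):
    """Return a small corrective nudge toward the desired attitude."""
    return gain * (desired - actual)

pitch = 5.0   # the unstable airframe has drifted nose-up (degrees)
stick = 0.0   # the pilot is requesting level flight

for _ in range(50):  # roughly one second of rapid micro-adjustments
    pitch += stabilize(stick, pitch)

print(round(pitch, 6))  # → 0.0: the computer has nudged the jet back to level
```

Each pass shrinks the error; no single correction is dramatic, but the accumulation of tiny, fast adjustments keeps an unflyable airframe flying.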
The OpenAI Experiment
Recently, a team of engineers at OpenAI ran an experiment in the same spirit.
They were not allowed to write any code.
For five months, the humans did not type a single line of code by hand. Instead, they relied entirely on an artificial intelligence agent named Codex. The humans provided the instructions. The AI did the typing.
The Mythology of Code
So, why do we find this so unsettling? Why do we cling to the idea that typing equals engineering?
The conventional wisdom about software development is that it is fundamentally an act of manual creation. We picture a brilliant programmer sitting in a dark room. We imagine them staring at a screen, typing complex logic, and wrestling with the machine until the program works. We believe that good code comes from individual human brilliance.
But there is a problem with that theory.
Rethinking Engineering
It turns out that isn’t true at all. The real work of software engineering was never about typing. It was never about the speed of your fingers or the perfection of your syntax. It was about understanding the problem. It was about decomposing complexity into manageable pieces.
It was about making thousands of small decisions—trade-offs between performance and maintainability, between simplicity and flexibility, and between what works today and what scales tomorrow. Typing was simply the vehicle through which those decisions were expressed.
When we watch a programmer code, what we’re actually watching is thought made visible. The keystrokes are just the medium. And if an AI can capture those thoughts, if it can understand the intent behind the instructions and translate them into working code, then the bottleneck was never the typing. The bottleneck was always the thinking.
The Uncomfortable Truth
This is why the OpenAI experiment feels so destabilizing. It forces us to confront an uncomfortable truth: we’ve been measuring engineering by the wrong metric. We’ve valorized the craft of code-writing when we should have been valorizing the craft of problem-solving. And once you separate those two things, once you let a machine handle the execution while humans focus on the intent, something shifts.
The engineer becomes what the F-16 pilot became—not a craftsman disappearing into their work, but a director orchestrating a system far more capable than themselves. It’s a different kind of skill. It’s not necessarily easier. But it demands we reimagine what engineering actually is.
Harness Engineering: A New Paradigm
When the OpenAI team removed hands-on coding from their daily routine, the work did not disappear. It shifted. It evolved into something entirely different. They discovered a new paradigm. They called it Harness Engineering.
Harness Engineering is not about writing software. It is about building the environment where software can be written. It is the art of creating boundaries.
Think about how an AI agent operates. An agent is incredibly fast. It is endlessly eager. It is capable of writing thousands of lines of code in seconds. But without strict limits, an agent will create absolute chaos. It will wander off track. It will build sprawling, incoherent systems.
The humans at OpenAI realized that their job was no longer to micromanage the implementation. Their job was to build the scaffolding. They wrote exhaustive design documents. They created custom tools that automatically checked the architecture. They enforced strict, mechanical rules about how different parts of the software were allowed to interact. They built a rigid track for the AI to run on.
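What might such a strict, mechanical rule look like in practice? Here is a minimal sketch of an architecture check that flags forbidden imports between layers. The layer names and the rules are hypothetical; OpenAI's actual tooling is not public.

```python
# Minimal sketch of a mechanical architecture check, in the spirit of the
# harness described above. All layer names here are invented for illustration.
import ast

# Each layer may only import from the layers listed for it.
ALLOWED = {
    "ui": {"ui", "services"},
    "services": {"services", "storage"},
    "storage": {"storage"},
}

def violations(layer, source):
    """Return the top-level modules that `source` imports in violation of the rules."""
    bad = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            tops = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            tops = [node.module.split(".")[0]]
        else:
            continue
        for top in tops:
            if top in ALLOWED and top not in ALLOWED[layer]:
                bad.append(top)
    return bad

print(violations("storage", "import ui\nimport storage"))  # → ['ui']
```

A check like this runs automatically on every change, so the agent cannot quietly blur the boundaries between parts of the system, no matter how fast it writes.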
Progressive Disclosure: Guiding the Agent
They also utilized a concept called Progressive Disclosure. They did not hand the agent a massive, complex project all at once. That would be overwhelming. Instead, they gave the agent a small, stable entry point. They asked it to build a tiny block. Then they taught it where to look next. They guided it, step by step, deeper into the system.
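The pattern above can be sketched as a guided walk: the agent is never handed the whole system, only the current entry point and a pointer to the next one. The file names and structure here are invented for illustration.

```python
# Toy illustration of progressive disclosure: reveal the system one stable
# entry point at a time. Module names are hypothetical.

SYSTEM_MAP = {
    "core/block.py": "storage/store.py",   # each module points to the next
    "storage/store.py": "api/server.py",
    "api/server.py": None,                 # end of the guided path
}

def disclose(entry):
    """Yield modules one at a time, in the order the agent should see them."""
    current = entry
    while current is not None:
        yield current
        current = SYSTEM_MAP[current]

for module in disclose("core/block.py"):
    print(f"Agent task: extend {module}")
```

The agent always works in a small, well-defined neighborhood, which keeps it from wandering into parts of the system it has no business touching yet.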
The Modern Engineer’s Desk
Zoom in on this new workflow. Picture the desk of a modern software engineer at OpenAI. There are no cascading windows of colorful syntax. There is no furious clacking of a mechanical keyboard deep into the night. Instead, a human engineer types a simple task in plain English. They hit enter. The agent takes over. The agent writes the code. The agent reviews its own changes. The agent even pings other AI models for code review in the cloud. It iterates in a relentless, automated loop until the tests pass. The human simply watches the system work.
The engineers didn’t just step back. They let go completely. And the product shipped faster than anyone expected.
Debugging in a New Age
When a bug appeared, the instinct was no longer to dive into the codebase. In the old world, when something failed, the fix was to try harder. In the world of Harness Engineering, trying harder does not work. If the agent fails, it means the environment is flawed. The human must fix the prompt. They must tighten the boundaries. They must improve the scaffolding.
The Macro Shift
Zoom out, and you see the macro-level shift. We are witnessing the end of the software artisan. The future of engineering is not about typing. It is about systems. It is about leverage. It is about creating predictable structures that allow artificial intelligence to operate safely.
The New Purpose
We often worry that AI will replace human ingenuity. We fear that by handing over the tools of creation, we will lose our purpose. But the lesson of the F-16, and the lesson of OpenAI, is entirely the opposite.
When you stop trying to control the wings manually, you do not become obsolete. You become essential in an entirely new way. You become the architect of the system. If we can learn to embrace Harness Engineering, we might find that stepping away from the keyboard is the most powerful thing a builder can do. We stop fighting the wind. We let the machine fly.
