Embody Yourself in Whatever You Want to Do Well
Effortless embodiment, in the physical world and in abstract systems themselves, is the unique value that human engineers and knowledge workers can bring to this AI-assisted world.
Every now and then, I encounter a great software engineer with deep insights and great intuition about the system or field they work in. As my observations accumulated, I realized that this ability doesn't necessarily come from an advanced degree, tenure, or innate talent; it comes more from a relentless pursuit of holistic understanding, and a devotion to making that understanding as natural as breathing. Through such unscalable efforts, they build good mental models of an abstract system and an ability to effortlessly embody themselves in the system and navigate it, just as our biological bodies effortlessly navigate the physical world.
One example is Arthur (pseudonym), a software engineer I recently crossed paths with. With a bachelor's degree in computer science and still early in his career, Arthur has been on an extremely fast promotion trajectory since graduation and is now a very senior IC at one of the frontier AI labs.
When I reached out to learn the secret of his success, Arthur showed me a list of gigantic documents he had created over the past few years, each documenting his investigation of the broader system he was working on. The documents investigated different aspects and layers of the system, with charts, drawings, and funny memes. Many of them are hundreds of pages long, with every character and every pixel handwritten or hand-drawn by himself. "I want to make sure that I understand these systems from first principles, and if I can't write it down myself, I can't be sure I truly understand," he explained to me.
By deeply investigating the broader system his work is part of, Arthur builds a robust mental model of his project's environment. And by writing and drawing what he learns through his own lens, he has virtually embodied himself in that environment and experienced it.
Why should we build a mental model that we can use to simulate and experience a system? A famous story written in 2012 by Rob Pike, a co-inventor of the Go language, provides a great answer for the pre-AI world:
A year or two after I’d joined the Labs, I was pair programming with Ken Thompson on an on-the-fly compiler for a little interactive graphics language designed by Gerard Holzmann. I was the faster typist, so I was at the keyboard and Ken was standing behind me as we programmed. We were working fast, and things broke, often visibly—it was a graphics language, after all. When something went wrong, I’d reflexively start to dig into the problem, examining stack traces, sticking in print statements, invoking a debugger, and so on. But Ken would just stand and think, ignoring me and the code we’d just written. After a while I noticed a pattern: Ken would often understand the problem before I would, and would suddenly announce, “I know what’s wrong.” He was usually correct. I realized that Ken was building a mental model of the code and when something broke it was an error in the model. By thinking about *how* that problem could happen, he’d intuit where the model was wrong or where our code must not be satisfying the model.
Ken taught me that thinking before debugging is extremely important. If you dive into the bug, you tend to fix the local issue in the code, but if you think about the bug first, how the bug came to be, you often find and correct a higher-level problem in the code that will improve the design and prevent further bugs.
I recognize this is largely a matter of style. Some people insist on line-by-line tool-driven debugging for everything. But I now believe that thinking—without looking at the code—is the best debugging tool of all, because it leads to better software.
"Better software" was Pike's argument for why a mental-model-driven, top-down approach is the better way to debug and engineer. But in this AI-assisted era, an additional question deserves an answer: does the approach offer some unique human value that AI doesn't have?
Anecdotally, the answer seems to be "yes": AI appears to be so poor at the top-down, mental-model-driven approach that its most "thoughtful" (or, more accurately, "thinking-token-ful") solutions often turn out to be outrageous hacks. Meanwhile, decades of progress in cognitive science may provide additional scientific arguments.
There has long been a misconception that humans' mathematical ability stems from our faculty of language; that misconception has been robustly debunked. On one hand, mammals, birds, and human infants have been shown to possess an abstract number sense. On the other hand, brain scans of professional mathematicians have found (source) that high-level mathematical thinking makes minimal use of language areas and instead recruits circuits initially involved in spatial reasoning and approximate quantity estimation in the physical world. (The Number Sense: How the Mind Creates Mathematics is a great book that covers this topic extensively.)
In general, the human brain uses the same neurons to navigate "similar" settings in the physical world and in the world of abstract concepts. The famous "bird space" study in 2016 showed that the cells animals use to locate their position in a physical space such as a room (grid cells) are also used by the human brain to organize multi-dimensional knowledge (source). When we talk about "taking a step back" to look at a problem, "bypassing" an obstacle, or two ideas being "far apart", we aren't just being poetic; we are literally describing how our brains process the information.
All this evidence suggests that mathematical models, software systems, and the like — including AI tools — are not just abstractions that we build on top of the physical world and connect back to it; from the brain's perspective, they are the physical world. But just as babies need to wire the neurons in their frontal cortex so that their innate spatial and number senses can make sense of and navigate the physical world, adults need lots of reading, writing, imagination, and trial and error to wire their neurons so that they can see and navigate those abstract worlds. The more you do those exercises, the better you can embody yourself in those worlds: you can more easily zoom in and out; you can more clearly see the connections between components, the missing pieces, and the consequences of adding, changing, or moving components.
That effortless embodiment, in the physical world an abstract system is part of and in the abstract system itself, is probably the unique value that human engineers and knowledge workers can bring to this AI-assisted world, and it is the state one should relentlessly pursue if they want to become very good at something.
