“Vibe Coding” as Empathetic Practice

The major criticism of “vibe coding”, or essentially just telling an AI agent what kind of app/tool/software product you want, is that it provides the user functional code only “to a point”. Past this point, where the capability of the model, stretched over an unwieldy breadth of context, hits a wall, the user is left adrift with a pile of code they possess little understanding of.

I think this is certainly true at the present moment, but I don’t think even clumsy requests to a model to perform some complex software task are altogether a dead end. The work is just more similar to the role of a product manager, or of someone commissioning a work of art: a person who knows what they want, and needs to evaluate the results with high specificity until they get it.

I think vibe coding, or working in a somewhat non-deterministic dialogue with a language model, will become an increasingly essential paradigm that will break free from its present “triviality” in the coming months and years. This is because AI models, and the agents they operate, will become an unavoidable force in the world, running a larger and larger share of GDP on their autonomous machinations alone (rather than serving as mere enhancers of existing human knowledge work).

This is set to put humans in a strange position, as it already has with certain specific capabilities of language models, where a great, unknown utility exists “within” the model, but can only be summoned with the right combination of prompt, context, and scaffolding/tool calling that sets it on a fortuitous path. I believe the true nature of this paradigm is lost on many people, whose instincts for interacting with software are affixed to the fundamental nature of how software has worked for the last 50 years: deterministic and closed-ended.
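To make the “scaffolding/tool calling” idea concrete, here is a minimal sketch of the loop most agent frameworks run: a prompt and accumulated context go to the model, any tool call it requests is executed, and the result is folded back into context for the next turn. The model below is a hypothetical stub standing in for a real LLM endpoint, and the tool names and message format are illustrative assumptions, not any particular API.

```python
def stub_model(context):
    """Hypothetical stand-in for an LLM: requests a tool call
    until a tool result appears in context, then answers."""
    if not any(msg.startswith("tool:") for msg in context):
        return {"tool": "word_count", "arg": context[0]}
    return {"answer": context[-1]}

# Illustrative tool registry; a real agent would expose search, file I/O, etc.
TOOLS = {"word_count": lambda text: f"tool: {len(text.split())} words"}

def agent_loop(prompt, model=stub_model, max_steps=5):
    context = [prompt]
    for _ in range(max_steps):
        reply = model(context)
        if "answer" in reply:
            return reply["answer"]
        # Scaffolding: run the requested tool, append its result to context.
        context.append(TOOLS[reply["tool"]](reply["arg"]))
    raise RuntimeError("agent did not converge")

print(agent_loop("summarize this sentence please"))  # → tool: 4 words
```

The point of the sketch is that the loop itself is trivial; what makes or breaks the outcome is whether the prompt, context, and available tools set the model on that “fortuitous path”.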

The strange discontinuity of this emerging reality will only become more acute as the utility of LLMs exceeds that of human experts in broader and broader domains. From that point on, the most useful skill in the universe is a sort of cognitive empathetic practice: an understanding of how models think, how they react, and what their true capabilities are in the short breaths of seeming consciousness they are granted within the tokenized range of their context window.