The Missing Layer (yagmin.com)
44 points by lubujackson 6 hours ago | 42 comments
- sunir 1 hour ago
I find these throes of passionate despondency similar to the 1980s personal computing revolution. Oh dear, giving mere mortals the power of computing?! How many people would abandon their computers or phones?
It’s not like it changes our industry’s overall flavour.
How many SaaS apps are Excel spreadsheets made production grade?
It's like every engineer forgets that humans have been building a Tower of Babel for 300,000 years. And somehow there is always work to do.
People like vibe coding and will do more of it. Then make money fixing the problems the world will still have when you wake up in the morning.
- lubujackson 21 minutes ago
I am not against vibe coding at all; I just don't think people understand how shaky the foundation is. Software wants to be modified. With enough modifications, the gap between the code as it is imagined and the code as it really is becomes too arduous to bridge.
The current solution is to simply reroll the whole project and let the LLM rebuild everything with new knowledge. This is fine until you have real data, users and processes built on top of your project.
Maybe you can get away with doing that for a while, but tech debt needs to be paid down one way or another. Either someone makes sense of the code, or you build so much natural language scaffolding to keep the ship afloat that you end up putting in more human effort than just having someone codify it.
We are definitely headed toward a future where we have lots of these Frankenstein projects in the wild, pulling down millions in ARR but teetering in the breeze. You can definitely do this, but "a codebase always pays its debts."
- cootsnuck 39 minutes ago
But this time is different! For reasons!
Yea, the more things change, the more they stay the same. This latest AI hype cycle seems to be no different, which I think will become more widely accepted over the next couple of years as creating deployable, production-ready, maintainable, sellable, profitable software remains difficult for all the reasons besides the hands-to-keyboard writing of code.
- xnorswap 4 hours ago
> but no matter how small you make the steps, the area never changes
Sorry, this is a bit off-topic, but I have to call this out.
The area absolutely does change, you can see this in the trivial example from the first to second step in https://yagmin.com/blog/content/images/2026/02/blocks_cuttin...
The corners are literally cut away.
What doesn't change is the length of the edges, which is a kind of Manhattan distance.
The staircase converges to the straight line, but its length never approaches the line's length; it stays the same at every step.
The area, however, absolutely does approach the limit: each iteration removes half of the remaining area.
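To make that concrete, here is a quick sketch for the unit-square version of the construction (the blog's picture may use different proportions):

```latex
% Staircase with 2^n steps over the diagonal of a unit square.
% The total length (a Manhattan distance) is constant:
\ell_n = 2^n \left( \frac{1}{2^n} + \frac{1}{2^n} \right) = 2 \quad \text{for all } n,
% while the area trapped between staircase and diagonal halves each iteration:
A_n = 2^n \cdot \frac{1}{2} \left( \frac{1}{2^n} \right)^2 = \frac{1}{2^{n+1}} \longrightarrow 0.
% So the staircase converges to the diagonal segment (length \sqrt{2}),
% yet \ell_n = 2 \not\to \sqrt{2}.
```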
- aditgupta 4 hours ago
Jim nailed the core problem. I've been building exactly this "missing layer" for the past few months. The challenge isn't just connecting product decisions to code. It's that product context lives in a format that's optimized for human communication, not machine consumption. When engineers feed this to LLMs, they spend massive effort "re-contextualizing" what stakeholders already decided.
I built TypMo (https://typmo.com) around two structured formats that serve as this context layer:
* PTL (Product Thinking Language): structures product decisions (personas, objectives, constraints, requirements) in a format both humans can read/edit and LLMs can parse precisely. Think YAML for product thinking.
* ISL (Interface Structure Language): defines wireframes and component hierarchies in structured syntax that compiles into visual mockups and production-ready prompts.
LLMs don't need more context, they need structured context. The workflow Jim describes (stakeholder meeting → manager aggregates → engineer re-contextualizes for LLM) becomes: stakeholder meeting → PTL compilation → IA generation → production prompts.
Let's see where it goes!
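As a rough illustration, a single compiled PTL record might look something like this (a hypothetical shape with guessed field names, not the actual TypMo schema):

```typescript
// Hypothetical shape of one compiled PTL decision record.
// Field names are illustrative, not the real TypMo format.
interface PtlRecord {
  persona: string;        // who the decision serves
  objective: string;      // the outcome stakeholders agreed on
  constraints: string[];  // hard limits: scope, compliance, platform
  requirements: string[]; // concrete, testable statements
}

// A meeting outcome an LLM could consume without re-contextualizing:
const darkMode: PtlRecord = {
  persona: "returning web user",
  objective: "reduce eye strain in low-light sessions",
  constraints: [
    "blog and FAQ run on a different frontend",
    "ship behind a settings toggle",
  ],
  requirements: [
    "toggle lives in account settings",
    "default follows the OS prefers-color-scheme",
  ],
};
```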
- 4b11b4 3 hours ago
Nice, and I'm thinking along similar lines, but not DSLs.
My intuition on reading what you wrote is: nobody is gonna want to write PTLs and ISLs.
- aditgupta 2 hours ago
Exactly right, and that's the core point. Users don't write PTL or ISL. Let's say you have customer interactions (fetched from Zoom) or product/research notes. The AI structures that into PTL automatically. You see clean, editable notes and visual wireframes + high-fidelity prototypes and prompts. The structured formats exist in the background for token efficiency and interoperability.
- 4b11b4 6 minutes ago
Ah, I see. When I say write... I also mean review and revise.
- asim 5 hours ago
We need a language and a transpiler. Honestly, the LLM has many uses. Agents have many uses. And we are narrowing down how to make them deterministic and predictable for programming machines and software. But that also means we need something beyond natural language for the actual implementation. Yes, we've moved a level up, but engineers are not product managers, so as much as we can define the scope and outline a project like a two-week sprint using scrum or kanban, the reality is that deterministic input for deterministic output is still the way to go.
Just as compilers and higher-level languages opened the doors to the next phase, the LLM manages this translation and compilation, but it's missing a sort of intermediary language, a format that can be much better processed and compiled directly down to machine code. We're talking about LLVM. Why are we asking LLMs to write Go code or Python when we could much better translate an intermediary language to something far more efficient and performant? So I think there's still work to be done.
- wtetzner 3 hours ago
Am I understanding what you're saying correctly?
* We need a deterministic input language
* The LLM generates machine code
Isn't that just a compiler? Why do we need the LLM at that point?
- CuriouslyC 3 hours ago
If the compiler only gets you 80% of the way there, but what it does is sufficient to put the LLM on rails, like programming language mad libs, I'd say that's a win.
- wtetzner 1 hour ago
I feel like I'm still not understanding something. How does making the output from the LLM lower level help?
- CuriouslyC 1 hour ago
Concrete example: Next/Turborepo. These tools make your life easier if you drink some kool-aid. Rather than have the agent scaffold the app, you have the agent use a tool that scaffolds. Agents write specs to manage tools, and those tools scaffold the code; then the agents just sprinkle in business logic that is too bespoke for codegen.
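One way to picture that split (a hypothetical sketch, not the output of any specific scaffolder):

```typescript
// Deterministic scaffold: structure, types, and validation are fixed
// by the codegen tool; only the marked hole is left for the agent.
interface Order {
  id: string;
  items: { sku: string; qty: number; unitPrice: number }[];
}

export function handleCreateOrder(order: Order): { total: number } {
  // --- generated validation, owned by the scaffolder ---
  if (order.items.length === 0) throw new Error("empty order");

  // --- AGENT HOLE: bespoke business logic goes here ---
  // e.g. a volume discount no generic template could know about:
  const total = order.items.reduce(
    (sum, item) =>
      sum + item.qty * item.unitPrice * (item.qty >= 10 ? 0.9 : 1),
    0,
  );
  // --- end agent hole ---

  return { total };
}
```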
- 4b11b4 3 hours ago
Yup, that's the idea. Mad libs are still constrained.
- einrealist 4 hours ago
I am curious to know what he has in mind. This 'process engineering' could be a solution to problems that BPM and COBOL are trying to solve. He might end up with another formalized layer (with rules and constraints for everyone to learn) of indirection that integrates better with LLM interactions (which are also evolving rapidly).
I like the idea that 'code is truth' (as opposed to 'correct'). An AI should be able to use this truth and mutate it according to a specification. If the output of an LLM is incorrect, it is unclear whether the specification is incorrect or if the model itself is incapable (training issue, biases). This is something that 'process engineering' simply cannot solve.
- reg_dunlop 2 hours ago
I'm also curious about what a process engineering abstraction layer looks like. Though the final section does hint at it: more integration of more stakeholders closer to the construction of code.
Though I have to push back on the idea of "code as truth". Thinking about all the layers of abstraction and indirection... hasn't data and the database layer typically been the source of truth?
Maybe I'm missing something in this iteration of the industry where code becomes something other than what it's always been: an intermediary between business and data.
- einrealist 2 hours ago
Yes, the database layer and the data itself are also sources of truth. Code (including code run inside the database, such as SQL, triggers, stored procedures and other native modules) defines behaviour. The data influences behaviour. This is why we can only test code with data that is as close to reality as possible, or even production data.
- helloplanets 5 hours ago
> Let's say your organization wants to add "dark mode" to your site. How does that happen? A site-wide feature usually requires several people to hash out the concerns and explore costs vs. benefits. Does the UI theming support dark mode already? Where will users go to toggle dark mode? What should the default be? If we change the background color we will need to swap the font colors. What about borders and dividers? What about images? What about the company blog, and the FAQ area, which look integrated but run on a different frontend? What about that third-party widget with a static white background?
Only one or two of those questions are actually related to programming (even though most developers wear multiple hats). If an organization has the resources to hold a six-person meeting about adding dark mode, I'd sure hope at least one of them is a designer knowledgeable about UX, because most of those questions are ones they should bring up and have an answer for.
- shuss 2 hours ago
There are many impediments to scaling vibe coding. We built a tool to internally scale vibe coding to vibe engineering: https://mfbt.ai/blog/vibe-coding-vs-vibe-engineering/
- CuriouslyC 3 hours ago
Spec-driven development is great in theory, but it has a lot of issues; I rant about them here: https://sibylline.dev/articles/2026-01-28-problems-with-spec...
I'm working on a tool that uses structured specs as a single source of truth for automated documentation and code generation. Think the good parts of SpecKit + Beads + Obsidian (it's actually vault-compatible) + Backstage, in a reasonably sized TypeScript codebase that leverages existing tools. The interface is almost finalized; I'm polishing the CLI, squashing bugs, and getting good docs ready for a real launch, but if anyone's curious they can poke around the GitHub in the meantime.
One neat trick I'm leveraging to keep the nice human ergonomics of folders + markdown while enforcing structure and type safety is to have a CUE intermediate representation, then serialize to a folder of markdown files, with all object attributes besides name and description thrown into front matter. It's the same pattern used by Obsidian vaults; you can even open the vaults it creates in Obsidian if you want.
This structure lets you lint your specs, and do code generation via template pattern matching to automatically generate code + tests + docs from your project vault, so you have one source of truth that's very human accessible.
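A minimal sketch of that serialization step, under my own assumptions about the shape of a spec entity (illustrative only, not the tool's actual code):

```typescript
import { writeFileSync, mkdirSync } from "node:fs";
import { join } from "node:path";

// A spec entity after validation against the CUE schema.
interface SpecEntity {
  name: string;
  description: string;
  // Everything besides name and description lands in front matter.
  attrs: Record<string, string | number | boolean | string[]>;
}

// Serialize one entity to an Obsidian-compatible markdown file:
// attributes as YAML front matter, name as the H1, description as body.
function writeSpec(vaultDir: string, entity: SpecEntity): void {
  const frontMatter = Object.entries(entity.attrs)
    .map(([key, value]) => `${key}: ${JSON.stringify(value)}`)
    .join("\n");
  const doc = `---\n${frontMatter}\n---\n\n# ${entity.name}\n\n${entity.description}\n`;
  mkdirSync(vaultDir, { recursive: true });
  writeFileSync(join(vaultDir, `${entity.name}.md`), doc);
}

writeSpec("vault/components", {
  name: "auth-service",
  description: "Handles login, sessions, and token refresh.",
  attrs: { kind: "container", owner: "platform-team", tags: ["c4", "backend"] },
});
```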
- Jarwain 2 hours ago
This sounds quite interesting!
I've been exploring spec driven workflows, but from a different angle.
I've been thinking about how to describe systems with a standard format, recursively. Instead of one 10-page doc, you might get ten one-pagers, starting from the highest level of abstraction and recursing down into the parts and subsystems, all following the same format. Building out this Graph of domains provides certain reusable nodes/bits of context.
This then extends to any given bit of software, which is a system in itself composed of the intersections of a lot of different domains and subsystems.
CUE looks interesting and is something I'll be digging into more.
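To make the recursive format concrete, one of those nodes might look something like this (a hypothetical shape, purely illustrative):

```typescript
// Hypothetical recursive "one-pager" node: the same format at every
// level of abstraction, from the whole system down to its parts.
interface SystemPage {
  name: string;
  domain: string;           // which domain this node belongs to
  onePager: string;         // the standard-format summary at this level
  subsystems: SystemPage[]; // recurse into subsystems, same shape throughout
}

// Walking the graph yields reusable, level-appropriate context chunks
// that can be fed to an LLM one abstraction layer at a time.
function collectContext(node: SystemPage, depth = 0): string[] {
  const header = `${"#".repeat(depth + 1)} ${node.name} (${node.domain})`;
  return [
    `${header}\n${node.onePager}`,
    ...node.subsystems.flatMap((child) => collectContext(child, depth + 1)),
  ];
}
```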
- CuriouslyC 2 hours ago
This is my approach. I use the C4 software model; it's pretty general. Entities can be represented either by markdown files, or by folders with README.md files (sort of like an index.js or __init__.py). Folder hierarchy gives you a basic project object model.
- 4b11b4 3 hours ago
Arbiter is looking nice. I like the composability. Pretty tough for anyone except a developer to author it, though, which was maybe an accepted tradeoff.
- CuriouslyC 2 hours ago
Thanks. My philosophy with it is to be minimal and adaptable to people's existing workflows; I went through a lot of iterations to land on something that was both expressive and "human." The Obsidian compatibility was a sign for me that I was on the right track.
- schmuhblaster 2 hours ago
I've recently been experimenting with using a Prolog-based DSL [0] as the missing layer: start with a markdown document, "compile" it into the DSL, and you obtain an "executable spec". Execution still involves LLMs, so it's not entirely deterministic, but it's probably more reliable than hoping your markdown instructions get interpreted the right way.
- epolanski 2 hours ago
> Documentation is hard to maintain because it has no connection to the code. Having an LLM tweak the documentation after every merge is "vibe documenting."
I'm not sure I agree, you don't need to vibe document at all.
What I do in general is:
* write two separate markdown files: business requirements first, implementation later
* keep refining both as the work progresses and stakeholders provide feedback
Before merging I have /docs updated based on requirements and implementation files. New business logic gets included in business docs (what and why), new rules/patterns get merged in architectural/code docs.
Works great, and gets better with every new PR and iteration.
- conartist6 4 hours ago
THE CODE IS THAT LAYER.
If your code does a shit job of capturing the requirements, no amount of markdown will improve your predicament until the code itself is concise enough to be a spec.
Of course you're free to ignore this advice. Lots of the world's code is spaghetti code. You're free to go that direction and reap the reward. Just don't expect to reach any further than mediocrity before your house of cards comes tumbling down, because it turns out "you don't need strong foundations to build tall things anymore" is just abjectly untrue.
- 4b11b4 3 hours ago
This is fundamentally flawed. Code cannot always capture the requirements, nor the reasoning or scenario that led to those requirements.
- conartist6 3 hours ago
Nevertheless, it captures what the system does. That makes it the true spec, even if it's hard to read or to figure out the intent behind it. Writing an intent document is basically having a fap if the system doesn't do what you intend.
- 4b11b4 2 hours ago
Well sure, but the discussion isn't "is the code correct".
- conartist6 2 hours ago
Right, it's whether your aspirations need to be grounded in reality.
Do you actually need to build strong foundations, or can you just throw away all your work every time and start over from scratch?
What confounds me is why I am on Hacker News having to defend the idea of software architecture being a kind of capital. That's not supposed to make me some sort of countercultural revolutionary. Where did everyone go?!
- conartist6 2 hours ago
Just to remind you what Paul Graham said when he defined what the "Hacker" in Hacker News means:
> What and how should not be kept too separate. You're asking for trouble if you try to decide what to do without understanding how to do it. But hacking can certainly be more than just deciding how to implement some spec. At its best, it's creating the spec-- though it turns out the best way to do that is to implement it.
- anupamchugh 4 hours ago
Documentation debt happens when docs and code are decoupled. One fix is to make specs stateful artifacts with change detection. In Shadowbook (disclosure: I built it), specs are files with hashes; when a spec changes, linked issues get flagged and can't be closed until someone acknowledges the drift. That creates a feedback loop between docs and implementation without "vibe documenting." It doesn't solve everything, but it makes contradictions visible and forces a review gate when context shifts.
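The core of that check could be as simple as the following sketch (my illustration, not Shadowbook's actual code):

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// An issue linked to a spec remembers the hash it was written against.
interface LinkedIssue {
  id: string;
  specPath: string;
  specHashAtLink: string; // spec hash when the issue was created
  driftAcknowledged: boolean;
}

function hashSpec(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

// A changed spec flags its issues; they stay open until someone
// acknowledges the drift, which is the docs/implementation review gate.
function canCloseIssue(issue: LinkedIssue): boolean {
  const drifted = hashSpec(issue.specPath) !== issue.specHashAtLink;
  return !drifted || issue.driftAcknowledged;
}
```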
- gf000 33 minutes ago
I have been thinking of something similar for quite some time, though my idea was more like making comments "first-class citizens": in certain formats they can link to each other or to external documents, tracking inconsistent changes.
This might also extend to runtime checks (e.g. some business invariant in the form of an assert that has such a "dependency-tracked" comment)
- 4b11b4 3 hours ago
Shadowbook is a fork of Beads?
- ryanackley 4 hours ago
Creating massive amounts of semi-structured data is the missing layer? I can see an argument for that if you're a non-programmer who wants to create something. Although, at some point, it's a form of programming.
As a developer, I would rather just write the code and let AI write the semi-structured data that explains it. Creating reams of flow charts and stories just so an AI can build something properly sounds like hell to me.
- sublinear 4 hours ago
> Creating reams of flow charts and stories just so an AI can build something properly sounds like hell to me.
Well yeah, that's why businesses have all those other employees. :)
I'm still trying to understand what this whole thread and blog post are about. Is HN finally seeing the light that AI doesn't replace people? Sure if you're determined enough you can run a business all by yourself, but this was always true. I guess AI can make information more accessible, but so does a search engine, and before that so did books.
- vivzkestrel 3 hours ago
One of the reasons why I am working on a semi-deterministic, production-grade TypeScript application generator:
* the lowest layers are the most deterministic and the highest layers are the most vibe-coded
* tooling and configuration sit at the lowest layers; features are at the highest layer
- CuriouslyC 3 hours ago
Good call. This is a hobby of mine as well. What's your approach?
- groestl 4 hours ago
Maybe I'm missing something, or we do it differently here, but I think "spec" is defined too narrowly in that article. Start writing the first part of that document in that meeting, and everything ties together neatly.
- mrbluecoat 5 hours ago
Upvoted for that animated GIF alone. Best visual I've seen of AI coding results.
- nurettin 4 hours ago
It looks like a Trackmania shortcut.