Ask HN: AI productivity gains – do you fire devs or build better products?
47 points by Bleiglanz 8 hours ago | 73 comments
- maccard 44 minutes ago
> try it, you might actually be amazed.
I keep being told this and the tools keep falling at the first hurdle. This morning I asked Claude to use a library to load a toml file in .net and print a value. It immediately explained how it was an easy file format to parse and didn’t need a library. I undid, went back to plan mode and it picked a library, added it and claimed it was done. Except the code didn’t compile.
Three iterations later of trying to get Claude to make it compile (it changed random lines around the clear problematic line) I fixed it by following the example in the readme, and told Claude.
I then asked Claude to parse the rest of the toml file, whereupon it blew away the compile fix I had made.
This isn’t an isolated experience - I hit these fundamental blocking issues with pretty much every attempt to use these tools that isn’t “implement a web page”, and even when it does that it’s not long before it gets tangled up in something or other…
- krastanov 35 minutes ago
This is fascinating to me. I completely believe you and I will not bother you with all the common "but did you try to tell it this or that" responses, but this is such a different experience from mine. I did the exact same task with Claude in the Julia language last week, and everything worked perfectly. I am now in the habit of adding "keep it simple, use only public interfaces, do not use internals, be elegant and extremely minimal in your changes" to all my requests or SKILL.md or AGENTS.md files (because of the occasional failure like the one you described). But generally speaking, such complete failures have been so very rare for me that it is amazing to see that others have had such a completely different experience.
- shafyy 24 minutes ago
It's almost like... LLMs are non-deterministic and hallucinating... Oh wait?!
- DougN7 22 minutes ago
I have similar experiences. It has worked about half the time, but the code has to be pretty simple. I've had many experiences where we work on something complicated for an hour, and it is good compiling code, but then an edge case comes to mind that I ask about, and Claude tells me the whole approach is doomed and will never work for that case. It has even apologized a few times for misleading me :) I feel like it's this weird mix of brilliant moron. But yeah, ask for a simple HTML page with a few fields and it rocks.
- wrs 22 minutes ago
I'm honestly baffled by this. I don't want to tell you "you're holding it wrong" but if this is your normal experience there's something weird happening.
Friday afternoon I made a new directory and told Claude Code I wanted to make a Go proxy so I could have a request/callback HTTP API for a 3rd party service whose official API is only persistent websocket connections. I had it read the service’s API docs, engage in some back and forth to establish the architecture and library choices, and save out a phased implementation plan in plan mode. It implemented it in four phases with passing tests for each, then did live tests against the service in which it debugged its protocol mistakes using curl. Finally I had it do two rounds of code review with fresh context, and it fixed a race condition and made a few things cleaner. Total time, two hours.
I have noticed some people I work with have more trouble, and my vague intuition is it happens when they give Claude too much autonomy. It works better when you tell it what to do, rather than letting it decide. That can be at a pretty high level, though. Basically reduce the problem to a set of well-established subproblems that it’s familiar with. Same as you’d do with a junior developer, really.
- roncesvalles 15 minutes ago
Did you use the best model available to you (Opus 4.6)? There is a world of difference between using the highest model vs the fast one. The fast ones are basically useless and it's a shame that all these tools default to them.
- HoyaSaxa 45 minutes ago
I think most public companies will take the short term profits and startups will be given a huge opportunity to take market share as a result.
At my company, we are maintaining our hiring plan (I'm the decision maker). We have never been more excited at our permission to win against the incumbents in our market. At the same time, I've never been more concerned about other startups giving us a real run. I think we will see a bit of an arms race for the best talent as a result.
Productivity without clear vision, strategy and user feedback loops is meaningless. But those startups that are able to harness the productivity gains to deliver more complete and polished solutions that solve real problems for their users will be unstoppable.
We've always seen big gains by taking a team of say 8 and splitting it into 2 teams of 4. I think the major difference is that now we will probably split teams of 4 into 2 teams of 2 with clearer remits. I don't necessarily want them to deliver more features. But I do want them to deliver features with far fewer caveats at a higher quality and then iterate more on those.
Humans that consume the software will become the bottlenecks of change!
- rsynnott 4 hours ago
> boilerplate
Ruby on Rails and its imitators blew away tons of boilerplate. Despite some hype at the time about a productivity revolution, it didn’t _really_ change that much.
> , libraries, build-tools,
Unsure what you mean by this; what bearing do our friends the magic robots have on these?
> and refactoring
Again, IntelliJ did not really cause a productivity revolution by making refactoring trivial about 20 years ago. Also, refactoring is kind of a solved problem, due to IntelliJ et al; what’s an LLM getting you there that decent deterministic tooling doesn’t?
- rileymichael 59 minutes ago
couldn't have said it better. all of the people clamoring on about eliminating the boilerplate they've been writing + enabling refactoring have had their heads in the sand for the past two decades. so yeah, i'm sure it does seem revolutionary to them!
- maccard 41 minutes ago
There have been a handful of leaps - copilot was able to look at open files and stub out a new service in my custom framework, including adding tests. It's not a multiplier but it certainly helps.
- rileymichael 36 minutes ago
most frameworks have CLIs / IDE plugins that do the same (plus models, database integration, etc.) deterministically. i've built many in-house versions for internal frameworks over the years. if you were writing a ton of boilerplate prior to LLMs, that was on you.
- maccard 31 minutes ago
Have they? I've used tools that mostly do it, but they require manually writing templates for the frameworks. In internal apps my experience has been that these get left behind as the service implementations change, and it ends up with "copy your favourite service that you know works".
- rileymichael 18 minutes ago
> they require manually writing templates for the frameworks
the ones i've used come with defaults that you can then customize. here are some of the better ones:
- https://guides.rubyonrails.org/command_line.html#generating-...
- https://hexdocs.pm/phoenix/Mix.Tasks.Phx.Gen.html
- https://laravel.com/docs/13.x/artisan#stub-customization
- https://learn.microsoft.com/en-us/aspnet/core/fundamentals/t...
> my experience has been these get left behind as the service implementations change
yeah i've definitely seen this, ultimately it comes down to your culture / ensuring time is invested in devex. an approach that helps avoid drift is generating directly from an _actual_ project instead of using something like yeoman, but that's quite involved
- epicureanideal 55 minutes ago
And for a lot of the transformation tasks people now hand to AI, I've long been using even clever regex search/replace, and with a few minutes of small adjustments afterward I have a 100% deterministic (or 95% deterministic and 5% manually human-reviewed and edited) process for transforming code. Although of course I haven't tried that cross-language, etc.
And of course, we didn't see a massive layoff after the introduction of say, StackOverflow, or DreamWeaver, or jQuery vs raw JS, Twitter Bootstrap, etc.
- elvis10ten 43 minutes ago
I just had a relevant experience. I asked Claude to add “trace(‘$FunctionName’) {}” to all Composable functions in my app. Claude spent some time doing something. In between, I was like shoot, I could just do a deterministic regex match and replace.
- rileymichael 28 minutes ago
structural search and replace in intellij is a superpower (within a single repo). for polyrepo setups, openrewrite is great. add in an orchestrator (simple enough to build one like sourcegraph's batch changes) and you can manage hundreds of repositories in a deterministic, testable way.
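(As a concrete illustration, the trace-insertion task from the sibling comment reduces to a few lines of deterministic regex. A hedged Python sketch; the Kotlin snippet is invented, and the purely textual approach only inserts a call rather than wrapping the body, which is exactly where structural tools like IntelliJ's search/replace or OpenRewrite earn their keep:)

```python
import re

# Invented Kotlin source standing in for the Compose app.
kotlin_source = """
@Composable
fun Greeting(name: String) {
    Text("Hello, $name")
}

@Composable
fun Footer() {
    Text("Bye")
}
"""

# Insert trace("<FunctionName>") as the first statement of each
# @Composable function. Purely textual: it does not wrap the body,
# so it only approximates the real task.
pattern = re.compile(r'(@Composable\s*\nfun\s+(\w+)\([^)]*\)\s*\{)')
rewritten = pattern.sub(
    lambda m: f'{m.group(1)}\n    trace("{m.group(2)}")', kotlin_source
)
print(rewritten)
```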
- jjmarr 18 minutes ago
It depends on the relative value of experience/skill to your team.
If your team is "throw juniors into the enterprise boilerplate coal mine" and you expect talent to eventually quit, then laying people off might be the right move.
If your team is "highly skilled devs try to invent new products", then you should focus on shipping more.
- hirako2000 49 minutes ago
Assuming you are primarily selling software.
Situation a/ LLMs increase developer productivity: you hire more developers as you cash in the profit. If you don't, your competitor will.
b/ LLMs don't increase productivity: you keep cruising, and rejoice watching some competitors lay people off.
Reality shows dissonance with these supposedly only possible scenarios. Absurd decision making, a mistake? No mistake. Many tech companies are facing difficulties; they need to lose weight to remain profitable and appease shareholders' demands for bigger margins.
How do you do that without a backlash? AI is replacing developers; Anthropic's CEO said engineers don't write code anymore, the role will be obsolete in 6 months. So it naturally makes sense that we have to let some of them go. And if the prophecy doesn't come true, well, nobody ever got fired for buying IBM.
- lateforwork 48 minutes ago
You still need humans to manage the whole lifecycle, including monitoring the live site, being on-call, handling incidents, triaging bugs, deploying fixes, supporting users and so on.
For greenfield development you don't need as many software engineers. Some developers (the top 10%) are still needed to guide AI and make architectural decisions, but the remaining 90% will work on the lifecycle management tasks mentioned above.
The productivity gains can be used to produce more software, and if you are able to sell the software you produce, that should result in a revenue boost. But if you produce more than you can sell then some people will be laid off.
- animal531 3 hours ago
I use it near daily and there is definitely a positive there, BUT it's nothing like what the OP statement makes it out to be.
If it is writing both the code and the tests, then you're going to find that its tests are remarkable: they just work. At least until you deploy to a live state and start testing for yourself. Then you'll notice that it's mostly only testing the exact code that it wrote; it's not confrontational or trying to find errors, and it already assumes that the code is going to work. It won't ever come up with the majority of breaking cases that a developer will by itself; you will need to guide it. Also, while fixing those, the odds of it introducing other breaking changes are decent, and after enough prompts you are going to lose coherency no matter what you do.
It definitely makes a lot of boilerplate code easier, but what you don't notice is that it's just moving the hard-to-find problems into hidden new areas. That fancy code it wrote maybe doesn't take the building blocks and lower levels, such as database optimization, into account. Even for a simple application, a half-decent developer can create something that will run quite a bit faster. If you start bringing these problems to it, it might be able to optimize them, but the amount of time that's going to take is non-negligible.
It takes developers time to sit with code, to learn it along with the problem space and how to tie them together effectively. If you take that away there is no learning; you're just the monkey copy-pasting the produced output from the black box and hoping that you get a result that works. Even worse, every step you take doesn't bring you any closer to the solution; it's pretty much random.
So what is it good for? It can read, "understand", translate, write and explain things to a sufficient degree much faster than we humans can. But if you are (at the moment) trusting it with anything past the method level for code, you're just shooting yourself in the foot; you're just not feeling the pain until later. In a day you can have it generate, for example, a whole website, backend, db etc. for your new business idea, but that's not a "product"; it might as well be a promotional video that you throw away once you've used it to impress the investors. For now that might still work, but people are already catching on and beginning to wise up.
- anon7725 38 minutes ago
This is the most insightful comment in the thread.
- aurareturn 8 hours ago
In certain industries, increasing productivity by 90% does not mean a 90% increase in profit. This is because growth depends on market TAM and growth rate.
Another way of increasing profit is to simply reduce your headcount by 90% while keeping the same profit.*
Hence, I think some companies will keep downsizing. Some companies will hire. It depends a lot.
*Assuming 90% productivity increase.
- muzani 5 hours ago
In industries like oil, doubling productivity would mean reducing the expected life span. Costs tend to catch up with profits, especially due to taxes. Layoffs happen inevitably.
Is it the same with tech? Facebook has 3 billion monthly active users. No amount of tech will bring that up to 6 billion. If you were to double the amount of time someone spends on Facebook, or double the ads they see or double the click through rate, what does that really mean?
- hirako2000 34 minutes ago
That's assuming a company is constrained from expanding its portfolio.
Take the example of Facebook. They are in social media, messaging, AI, VR/AR hardware and software, and a few other things, including the meta universe or whatever that was; now they're left with the name. Facebook isn't delivering or successful in all its ventures; it knows that, and it keeps investing in other segments.
More productivity would mean at least diversifying. They have some of the best engineers; it would make no sense not to attempt to hit the jackpot by playing more machines.
What fewer people talk about is that the entire tech industry is tertiary services: ads, entertainment, communication, etc. If/when the hard industries take a hit, tertiary takes a hit. If it isn't clear to you that the overall economy has already started to take some irreversible dents, and that those will accelerate, know that capital is well aware.
Or we can continue the wishful thinking and seek comfort in the idea that monetary tightening is just temporary, that investments will flow more into tangent ventures, that growth is around the corner, and that the U.S. still is and will remain the world's strongest economy.
- aurareturn 54 minutes ago
Yeah, even if Meta engineers gain a 90% increase in productivity, I doubt Meta can increase revenue by 90% more than in the previous environment. There's just a cap to how much time people want to spend on social media.
I think most companies are making the right call by downsizing instead of staying same size. Let people go to where there is more potential for growth.
- stephen_cagle 14 minutes ago
> you start off checking every diff like a hawk, expecting it to break things, but honestly, soon you see it's not necessary most of the time.
My own experience...
I've tried approaching vibe coding in at least 3 different ways. At first I wrote a system that had specs (markdown files) with a 1-to-1 mapping between each spec and a matching Python module. I only ever edited the spec, treating the code itself as an opaque thing that I ignored (though I defined the interfaces). It kind of worked, though I realized how distinct the difference really is between a spec that communicates intent and a spec that specifies detail.
From this, I felt that maybe I needed to stay closer to the code, but just use the LLM as a bicycle for the mind. So I tried: write the code myself, and integrate an LLM into Emacs so that I could have a discussion with it about individual code, using it for criticism and guidance, not to actually generate code. It also worked (though I never wrote anything more than small snippets of Elisp with it). I learned more doing things this way, though I have the nagging suspicion that I was actually moving slower than I theoretically could have. I think this is another valid way.
I'm currently experimenting with a 100% vibe coded project (https://boltread.com). I mostly just drive it through interaction in the terminal, with "specs" that really just act as intent (not specifications). I find the temptation to get out of the outside-critic mode and into just looking at the code is quite strong. I have resisted it to date (I want to experiment with what it feels like to be a vibe coder who cannot program), to judge whether I realistically need to be concerned about it. Like LLM-generated things in general, the project gets closer and closer to what I want, but it is like shaping mud: you can put detail into something, but it won't stay that way over time; its sharp detail gets reduced to smooth curves as you switch to putting detail elsewhere. I am not 100% sure how to deal with that issue.
My current thought is that we have failed to find a good way of switching between the "macro" (vibed) and the "micro" (hand coded) views of LLM development. It's almost like we need modules (blast chambers?) for different parts of any software project, where we can switch to doing things by hand (or at least with more intent) when necessary, and do things by vibe when not. Striking the balance that nets the greater output is quite challenging, and it may not even be that there is an optimal intersection; you may simply be exchanging immediate change for future flexibility of the software.
- matt_s 5 hours ago
Developers are going to be more productive, just not how you think. If history is going to rhyme, then the software industry will enter into a self-serving productivity craze building all sorts of software tooling, frameworks, ralph wiggum loop variants, MCPs, etc., much like the surge in JS frameworks and variants in the past. Most of those things will not have any business value. Software devs, myself included, love to do things "because I can" and not necessarily because they should.
Smart organizations will not just deliver better products but likely start products that they were hesitant to start before because the cost of starting is a lot closer to zero. Smart engineering leadership will encourage developers into delivering value and not self-serving, endless iterations of tooling enhancements, etc.
If I was a CTO and my competitor Y fired 90% of their devs, I'd try to secure funding to hire their top talent and retain them. The vitriol alone could fuel some interesting creations and when competitor Y realizes things later, their top talent will have moved on.
- MichaelRo 4 hours ago
>> then the software industry will enter into a self-serving productivity craze building all sorts of software tooling, frameworks
>> Smart organizations will not just deliver better products but likely start products [...]
This is not the 90s anymore when low hanging fruit was everywhere ready to be picked. We have everything under the sun now and more.
The problem with bullshit apps is not that they took you 5 months to build. What you build now in 5 minutes is still bullshit. Most of the remaining work is bullshit jobs: spinning up useless "features" and frameworks that nobody needs and shoving them down the throats of customers who never asked for them. Now it's possible to dig holes and fill them back in (do pointless work) at a much improved pace thanks to AI.
- ashwinnair99 47 minutes ago
The companies quietly doing the firing will say they're doing the building. The answer you get depends entirely on who you ask and what they're trying to justify.
- dare944 4 hours ago
False dichotomy. If your company is at the point of firing lots of people "to save a buck", it's way past the point of caring about delivering a better product.
- KellyCriterion 26 minutes ago
Third option:
Not hiring someone?
- jaen 5 hours ago
That's assuming every developer can get the same AI efficiency boost and contribute meaningfully to any feature, which is unfortunately not really the case.
Seniors can adjust, but eg. junior frontend-only devs might be doomed in both situations, as they might not be able to contribute enough to business-critical features to justify their costs and most frontend-related tasks will be taken over by the "10x" seniors.
- giantg2 5 hours ago
I feel like the ideas have always been the tough part. Finding novel ideas with a good return is extremely tough.
- marcyb5st 4 hours ago
If I were running the company, I'd mostly focus on better products, with the exception of firing those who don't adopt the technology.
If it is a big company the answer is and will always be: whatever makes the stock price rise the most.
- elvis10ten 30 minutes ago
> with the exception of firing those that don't adopt the technology.
This is a crazy take. Even if said people are matching or exceeding the outcome of those using the technology?
I’m not in this group. But the closest analog to what you are saying is firing people for not using a specific IDE.
- conartist6 5 hours ago
Gotta fire everyone, or else "too many cooks" will mean that even those temporary productivity gains go up in smoke.
Remember sometimes the most productive thing to have is not money or people but time with your ideas.
- conartist6 5 hours ago
What's so sad to see is people excited about making something selling away the time they would have with their ideas. They're paying money to not use their brain to contemplate the thing they should be the foremost expert in the world on, and they're excited to be paying more and more for each modicum of ignorance and mediocrity dispensed.
- GenerWork 4 hours ago
> do you fire devs or build better products?
I think that it's more along the lines of "do you fire people" instead of just "do you fire devs". Fewer devs means less of a need for PMs, so they can be let go as well, and maybe with the rise of AI assisted design tools, you don't need as many UX people, so you let some of them go as well.
As for building better products, I feel like that's a completely different topic than using AI for productivity gains, but only because at the end of the day you need buy in from upper management in order to build the features/redo existing features/both that will make the product better. I should also mention I'm viewing this from the position of someone who works at an established company and not a startup, so it may differ.
- gamblor956 1 hour ago
AI tooling does not provide productivity gains unless you consider it productive to skip the boilerplate portion of software development (which you can already do by using a framework), or you never plan to get past the MVP stage of a product, as refactoring the AI spaghetti would take several magnitudes more work than doing it with humans from the beginning.
Amazon has demonstrated that it takes just as long, or longer, to have senior devs review LLM output than it would to just have the senior devs do the programming in the first place. But now your senior devs are wasted on reviewing instead of developing or engineering. Amazon, Microsoft, Google, Salesforce, and Palantir have all suffered multiple losses in the tens of millions (or more) due to AI output issues. Now that Microsoft has finally realized how bad LLMs really are at generating useful output, they've begun removing AI functionality from Windows.
Product quality matters more than time to market. Especially in tech, the first-to-market is almost never the company that dominates, so it's truly bizarre that VCs are always so focused on their investments trying to be first to market instead of best to market.
If Competitor Y just fired 90% of their developers, I would have a toast with my entire human team. And a few months later, we'd own the market with our superior product.
- bendmorris 30 minutes ago
It's disappointing that this is clearly being downvoted due to disagreement - it's a valid perspective. We have very little evidence of the overall impact of aggressively generating code "in the wild", and plenty of bad examples. No one knows what this ends up looking like as it continues to meet reality, but plenty are taking a large productivity improvement as a given.
- lordkrandel 7 hours ago
Why do people keep talking about AI as if it actually worked? I still don't see ANY proof that it doesn't generate a totally unmaintainable, insecure mess that, since you didn't develop it, you don't know how to fix. Like running an F1 Ferrari on a countryside road: useless and dangerous.
- tyleo 5 hours ago
Because it's working for a lot of people. There are people getting value from these products right now. I'm getting value myself and I know several other folks at work who are getting value.
I'm not sure what your circumstances are but even if it's not true for you, it's true for many other people.
- pydry 5 hours ago
It's interesting that the people I encounter IRL who "get the most value" tend to be the devs who couldn't distinguish well written code from slop in the first place.
People online with identical views to theirs all assure me that they're all highly skilled, though.
Meanwhile I've been experimenting using AI for shopping, and all of them so far are horrendous. They can't handle basic queries without tripping over themselves.
- hermannj314 31 minutes ago
If you are a 2600 chess player, a bot that plays 1800 chess is a horrendous chess player.
But you can understand why all the 1700-and-below chess players say it is good, and that using it for eval is making them better?
Don't worry, AI will replace you one day, you are just smarter than most of us so you don't see it yet.
- AStrangeMorrow 3 hours ago
Idk, basically everyone in my org has seen some good value out of it. We have people complaining about limitations, but they would still rather have the tooling than not.
For me the main difference is that now some people can explain what their code does, while some others can only explain what it's meant to achieve.
- tyleo 4 hours ago
> I've been experimenting using AI for shopping
This is an interesting choice for a first experiment. I wouldn't personally base AI's utility for all other things on its utility for shopping.
- pydry 4 hours ago
It's not a first experiment, it's experiment 50 or 60, and a reaction to AI gaslighting.
Most people don't really understand coding, but shopping is a far simpler task, so it's easier to see how and where it fails (i.e. with even mildly complex instructions).
- tyleo 4 hours ago
Do you mind sharing examples of the prompts you are using?
- giantg2 5 hours ago
I see more value on the business side than the tech side. Ask the AI to transcribe images, write an email, parse some Excel data, create a prototype, etc. Some of which you might have hired a tech resource to write a script for.
On the tech side I see it saving some time with stuff like mock data creation, writing boilerplate, etc. You still have to review it like it's a junior's. You still have to think about the requirements and design to provide a detailed understanding to them (AI or junior).
I don't think either of these will provide 90% productivity gains. Maybe 25-50% depending on the job.
- AStrangeMorrow 3 hours ago
For me, the main thing is to never have it write anything based only on the goal (what the end result should look like and how it should behave), but always on the implementation details (and coding practices that I like).
Sure, it is not as fast to understand as code I wrote myself. But at least I mostly just need to confirm how it implemented what I asked, not figure out WHAT it even decided to implement in the first place.
And in my org, people move around projects quite a bit. It hasn't been uncommon for me to jump into projects with 50k+ lines of code a few times a year to help implement a tricky feature, or to help optimize things when they run too slow. That's a lot of code to understand. Depending on who wrote it, sometimes it is simple: one or two files to understand, clean code. Sometimes it is an interconnected mess, and imho often way less organized than AI-generated code.
And the same goes for the review process, with lots of new code to understand. At least with AI you are fed the changes at a slower pace.
- alexjplant 3 hours ago
> Why do people keep talking about AI as if it actually worked?
Because it does.
> I still don't see ANY proof that it doesn't generate a totally unmaintainable, insecure mess that, since you didn't develop it, you don't know how to fix.
I wouldn't know since it's been years since I've tried but I'd imagine that Claude Code would indeed generate a half-baked Next.js monstrosity if one-shot and left to its own devices. Being the learned software engineer I am, however, I provide it plenty of context about architecture and conventions in a bootstrapped codebase and it (mostly) obeys them. It still makes mistakes frequently but it's not an exaggeration to say that I can give it a list of fields with validation rules and query patterns and it'll build me CRUD pages in a fraction of the time it'd take me to do so.
I can also give it a list of sundry small improvements to make and it'll do the same, e.g. I can iterate on domain stuff while it fixes a bunch of tiny UX bugs. It's great.
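(The "list of fields with validation rules" input described above boils down to something like the following; a hypothetical Python sketch, with field names and rules invented. A real CRUD generator would emit forms and queries on top of such a model:)

```python
from dataclasses import dataclass

@dataclass
class Customer:
    # Fields plus the validation rules you'd hand to the assistant.
    email: str   # required, must contain "@"
    age: int     # must be within 0-130

    def __post_init__(self):
        # Enforce the stated rules at construction time.
        if "@" not in self.email:
            raise ValueError("email must contain '@'")
        if not 0 <= self.age <= 130:
            raise ValueError("age must be between 0 and 130")
```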
- thefounder 5 hours ago
You can launch a new product in one month instead of 12 months. I think this works best for startups, where the risk tolerance is high, but less than ideally for companies such as Amazon, where system failure has high costs.
- dominotw 5 hours ago
So where are these one-man products launched in a month?
Not talking about toys or vibecoded crap no one uses.
- deaux 4 hours ago
They're here, I made one. Not a toy or vibecoded crap; people got immediate value. Not planning to doxx myself by linking it. This was more than a year ago, when models weren't even as good yet. A year later it has thousands of consistent monthly users, and it only keeps growing. It's nothing compared to VC startups, but for a solo dev, made in a month? Again, it's not a toy; it offers new functionality that simply didn't exist yet and it improves people's lives. The reality is that there's no chance I would've done it without LLMs.
- UqWBcuFx6NV4r 3 hours ago
This is delusional. Are you someone that tends to have your finger on the pulse of every piece of software ever released, as it's being released, with knowledge about how it's built? You're not.
Nobody is.
Perhaps nobody cares to “convince you” and “win you over”, because…why? Why do we all have to spoon feed this one to you while you kick and scream every step of the way?
If you don’t believe it, so be it.
- dominotw 2 hours ago
[flagged]
- fragmede 5 hours ago
Aside from the things people have launched that don't count because I'm right, where are the things that have been launched?
- dominotw 4 hours ago
Apps with actual users is too high of a bar now?
- Throaway1975123 4 hours ago
Probs look in the game space to see a bunch of muggles making money off of "vibe coded" software.
- lukev 4 hours ago
I agree, but absence of evidence is not evidence of absence, and we currently have a lot of developers who feel very productive right now.
We are very much in need of an actual way to measure real economic impact of AI-assisted coding, over both shorter and longer time horizons.
There's been an absolute rash of vibecoded startups. Are we seeing better success rates or sales across the industry?
[-]- ThrowawayR2 4 hours ago> "absence of evidence is not evidence of absence"
That's the same false argument that the religious have offered for their beliefs and was debunked by Bertrand Russell's teapot argument: https://en.wikipedia.org/wiki/Russell%27s_teapot
- fbrncci 4 hours agoIt works. You’re just not doing it right if it doesn’t work for you. It’s hard to convince me otherwise at this point.
- UqWBcuFx6NV4r 3 hours agoPerson that can’t use a hammer “hasn’t seen any proof” that hammers work.
- Kim_Bruning 5 hours agoConsider it this way as a reasoning step: we've invented a cross compiler that can handle natural languages too. That's definitely useful, but it's still GIGO, so you still need your brain.
- K0balt 3 hours agoI’ve been using it to develop firmware in C++, typically around 10-20 KLOC. Current projects use sensors, wire protocols, RF systems, swarm networks, that kind of stuff integrated into the firmware.
If you use it correctly, you can get better quality, more maintainable code than 75% of devs will turn in on a PR. The “one weird trick” seems to be to specify, specify, specify. First you use the LLM to help you write a spec (or to document the code, if it’s pre-existing). Make sure the spec is correct and matches the user story and edge cases; the LLM is good at helping here too. Then break down separations of concerns, APIs, and interfaces. Have it build a dependency graph. After each step, have it reevaluate the entire stack to make sure it is clear, clean, and self-consistent.
Every step of this is basically the AI doing the whole thing, just with guidance and feedback.
Once you’ve got the documentation needed to build an actual plan for implementation, have it do that. Each step, you go back as far as relevant to reevaluate. Compare the spec to the implementation plan, close the circle. Then have it write the bones, all the files and interfaces, without actual implementations. Then have it reevaluate the dependency graph and the plan and the file structure together. Then start implementing the plan, building testing jigs along the way.
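Sketched out, the sequence above looks roughly like this (the prompts are paraphrased, not verbatim, and the file names are just placeholders I’m using for illustration, not anything the tooling requires):

```text
1. "Help me draft SPEC.md for <feature>: user story, edge cases, error handling."
2. "Review SPEC.md against the user story. List any contradictions or gaps."
3. "Break SPEC.md into modules: separation of concerns, public APIs, interfaces.
    Build a dependency graph."
4. "Reevaluate the whole stack. Is it clear, clean, and self-consistent?"
5. "Write PLAN.md: implementation order derived from the dependency graph."
6. "Create the skeleton: every file and interface, no implementations yet.
    Reevaluate the dependency graph, the plan, and the file structure together."
7. "Implement step N of PLAN.md plus a test jig for it. Touch nothing outside it."
```

The point of the numbered loop is that every step ends with a reevaluation pass before the next one begins, so drift gets caught while it’s still cheap to fix.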
You just build software the way you used to, but you use the LLM to do most of the work along the way. Every so often, you’ll run into something that doesn’t pass the smell test and you’ll give it a nudge in the right direction.
Think of it as a junior dev that graduated top of every class ever, and types 1000wpm.
Even after all of that, I’m turning out better code, better documentation, and better products, and doing what used to take 2 devs a month, in 3 or 4 days on my own.
On the app development side of our business, the productivity gain is also strong. I can’t really speak to code quality there, but I can say we get updates in hours instead of days, and there are fewer bugs in the implementations. They say the code is better documented and easier to follow, because they’re not under pressure to ship hacky prototype code as if it were production.
On the current project, our team is half the size it would have been last year, and we are moving about 4x as fast. What doesn’t seem to scale for us is size: if we doubled the team, I think the gains would be very small compared to the costs. Velocity seems to be throttled more by external factors.
I really don’t understand where people are coming from when they say it doesn’t work. I’m not sure if it’s because they haven’t tried a real workflow, or maybe haven’t really tried at all, or they are simply “holding it wrong.” It works. But you still need seasoned engineers to manage it and catch the occasional bad judgment or deviation from the intent.
If you just let it run, it will definitely go off the rails and you’ll end up with a twisted mess that no one can debug. But if you write the code incrementally through a specification-evaluation loop as you descend from idea to implementation, you’ll end up winning.
As a side note, and this is a little strange and I might be wrong because it’s hard to quantify and all vibes, but:
I have the AI keep a journal of its observations and general impressions, sort of the “meta” without the technical details. I frame this to it as a continuation of “awareness” across new sessions.
I have a short set of “onboarding” documents that describe the vision, ethos, and goals of the project. I have it read the journal and the onboarding docs at the beginning of each session.
I frame my work with the AI as working with a “collaborator” rather than a tool. At the end of the day, I remind it to update its journal with reflections on the day’s work. It’s total anthropomorphism, obviously, but it seems to inspire “trust” in the relationship, and it really seems to up-level the effort the AI puts in. It kinda makes sense, LLMs being modeled on human activity.
FWIW, I’m not asserting anything here about the nature of machine intelligence, I’m targeting what seems to create the best result. Eventually we will have to grapple with this I imagine, but that’s not today.
When I have forgotten to warm-start the session, I find that I am rejecting much more of the work. I think this would be worth someone doing an actual study to see if it is real or some kind of irresistible cognitive bias.
I find that the work produced is much less prone to going off the rails or taking shortcuts when I have this in the context, and by reading the journal I get ideas on where and how to do a better job of steering and nudging to get better results. It’s like a review system for my prompting. The onboarding docs seem to help keep the model working towards the big picture? Idk.
This “system” with the journal and onboarding only seems to work with some models. GPT5 for example doesn’t seem to benefit from the journal and sometimes gets into a very creepy vibe. I think it might be optimized for creating some kind of “relationship” with the user.
[-]- systemsweird 3 minutes agoThis is one of the best descriptions of using AI effectively I’ve read. It becomes clear that using AI effectively is about planning, architecture, and directing another intelligent agent. It’s essential to get things right at each high level step before drilling in deeper as you clearly outlined.
I suspect you either already were or would’ve been great at leading real human developers, not just AI agents. Directing an AI towards good results is shockingly similar to directing people. I think that’s a big thing separating those getting great results with AI from those claiming it simply does not work. Not everyone is good at high-level planning, architecture, and directing others. But those who already had those skills basically hit the ground running with AI.
There are many people working as software engineers who are just really great at writing code, but may be lacking in the other skills needed to effectively use AI. They’re the angry ones lamenting the loss of craft, and rightfully so, but their experience with AI doesn’t change the shift that’s happening.
- the_real_cher 6 hours agoYeah it's great, but as the OP said, you have to watch every PR like a hawk[-]
- FromTheFirstIn 4 hours agoOP said you stop doing that at some point
- _wire_ 8 hours ago"I just got my third coffee and I'm feeling really good about the quality of this code. I don't even bother to look at it. Keeps my tests simple to not test at all. You know, 'in theory'. Of course when the architecture gets genuinely tangled... Just keep your IDE open so the code knows where to go. Whoa, too much caffeine, super sleepy..."
Terafab is suddenly making so much sense!
[-]- Throaway1975123 4 hours ago"ugh, what a hard day at work, I had to prompt the LLM for 8 hours!!"
- iExploder 7 hours agoyes, you fire if you are a burned-out company that only needs to maintain its product and slowly die...
you hire more if you are in growth mode and have new ideas you just never had the chance to implement, because they were not practical or feasible at that level of tech (non-assisted humans clicking out code and taking sick leaves)
- gedy 3 hours agoProductivity really means nothing across companies and the software industry. So we code faster, but honestly the problem I see is that companies ask us to code stupid or worthless things in many cases.
CTO is rewriting company platform (by himself with AI) and is convinced it's 100x productivity. But when you step back and look at the broader picture, he's rewriting what something like Rails, .NET, or Spring gave us 15-20 years ago? It's just in languages and code styles he is (only) familiar with. That's not 100x for the business, sorry...
- j45 4 hours agoYou have your devs be engineering managers over the tools.
- NotGMan 4 hours agoWhy not both?
- ryguz 2 hours ago[dead]
- Remi_Etien 7 hours ago[dead]
- cochinescu 6 hours ago[dead]