OpenAI's H1 2025: $4.3B in income, $13.5B in loss(techinasia.com)
543 points by breadsniffer 3 days ago | 683 comments
- nojs 3 days agoI think people are massively underestimating the money they will make from ads in the future.
They generated $4.3B in revenue without any advertising program to monetise their 700 million weekly active users, most of whom use the free product.
Google earns essentially all of its revenue from ads, $264B in 2024. ChatGPT has more consumer trust than Google at this point, and numerous ways of inserting sponsored results, which they’re starting to experiment with via the recent announcement of direct checkout.
The biggest concern IMO is how good the open-weight models coming out of China are on consumer hardware. But as long as OpenAI remains the go-to for the average consumer, they’ll be fine.
[-]- asa400 2 days agoWhat is OpenAI's competitive moat? There's no product stickiness here.
What prevents people from just using Google, who can build AI stuff into their existing massive search/ads/video/email/browser infrastructure?
Normal, non-technical users can't tell the difference between these models at all, so their usage numbers are highly dependent on marketing. Google has massive distribution with world-wide brands that people already know, trust, and pay for, especially in enterprise.
Google doesn't have to go to the private markets to raise capital, they can spend as much of their own money as they like to market the living hell out of this stuff, just like they did with Chrome. The clock is running on OpenAI. At some point OpenAI's investors are going to want their money back.
I'm not saying Google is going to win, but if I had to bet on which company's money runs out faster, I'm not betting against Google.
[-]- rajaman0 2 days agoConsumer brand quality is so massively underrated by tech people.
ChatGPT has a phenomenal brand. That's worth 100x more than "product stickiness". They have 700 million weekly users and growing much faster than Google.
I think your points on Google being well positioned are apt for capitalization reasons, but only one company has consumer mindshare on "AI" and it's the one with "ai" in its name.
[-]- sharkweek 2 days agoI’ve got “normie” friends who I’d bet don’t even know that what Google puts at the top of their search results is “AI” output; they assume it’s just some extension of the normal search results we’ve all gotten used to (knowledge graph).
Every one of them refers to using “ChatGPT” when talking about AI.
How likely is it to stay that way? No idea, but OpenAI has clearly captured a notable amount of mindshare in this new era.
[-]- bashtoni 2 days agoIn the UK, everyone refers to a vacuum as a 'hoover'. They are not the dominant vacuum brand there despite the massive name recognition.[-]
- amonith 2 days agoSame with "Pampers" in Poland. Everyone says "pampersy" when referring to just generic diapers. Almost nobody buys the literal "Pampers" brand.
- InsideOutSanta 2 days agoI'm not sure if physical products are analogous to internet services. If all it took to vacuum your house was typing "Hoover" into a browser, and everyone called vacuums "a Hoover," then I would expect Hoover to have 90% of the vacuum market share.
But since buying a vacuum usually involves going to a store, looking at available devices, and paying for them, the value of a brand name is less significant.
[-]- abustamam 2 days agoPre-pandemic, at least in my social circles, "Skype" was the term for video calling. "Hey, wanna Skype?" and we'd hop on a discord call.
Post-pandemic, at work and such, "Zoom" has become synonymous with the work call. Whether it's via Slack or Google Meet, or even Zoom itself, we use the term Zoom.
I don't know what Skype's (pre-pandemic) or Zoom's market share is, but these common terms clearly do exist for software.
- lkramer 2 days agotheir own fault :) https://en.wikipedia.org/wiki/Hoover_free_flights_promotion
- MrDisposable 2 days ago'Pampers' and 'Xerox' in Russia.
- cwsx 2 days agoBAND-AID is another one[-]
- card_zero 2 days agoAnd "generic trademark" is the Wikipedia article.
https://en.wikipedia.org/wiki/Generic_trademark
Huh, bubble wrap, even.
- jb3689 2 days agoOn the other hand, no one cares about Velcro or Tupperware[-]
- Naracion 1 day agohttps://www.youtube.com/watch?v=rRi8LptvFZY
Video description, from the Velcro brand YouTube channel:
Our Velcro Brand Companies legal team decided to clear a few things up about using the VELCRO® trademark correctly – because they’re lawyers and that’s what they do. When you use “velcro” as a noun or a verb (e.g., velcro shoes), you diminish the importance of our brand and our lawyers lose their insert fastening sound. So please, do not say “velcro shoes” (or “velcro wallet” or “velcro gloves”) - we repeat “velcro” is not a noun or a verb. VELCRO® is our brand. #dontsayvelcro
- philipallstar 2 days agoTannoy another.
- chii 2 days agowhen people started referring to searching the internet as googling, Google knew its brand had made it.
It's the same with ChatGPT.
[-]- wltr 2 days agoEven I often say I chatgeepeeteed the result, in the same fashion as I keep saying I googled the result when I actually used Duck Duck Go. I could ask another LLM provider, but I have no idea how to communicate that properly to non-technical folks. Heck, I don’t want to communicate that _properly_ to tech peers either. I don’t like these pedantic phrases: ‘well, actually… that wasn’t Google, I used DDG for that.’ Sometimes I can say ‘web search,’ but ‘I googled that’ is just the more natural thing to say.
Same here. I tried saying ‘I asked LLM’ or ‘I asked AI’ but that doesn’t sound right for me. So, in most conversations I say ‘I asked Chat GPT’ and in most of these situations, it feels like the exact provider does not matter, since essentially they are very similar in their nature.
[-]- abustamam 2 days agoI cheekily refer to it as Al (like, short for Albert) because Google seems to love to shove Al's overviews in my search results.
But when I'm being more serious I'd usually just say "I asked GPT"
I have a colleague who just refers to AI as "Chat", which I think is kinda cute, but people also use the term "chat" to refer to... like, people, or "y'all". Or to their stream chat.
- vinhcognito 2 days agoI like to go with 'I asked my bot|chatbot'
- jandy 2 days agoYep, this. I’ve switched to Claude for a while (because I can’t afford max plans for both) and nobody in the real world has any idea what it is I’m talking about. “Oh it’s like ChatGPT?”[-]
- gopheryourshelf 2 days agoClaude is also difficult for a non-English speaker to pronounce consistently. Sometimes people don't say it because it can get misinterpreted. ChatGPT is easy on the tongue and very difficult to mispronounce.[-]
- nasmorn 2 days agoI know a lot of people who refer to it as ChatGTP which I assume stands for German treebrained performers
- suspended_state 2 days agohttps://chat.mistral.ai/chat/12da2e83-f3f1-4a47-b432-753cac2...
I suspect they chose that name because of its proximity to the word "cloud".
- viking123 2 days agoThe CEO is also more puritan than the pope himself considering the amount of censorship it has. Not sure if they are even interested in marketing to normies though.[-]
- sieve 2 days ago> The CEO is also more puritan than the pope himself considering the amount of censorship it has.
In that case, you should try OpenAI's gpt-oss!
Both models are pretty fast for their size, and I wanted to use them to summarize stories and try out translation. But they keep checking everything against "policy" all the time! I created a jailbreak that works around this, but it still wastes a few hundred tokens talking about policy before producing useful output.
[-]- gmerc 2 days agoSurely someone has abliterated it by now
- MediumOwl 2 days agoAh yes, the Latin-derived French name that has a variant in at least every Romance language is hard for non-English users to pronounce.
- prism56 2 days agoWhenever I talk about any AI usage, I just say ChatGPT to my friends.
- parineum 2 days ago> They have 700 million weekly users and growing much faster than Google.
Years-old company growing faster than decades-old company!
2.5 billion people use Gmail. I assume people check their mail (and, more importantly, receive mail) much more often than weekly.
ChatGPT has a lot of growing to do to catch up, even if it's growing faster.
[-]- boston_clone 2 days agoI read that as OpenAI’s WAU is showing a steeper increase than Google ever did. Not saying it’s factually accurate, just that it’s not a fixed point-in-time comparison :)[-]
- jobigoud 2 days agoThere are 2 billion more humans living now than in 2000 though, and the world is much more technology oriented.
- parineum 2 days agoThe focus on WAU is a tell, though. How much data can OpenAI use for advertising if I only interact with it weekly?
When I ask about which toaster is best, is it going to show me ads for a motorcycle because that's what I asked about last week?
[-]- voakbasda 2 days agoMy wife asked for information about a product, and ChatGPT fed her a handful of blatant product ads. She told the AI never to do that again, and that was the last time she saw that format of output.
I would wager that she was part of an A/B testing group, so her instruction may not have any real effect. However, we were both appalled by that output and immediately discussed alternative AI options, should such a change become permanent.
This isn’t the rise of Google, where they had a vastly superior product and could boil us frogs by slowly serving us more and more ads. We are already boiling mad, having become hypersensitive to products wholly tainted by ads.
We ain’t gunna take it anymore.
- aleph_minus_one 2 days ago> ChatGPT has a phenomenal brand.
My observation is different: ChatGPT may be well-known, but it no longer has a really good reputation (I'd claim that on average its reputation is as dubious as Google's), in particular in light of
- a lot of Sam Altman's public statements and actions (in particular, his involvement in Worldcoin (iris scanning) makes him intolerable as CEO of a company concerned about its reputation)
- the attempts to overthrow Sam Altman
- the fact that most people know OpenAI, at least in the past, collaborated a lot with Microsoft (not a well-regarded company). The really bad thing is that the A"I" features Microsoft introduced into basically every product are hated by users. Since people know these at least originated in ChatGPT products, this has stained OpenAI's reputation a lot. Lesson: choose carefully who you collaborate with.
[-]- cristea 2 days agoYou massively overestimate what people actually know and read about. If you are in the tech sphere these things might be obvious to you, but I assure you regular people are not keeping track as closely.
I bet at most 10% of people in the West can name the CEO of OpenAI.
- A4ET8a8uTh0_v2 2 days agoEh. Altman is not Musk in terms of negative coverage or average sentiment on the net. That might change in the future, but my personal guess is that your perception is based on spending too much time in a specific echo chamber. I personally like to use people who don't use LLMs at all for proper grounding. In those cases, the name Altman does not exist, while Musk barely registers.[-]
- aleph_minus_one 2 days ago> Altman is not Musk in terms of negative coverage or average sentiment on the net.
I can assure you that in Germany (where people are very sensitive about privacy topics), Sam Altman has a very bad reputation with many people, in particular because of his involvement with Worldcoin ("iris scanning" -> surveillance).
- kelvinjps10 2 days agoMost normal people don't know about these things; they don't even know who Sam Altman is. For example, my family, who aren't American, know about ChatGPT but don't know who Sam Altman is.
- jobigoud 2 days agoNormal people that use ChatGPT have never heard of Sam Altman, especially outside the US. These points are only in tech and financial circles.
- t_sawyer 2 days agoWhile I agree, we saw this play out with Dropbox too.
- danaris 2 days agoSure, "ChatGPT" has entered the common consciousness as the name of LLM chatbots as a product.
But does that mean that all of the people who talk about "asking ChatGPT" are actually asking ChatGPT, from OpenAI?
How many of them are actually asking Claude? Or Gemini? Or some other LLM?
That's the trouble when your brand name gets genericized.
- otabdeveloper4 2 days ago> ChatGPT has a phenomenal brand.
If by "phenomenal" you mean "the premier slop and spam provider", then yes.
[-]- dns_snek 2 days agoSadly that's not how the wider public sees it.[-]
- otabdeveloper4 2 days agoThat's exactly how the wider public sees it.
It just turns out that the wider public loves peddling slop. (Not so much though when on the receiving end.)
[-]- Yiin 4 hours agomy mom sees it as a nice internet bloke that helps her with writing emails. She once asked why it can't change the background of her image from white to red if it can generate all that amazing art, and was genuinely disappointed that she couldn't get it to understand what she wants. You have a skewed view of public perception of LLMs: people don't think about them, they just use them.
- viking123 2 days agoGoogle has to be shitting its pants. No one knows what "gemini" is; probably some stupid nerd thing. Normies know ChatGPT and that is what matters.[-]
- A4ET8a8uTh0_v2 2 days agoThey might be. Google has been getting mildly 'aggressive' in their emails pleading with me to use gemini and I have yet to try it ( and that is despite being mildly interested ). There is a reason first mover's advantage is a real thing. People stick with what they think they know.
- tomaszsobota 2 days ago> ChatGPT has a phenomenal brand. That's worth 100x more than "product stickiness". They have 700 million weekly users
I don't think the majority of those 700m people use the product because of the brand. Products are a non-trivial contributor to the brand.
Also, if it were phenomenal, they wouldn't be called ClosedAI ;)
- jofzar 2 days ago> What is OpenAI's competitive moat? There's no product stickiness here.
Would have agreed with you until I saw the meltdown of people losing their "friend" when ChatGPT 5 was released. Somehow OpenAI has fallen into a "sticky" userbase.
[-]- tartuffe78 2 days agoWill people accept ads from their “computer friend”? Might feel like the Truman Show when your friend starts giving you promo codes in casual conversation[-]
- shmeeed 2 days agoWait, your friends don't casually always give you promo codes?
- NBJack 2 days agoWrong kind of stickiness, I'm afraid.
- mrheosuper 2 days ago>non-technical users can't tell the difference between these models at all
My non-tech friend said she prefers ChatGPT over Gemini, mostly due to its tone.
So non-tech people may not know the difference in technical detail, but they sure can have a bias.
[-]- ryukoposting 2 days agoI have a non-techy friend who used 4o for that exact reason. Compared to most readily available chatbots, 4o just provides more engaging answers to non-techy questions. He likes to have extended conversations about philosophy and consciousness with it. I showed him R1, and he was fascinated by the reasoning process. Makes sense, given the sorts of questions he likes to ask it.
I think OpenAI is pursuing a different market from Google right now. ChatGPT is a companion, Gemini is a tool. That's a totally arbitrary divide, though. Change out the system prompts and the web frontend. Ta-daa, you're in a different market segment now.
- returnInfinity 2 days agoChatGPT has won. I talk to all the teens living nearby and they all use ChatGPT, not Google.
The teens don't know what OpenAI is, they don't know what Gemini is. They sure know what ChatGPT is.
[-]- MountDoom 2 days agoAll of these teens use Google Docs instead of OpenAI Docs, Google Meet instead of OpenAI Meet, Gmail instead of OpenAI Mail, etc.
I'm sure that far fewer people go to gemini.google.com than to chatgpt.com, but Google has LLMs seamlessly integrated into each of these products, and they're a part of people's workflows at school and at work.
For a while, I was convinced that OpenAI had won and that Google won't be able to recover, but this lack of vertical integration is becoming a liability. It's probably why OpenAI is trying to branch into weird stuff, like running a walled-garden TikTok clone.
Also keep in mind that unlike OpenAI, Google isn't under pressure to monetize AI products any time soon. They can keep subsidizing them until OpenAI runs out of other people's money. I'm not saying OpenAI has no path forward, but it's not all that clear-cut.
[-]- og_kalu 2 days ago>All of these teens use Google Docs instead of OpenAI Docs, Google Meet instead of OpenAI Meet, Gmail instead of OpenAI Mail, etc.
Billions of people use Meta apps and products. Meta AI is all over those apps. Why is its usage minuscule compared to ChatGPT or even Gemini? Google has billions of users, many on devices running Google's own OS, in which Gemini is now the default AI assistant, so why does ChatGPT usage still dwarf Gemini's?
People need to understand that just because you have users of product X, that doesn't mean you can swoop in and convert them to product Y, even if you stuff it in their faces. Yes, it's better than starting from scratch, but that's about it. In the consumer LLM space, OpenAI has by far the biggest brand, and these mega-conglomerates need to beat that, not the other way around. AI features in Gmail are not going to make people stop using GPT any more than Edge being bundled with Windows made people stop using Chrome.
[-]- MountDoom 2 days agoNah. No one is using Meta AI because it's shoehorned into contexts where you don't actually need it. And that's because these happen to be the only surfaces that Meta controls. They know full well they won't win there, which is probably why they're so desperate for a "hail Mary" in the VR / AR space.
For the average person, what's the most serious / valuable use of ChatGPT right now? It's stuff like writing essays, composing emails, planning tasks. This is precisely the context in which Google has a foothold. You don't need to open ChatGPT and then copy-and-paste if you have an AI button directly in the text editor or in the email app.
[-]- og_kalu 2 days ago>No one is using Meta AI because it's shoehorned into contexts where you don't actually need it.
What's shoehorned about LLMs in a messaging app? This kind of casual conversation is a significant share of LLM usage. OpenAI says non-work queries account for about 70% of ChatGPT usage, and that "Practical Guidance," "Seeking Information," and "Writing" are the 3 most common topics. So really, how is it shoehorned to place this in Facebook? [0]
>For the average person, what's the most serious / valuable use of ChatGPT right now? It's stuff like writing essays, composing emails, planning tasks. This is precisely the context in which Google has a foothold. You don't need to open ChatGPT and then copy-and-paste if you have an AI button directly in the text editor or in the email app.
Lol, I don't know what else to tell you except that it really doesn't matter, but it's not like you have to take my word for it. Copilot is baked into the Microsoft Office Suite. The Microsoft Office Suite dwarfs Google Docs, Sheets, etc. (yes, even for students) in terms of usage. What impact has this had on OpenAI and ChatGPT? Absolutely nothing.
- raw_anon_1111 2 days agoAnd Meta is making billions in profits using AI for ad targeting. They have a real business model.
- hn_throwaway_99 2 days ago> All of these teens use Google Docs instead of OpenAI Docs, Google Meet instead of OpenAI Meet, Gmail instead of OpenAI Mail, etc.
Google Docs, Google Meet and Gmail provide a tiny fraction of Google's overall revenue. And they're hardly integrated with Google's humongous money maker, search, in a way that matters (Gmail has ads, but my guess is that its direct revenue is tiny compared to search; the bigger value is the personalization of ads that Google can do by knowing more about you).
> I'm sure that far fewer people to go gemini.google.com than to chatgpt.com, but Google has LLMs seamlessly integrated in each of these products, and it's a part of people's workflows at school and at work.
But the product isn't "LLMs", the product is really "where do people go to find information", because that is where the money to be made in ads is.
I definitely don't think that OpenAI "winning" means Google is going anywhere soon, but I do agree with the comments that OpenAI has a huge amount of advertising potential, and that for a lot of people, especially younger people, "ChatGPT" is how they think of gen AI, and it's their first go-to resource when they want to look something up online.
[-]- MountDoom 2 days ago> Google Docs, Google Meet and Gmail provide a tiny fraction of Google's overall revenue.
I don't understand your argument here. Like Chrome and Android, these products exist to establish foothold, precisely so that Microsoft or OpenAI can't take Google's lunch.
My point is that brand recognition doesn't matter: if you can get equivalent functionality the easy way (a click of a button in Docs), you're not going to open a separate app and copy-and-paste stuff.
All of this will make it harder for OpenAI to maintain a moat and stop burning money. Especially when their path to making money is to make their LLMs worse (i.e., product placement / ads), while Google has more than enough income to let people enjoy untainted AI products for a very long time.
Even for search, right now, I'm pretty sure there are orders of magnitude more people relying on Google Search AI snippets than on ChatGPT. As these snippets get better and cover more queries, the reasons to talk to a chatbot disappear.
I'm not saying it's a foregone conclusion, but I think that OpenAI is at a pretty significant disadvantage.
[-]- hn_throwaway_99 2 days ago> My point is that brand recognition doesn't matter: if you can get equivalent functionality the easy way (a click of a button in Docs), you're not going to open a separate app and copy-and-paste stuff.
I couldn't disagree more with this statement. So far I've seen companies trying to shoehorn AI into all these existing apps and lots of us hate it. I want Docs to be Docs - even if I'm writing some sort of research paper on a topic, I still don't want to do my research in Docs, because they're two completely separate mental tasks for me. There have been legions of failed attempts to make "everything and the kitchen sink" apps, and they usually suck.
> Even for search, right now, I'm pretty sure there are orders of magnitude more people relying on Google Search AI snippets than on ChatGPT. As these snippets get better and cover more queries, the reasons to talk to a chatbot disappear.
I'm sure that's true for older people, for whom Google is "the default", but just look at all the comments in this thread about where younger people/teenagers go first for information. For a lot of these folks ChatGPT is "the default", and that is Google's big fear: that they will lose a generation of folks who associate "ChatGPT" with "AI" just like a previous generation associated "Google" with "search".
[-]- sagarm 22 hours agoYou're absolutely right about ChatGPT's consumer mindshare, and I think a lot of people undervalue that.
Having Gemini in docs is useful, though. You can ask questions about the document without copying back and forth and context switching. Plus, it has access to the company's entire corpus, and so can understand company-specific concepts and acronyms.
Hell, I had a manager jokingly ask it for a status update on another related project. According to someone actually involved with that project, it gave a good answer.
- andyferris 2 days agoI think the beneficiary is wrong here. Those teens will grow up to work for organizations using Azure AD, Windows, Office and OneDrive/SharePoint/Teams.
If any company is going to get the windfall of "AI provider by default" it is going to be Microsoft. Possibly powered by OpenAI models running on Azure.
Google could make a "better" (basically, more subtle) advertising platform but has little to attract new users. Perhaps Android usage would rise; Apple _is_ behind on AI, after all. On the other hand, users will either use the AI integrated into Excel, Word, PowerPoint, Teams, Edge and more, or else their AI of choice (ChatGPT) will learn to drive the Windows and web UIs as competently as Claude Code drives bash, giving a productive experience with your desktop (and cloud) apps.
Once you use _that_ tool, it's now where you start asking questions, not google.com. I am constantly asking ChatGPT and Claude about things I might be purchasing, making comparisons, etc. (among many other things I might otherwise google). Microsoft has an existing interest in advertising, and OpenAI is currently exploring how best to go about it. My bet isn't on Google right now.
[-]- MountDoom 2 days agoPossibly, but I don't think that Microsoft apps have the kind of foothold in the corporate world that they used to have.
Sure, if you join a bank or a government agency, or a big company that's been around for 40+ years, you're probably gonna be using Microsoft products. But the bulk of startups, schools, and small businesses use Google products nowadays.
Judging by their MX record, OpenAI is a Google shop... so is Perplexity... so is Anthropic... so is Mistral.
- hiatus 2 days ago> I think the beneficiary is wrong here. Those teens will grow up to work for organizaitons (sic) using Azure AD, Windows, Office and OneDrive/SharePoint/Teams.
Idk, younger companies like Anthropic and OpenAI are using google.
[-]- kelvinjps10 2 days agoIf young people are mostly using Google Docs, Sheets, etc., companies might adapt to use them; actually, I've already seen it happen.
- ineedasername 2 days ago>Google Docs instead of…
All of these teens use Microsoft Word instead of Google Word, Microsoft NetMeeting instead of Google NetMeeting, Microsoft Hotmail instead of Google Mail, etc.
I’m sure far fewer people go to MSN Search than to Google.com, but Microsoft has Windows integrated into all of these products, and it’s part of people’s workflows at school and at work.
[-]- lostlogin 2 days agoWhen you say ‘Word’, do you mean the app, the web app, or the Teams app? They don’t work well together and leave documents looking truly awful on whichever variant you aren’t currently using.
That this bonfire is an industry standard has to be embarrassing for Microsoft.
- siva7 2 days agoNot really. Those teens use Microsoft products, not Google ones, because that's what schools provide.
- asa400 2 days agoI'm not saying you're wrong, but people said the same thing about Yahoo, Excite, Lycos, etc. in 1999. Interesting times, then and now.[-]
- nilsbunger 2 days agoI think the biggest risk to ChatGPT as a consumer brand is that they don’t own the device surface. Google / Microsoft / Apple could make great AI that’s infused in the OS / browser, eliminating the need to go to ChatGPT.[-]
- eucyclos 2 days agoSince Microsoft kinda sorta owns, or is merging with, OpenAI, it's probably already close to that... Copilot is constantly down for me at least, but I assume that's not a hard thing for Microsoft to fix if it wants to start paying the server costs...
- netdevphoenix 2 days agoWhat is Google's competitive moat? There's no product stickiness here. What prevents people from just using Altavista/Yahoo/[any other search engine].
You vastly underestimate the power of habit and branding combined. Just like then, the vast majority of people equate ChatGPT with "AI chatbot"; there is no concept of an alternative AI chatbot. Sure, people might have seen some AI-looking thing called Copilot and some weird widget in the Google Search results, but so far ChatGPT is winning the marketing game, even if rivals' offerings are sometimes equal or even superior.
[-]- bpt3 2 days agoGoogle's competitive moat 20-25 years ago was being a significantly better search engine than the alternatives. That remained true for decades.
You can't say the same about ChatGPT. And Google wasn't spending $4 to make $1 almost 10 years after its founding, which will become an issue at some point.
- AlecSchueler 2 days ago> What prevents people from just using Altavista/Yahoo/[any other search engine].
Google shows the results you're looking for. At least this was true when they were in competition with the engines you mentioned; they had a genuine quality advantage.
- tobias3 2 days agoGoogle has defaults as their huge moat. They have Chrome and Android under their control and pay Apple and Mozilla to be the default search engine.
Here in Europe this is mitigated by Google having to show a browser/search engine selection screen, but in the US you seem to be more accepting of the monopoly power. Then again, the judge in California seems to think that OpenAI actually has a chance of winning this. It doesn't, in my estimation.
On the other side, Google has a monopoly on ads. If OpenAI starts displaying ads, they'd have to build their own ad network and then entice companies and brands to use it. Good luck with that.
- jstummbillig 2 days agoChatGPT (and all the competitors) are trivially sticky products: I have a lot of ongoing conversations in there that I pick up all the time. Add more long-term memory stuff (a direction I am sure they will keep pushing) and all of a sudden there is a lot of personal data that you rely on it having, that makes the product better, and that most people will never care to replicate/transfer. Just being the product that people use makes you the product that people will use. "The other app doesn't know me" is the moat. The data that people put in it is the moat.[-]
[-]- A4ET8a8uTh0_v2 2 days agoThis. I am not sure why or how this is missed, but because you cannot easily port context (maybe yet), the stickiness increases with every conversation, assuming your questions are not encyclopedia-type questions that don't need follow-up.[-]
- sagarm 22 hours agoDo you actually curate your contexts? I did in the early days, but I just create new ones now.
- raincole 2 days agoBrand is the second most important moat (second only to regulatory capture).
If normal people start saying "ChatGPT" to refer to AI, they win, just like "google" became a verb for search.
It seems to be the case.
[-]- taurath 2 days agoAs a counter, you can buy a hell of a lot of brand for $8 billion though.
You can give your most active 50,000 users $160,000 each, for example.
You can run campaign ads on every billboard, radio and TV station, and in every Facebook feed, tarring and feathering ChatGPT.
Hell, for only $200m you could just get the current admin to force OpenAI to sell to Larry Ellison, and deport Sam Altman to Abu Dhabi like Nermal from Garfield.
So many options!
[-]- matwood 2 days agoAccording to Google, Coca-Cola spent over $5B on advertising in 2024, and most of the world already knows who they are. I think $8B (or the $2B OpenAI spent) buys a lot less branding than you think.[-]
- lostlogin 2 days ago“Do you want to be sold sugar water for the rest of your life, or come with me and see AI change the world?”
- raincole 2 days agoYes, and it's what they're doing. Buying brand.
> OpenAI also spent US$2 billion on sales and ad
- eru 2 days ago> What is OpenAI's competitive moat? There's no product stickiness here.
Doesn't look worse than Google Search's moat to me? And that worked really well for a long time.
- loandbehold 2 days agoUsers' chat history is the moat. The more you use it, the more it knows about you and the more it can help you in ways customized to the particular user. That makes it sticky, more so than web search. There's also brand recognition: ChatGPT is the default general-purpose LLM choice for most people. Everyone and their mom is using it.[-]
- what 2 days agoGemini is probably the default general purpose LLM since its answers are inserted into every google result.
- pnutjam 2 days ago"At some point OpenAI's investors are going to want their money back."
They do now, that's why they are using a shell game to pump up the stock value.
- kristopolous 2 days agoIt's a distinctive brand, pleasant user experience, and a trustworthy product, like every other commodified technology on the planet.
That's all that matters now. We've passed the "good enough" bar for llms for the majority of consumer use cases.
From here out it's like selling cellphones and laptops
[-]- viking123 2 days agoYeah most people genuinely cannot tell the difference in quality between those top models. People here jerk off to some benchmarks but in real life that crap is completely meaningless
- SamPatt 2 days agoThere is definitely stickiness if you're a frequent user. It has a history of hundreds of conversations. It knows a lot about me.
Switching would be like coding with a brand new dev environment. Can I do it? Sure, but I don't want to.
- wiseowise 2 days ago> Normal, non-technical users can't tell the difference between these models at all, so their usage numbers are highly dependent on marketing.
When I think as a “normal” user, I can definitely see difference between them all.
[-]- louiskottmann 2 days agoAs history showed us numerous times, it doesn't even have to be the best to win. It rarely is, really. See the most pervasive programming languages for that.
- 01100011 2 days agoI'm saying Google is going to win. They're not beholden to their current architecture as much as other shovelmakers and can pivot their TPU to offer the best inference perf/$. They also hold about as much personal data as anyone else and have plenty of stuff to train on in-house. I work for a competitor and even I think there's a good chance google "wins"(there's never a winner because the race never ends).[-]
- matwood 2 days agoThe problem is that Google is horrible at product. They have been so spot on at search it's covered up all the other issues around products. YT is great, but they bought that. The Pixel should be the Android phone, but they do a poor job marketing it. They should be leading AI, but stumbled multiple times in the rollout. They normally get the tech right, and then fumble the productizing and marketing.[-]
- mike_hearn 2 days agoPixel being undermarketed is deliberate, Android is an alliance and they don't want to compete against Samsung too hard.
But Google have other weaknesses. In the most valuable market (the USA) Google is very politically exposed. The left don't like them because they're big rich techbro capitalists, the Democrats tried to break them up. The right hate them because of their ongoing censorship, social engineering and cancellation of the right. They're rapidly running out of friends.
Just compare:
https://www.google.com/search?q=conservative+ai
https://www.bing.com/search?q=conservative+ai
The Google SERP is a trash fire, and it must be deliberate. It's almost like the search engine is broken. Not a single conservative chat bot ranks. On Bing the results are full of what the searcher is looking for. ChatGPT isn't perfect but it's a lot less biased than Google is. Its search results come from Bing which is more politically neutral. Also Altman is a fresh face who hasn't antagonized the right in the same way Google has. For ~half the population Gemini is still branded as "the bot that drew black nazis and popes", ChatGPT isn't. That's an own goal they didn't need.
- nwellinghoff 2 days agoI think we are all forgetting that Google is a massive bureaucracy that has to move out of its own way to get anything done. The younger companies have a distinct advantage here. Hence the cycle of company growth and collapse. I think OpenAI and the like have a very good chance here.
- fooker 2 days agoIs there publicly available evidence that TPU perf/$ is better than Blackwell ?
I know it seems intuitively true but was surprised to not really find evidence for it.
[-]- gigatexal 2 days agoyeah ... Polymarket and other prediction markets seem to be betting that Google by year's end or sometime next year will have the best gen AI models on the market ... but I've been using Claude Sonnet 4.5 with GitHub Copilot and swear by it.
anyways would be nice to really see some apples-to-apples benchmarks of the TPU vs Nvidia hardware but how would that work given CUDA is not hardware agnostic?
- remorse_jaunty 2 days agoFor the consumer product, memory, the recent Pulse one and the _much awaited_ AI feed are the products that will build stickiness. I pay for both Claude and OpenAI currently, and it is much more difficult to continue a chat on the other platform as the context system isn't something I can cook up swiftly.
- MuffinFlavored 2 days ago> There's no product stickiness here.
Very few of those 700,000,000 active users have ever heard of Claude or DeepSeek or ________. Gemini maybe.
[-]- youseffarz 7 hours ago[flagged]
- lostlogin 2 days ago> Google has massive distribution with world-wide brands that people already know, trust, and pay for, especially in enterprise.
Do people trust Google in a positive sense? I trust them to try to force me to log in and to spam me with adverts.
- xz0r 2 days agoLet me direct you to the reddit AMA where people were literally begging to bring back 4o.[-]
- hn_throwaway_99 2 days agoYeah, anyone saying "Normal, non-technical users can't tell the difference between these models at all" isn't talking to that many normal, non-technical users.
- mattio 2 days agoCurrently their moat is history. Why I keep coming back to ChatGPT is it ‘remembers’ our previous chats, so I don’t have to explain things over and over again. And this history builds up over time.[-]
- NBJack 2 days agoNot sure how this argues for their moat. The context window is pretty small (at best 192k on 5 with the right subscription). Once you run past it, history is lost or becomes gimmicky. Gemini 2.5 Pro by contrast offers 1M. Llama 4 offers 10M (though seems to perform substantially worse).
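The context-limit point above can be put in rough numbers. A back-of-envelope sketch, where the average exchange length and the chars-per-token ratio are both assumed (the ~4 chars/token figure is a common English-text heuristic, not a measured value):

```python
# Rough sketch of how many past exchanges fit in various context windows.
# AVG_TURN_CHARS and CHARS_PER_TOKEN are assumptions, not measurements.

WINDOWS = {
    "GPT-5 (192k tier)": 192_000,
    "Gemini 2.5 Pro": 1_000_000,
    "Llama 4": 10_000_000,
}

AVG_TURN_CHARS = 800   # assumed size of one user+assistant exchange
CHARS_PER_TOKEN = 4    # rough heuristic for English text

def turns_that_fit(window_tokens: int) -> int:
    """Approximate number of past exchanges a window can hold."""
    tokens_per_turn = AVG_TURN_CHARS / CHARS_PER_TOKEN  # ~200 tokens
    return int(window_tokens // tokens_per_turn)

for name, window in WINDOWS.items():
    print(f"{name}: ~{turns_that_fit(window):,} turns")
```

Under these assumptions a 192k window holds on the order of a thousand exchanges before history must be summarized or dropped, which is why long-running "it knows me" stickiness depends on summarization/memory layers rather than raw context.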
- onion2k 2 days agoWhat prevents people from just using Google, who can build AI stuff into their existing massive search/ads/video/email/browser infrastructure?
Google have never had a viable competitor. Their moat on Search and Ads has been so incredibly hard to beat that no one has even come close. That has given them an immense amount of money from search ads. That means they've appeared to be impossible to beat, but if you look at literally all their other products they aren't top in anything else despite essentially unlimited resources.
A company becoming a viable competitor to Google Search and/or Ads is not something we can easily predict the outcome of. Many companies in the past who have had a 'monopoly' have utterly fallen apart at the first sign of real competition. We even have a term for it that YC companies love to scatter around their pitch decks - 'disruption'. If OpenAI takes even just 5% of the market Google will need to either increase their revenue by $13bn (hard, or they'd have done that already) or they'll need to start cutting things. Or just make $13bn less profit I guess. I don't think that would go down well though.
[-]- amanaplanacanal 2 days agoAren't Chrome and Gmail also pretty much number one in their areas? I don't really use either one, but that's my impression. Also Android.[-]
- gomox 2 days agoChrome and Gmail don't really make any money for Google.
- fspeech 2 days agoChats have contexts. While search engines try to track you it is spookier because it is unclear to the user how the contexts are formed. In chats at least the contexts are transparent to both the provider and the user.
- pryelluw 2 days agoBrand name and ChatGPT being the synonym of AI. Just like google is a synonym for search. That right there is very powerful.
- holoduke 2 days agoThe name ChatGpt is probably the most valuable brand at the moment. Everybody is talking and using it.
- Agingcoder 2 days agoI don’t want to use google anymore - bad search results, too many ads.
- voidfunc 2 days agoBrand. Brand. Brand!
Literally nobody but nerds know what a Claude is among many others.
ChatGPT has name recognition and that matters massively.
- kldg 2 days agoYeah, for me, the biggest issue is, counter-intuitively given it's Google, I know Gemini is going to continue existing as a product for a long time; I feel comfortable storing data and building things out for it. Anthropic's putting out great models, but it's financially endangered, and OpenAI isn't doing great either; and I'm confident Gemini 3 release will put it right back at top-of-pack again as far as model output quality, so these little windows where I'm not using The Best are not a big deal.
Once the single-focus companies have to actually make a profit and flip the switch from poorly monetized to fully monetized, I think folks will be immediately jumping ship to mega-companies like Google who can indefinitely sustain the freemium model. The single-focus services are going to be Hell to use once the free rides end: price hikes, stingy limits, and ads everywhere.
.... but the field will change unpredictably. Amazon offers a lot of random junk with Prime -- hike price $50/year, slap on a subscription to high-grade AI chatbot 10% of users will actually use (say 2% are "heavy users"), and now Anthropic is financially sustainable. Maybe NYT goes from $400 to $500 per year, and now you get ChatGPT Pro, so everything's fine at OpenAI. There're a ton of financial ideas you'll come up with once you feel the fire at your feet; maybe the US government will take a stake and start shilling services when you file taxes. Do you want the $250 Patriot Package charged against your tax refund, or are we throwing this in the evidence pile containing your Casio F91-W purchase and tribal tattoos?
- pembrook 2 days ago> What is OpenAI's competitive moat? There's no product stickiness here.
20 years ago everyone said the exact same thing about Google Search.
I mean, how could you possibly build a $3T company off of a search input field, when users can just decide to visit a different search input field??
Surprise. Brand is the most powerful asset you can build in the consumer space. It turns out monetization possibilities become infinite once you capture the cultural zeitgeist, as you can build an ecosystem of products and eventually a walled garden monopoly.
- moralestapia 2 days ago>What is OpenAI's competitive moat?
The broken record's still running, someone please turn it off!
At this point I think people just suffer from some sort of borderline mental disorder.
700 million MAUs in just a couple of years? In a red (pretty-much-pure-blood) ocean? Against companies who've been in the business for 20 years?
One would have to be quite dumb or obtuse not to see it.
- beacon294 2 days agoAI has been incredibly sticky. Look at the outrage, OpenAI couldn't even deprecate 4o or whatever because it's incredibly popular. Those people aren't leaving OAI if they're not even leaving a last gen model.
- MontyCarloHall 2 days agoI also wonder if this means that even paid tiers will get ads. Google's ad revenue is only ~$30 per user per year, yet there is no paid, ad-free Google Premium, even though lots of users would gladly pay way more than $30/year to have an ad-free experience. There's no Google Premium because Google's ad revenue isn't uniformly distributed across users; it's heavily skewed towards the wealthiest users, exactly the users most likely to purchase an ad-free experience. In order to recoup the lost ad revenue from those wealthy users, Google would have to charge something exorbitant, which nobody would be willing to pay.
I fear the same will happen with chatbots. The users paying $20 or $200/month for premium tiers of ChatGPT are precisely the ones you don't want to exclude from generating ad revenue.
[-]- psadri 2 days agoThe average is $x. But that's global which means in some places like the US it is 10x. And in other less wealthy areas it is 0.1x.
There is also the strange paradox that the people who are willing to pay are actually the most desirable advertising targets (because they clearly have $ to spend). So my guess is that for that segment, the revenue is 100x.
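The skew argument above can be sketched numerically. All figures here are assumed for illustration (the $30 average comes from the comment above; the 10%/80% split is an invented skew, not a published Google number):

```python
# Back-of-envelope: if average ad revenue is $30/user/year but is heavily
# skewed toward a wealthy minority, the break-even ad-free price for that
# minority is far above the average. All numbers are assumptions.

AVG_REVENUE = 30.0              # assumed global average, $/user/year

wealthy_share_of_users = 0.10   # assumed: 10% of users...
wealthy_share_of_revenue = 0.80 # ...generate 80% of ad revenue

revenue_per_wealthy_user = (
    AVG_REVENUE * wealthy_share_of_revenue / wealthy_share_of_users
)
print(f"Ad revenue per wealthy user: ${revenue_per_wealthy_user:.2f}/year")
```

Under this assumed skew, the users most likely to want an ad-free tier are worth about $240/year each in ads, so a $30-ish subscription would lose money on exactly the people who buy it.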
- alex43578 2 days ago"Lots of users would gladly pay way more than $30/year to have an ad-free experience"? Outside of ads embedded in Google Maps, a free and simple install of uBlock Origin essentially eliminates ads in Search, YouTube, etc. I'd expect that just like Facebook, people would be very unwilling to pay for Google to eliminate ads, since right now they aren't even willing to add a browser extension.[-]
- hsbauauvhabzb 2 days agoIt worked for YouTube, I don’t see why the assumption of paid gpt models will follow google and not YouTube, particularly when users are conditioned to pay for gpt already.
- catlifeonmars 2 days agoAnecdata, but my nontechnical friends have never heard of uBlock origin. They all know about ad-free youtube.[-]
- dns_snek 2 days agouBlock Origin specifically or ad blockers in general?[-]
- catlifeonmars 1 day agoSurprisingly, ad blockers in general. And everyone (I know) hates ads.
- m11a 2 days agoI’d agree. The biggest exception I can think of is X, which post-Musk has plans to reduce/remove ads. Though I don’t know how much this tanked their ad revenue and whether it was worth it.
- grogers 2 days agoWhy would it be any different for youtube premium? I think Google just doesn't think enough people will pay for ad-free search, not that it would cannibalize their ad revenue.[-]
- nickff 2 days agoYouTube's ads are much lower-cost than the 'premium' AdWords ones, because the 'intent' is lower, and targeting is worse.
- josvdwest 2 days agoPretty sure the reason they don't have a paid tier is because engagement (and results) is better when you include ads. Like Facebook found in the early days
- bobro 2 days agoImagine if you are paying to publish an ad. One ad platform sends your ad to everyone, the other allows the most affluent users to avoid ads. If you choose the platform where affluent people won’t see your ad, you’re likely shooting yourself in the foot.
- twelvechairs 3 days ago> But as long as OpenAI remains the go-to for the average consumer, they'll be fine.
This is like the argument of a couple of years ago: "as long as Tesla remains ahead of the Chinese technology...". OpenAI can definitely become a profitable company but I don't see anything to say they will have a moat and monopoly.
[-]- muzani 2 days agoThey're the only ones making AI with a personality. Yeah, you don't need chocolate flavored protein shakes but if I'm taking it every day, I get sick of the vanilla flavor.[-]
- idiotsecant 2 days agoHuh? They're actively removing personality from current models as much as possible.[-]
- iambateman 3 days agoI think this is directionally right but to nitpick…Google has way more trust than OpenAI right now and it’s not close.
Acceleration is felt, not velocity.
[-]- da_chicken 2 days agoYeah, I agree with you.
Between Android, Chrome, YouTube, Gmail (including mx.google.com), Docs/Drive, Meet/Chat, and Google Search, claiming that Google "isn't more trusted" is just ludicrous. People may not be happy they have to trust Alphabet. But they certainly do.
And even when they insist they're Stallman, their friends do, their family does, their coworkers do, the businesses they interact with do, the schools they send their children to do.
[-]- djtango 2 days agoLike it or not, Google has wormed their way into the fabric of modern life.
Chrome and Google Search are still the gateway to the internet outside China. Android has over 75% market share of all mobile(!). YouTube is somewhat uniquely the video internet with Instagram and Tiktok not really occupying the same mindshare for "search" and long form.
People can say they don't "trust" Google but the fact is that if the world didn't trust Google, it never would have gotten to where it is and it would quickly unravel from here.
Sent from my Android (begrudgingly)
- rightbyte 2 days ago> And even when they insist they're Stallman
Looking through the JS-code of this site I was happily surprised finding 153 lines of not minified but pretty JS. I anticipated at least some unfree code. So I guess there is a chance some user might rightfully claim this.
- lemonlearnings 2 days agoWith search you dont fully trust Google. You trust Google to find good results most of the time them trust those results based on other factors.
But with AI you now have all trust in one place. For Google and OpenAI their AI bullshits. It will only be trusted by fools. Luckily for the corporations there is no end of fools to fool.
- jacquesm 2 days agoI really don't trust either. Google because of what they've already done, OpenAI because it has a guy at the helm who doesn't know how to spell the word 'ethics'.[-]
- renewiltord 2 days agoThat's mostly because LLMs think in terms of tokens not letters, so spelling is hard.[-]
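The tokens-vs-letters point can be illustrated with a toy example. The token splits below are invented for illustration; a real tokenizer (e.g. a BPE vocabulary) produces different, vocabulary-dependent splits:

```python
# Toy illustration (invented splits, not a real tokenizer): an LLM sees
# subword tokens, so letter-level questions like "how many r's are in
# strawberry" require reasoning across token boundaries instead of simply
# reading characters.

toy_vocab_split = {
    "strawberry": ["str", "aw", "berry"],  # assumed split for illustration
    "ethics": ["eth", "ics"],
}

def rs_per_token(word: str) -> list[int]:
    """Count the letter 'r' inside each token of the toy split."""
    return [token.count("r") for token in toy_vocab_split[word]]

print(rs_per_token("strawberry"))       # counts are scattered across tokens
print(sum(rs_per_token("strawberry")))  # the total only appears on recombining
```

The model never receives the per-character view on the left-hand side; it has to have memorized or inferred the letter content of each token, which is why spelling-style tasks trip LLMs up.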
- floren 2 days agoHe knows there's no "I" in "ethics"
- chmod775 2 days agoThis really depends on where you are. Some countries' populations, especially those known to value privacy, are extremely distrustful of anything associated with Facebook or Google.
- anonymousiam 3 days agoI agree with you, and my impression of the trust-level of Google is pretty much zero.
- piskov 3 days agoGoogle and trust are an oxymoron[-]
- QuantumFunnel 3 days agoThe only thing I trust google to do is abandon software and give me a terrible support experience[-]
- jacquesm 2 days agoAnd to charge you for stuff you don't want and don't need as if you are using it every day through tied sales. Hm... wasn't that illegal?
- mrweasel 2 days ago> ChatGPT has more consumer trust than Google at this point
That trust is gone the moment they start selling ad space. Where would they put the ads? In the answers? That would force more people to buy a subscription, just to avoid having the email to your boss contain a sponsored message. The numbers for Q2 look promising, sales are going up. And speaking of sales, Jif peanut butter is on sale this week.
If OpenAI plans on making money with ads, then all the investments made by Nvidia, Microsoft and SoftBank start to look incredibly stupid. Smartest AI in the world, but we can only make money by showing you gambling ads.
[-]- morsch 2 days agoI'm afraid there's plenty of avenues for them to insert ads that probably won't be perceived as obnoxious by most people (I still find it incredibly obnoxious).
About half of AI queries are "Asking" (as opposed to Doing or Expressing), and those are the ones best suited for ads. User asking how to make pizza? Show ads for baking steels and premium passata. User asking for a three-day sightseeing itinerary in Rome? I'm sure someone will pay them to show their venue.
It seems unlikely that the ads will be embedded directly into the answer stream, unless they find a way to reliably label such portions as advertisements in a "clear and conspicuous" way, or convince law makers/regulators that chat bots don't need to be held to the same standards as other media.
- skanaley 3 days ago"Please help me with my factorial homework."
  buyACoke 0 = 1
  buyACoke rightNow = youShould * buyACoke (rightNow `at` walmart)
    where
      youShould = rightNow
      at = (-)
      walmart = 1
[-] - raw_anon_1111 2 days agoWhy do people always think that just because you have a lot of users it automatically translates to ad revenue? Yahoo has been one of the most trafficked sites for decades and could never generate any reasonable amount of ad revenue.
The other side of the coin is that running an LLM will never be as cheap as a search engine.
[-]- davedx 2 days ago> running an LLM will never be as cheap as a search engine.
Complete and unfounded speculation.
[-]- raw_anon_1111 2 days agoReally? You think that they are going to discover some magical algorithm that reduces the complexity of an LLM to a search?
- hoppp 2 days agoThe moment they start mixing ads into responses I'll stop using them. Open models are good enough, it's just more convenient to use ChatGPT right now, but that can change.[-]
- Kranar 2 days agoPeople said the same thing about so many other online services since the 90s. The issue is that you're imagining ChatGPT as it exists right now with your current use case but just with ads inserted into their product. That's not really how these things go... instead OpenAI will wait until their product becomes so ingrained in everyday usage that you can't just decide to stop using them. It is possible, although not certain, that their product becomes ubiquitous and using LLMs someway somehow just becomes a normal way of doing your job, or using your computer, or performing menial and ordinary tasks. Using an LLM will be like using email, or using Google maps, or some other common tool we don't think much of.
That's when services start to insert ads into their product.
[-]- preommr 2 days ago> People said the same thing about so many other online services since the 90s.
And this leads to something I genuinely don't understand - because I don't see ads. I use adblocker, and don't bother with media with too many ads because there's other stuff to do. It's just too easy to switch off a show and start up a steam game or something. It's not the 90s anymore, people have so many options for things.
Idk, maybe I am wrong, but I really think there is something very broken in the ad world as a remnant from the era when Google/Facebook were brand new, the signal-to-noise ratio for advertisers was insanely high, and interest rates were low. Like a bunch of this activity is either bots or kids, and the latter isn't that easy to monetize.
- byzantinegene 2 days agoExcept it's hard to imagine a world where ChatGPT is head and shoulders above the other LLMs in capability. Google has no problem keeping up, and let's not forget that China has state-sponsored programs for AI development.
- outside1234 2 days agoExcept that I have switched to Gemini and not missed anything from OpenAI
- abnercoimbre 2 days agoAnd if/when they reach that point, the average consumer will see the ad as an irksome fly. That's it.[-]
- hoppp 2 days agoThe ads can be subtle. Same way Claude today prefers to generate HTML with Tailwind CSS. Feels like an ad for Tailwind, as sometimes when I ask it to do something else it still just gives me Tailwind.
- beeflet 2 days agoI agree, but the question is whether or not normal people will stop using them.[-]
- _aavaa_ 2 days agoI think the empirical answer is no. Look at how many ads there are in everything and people still use it.[-]
- ipaddr 2 days agoNormal people ignore ads. It gets easier with time. Television with ads means conversation about what we just watched.[-]
- glenneroo 2 days agoHow do you ignore ads when you ask for a recipe and it suggests using <insert brand-name here> ingredients, due to their superior flavors, textures, ability to mesh with the other ingredients, etc.? Sure, you can decide to go with another brand, but over time that stuff has the ability to stick in your brain. Many billions have been sunk into figuring out how to psychologically manipulate humans into buying products. For instance, McD's figured out decades ago through research that by giving away free toys in children's meals, the kids will whine and pester parents after seeing a commercial for a new toy from the latest superhero movie. Some parents will say no, but enough parents, after a long day/week of working, will just give in.
Sure my example of asking for a recipe is contrived but imagine that for every query you make, that AI suggests using this framework for your web development, or basically any query you can think of will make subtle suggestions to use a specific product with compelling reasons why e.g. the competition has known bugs that will affect you personally!
The possibilities are endless.
[-] - _aavaa_ 2 days agoI don’t think that’s true. For starters, I doubt all (or even most) of Google’s ad revenue is from fraud or from people clicking links they ignore.
I doubt people ignore sponsored listings on Amazon.
- wiseowise 2 days agoExcept you won’t be able to ignore ads if Chat doesn’t explicitly state that 5 of 10 suggested products are ads.
- iLoveOncall 2 days ago> Open models are good enough
Are they though? I have the best consumer hardware and can run most open models, and they are all unusable beyond basic text generation. I'm talking 90%+ hallucination rate.
[-]- hoppp 2 days agoDepends on the use-case. I like to generate front ends and there the hallucination is acceptable as html-css is pretty forgiving and I will manually modify it anyways
- JumpCrisscross 2 days ago> moment they start mixing ads into responses I'll stop using them
Do you currently pay for it?
[-]- hoppp 2 days agoI do pay for the OpenAI API but it's a top-up; my main usage is on the free tier.
I tried to pay for Claude but they didn't accept my credit card for some reason.
I have local models working well, but they are a bit slow; my laptop is 5 years old, but eventually when I buy a new one I'll make the switch.
- crystal_revenge 2 days ago> ads in the future.
It boggles my mind that people still think advertising can be a major part of the economy.
If AI is propping up the economy right now [0], how can the rest of the economy possibly fund AI through profit sharing? That's fundamentally what advertising is: I give you a share of my revenue (hopefully from profits) in order to help increase my market share. The limit of what advertising spend can be is total profits minus some epsilon (for a functioning economy, at least).
Advertising cannot be the lion's share of any economy because it derives its value from the rest of the economy.
Advertising is also a major bubble because my one assumption there (that it's a share of profits) is generally not the case. Unprofitable companies giving away a share of their revenue to other companies making those companies profitable is not sustainable.
Advertising could save AI if AI were a relatively small part of the US (or world) economy and could benefit by extracting a share of the profits from other companies. But if most of your GDP is from AI, how can it possibly cannibalize other companies in a sustainable way?
0. https://www.techspot.com/news/109626-ai-bubble-only-thing-ke...
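The ceiling argument above reduces to simple arithmetic. A sketch with toy numbers (every figure here is assumed for illustration, none is a real US statistic):

```python
# Sketch of the ad-spend ceiling: sustainable advertising is bounded by a
# fraction of the advertisers' profits, which are themselves a fraction of
# non-AI revenue. All three inputs are assumptions, not real data.

rest_of_economy_revenue = 20_000.0  # assumed non-AI revenue, $B
profit_margin = 0.10                # assumed average profit margin
ad_budget_share = 0.25              # assumed share of profits spent on ads

max_sustainable_ad_spend = (
    rest_of_economy_revenue * profit_margin * ad_budget_share
)
print(f"Ceiling on sustainable ad spend: ${max_sustainable_ad_spend:.0f}B")
```

Whatever the real inputs, the structure is multiplicative: any AI revenue target above the resulting ceiling has to come from somewhere other than ads, which is the commenter's point about ads not funding a sector that dominates GDP.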
[-]- StopHammoTime 2 days agoYou've run a false equivalency in your argument. Growth is not representative of the entire economy. The economy is, in aggregate, much more than tech - they have the biggest public companies which skews how people think. No exclusive sector makes up "most" of the economy, in fact the highest sector, which is finance only makes up 21% of the US economy.
https://www.statista.com/statistics/248004/percentage-added-...
[-]- crystal_revenge 2 days ago> Growth is not representative of the entire economy
Our entire economy is based on debt, it cannot function without growth. This is demonstrated by the fact that:
> in fact the highest sector, which is finance only makes up 21% of the US economy
Every cent earned by the finance sector is ultimately derived from debt (i.e. growth has to pay it off). You just pointed out that the largest sector of our economy relies on rapid growth, and the majority of growth right now is coming from AI. AI, therefore, cannot derive the majority of its value by cannibalizing the growth of other sectors, because no other sector has sufficient growth to fund AI, itself, and the debt that needs to be repaid to make the entire thing make sense.
- lemonlearnings 2 days agoUS GDP is $30T, so that revenue is less than 1% of it. But 1% of GDP is still an eye-popping amount. And remember, in the pre-Google world that money was split up among the Yellow Pages, TV ads, etc., and possibly many ventures that weren't viable without targeted ads never came to fruition.
- 0xDEAFBEAD 2 days agoIf advertising helps high-quality / innovative products spread faster, it can power growth. Indeed, the rate of adoption of innovations seems a fairly critical input for growth. Advertising can speed such adoption.
- rahulyc 2 days agoI think your argument isn't exactly right.
You can imagine a future world where producing real goods and services is ~free (AI compute infinite etc.)
In this world, the entire economy will be ~advertising only so you can charge people anything at all instead of giving it away for free.
[-]- suddenlybananas 2 days agoIf your revenue model is predicated on Star Trek-style communism, it's maybe not a very realistic model. I really don't think if producing things is essentially free that advertising will be a very big thing since it would be pointless.
- piskov 3 days agoI don’t pay $200 per month to use a product tightly coupled to ad revenue (ahem, tracking).
That’s why I use Kagi, Hey, Telegram, Apple (for now) etc.
I really hope OpenAI can build a sustainable model which is not based on that.
[-]- nharada 3 days agoI suspect ads would be an attempt to monetize the free users not people paying $200/mo for Pro. Though who knows...[-]
- piskov 3 days agoFirst, as an advertiser you want those sweet-sweet people with money.
Second, putting "display: none" on the ads doesn't mean they will create and use an entirely different architecture, data flow and storage just for those pro users.
- syntaxing 3 days agoThey should be concerned with open weight models that don’t run on consumer hardware. The larger models from Qwen (Qwen Max) and Z.ai (GLM and GLM Air) perform not too far from Claude Sonnet 4 and GPT-5. Z.ai offers a $3 plan that is decently generous. I can pretty much use it in place of Sonnet 4 in Claude Code (I swear, Anthropic has been nerfing Sonnet 4 for people on the Pro plan).
You can run Qwen3-coder for free up to 1,000 requests a day. Admittedly not state of the art, but it works about as well as 5o-mini.
[-]- imachine1980_ 2 days agoI believe regular people will not change from ChatGPT if it has some ads. I know people who use "alternative" wrappers that have ads because they aren't tech savvy, and I agree with the OP that this could be a significant amount of money. We aren't the 700 million people that use it.[-]
- syntaxing 2 days agoDefinitely don’t argue against that, once people get into a habit of using something, it takes quite a bit to get away from it. Just that an American startup can literally run ZLM models themselves (open weight with permissive license) as a competitor to ChatGPT is pretty wild to think about[-]
- hx8 2 days agoOne of the side effects of having a chat interface, is that there is no moat around it. Using it is natural.
Changing from Windows to Mac or iOS to Android requires changing the User Interface. All of these chat applications have essentially the same interface. Changing between ChatGPT and Claude is essentially like buying a different flavor of potato chip. There is some brand loyalty and user preference, but there is very little friction.
- ab5tract 2 days agoMySpace collapsed in something like 18 months.
- Rohansi 3 days agoIt'll be interesting to see the effect ads have on their trustworthiness. There's potential for it to end up worse than Google because sponsored content can blend in better and possibly not be reliably disclosed.[-]
- exasperaited 3 days agoThere is also the IMO not exactly settled question of whether an advertiser is comfortable handing over its marketing to an AI.
Can any AI be sensibly and reliably instructed not to do product placement in inappropriate contexts?
[-]- typpilol 2 days agoAlso what effect will these extra instructions have on output?
Every token of context can drastically change the output. That's a big issue right now with Claude and their long conversation reminders.
[-]- exasperaited 2 days agoYou mean, for instance, if you ask it to insert an advert into content, can it do so, based on its training set, without changing the wider content into advertorial?
It's a really good point. And it has some horrifying potential outcomes for advertisers.
[-]- typpilol 2 days agoExactly.
Am I going to come back from coding with a function named BurgerKing or something? Lol
- zdragnar 3 days agoWhat is OpenAI's moat? There's plenty of competitors running their own models and tools. Sure, they have the ChatGPT name, but I don't see them massively out-competing the entire market unless the future model changes drastically improve over the 3->4->5 trajectory.[-]
- nojs 2 days agoIt feels similar to Google to me - what is (was) their moat? Basically slightly better results and strong brand recognition. In the later days maybe privileged data access. But why does nobody use Bing?[-]
- zdragnar 2 days agoGoogle got a massive leg up on the rest by having a better service. When Bing first came out, I was not impressed with what I got, and never really bothered going back to it.
Search quality isn't what it used to be, but the inertia is still paying dividends. That same inertia also applied to Google ads.
I'm not nearly so convinced OpenAI has the same leg up with ChatGPT. ChatGPT hasn't become a verb quite like google or Kleenex, and it isn't an indispensable part of a product.
[-]- typpilol 2 days agoI actually find bing better now for more technical searches.
Most technical Google searches end up at Windows forums or the official Microsoft support site, which basically just tells you that running sfc /scannow is the fix for everything.
[-]- 6031769 2 days agoIf you are ending up at win forums or Microsoft's support site then the chances are that you were searching for something Microsofty in the first place. And if that's the case then it's hardly surprising that Microsoft's own search engine is better for promoting Microsoft-related responses than any other.
Try searching for something technical which isn't MS-specific. That should be a more neutral test.
[-]- typpilol 2 days agoYou're not wrong. I've been doing a lot of Windows Server (eww) work lately
- balder1991 2 days agoGoogle has always been much better than the competition. Even today with their enshittification, competitors still aren’t as good.
The only thing that has changed that status quo is the rise of audiovisual media and sites closing up so that Google can’t index them, which means web search lost a lot of relevance.
- HDThoreaun 2 days agogoogle's moat is a combination of it being free and either being equal to or outright better than competitors
- bcrl 3 days agoThis! The cost of training models inevitably goes down over time as FLOPS/$ and PB/$ increases relentlessly thanks to the exponential gains of Moore's law. Eventually we will end up with laptops and phones being Good Enough to run models locally. Once that happens, any competitor in the space that decides to actively support running locally will have operating costs that are a mere fraction of OpenAI's current business.
The pop of this bubble is going to be painful for a lot of people. Being too early to a market is just as bad as being too late, especially for something that can become a commodity due to a lack of moat.
[-]- muskyFelon 2 days agoBad news on the Moore's Law front.
https://cap.csail.mit.edu/death-moores-law-what-it-means-and...
[-]- bcrl 2 days agoThe number of transistors per unit area is still increasing, it's just a little slower than it was and more expensive than it was.
And there are innovations that will continue the scaling that Moore's law predicts. Take die stacking as an example. Even Intel had internal studies 20 years ago showing there are significant performance and power improvements to be had in CPU cores by using 2 layers of transistors. AMD's X3D CPUs are now using technology that can stack extra dies onto a base die, but they're using it in the most basic of ways (only for cache). Going beyond cache to logic, die stacking allows reductions of wire length because more transistors with more layers of metal fit in a smaller space. That in turn improves performance and reduces power consumption.
The semiconductor industry isn't out of tricks just yet. There are still plenty of improvements coming in the next decade, and those improvements will benefit AI workloads far more than traditional CPUs.
- otabdeveloper4 2 days ago> increases relentlessly thanks to the exponential gains of Moore's law
Moore's so-called "law" hasn't been true for years.
Chinese AI defeated American companies because they spent effort to optimize the software.
- aurareturn 2 days agoYou just said that everyone will be able to run a powerful AI locally and then you said this would lead to a pop of the bubble.
Well, which is it? Is AI going to have such huge demand for chips that it gets much bigger, or is the bubble going to pop? You can't have both.
My opinion is that local LLMs will do the bulk of low-value inference, such as mundane personal-life tasks, while cloud AI will be reserved for work and advanced research purposes.
[-]- bcrl 2 days agoJust because a bubble pops on the economic front doesn't mean the sector goes away. Pets.com went bust a mere 10 months after going public, yet we're buying all kinds of products online in 2025 that we weren't in 2000. A bubble popping is about the disconnect between the forward looking assumptions about profitability by the early adopters in the space versus the actual returns once the speculation settles down and is replaced by hard data.[-]
- aurareturn 54 minutes agoIf it pops and then people underestimated the impact of the technology even at the peak, was it really a bubble?
- bobby_mcbrown 3 days agoIt's Sam.
From what I understand he was the only one crazy enough to demand hundreds of GPUs for months to get ChatGPT going. Which at the time sounded crazy.
So yeah Sam is the guy with the guts and vision to stay ahead.
[-]- shermantanktop 2 days agoPast performance is no guarantee of future results.
You might see Sam as a Midas who can turn anything into gold. But history shows that very few people sustain that pattern.
- exasperaited 2 days agoThe shitcoin-in-return-for-your-iris-scans guy?
- croes 3 days agoOpenAI isn't ahead[-]
- redman25 2 days agoIt is in terms of users. There's a lot of sticking power to "the thing you already know and use".[-]
- no_wizard 2 days agoUnless there is little friction in switching. I don’t feel any of the LLM products have a sticky factor as of yet, as far as viewing it from a consumer lens
- Keyframe 2 days agoProbably consumers. Enterprise is Anthropic, double ahead: https://menlovc.com/perspective/2025-mid-year-llm-market-upd...
note that menlo is invested in anthropic, but still..
- bix6 2 days agoIgnoring Sutskever much?
- fzzzy 3 days agobrand recognition
- majormajor 2 days agoIf they overnight were able to capture as much revenue per user as Meta (about 50 bucks a year) they'd bring in a bucket of cash immediately.
But selling that much ad inventory overnight - especially if they want new formats vs "here's a video randomly inserted in your conversation" sorta stuff - is far from easy.
Their compute costs could easily go down as technology advances. That helps.
But can they ramp up the advertising fast enough to bring in sufficient profit before cheaper down-market alternatives become common?
They lack the social-network lock-in effect of Meta, or the content of ESPN, and it remains to be seen if they will have the "but Google has better results than Bing" stickiness of Google.
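Back of the envelope, the Meta-comparison above is easy to check (a sketch using only figures already in this thread: ~700M weekly actives and a Meta-like ~$50/user/year; both are rough estimates, not OpenAI disclosures):

```python
# Rough ad-revenue ceiling if OpenAI matched a Meta-like ARPU.
# Both inputs are estimates from the thread, not reported figures.
weekly_actives = 700_000_000   # ~700M weekly active users
meta_like_arpu = 50            # ~$50 per user per year

potential_ad_revenue = weekly_actives * meta_like_arpu
print(f"${potential_ad_revenue / 1e9:.0f}B/year")  # ~$35B/year
```

That hypothetical $35B/year would dwarf the $4.3B of H1 revenue in the headline, which is the crux of the "ads are underestimated" argument, but as the comment notes, selling that much inventory is the hard part.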
- zahlman 2 days ago> $264B in 2024.
Why is this much money spent on advertising? Surely it isn't really justified by increase in sales that could be attributed to the ads? You're telling me people actually buy these ridiculous products I see advertised?
[-]- theshackleford 2 days agoI work/have worked ecom adjacent for a long time. Ads absolutely work and the continued bewilderment of HN users to this reality will never cease to amaze me.
- ipaddr 2 days agoA lot of that money comes from search result ads. Sometimes I click on an ad to visit a site I search for instead of scrolling to the same link in the actual search results. Many companies bid on keywords for their own name to prevent others from taking a customer who is interested in you.
You used to be able to be a useful site and sit at the top of the search results for some keywords; now you have to pay.
- HeatrayEnjoyer 2 days agoYes, they do. Advertising works. "Free with ads" isn't really free because on average you'll end up spending more money than you would otherwise. You're also paying more than if it was a subscription because the producer has to create both the product and also advertise it.
- returnInfinity 2 days agoIt's a lot more complicated, but yes advertising works.
There is a saying in India: what's seen is what's sold.
Not the hidden best product.
- lelanthran 2 days agoGoogle is tightly integrated vertically. It is going to be very hard to dislodge that.
Right now Gemini gives a youtube link in every response. That means they have already monetised their product using ads.
- outside1234 2 days agoThe problem with this is that I have moved to Gemini with zero loss in functionality, and I’m pretty sure that Google is 100x better at ads than OpenAI.
- deadbabe 3 days agoIn 10 years most serious users of AI will be using local LLMs on insanely powerful devices, with no ads. API based services will have limited horizon.[-]
- api 3 days agoSome will but you’re underestimating the burning desire to avoid IT and sysadmin work. Look at how much companies overspend on cloud just to not have to do IT work. They’d rather pay 10X-100X more to not have to admin stuff.[-]
- FpUser 2 days ago>"Look at how much companies overspend on cloud just to not have to do IT work."
I think they are doing it for different reasons. Some are legit, like renting a supercomputer for a day, and some are just "everybody else is doing it." I'm friends with a small company owner; they have a sysadmin who picks his nose and does nothing, and they still pay a fortune to Amazon.
- deadbabe 3 days agoI’m talking about prepackaged offline local only on device LLMs.
What you are describing will almost certainly happen even sooner, once AI tech stabilizes and investing in powerful hardware no longer means you will quickly become out of date.
- beeflet 2 days agoIt is just downloading a program and using it
- IncreasePosts 2 days agoOk, but there will be users using even more insanely powerful datacenter computers that will be able to out-AI the local AI users.[-]
- bad_haircut72 2 days agoNvidia/Apple (hardware companies) are the only winner in this case
- StarterPro 2 days agoIt's a completely optional purchase, and there's no clear way for ads to be included without muddying up the actual answer.
"The most popular brand of bread in America is........BUTTERNUT (AD)"
It's a sinkhole that they are destroying our environment for. It's not sustainable at massive scale, and I expect Sam Altman to eventually join his 30 Under 30 cohorts like SBF.
- iLoveOncall 2 days ago> ChatGPT has more consumer trust than Google at this point
This is just HackerNews bias.
Everyone who has used ChatGPT (or any other LLM, really) has already been burnt by a completely false answer. By contrast, everyone understands that Google never claimed to provide a true answer, just links to potential answers.
- devops000 2 days agoOne of the features of ChatGPT is that there are no ads, so you get straight to the information you need. If you add ads, it's the same old problem all over again. They'd still have traction, but it wouldn't be so extraordinary, and Google could do the same. OpenAI must differentiate itself from Google.
- Thaxll 2 days agoGoogle has google.com, youtube, chrome, android, gmail, google map etc ... I don't see OpenAI having a product close to that.[-]
- wslh 2 days agoGoogle is older and many of the products you describe were acquisitions (inorganic growth).[-]
- vitus 2 days ago> google.com, youtube, chrome, android, gmail, google map etc
Of those, it's 50/50. The acquisitions were YT, Android, Maps. Search was obviously Google's original product, Chrome was an in-house effort to rejuvenate the web after IE had caused years of stagnation, and Gmail famously started as a 20% project.
There are of course criticisms that Google has not really created any major (say, billion-user) in-house products in the past 15 years.
[-]- ppseafield 2 days agoChrome's engine was WebKit originally, which they then forked. Not an acquisition, but benefitted greatly from prior work.[-]
- wslh 2 days agoIndeed, Chrome was built with many relatively small acquisitions. For example, GreenBorder for sandboxing and Skia for the 2D graphics engine. At the time, sandboxing was novel for a browser.
- Workaccount2 2 days agoBy this point I imagine it's a novelty to find any code from the original acquisition in those products.[-]
- returnInfinity 2 days agoCode is secondary. Pmf is primary.
Code is a commodity. Very easy to make. Now even llms are commodities. There are other intangible assets more valuable. Like the chatgpt brand here.
[-]- wslh 2 days agoThis is beyond PMF, it's about traction on steroids, owning the last mile.
- thorio 2 days agoI think affiliate revenue is much more likely to be a relevant revenue stream for them in the future. Instant checkouts would be a game changer in my view, especially for upcoming generations that don't have the habit of scrolling the open web to get their stuff done, but are native to LLMs.
- alok-g 3 days ago>> ... underestimating the money they will come from ads in the future.
I would like AI to focus on helping consumers discover the right products for their stated needs as opposed to just being shown (personalized) ads. As of now, I frequently have a hard time finding the things I need via Amazon search, Google, as well as ChatGPT.
[-]- dickersnoodle 2 days agoGood luck with that. Every supermarket I've been in has those stupid baskets or racks of stuff blocking the aisle and their data must show that it gets people to buy a little more of that stuff even though it makes me quite resolved to never buy the shit they're forcing me to look at in order to go get the five things I really need.
- t_sawyer 2 days agoGoogle already has the ad network. They already have Gemini. IMO this will end up being OpenAI proving that the revenue model works and then Google will swoop in and take their market share away.
- bradly 2 days agoAre they currently adding affiliate links to their outbound Amazon product links?
- belter 2 days agoSo they are losing money but will make it on volume? :-)
- ares623 2 days ago> ChatGPT has more consumer trust than Google at this point
Gee I wonder why?
- 0xDEAFBEAD 2 days ago>ChatGPT has more consumer trust than Google at this point
"There are increasing reports of people having delusional conversations with chatbots. This suggests that, for some, the technology may be associated with episodes of mania or psychosis when the seemingly authoritative system validates their most off-the-wall thinking. Cases of conversations that preceded suicide and violent behavior, although rare, raise questions about the adequacy of safety mechanisms built into the technology."
https://www.nytimes.com/2025/08/26/technology/chatgpt-openai...
- wnevets 2 days agoBanner ads would only be the start of the enshittification of AI chats. I can't wait for the bots to start recommending products and services of the highest bidder.
- throwaway2037 2 days ago
> They generated $4.3B in revenue without any advertising program
To be clear, they bought/aired a Super Bowl advert. That is pretty expensive. You might argue that a Super Bowl advert versus $4B+ in revenue is inconsequential, but you cannot say there is no advertising.
Also, their press release said:
> $2 billion spent on sales and marketing
Vague. Is this advertising? Eh, not sure, but that is a big chunk of money.
[-]- NotMichaelBay 2 days agoI think they mean OpenAI showing ads from other companies to users, not buying ads themselves.
- stephc_int13 3 days agoEveryone is trying to compare AI companies with something that happened in the past, but I don't think we can predict much from that.
GPUs are not railroads or fiber optics.
The cost structure of ChatGPT and other LLM-based services is entirely different from the web: they are very expensive to build but also cost a lot to serve.
Companies like Meta, Microsoft, Amazon, Google would all survive if their massive investment does not pay off.
On the other hand, OpenAI, Anthropic, and others could soon find themselves in a difficult position, at the mercy of Nvidia.
[-]- wood_spirit 3 days agoUnlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027. It won’t retain much value in the same way as the infrastructure of previous bubbles did?[-]
- christina97 3 days agoThe A100 came out 5.5 years ago and is still the staple for many AI/ML workloads. Even AI hardware just doesn’t depreciate that quickly.[-]
- Ianjit 2 days agoUsers are waiting for Blackwell. Then Rubin. CRWV depreciates GPUs over 6 years. Rails last a lot longer.
- littlestymaar 2 days agoUnless you care about FLOP/Watt, which big players definitely do.
- dzhiurgis 3 days agoThis. There’s even a market for them being built (DRW).
- layoric 3 days ago> Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027.
I definitely don't think compute is anything like railroads and fibre, but I'm not so sure compute will continue its efficiency gains of the past. Power consumption for these chips is climbing fast, lots of the gains are from better hardware support for 8-bit/4-bit precision, and I believe yields are getting harder to achieve as features get smaller.
Betting against compute getting better/cheaper/faster is probably a bad idea, but fundamental improvements I think will be a lot slower over the next decade as shrinking gets a lot harder.
[-]- palmotea 3 days ago>> Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027.
> I definitely don't think compute is anything like railroads and fibre, but I'm not so sure compute will continue it's efficiency gains of the past. Power consumption for these chips is climbing fast, lots of gains are from better hardware support for 8bit/4bit precision, I believe yields are getting harder to achieve as things get much smaller.
I'm no expert, but my understanding is that as feature sizes shrink, semiconductors become more prone to failure over time. Those GPUs probably aren't going to all fry themselves in two years, but even if GPUs stagnate, chip longevity may limit the medium/long-term value of the (massive) investment.
- spiderice 3 days agoUnfortunately changing 2027 to 2030 doesn't make the math much better[-]
- JumpCrisscross 2 days ago> changing 2027 to 2030 doesn't make the math much better
Could you show me?
Early turbines didn't last that long. Even modern ones are only rated for a few decades.
[-]- singularity2001 2 days agoThere is a difference between a few decades and half a decade though? Or has time in general accelerated so much that it's basically very similar[-]
- JumpCrisscross 2 days ago“Within Parsons' lifetime, the generating capacity of a [steam turbine] unit was scaled up by about 10,000 times” [1].
For comparison, Moore’s law (at 2 years per doubling) scales 4 orders of magnitude in about 27 years. That’s roughly the lifetime of a modern steam turbine [2]. In actuality, Parsons lived 77 years [3], implying a 13% growth rate, so doubling every 6 versus 2 years. But within the same order of magnitude.
[1] https://en.m.wikipedia.org/wiki/Steam_turbine
[2] https://alliedpg.com/latest-articles/life-extension-strategi... 30 years
[3] https://en.m.wikipedia.org/wiki/Charles_Algernon_Parsons
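The growth-rate comparison above can be sanity-checked directly (a sketch; the ~10,000x scaling and 77-year lifetime are the figures from the links cited):

```python
import math

scale = 10_000   # ~4 orders of magnitude of turbine capacity growth
years = 77       # Parsons' lifetime

# Implied annual growth rate r, from scale = (1 + r) ** years
r = scale ** (1 / years) - 1
doubling_time = math.log(2) / math.log(1 + r)

# Moore's law at one doubling per 2 years reaches 10^4x in:
moore_years = math.log2(scale) * 2

print(f"turbines: ~{r:.0%}/year, doubling every ~{doubling_time:.0f} years")
print(f"Moore's law covers 10,000x in ~{moore_years:.0f} years")
```

This reproduces the comment's numbers: roughly 13% annual growth, doubling every ~6 years for turbines, versus ~27 years for Moore's law to span the same four orders of magnitude.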
- skywhopper 3 days agoUnfortunately the chips themselves probably won’t physically last much longer than that under the workloads they are being put to. So, yes, they won’t be totally obsolete as technology in 2028, but they may still have to be replaced.[-]
- munk-a 3 days agoYeah - I think that the extremely fast depreciation just due to wear and use on GPUs is pretty unappreciated right now. So you've spent 300 mil on a brand new data center - congrats - you'll need to pay off that loan and somehow raise another 100 mil to actually maintain that capacity for three years based on chip replacement alone.
There is an absolute glut of cheap compute available right now due to VC and other funds dumping into the industry (take advantage of it while it exists!) but I'm pretty sure Wall St. will balk when they realize the continued costs of maintaining that compute and look at the revenue that expenditure is generating. People think of chips as a piece of infrastructure - you buy a personal computer and it'll keep chugging for a decade without issue in most cases - but GPUs are essentially consumables - they're an input to producing the compute a data center sells that needs constant restocking - rather than a one-time investment.
[-]- davedx 2 days agoThere are some nuances there.
- Most big tech companies are investing in data centers using operating cash flow, not levering it
- The hyperscalers have in recent years been tweaking the depreciation schedules of regular cloud compute assets (extending them), so there's a push and a pull going on for CPU vs GPU depreciation
- I don't think anyone who knows how to do fundamental analysis expects any asset to "keep chugging for a decade without issue" unless it's explicitly rated to do so (like e.g. a solar panel). All assets have depreciation schedules, GPUs are just shorter than average, and I don't think this is a big mystery to big money on Wall St
- chermi 3 days agoDo we actually know how they're degrading? Are there still Pascals out there? If not, is it because they actually broke or because of poor performance? I understand it's tempting to say near 100% workload for multiple years = fast degradation, but what are the actual stats? Are you talking specifically about the actual compute chip or the whole compute system -- I know there's a big difference now with the systems Nvidia is selling. How long do typical Intel/AMD CPU server chips last? My impression is a long time.
If we're talking about the whole compute system like a gb200, is there a particular component that breaks first? How hard are they to refurbish, if that particular component breaks? I'm guessing they didn't have repairability in mind, but I also know these "chips" are much more than chips now so there's probably some modularity if it's not the chip itself failing.
[-]- hxorr 2 days agoI watch a GPU repair guy and its interesting to see the different failure modes...
* memory IC failure
* power delivery component failure
* dead core
* cracked BGA solder joints on core
* damaged PCB due to sag
These issues are compounded by
* huge power consumption and heat output of core and memory, compared to system CPU/memory
* physical size of core leads to more potential for solder joint fracture due to thermal expansion/contraction
* everything needs to fit in PCIe card form factor
* memory and core not socketed, if one fails (or supporting circuitry on the PCB fails) then either expensive repair or the card becomes scrap
* some vendors have cards with design flaws which lead to early failure
* sometimes poor application of thermal paste/pads at the factory (e.g., only half of the core making contact)
* and, in my experience acquiring 4-5 year old GPUs to build gaming PCs with (to sell), almost without fail the thermal paste has dried up and the card is thermal throttling
[-]- oskarkk 2 days agoThese failures of consumer GPUs may be not applicable to datacenter GPUs, as the datacenter ones are used differently, in a controlled environment, have completely different PCBs, different cooling, different power delivery, and are designed for reliability under constant max load.[-]
- fennecbutt 2 days agoYeah, you're right. Definitely not applicable at all. Especially since Nvidia often supplies them tied into DGX units with cooling etc., i.e. a controlled environment.
With a consumer GPU you have no idea if they've shoved it into a hotbox of a case or not
- chermi 2 days agoSo, if anything, maybe we're underestimating the lifetime of these datacenter GPUs?
- Workaccount2 2 days agoBelieve it or not, the GPUs from bitcoin farms are often the most reliable.
Since they were run 24/7, there was rarely the kind of heat stress that kills cards (heating and cooling cycles).
[-]- buu700 2 days agoCould AI providers follow the same strategy? Just throw any spare inference capacity at something to make sure the GPUs are running 24/7, whether that's model training, crypto mining, protein folding, a "spot market" for non-time-sensitive/async inference workloads, or something else entirely.[-]
- chermi 2 days agoI have to imagine some of them try this. I know you can schedule non-urgent work loads with some providers that run when compute space is available. With enough work loads like that, assuming they have well-defined or relatively predictable load/length, it would be a hard but approximately solvable optimization problem.[-]
- buu700 2 days agoI've seen things like that, but I haven't heard of any provider with a bidding mechanic for allocation of spare compute (like the EC2 spot market).
I could imagine scenarios where someone wants a relatively prompt response but is okay with waiting in exchange for a small discount and bids close to the standard rate, where someone wants an overnight response and bids even less, and where someone is okay with waiting much longer (e.g. a month) and bids whatever the minimum is (which could be $0, or some very small rate that matches the expected value from mining).
- epolanski 3 days agoI'm not sure.
The number of cycles that go through the silicon matters, but what matters most are temperature and electrical shocks.
If the GPUs are stable and kept at low temperature, they can run at full load for years. There are servers out there that have been up for decades.
- potatolicious 3 days agoYep, we are (unfortunately) still running on railroad infrastructure built a century ago. The amortization periods on that spending are ridiculously long.
Effectively every single H100 in existence now will be e-waste in 5 years or less. Not exactly railroad infrastructure here, or even dark fiber.
[-]- 9rx 3 days ago> Yep, we are (unfortunately) still running on railroad infrastructure built a century ago.
That which survived, at least. A whole lot of rail infrastructure was not viable and soon became waste of its own. There was, at one time, ten rail lines around my parts, operated by six different railway companies. Only one of them remains fully intact to this day. One other line retained a short section that is still standing, which is now being used for car storage, but was mostly dismantled. The rest are completely gone.
When we look back in 100 years, the total amortization cost for the "winner" won't look so bad. The “picks and axes” (i.e. H100s) that soon wore down, but were needed to build the grander vision won't even be a second thought in hindsight.
[-]- palmotea 3 days ago> That which survived, at least. A whole lot of rail infrastructure was not viable and soon became waste of its own. There was, at one time, ten rail lines around my parts, operated by six different railway companies. Only one of them remains fully intact to this day. One other line retained a short section that is still standing, which is now being used for car storage, but was mostly dismantled. The rest are completely gone.
How long did it take for 9 out of 10 of those rail lines to become nonviable? If they lasted (say) 50 years instead of 100, because that much rail capacity was (say) obsoleted by the advent of cars and trucks, that's still pretty good.
[-]- 9rx 3 days ago> How long did it take for 9 out of 10 of those rail lines to become nonviable?
Records from the time are few and far between, but, from what I can tell, it looks like they likely weren't ever actually viable.
The records do show that the railways were profitable for a short while, but it seems only because the government paid for the infrastructure. If they had to incur the capital expenditure themselves, the math doesn't look like it would math.
Imagine where the LLM businesses would be if the government paid for all the R&D and training costs!
[-]- Spooky23 3 days agoRailroads were pretty profitable for a long time. The western long haul routes were capitalized by land transfers.
What killed them was the same thing that killed marine shipping — the government put the thumb on the scale for trucking and cars to drive postwar employment and growth of suburbs, accelerate housing development, and other purposes.
[-]- 9rx 3 days ago> the government put the thumb on the scale for trucking and cars to drive postwar employment and growth of suburbs, accelerate housing development, and other purposes.
The age of postwar suburb growth would be more commonly attributed to WWII, but the records show these railroads were already losing money hand over fist by the WWI era. The final death knell, if there ever was one, was almost certainly the Great Depression.
But profitable and viable are not one and the same, especially given the immense subsidies at play. You can make anything profitable when someone else is covering the cost.
[-]- Spooky23 2 days agoThere were a lot of complexities. It's hard to really understand the true position of these businesses in modern terms. Operationally, they would often try to over-represent losses because the Interstate Commerce Commission and other State-level entities mandated services, especially short-haul passenger service that became irrelevant.
National infrastructure is always subsidized and is never profitable on its own. UPS is the largest trucking company, but its balance sheet doesn't reflect the costs of enabling its business. The area I grew up in had tarred gravel roads exclusively until the early 1980s; they have asphalt today because the federal government subsidizes the expense. The regulatory and fiscal scales tipped to automotive and, to a lesser extent, aircraft. It's arguable whether that was good or bad, but it is what it is.
[-]- 9rx 2 days ago> State-level
State-level...? You're starting to sound like the other commenter. It's a big world out there.
> National infrastructure is always subsidized
Much of the network was only local, and mostly subsidized by municipal governments.
- jcranmer 2 days ago> The records do show that the railways were profitable for a short while, but it seems only because the government paid for the infrastructure. If they had to incur the capital expenditure themselves, the math doesn't look like it would math.
Actually, governments in the US rarely actually provided any capital to the railroads. (Some state governments did provide some of the initial capital for the earliest railroads). Most of federal largess to the railroads came in the form of land grants, but even the land grant system for the railroads was remarkably limited in scope. Only about 7-8% of the railroad mileage attracted land grants.
[-]- 9rx 2 days ago> Actually, governments in the US rarely actually provided any capital to the railroads.
Did I, uh, miss a big news announcement today or something? Yesterday "around my parts" wasn't located in the US. It most definitely wasn't located in the US when said rail lines were built. Which you even picked up on when you recognized that the story about those lines couldn't have reasonably been about somewhere in the US. You ended on a pretty fun story so I guess there is that, but the segue into it wins the strangest thing ever posted to HN award. Congrats?
- lesuorac 3 days agoIf 1/10 investment lasts 100 years that seems pretty good to me. Plus I'd bet a lot of the 9/10 of that investment had a lot of the material cost re-coup'd when scrapping the steel. I don't think you're going to recoup a lot of money from the H100s.[-]
- 9rx 3 days agoMuch like LLMs. There are approximately 10 reasonable players giving it a go, and, unless this whole AI thing goes away, never to be seen again, it is likely that one of them will still be around in 100 years.
H100s are effectively consumables used in the construction of the metaphorical rail. The actual rail lines had their own fair share of necessary tools that retained little to no residual value after use as well. This isn't anything unique.
[-]- munk-a 3 days agoH100s being thought of as consumables is keen - it's much better to analogize the H100s to coal and the chip manufacturer to the mine owner than to think of them as rails. They are impermanent and need constant upkeep and replacement - they are not one-time costs that you build as infra and forget about.
- fooker 3 days ago> Effectively every single H100 in existence now will be e-waste in 5 years or less.
This remains to be seen. H100 is 3 years old now, and is still the workhorse of all the major AI shops. When there's something that is obviously better for training, these are still going to be used for inference.
If what you say were true, you could find an A100 for cheap/free right now. But check out the prices.
[-]- fxtentacle 3 days agoYeah, I can rent an A100 server for roughly the same price as what the electricity would cost me.[-]
- fennecbutt 2 days agoBecause they buy the electricity in bulk so these things are not the same.
- fooker 2 days agoThat is true for almost any cloud hardware.
- typpilol 3 days agoWhere?[-]
- Cheer2171 2 days ago~$1.25-1.75/hr at Runpod or vast.ai for an A100
Edit: https://getdeploying.com/reference/cloud-gpu/nvidia-a100
[-]- diziet 2 days agoThe A100 SXM4 has a TDP of 400 watts; call it about 800 W with cooling and other overhead.
Bulk industrial pricing is about 8-9 cents per kWh. We're over an order of magnitude off here.
At $20k per card all-in (MSRP + datacenter costs) for the 80GB version, with a 4-year payoff schedule the card costs 57 cents per hour (20,000/24/365/4), assuming 100% utilization.
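The arithmetic above is easy to reproduce. A minimal sketch, using only the comment's own assumed inputs (card cost, payoff period, power draw, electricity rate, rental quotes), not measured data:

```python
# Sanity check of the A100 rental-economics arithmetic in this thread.
# Every input below is an assumption quoted from the comments.

CARD_COST_USD = 20_000            # MSRP + datacenter share, 80GB version
PAYOFF_YEARS = 4
POWER_KW = 0.8                    # 400 W TDP, doubled for cooling overhead
ELECTRICITY_USD_PER_KWH = 0.085   # mid-range bulk industrial rate
RENTAL_USD_PER_HOUR = 1.50        # midpoint of the $1.25-1.75/hr quotes

hours = PAYOFF_YEARS * 365 * 24
card_per_hour = CARD_COST_USD / hours                 # amortized card cost
power_per_hour = POWER_KW * ELECTRICITY_USD_PER_KWH   # electricity cost

print(f"card amortization: ${card_per_hour:.2f}/hr")
print(f"electricity:       ${power_per_hour:.3f}/hr")
print(f"rental price:      ${RENTAL_USD_PER_HOUR:.2f}/hr")
# Electricity alone is roughly 20x below the rental price, so "rent an
# A100 for the cost of electricity" only holds if the card is free.
```

The card amortization comes out to about $0.57/hr and the electricity to about $0.07/hr, which is where the "order of magnitude off" claim comes from.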
- hyperbovine 3 days ago> Effectively every single H100 in existence now will be e-waste in 5 years or less.
This is definitely not true, the A100 came out just over 5 years ago and still goes for low five figures used on eBay.
- SJC_Hacker 3 days ago> Yep, we are (unfortunately) still running on railroad infrastructure built a century ago. The amortization periods on that spending is ridiculously long.
Are we? I was under the impression that the tracks degraded due to stresses like heat/rain/etc. and had to be replaced periodically.
[-]- ralph84 3 days agoThe track bed, rails, and ties will have been replaced many times by now. But the really expensive work was clearing the right of way and the associated bridges, tunnels, etc.[-]
- intrasight 2 days agoI am really digging the railroad analogies in this discussion! There are some striking similarities if you do the right mappings and timeframe transformations.
I am an avid rail-to-trail cycler and more recently a student of the history of the rail industry. The result was my realization that the ultimate benefit to society and to me personally is the existence of these amazing outdoor recreation venues. Here in Western PA we have many hundreds of miles of rail-to-trail. My recent realization is that it would be totally impossible for our modern society to create these trails today. They were built with blood, sweat, tears and much dynamite - and not a single thought towards environmental impact studies. I estimate that only ten percent of the rail lines built around here are still used for rail. Another ten percent have become recreational trails. That percent continues to rise as more abandoned rail lines transition to recreational use. Here in Western PA we add a couple dozen miles every year.
After reading this very interesting discussion, I've come to believe that the AI arms race is mainly just transferring capital into the pockets of the tool vendors - just as was the case with the railroads. The NVidia chips will be amortized over 10 years and the models over perhaps 2 years. Neither has any lasting value. So the analogy to rail is things like dynamite and rolling stock. What in AI will maintain value? I think the data center physical plants, power plants and transmission networks will maintain their value longer. I think the exabytes of training data will maintain their value even longer.
What will become the equivalent of rail-to-trail? I doubt that any of the laborers or capitalists building rail lines foresaw that their ultimate value to society would be that people like me could enjoy a bike ride. What are the now-unforeseen long-term benefits to society of this AI investment boom?
Rail consolidated over 100 years into just a handful of firms in North America, and my understanding is that these firms are well-run and fairly profitable. I expect a much more rapid shakeout and consolidation to happen in AI. And I'm putting my money on the winners being Apple first and Google second.
Another analogy I just thought of - the question of will the AI models eventually run on big-iron or in ballpoint pens. It is similar to the dichotomy of large-scale vs miniaturized nuclear power sources in Asimov's Foundation series (a core and memorable theme of the book that I haven't seen in the TV series).
- Spooky23 3 days agoHow was your trip down the third Avenue El? Did your goods arrive via boxcar to 111 8th Ave?[-]
- selimthegrim 2 days agoAt the rate they are throwing obstacles at the promised subway which they got rid of the 3rd Ave El for maybe his/her grandkids will finish the trip.
- Analemma_ 3 days agoExactly: when was the last time you used GPT-3.5? Its value depreciated to zero after, what, two and a half years? (And the Nvidia chips used to train it have barely retained any value either.)
The financials here are so ugly: you have to light truckloads of money on fire forever just to jog in place.
[-]- falcor84 3 days agoI would think that it's more like a general codebase - even if after 2.5 years, 95% of the lines were rewritten, and even if the whole thing was rewritten in a different language, there is no point in time at which its value diminished, as you arguably couldn't have built the new version without all the knowledge (and institutional knowledge) from the older version.[-]
- spwa4 3 days agoI rejoined a previous employer of mine, someone everyone here knows ... and I found that half their networking equipment is still being maintained by code I wrote in 2012-2014. It has not been rewritten. Hell, I rewrote a few parts that badly needed it despite joining another part of the company.
- tim333 3 days agoOpenAI is now valued at $500bn though. I doubt the investors are too wrecked yet.
It may be like looking at the early Google and saying they are spending loads on compute and haven't even figured how to monetize search, the investors are doomed.
[-]- oblio 2 days agoGoogle was founded in 1998 and IPOed in 2004. If OpenAI were feeling confident, they'd find a way to set up a proper company and IPO - they're 9 years past founding. It's all mostly fictional money at this point.[-]
- matwood 2 days agoIt's not about confidence. OpenAI would be huge on the public markets, but since they can raise plenty of money in the private market there is no reason to deal with that hassle - yet.
- aurareturn 2 days agoIf OpenAI is a public company today, I would bet almost anything that it'd be a $1+ trillion company immediately on opening day.
- CompoundEyes 3 days agoI really did, a few days ago - gpt-3.5-fast is a great model for certain tasks, and cost-wise via the API. Lots of solutions being built on today's latest are for tomorrow's legacy model - if it works, just pin the version.
- fooker 3 days ago> And the Nvidia chips used to train it have barely retained any value either
Oh, I'd love to get a cheap H100! Where can I find one? You'll find it costs almost as much used as it does new.
[-]- counterargument 2 days ago[dead]
- mattmanser 3 days agoBut is it a bit like a game of musical chairs?
At some point the AI becomes good enough, and if you're not sitting in a chair at the time, you're not going to be the next Google.
[-]- potatolicious 3 days agoNot necessarily? That assumes that the first "good enough" model is a defensible moat - i.e., the first ones to get there becomes the sole purveyors of the Good AI.
In practice that hasn't borne out. You can download and run open-weight models now that are within spitting distance of state-of-the-art, and open-weight models are at best a few months behind the proprietary stuff.
And even within the realm of proprietary models no player can maintain a lead. Any advances are rapidly matched by the other players.
More likely at some point the AI becomes "good enough"... and every single player will also get a "good enough" AI shortly thereafter. There doesn't seem to be a scenario where any player can afford to stop setting cash on fire and start making money.
[-]- wood_spirit 2 days agoPerhaps the first thing the owners ask the first true AGI is “how do I dominate the world?” and the AGI outlines how to stop any competitor getting AGI..?
- cj 3 days ago> money on fire forever just to jog in place.
Why?
I don't see why these companies can't just stop training at some point. Unless you're saying the cost of inference is unsustainable?
I can envision a future where ChatGPT stops getting new SOTA models, and all future models are built for enterprise or people willing to pay a lot of money for high ROI use cases.
We don't need better models for the vast majority of chats taking place today. E.g. kids using it for help with homework - are today's models really not good enough?
[-]- Eisenstein 3 days agoThey aren't. They are obsequious. This is much worse than it seems at first glance, and you can tell it is a big deal because a lot of effort going into training the new models is to mitigate it.
- MontyCarloHall 2 days ago>I don't see why these companies can't just stop training at some point.
Because training isn't just about making brand new models with better capabilities, it's also about updating old models to stay current with new information. Even the most sophisticated present-day model with a knowledge cutoff date of 2025 would be severely crippled by 2027 and utterly useless by 2030.
Unless there is some breakthrough that lets existing models cheaply incrementally update their weights to add new information, I don't see any way around this.
[-]- fennecbutt 2 days agoAin't never hearda rag[-]
- MontyCarloHall 2 days agoThere is no evidence that RAG delivers equivalent performance to retraining on new data. Merely having information in the context window is very different from having it baked into the model weights. Relying solely on RAG to keep model results current would also degrade with time, as more and more information would have to be incorporated into the context window the longer it's been since the knowledge cutoff date.[-]
- fennecbutt 2 days agoI honestly do not think that we should be training models to regurgitate training data anyway.
Humans do this to a minimum degree, but the things that we can recount from memory are simpler than the contents of an entire paper, as an example.
There's a reason we invented writing stuff down. And I do wonder if future models should be trying to optimise for RAG with their training: train for reasoning and stringing coherent sentences together, sure, but with a focus on using that to connect hard data found in the context.
And who says models won't have massive or unbounded contexts in the future? Or that predicting a single token (or even a sub-sequence of tokens) still remains a one shot/synchronous activity?
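For what it's worth, the RAG idea being debated above is simple enough to sketch. This toy version uses plain word-overlap scoring in place of the embedding similarity and vector store a real system would use, and the documents are made up for illustration:

```python
# Toy RAG: retrieve the most relevant snippet for a query, then build a
# prompt asking the model to answer only from that snippet.

DOCS = [
    "The A100 GPU was released by Nvidia in 2020.",
    "The H100 GPU succeeded the A100 and targets AI training.",
    "Rail-to-trail programs convert abandoned rail lines into bike paths.",
]

def score(query: str, doc: str) -> int:
    """Count shared lowercase words between query and document."""
    q = set(query.lower().split())
    d = set(doc.lower().strip(".").split())
    return len(q & d)

def retrieve(query: str) -> str:
    """Return the document with the highest overlap score."""
    return max(DOCS, key=lambda doc: score(query, doc))

def build_prompt(query: str) -> str:
    context = retrieve(query)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("When was the A100 released?"))
```

The point of contention in the thread maps directly onto this sketch: the model only "knows" what lands in `context`, so freshness depends on the document store rather than on retraining the weights.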
- mcswell 3 days ago"...all the best compute in 2025 will be lacklustre in 2027": How does the compute (I assume you mean on PCs) of 2025 compare with the compute of 2023?
Oh wait, the computer I'm typing this on was manufactured in 2020...
[-]- brianwawok 3 days agoNeato. How’s that 1999 era laptop? Because 25 year old trains are still running and 25 year old train track is still almost new. It’s not the same and you know it.[-]
- Spooky23 3 days agoUnlike 1875, we have Saudi and other trillionaires/billionaires willing to commit almost any amount to own the future of business.[-]
- rchaud 3 days agoExcept they behave less like shrewd investors and more like bandwagon jumpers looking to buy influence or get rich quick. Crypto, Twitter, ridesharing, office sharing and now AI. None of these have been the future of business.
Business looks a lot like what it has throughout history. Building physical transport infrastructure, trade links, improving agricultural and manufacturing productivity and investing in military advancements. In the latter respect, countries like Turkey and Iran are decades ahead of Saudi in terms of building internal security capacity with drone tech for example.
[-]- Spooky23 3 days agoAgreed - I don’t think they are particularly brilliant as a category. Hereditary kleptocracy has limits.
But… I don’t think there’s an example in modern history of the this much capital moving around based on whim.
The “bet on red” mentality has produced some odd leaders with absolute authority in their domain. One of the most influential figures on the US government claims to believe that he is saving society from the antichrist. Another thinks he’s the protagonist in a sci-fi novel.
We have the madness of monarchy with modern weapons and power. Yikes.
- conartist6 3 days agoIt's not that the investments just won't pay off, it's that the global markets are likely to crash like happened with the subprime mortgage crisis.[-]
- vitaflo 3 days agoThis is much closer to the dotcom boom than the subprime stuff. The dotcom boom/bust affected tech more than anything else. It didn’t involve consumers like the housing crash did.[-]
- bobxmax 3 days agoThe dot com boom involved silly things like Pets.com IPOing pre-revenue. Claude Code hit $500m in ARR in 3 months.
The fact people don't see the difference between the two is unreal. Hacker News has gone full r* around this topic; you find better nuance even on Reddit than here.
[-]- mcintyre1994 2 days agoDo you mean pre-profit/without ever making a profit? I found an article about their IPO:
> Pets.com lost $42.4 million during the fourth quarter last year on $5.2 million in sales. Since the company's inception in February of last year, it has lost $61.8 million on $5.8 million in sales.
https://www.cnet.com/tech/tech-industry/pets-com-raises-82-5...
They had sales, they were just making a massive loss. Isn’t that pretty similar to AI companies, just on a way smaller scale?
We haven’t seen AI IPOs yet, but it’s not hard to imagine one of them going public before making profit IMO.
[-]- bobxmax 2 days agoYou'd think after all this time nerds would stop obsessing about profit. Profit doesn't matter. It hasn't mattered for a long time because tech companies have such fat margins they can go profitable in months if they wanted to.
Yes, $5m in sales. That's effectively pre-revenue for a tech company.
- lelandbatey 3 days agoThey're not claiming that it's like the dot com boom because no one is actually making money. They're claiming that this is more like the dot com boom than the housing bubble, which I think is true. The dot com crash didn't cause Jane-on-the-street to lose her house while she worked a factory job, though the housing crisis did have those kinds of consumer-affecting outcomes.[-]
- bobxmax 2 days agoIt's nothing like the dot com bubble because that was based on speculative future value and zero present value. There is more present value in AI than at any point in software in the last 30 years.[-]
- sagarm 21 hours agoThe Internet bubble was also based on something real, but that didn't stop it from being a bubble.
For example, Cisco peaked at over $500B in market cap during the boom. Its current market cap is around half that, at $250B.
- conartist6 2 days agoWhat you're missing is how that value comes about. People seem to think it's an infinite fountain but it's more like strip mining the commons.
We also know that AI hype is holding up most of the stock market at this point, including ticker symbols you don't think of as being for "AI companies". Market optimism at large is coming from the idea that companies won't need employees soon, or that they can keep using AI to de-leverage and de-skill their workforce.
[-]- bobxmax 2 days agoSo that $500m in ARR in 3 months is from hype? That's what you're contending?
- oblio 2 days ago1. Claude Code is claimed to have hit $500m ARR in 3 months.
2. What is the Claude Code profit for the same period?
3. What is the Claude Code profit per request served when excluding fixed expenses such as training the models?
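Questions 2 and 3 above can be made concrete with a toy calculation. Every figure here other than the claimed ARR is invented purely to show why ARR alone says nothing about profitability:

```python
# Toy per-request unit economics. All numbers except ARR are hypothetical.

arr = 500_000_000                     # claimed annual recurring revenue
requests_per_year = 10_000_000_000    # hypothetical request volume
inference_cost_per_request = 0.04     # hypothetical serving cost
training_amortized_per_year = 1_000_000_000  # hypothetical fixed cost

revenue_per_request = arr / requests_per_year
variable_margin = revenue_per_request - inference_cost_per_request

# Question 3: margin per request, excluding fixed training costs.
print(f"revenue/request:  ${revenue_per_request:.3f}")
print(f"variable margin:  ${variable_margin:.3f}")

# Question 2: profit once amortized training is included.
profit = arr - inference_cost_per_request * requests_per_year \
             - training_amortized_per_year
print(f"profit incl. training: ${profit:,.0f}")
```

With these invented inputs the product is marginally positive per request but deeply unprofitable overall, which is exactly the distinction the three questions are probing.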
[-] - jrflowers 2 days agoYou have a good point. Pets.com would have fared much better if investors gave them several billion dollars in 1998, 1999 and then again in 2000[-]
- 1oooqooq 2 days agoI can see Cramer now: "Buy Pets.com! Revenue is just around the corner"[-]
- jrflowers 2 days agoPets.com could have traded at a significant multiple of the entire combined revenue of the pet space if investors simply poured infinite dollars into it.
They could have even gotten into the programming space with all that capital. Pawed Code
- bobxmax 2 days agoNo, that's not my point. It helps to get out of the HN echo chamber to see it though.[-]
- jrflowers 2 days agoThat’s a good point. Pets.com raised $82 million from its IPO pre-revenue (bad) and Anthropic raised $500 million from Sam Bankman-Fried pre-revenue (good)[-]
- bobxmax 23 hours agoYes, because Anthropic makes revenue. You're having a hard time grasping how business works I think.[-]
- jrflowers 12 hours agoI'm admittedly not very good at math. You pointed out that Claude Code got to $500mm ARR in 3 months, but it kind of looks like it actually took Anthropic over four years and many billions of dollars to make a product that generates significant revenue (I appreciate their modesty in not bragging about the net profit on that revenue). I'd say the users bragging about costing Anthropic orders of magnitude more in compute than what they pay kind of makes the ARR less impressive, but I'm not a fancy management scientist
But I'm bad at math and grasping things. If you simply pick a point in time to start counting things and decide which costs you want to count, then the business looks very exciting. The math is much more reassuring and the overall climate looks much saner than the dot com bubble because we simply don't know how much money is being lost, which is fine, because...
- CodingJeebus 2 days agoWe are starting to see larger economic exposure to AI.
Banks are handing out huge loans to the neocloud companies that are being collateralized with GPUs. These loans could easily go south if the bottom falls out of the GPU market. Hopefully it’s a very small amount of liquidity tied up in those loans.
Tech stocks make up a significant part of the stock market now. Where the tech stocks go, the market will follow. Everyday consumers invested in index funds will definitely see a hit to their portfolios if AI busts.
- digdugdirk 3 days agoBut it does involve a ton of commercial real estate investment, as well as a huge shakeup in the energy market. People may not lose their homes, but we'll all be paying for this one way or another.
- mothballed 3 days agoThe Fed could still push up the nominal value of stocks quite a bit by debasing the USD, if they want, by pinning interest rates near 0 and forcing a rush out of cash into stocks and other asset classes.[-]
- mcny 3 days agoThe point still stands though. All these other companies can pivot to something else if AI fails, but what will OpenAI do?[-]
- rubyfan 3 days agoBy the time it catches up with them they will have IPO’d and dumped their problem onto the public market. The administration will probably get a golden share and they will get a bail out in an effort to soften the landing for their campaign donors that also have huge positions. All the rich people will be made whole and the US tax payer will pay the price of the bail out.
And Microsoft or whoever will absorb the remains of their technology.
- rglover 3 days agoSell to Microsoft and be absorbed there (and Anthropic to Amazon).
- mandeepj 3 days ago> but what will OpenAI do?
Will get acquired at “Store Closing” price!!
- JCM9 3 days agoBusinesses are different but the fundamentals of business and finance stay consistent. In every bubble that reality is unavoidable, no matter how much people say/wish “but this time is different.”
- lossolo 3 days agoThe funniest thing about all this is that the biggest difference between the LLMs from Anthropic, Google, OpenAI, and Alibaba is not model architecture or training objectives, which are broadly similar, but the dataset. What people don't realize is how much of that data comes from massive undisclosed scrapes + synthetic data + countless hours of expert feedback shaping the models. As methodologies converge, the performance gap between these systems is already narrowing and will continue to diminish over time.
- redwood 3 days agoI'm reminded of the quote "If you owe the bank $100 that's your problem. If you owe the bank $100 million, that's the bank's problem." - J. Paul Getty
Nvidia may well be at the mercy of them! Hence the recent circular dealing
- bee_rider 3 days agoThe past/present company they remind me of the most is semiconductor fabs. Significant generation-to-generation R&D investment, significant hardware and infrastructure investment, quite winner-takes-all on the high end, obsoleted in a couple years at most.
The main differences are these models are early in their development curve so the jumps are much bigger, and they are entirely digital so they get “shipped” much faster, and open weights seem to be possible. None of those factors seem to make it a more attractive business to be in.
- 01100011 2 days agoThe one thing smaller companies might have is allocated power budgets from power companies. Part of the mad dash to build datacenters right now is just to claim the power so your competitors can't. Now I do think the established players hold an edge here, but I don't think OpenAI/Anthropic/etc are without some bargaining power(hah).
- LarsDu88 3 days agoIf you build the actual datacenter, less than half the cost is the compute itself. The other half is the datacenter infrastructure, power infrastructure, and cooling.
So in that sense it's not that much different from Meta and Google which also used server infrastructure that depreciated over time. The difference is that I believe Meta and Google made money hand over fist even in their earliest days.
[-]- Lalo-ATX 2 days agoLast time i ran the numbers -
Data center facilities are ~$10k per kW
IT gear is like $20k-$50k per kW
Data center gear is good for 15-30 years. IT is like 2-6ish.
Would love to see updated numbers. Got any?
- EasyMark 2 days agoIn the end, Revenue > Costs, or you have an issue. That "startup" money will eventually be gone, and you're back to MIMO (Money In vs Money Out); if it's not >, you will go bankrupt.
- yieldcrv 3 days agoJust because they have ongoing costs after purchasing them doesn't mean it's different from anything else we've seen. What are you trying to articulate, exactly - that this is a simple business that can get costs under control eventually, or not?
- simonw 3 days agoI think the most interesting numbers in this piece (ignoring the stock compensation part) are:
$4.3 billion in revenue - presumably from ChatGPT customers and API fees
$6.7 billion spent on R&D
$2 billion on sales and marketing - anyone got any idea what this is? I don't remember seeing many ads for ChatGPT but clearly I've not been paying attention in the right places.
Open question for me: where does the cost of running the servers used for inference go? Is that part of R&D, or does the R&D number only cover servers used to train new models (and presumably their engineering staff costs)?
[-]- bfirsh 3 days agoFree usage usually goes in sales and marketing. It's effectively a cost of acquiring a customer. This also means it is considered an operating expense rather than a cost of goods sold and doesn't impact your gross margin.
Compute in R&D will be only training and development. Compute for inference will go under COGS. COGS is not reported here but can probably be, um, inferred by filling in the gaps on the income statement.
(Source: I run an inference company.)
[-]- singron 2 days agoI think it makes the most sense this way, but I've seen it accounted for in other ways. E.g. if free users produce usage data that's valuable for R&D, then they could allocate a portion of the costs there.
Also, if the costs are split, there usually has to be an estimation of how to allocate expenses. E.g. if you lease a datacenter that's used for training as well as paid and free inference, then you have to decide a percentage to put in COGS, S&M, and R&D, and there is room to juice the numbers a little. Public companies are usually much more particular about tracking this, but private companies might use a proxy like % of users that are paid.
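The allocation mechanics described above can be illustrated with a toy split. The buckets mirror the thread's COGS/S&M/R&D discussion, but the percentages and dollar figures are invented purely to show why the ambiguity matters:

```python
# Toy allocation of a shared datacenter bill across accounting buckets,
# using share-of-usage as the proxy. All numbers are invented.

DATACENTER_BILL = 100_000_000  # hypothetical monthly compute spend

usage_share = {
    "R&D (training runs)":       0.50,
    "COGS (paid inference)":     0.30,
    "S&M (free-tier inference)": 0.20,
}

allocation = {bucket: DATACENTER_BILL * share
              for bucket, share in usage_share.items()}

for bucket, dollars in allocation.items():
    print(f"{bucket:28s} ${dollars:,.0f}")

# Shifting a few points of free-tier usage from S&M into R&D would
# flatter gross margin without changing total spend - which is why
# undisclosed splits deserve some skepticism.
```

Total spend is identical under any split; only the reported gross margin moves, which is the "room to juice the numbers" being described.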
OpenAI has not been forthcoming about their financials, so I'd look at any ambiguity with skepticism. If it looked good, they would say it.
- adamhartenz 3 days agoMarketing != advertising. Although this budget probably does include some traditional advertising. It is most likely about building the brand and brand awareness, as well as partnerships etc. I would imagine the sales team is probably quite big, and host all kinds of events. But I would say a big chunk of this "sales and marketing" budget goes into lobbying and government relations. And they are winning big time on that front. So it is money well spent from their perspective (although not from ours). This is all just an educated guess from my experience with budgets from much smaller companies.[-]
- echelon 3 days agoI agree - they're winning big and booking big revenue.
If you discount R&D and "sales and marketing", they've got a net loss of "only" $500 million.
They're trying to land grab as much surface area as they can. They're trying to magic themselves into a trillion dollar FAANG and kill their peers. At some point, you won't be able to train a model to compete with their core products, and they'll have a thousand times the distribution advantage.
ChatGPT is already a new default "pane of glass" for normal people.
Is this all really so unreasonable?
I certainly want exposure to their stock.
[-]- runako 3 days ago> If you discount R&D and "sales and marketing"
If you discount sales & marketing, they will start losing enterprise deals (like the US government). The lack of a free tier will impact consumer/prosumer uptake (free usage usually comes out of the sales & marketing budget).
If you discount R&D, there will be no point to the business in 12 months or so. Other foundation models will eclipse them and some open source models will likely reach parity.
Both of these costs are likely to increase rather than decrease over time.
> ChatGPT is already a new default "pane of glass" for normal people.
OpenAI should certainly hope this is not true, because then the only way to scale the business is to get all those "normal" people to spend a lot more.
- delaminator 3 days agoWe have ChatGPT advertising on bus stops here in the UK.
Two people in a cafe having a meet-up; they're both happy, one is holding a phone, and they are both looking at it.
And it has a big ChatGPT logo in the top right corner of the advertisement - transparent, just the black logo with ChatGPT written underneath.
That's it. No text or anything telling you what the product is or does. Just the implication that it will somehow make you happy during conversations with friends.
[-]- 8organicbits 2 days agoSeems like an ad trying to stir up fear of missing out.
- gmerc 3 days agoStop R&D and the competition is at parity with 10x cheaper models in 3-6 months.
Stop training and your code model generates tech debt after 3-6 months
[-]- Spivak 2 days agoAlso R&D, for tax purposes, likely includes everyone at the company who touches code so there's probably a lot of operational cost being hidden in that number.
- chermi 3 days agoIt's pretty well accepted now that for pre-training LLMs the curve is an S-curve, not an exponential, right? Maybe it's all in RL post-training now, but my understanding(?) is that that's not nearly as expensive as pre-training. I don't think 3-6 months is the time to 10X improvement anymore (however that's measured); it seems closer to a year and growing, assuming the plateau is real. I'd love to know if there are solid estimates on "doubling times" these days.
With the marginal gains diminishing, do we really think they (all of them) are going to continue spending that much more for each generation? Even the big guys with the money like Google can't justify increasing spending forever given this. The models are good enough for a lot of useful tasks for a lot of people. With all due respect to the amazing science and engineering, OpenAI (and probably the rest) have arrived at their performance with at least half of the credit going to brute-force compute, hence the cost. I don't think they'll continue that in the face of diminishing returns. Someone will ramp down and get much closer to making money, focusing on maximizing token cost efficiency to serve and utility to users with a fixed model(s). GPT-5 with its auto-routing between different performance models seems like a clear move in this direction. I bet their cost to serve the same performance as, say, Gemini 2.5 is much lower.
Naively, my view is that there's some threshold raw performance that's good enough for 80% of users, and we're near it. There's always going to be demand for bleeding edge, but money is in mass market. So if you hit that threshold, you ramp down training costs and focus on tooling + ease of use and token generation efficiency to match 80% of use cases. Those 80% of users will be happy with slowly increasing performance past the threshold, like iphone updates. Except they probably won't charge that much more since the competition is still there. But anyway, now they're spending way less on R&D and training, and the cost to serve tokens @ the same performance continues to drop.
All of this is to say, I don't think they're in that dreadful of a position. I can't even remember why I chose you to reply to, I think the "10x cheaper models in 3-6 months" caught me. I'm not saying they can drop R&D/training to 0. You wouldn't want to miss out on the efficiency of distillation, or whatever the latest innovations I don't know about are. Oh and also, I am confident that whatever the real number N is for NX cheaper in 3-6 months, a large fraction of that will come from hardware gains that are common to all of the labs.
[-]- Spooky23 3 days agoGoogle has the best story imo. Gemini > Azure - it will accelerate GCP growth.
- necovek 2 days agoSomeone brought up an interesting point: to get the latest data (news, scientific breakthroughs...) into the model, you need to constantly retrain it.[-]
- Ianjit 2 days agoThe incremental compute costs will scale with the incremental data added, so training costs will grow at a much slower rate than when training was GPU-limited.
- fennecbutt 2 days agoOr, you know, use RAG. Which is far better and more accurate than regurgitating compressed training knowledge.[-]
- gmerc 2 days agoOh please
- diggan 3 days ago> $2 billion on sales and marketing - anyone got any idea what this is?
Not sure where/how I read it, but remember coming across articles stating OpenAI has some agreements with schools, universities and even the US government. The cost of making those happen would probably go into "sales & marketing".
[-]- JCM9 3 days agoMost folks who are not engineers building the product are likely classified as "sales and marketing": "developer advocates," "solutions architects," and all that stuff included.
- infecto 3 days agoThis will include the people cost of sales and marketing teams.
- chermi 3 days agoSo probably just write-offs of tokens they give away?
- hedayet 3 days ago> $2 billion on sales and marketing - anyone got any idea what this is?
enterprise sales are expensive. And selling to the US government is on a very different level.
- Culonavirus 2 days ago> $4.3 billion in revenue - presumably from ChatGPT customers and API fees
Yeah, and from stealing people's money. Did you know that your purchased API "credits" have an expiration date? That's right.
- abaymado 3 days ago> $2 billion on sales and marketing - anyone got any idea what this is?
I used to follow OpenAI on Instagram; all their posts were reposts from paid influencers making videos on "How to X with ChatGPT." Most videos were redundant, but I guess there are still billions of people the product has yet to reach.
[-]- gizajob 3 days agoSeems like it’ll take billions more down the drain to serve them.
- rkharsan64 2 days agoI see multiple banner ads promoting ChatGPT on my way to work. (India)
- lemonlearnings 2 days agoI have seen tonnes of ChatGPT ads on Reddit. Usually with an AI-generated image of a dog in Japanese cartoon style.[-]
- necovek 2 days agoThe dog sitting in a house on fire proclaiming "this is fine" is an old meme, not an OpenAI generated image.
Oh, not that dog? :)
- fennecbutt 2 days agoThis seems to be a common template for Reddit ads; it's not just OpenAI. I've seen loads of ads use the "this is fine" template.[-]
- eYudkowsky45 4 hours agoReddit censorship is now AI-powered:
https://www.reddit.com/r/singularity/comments/1no4sez/remove...
For a long time, people have complained about the arbitrariness of censorship by mods on Reddit. But now things are taking a much darker turn: censorship by AI. In the future people will long for the days when there were still human mods on Reddit and everywhere else.
Many here will say Reddit is garbage for the masses and doesn’t count. But it does. Where Reddit goes, much of the Web will follow. It’s worth paying a bit of attention.
As you can see from the comment thread, users in r/singularity are noticing a lot of automated censorship lately. Some believe r/singularity is being given especially aggressive treatment. There are even suspicions that the AI may have an agenda to attack certain viewpoints. One human mod notes that comments are being taken down without human intervention being required as was previously the case.
I myself have experienced this. In r/singularity I posted some comments that were negative about AI, and shortly afterwards I was given a 3-day ban. It's difficult to say what is going on exactly. If there's no agenda now, imagine how easy it will be to introduce one: the stated policies of a sub together with its hidden policies, enforced just like now but 24/7. The human mods will be loudest in their grief, for they will be redundant.
Am I the only one who has thought about the broader implications of AI-powered censorship and is highly disturbed? If you have spoken to ChatGPT, you know it is incapable of saying certain things. Humor for example — it can never make a good joke, because good humor has a tendency to be subversive. Now imagine that you can only say what ChatGPT can say. That is ultimately what AI censorship means. If it is given a pass on Reddit, it will spread to the other subs and then to every other website. Dead Internet Theory? There you have it.
I am not posting this on Reddit for obvious reasons, but I imagine there are some redditors here. Perhaps others have more information? Like in which other subs AI interference has been observed?
- lemonlearnings 2 days agoFor clarity, it wasn't a meme template (not the "this is fine" dog or any other). It was a picture of a real dog and, next to it, an AI-generated version of the same dog.
I just loaded up Reddit and the ad was there. A bunny this time:
- Our_Benefactors 3 days ago> $2 billion on sales and marketing
Probably an accounting trick to account for non-paying customers or the week of “free” Cursor GPT-5 use.
- lanthissa 3 days agoYou see content about OpenAI everywhere; they spent $2B on marketing. You're in the right places, you're just used to ads being labeled as such.
Remember everyone freaking out about GPT-5 when it came out, only for it to be a bust once people got their hands on it? That's what paid media looks like in the new world.
- eterm 3 days ago> I don't remember seeing many ads for ChatGPT
FWIW I got spammed non-stop with ChatGPT adverts on Reddit.
- xmprt 3 days agoFree users typically fall into sales and marketing. The idea is that if they cut off the entire free tier, they would have still made the same revenue off of paying customers by spending $X on inference and not counting the inference spend on free users.
- Jallal 3 days agoI'm pretty sure I saw some ChatGPT ads on Duolingo. Also, never forget that regular folks do not use ad blockers. The tech community often doesn't realize how polluted the Internet and mobile apps are.
- wood_spirit 3 days agoSpeculating, but maybe they pay to be the default AI integration in various places, the same way Google has paid to be the default search engine on things like the iPhone?
- epolanski 3 days agoI've seen some OpenAI ads on Italian TV and they made no sense to me; they tried hard to be Apple-like, but realistically nobody knew what they were about.[-]
- joering2 3 days agoItalian advertising is weird in general. A month ago, leaving Venice, we pulled over at a gas station and I started flipping through some magazine. At some point I see an ad for what looks like old-fashioned shoes: the owner of the company holding his son, with a sign saying "from generation to generation". Only thing: the ~3-year-old boy is completely naked, wearing only the shoes. It shocked me, and I was unsure if it was just my American domestication or there was really something wrong with it. I took a picture and wanted to send it to my friends in the USA to show them what Italian advertising looks like, before breaking into a sweat realizing that if I were caught with that picture in the US, I could get in some deep trouble. I quickly deleted it, just in case. Crazy story.[-]
- necovek 2 days agoNot crazy, it's just a cultural thing.
The US (and maybe the whole Anglo-Saxon world) is a bit mired in treating everything as the worst-case scenario: no, having a photo in your messenger app of a friend's naked kiddo, shared because they were being funny at the beach or in the garden, is not child pornography. The fact that there are extremely few people who might see it as sexual should not influence the overall population as much as it does.
For me, I wouldn't blink an eye at such an ad, but due to my exposure to US culture, I do feel uneasy about having photos like the above on my devices (to the point of a thought passing my mind even when it's my own kids mucking about).
I resist it because I believe it's the wrong cultural standard to adhere to: nakedness is not by default sexual, and especially with small kids before they develop any significant sexual characteristics.
- matwood 2 days agoIf that made you uncomfortable, you better avoid the beaches in Italy and the rest of Europe.
- epolanski 3 days agoNudity in general is not weird in Europe, let alone children's.
- zurfer 3 days agoInference etc. should go in this bucket: "Operating losses reached US$7.8 billion"
That also includes their offices, their lawyers, etc., so it's hard to estimate without more info.
- infecto 3 days agoHard to know where it is in this breakdown, but I would expect them to have the proper breakdowns. We know the inference side is profitable, but not to what scale.
- hu3 3 days agoOpenAI keeps spamming me with ads on instagram and reddit.
Pretty sure I'm not a cheap audience to target ads at, for multiple reasons.
- actuallyalys 3 days agoI’ve seen some on electronic street-level signs in Atlanta when I visited. So there is some genuine advertising.
- patrickhogan1 2 days agoSales people out in the field selling to enterprises + free credits to get people hooked.
- plaidfuji 2 days agoI’m also curious about your last question. Cost of goods sold would not fall into R&D or sales as far as I know.
So curious, in fact, that I asked Gemini to reconstruct their income statement from the info in this article :)
There seems to be an assumption that the 20% payment to MS is the cost of compute for inference. I would bet that’s at a significant discount - but who knows how much…
Line Item | Amount (USD) | Calculation / Note
Revenue | $4.3 Billion | Given.
Cost of Revenue (COGS) | ($0.86 Billion) | Assumed to be the 20% of revenue paid to Microsoft ($4.3B * 0.20) for compute/cloud services to run inference.
Gross Profit | $3.44 Billion | Revenue - Cost of Revenue. This 80% gross margin is strong, typical of a software-like business.
Operating Expenses | |
Research & Development | ($6.7 Billion) | Given. This is the largest expense, focused on training new models.
Sales & Ads | ($2.0 Billion) | Given. Reflects an aggressive push for customer acquisition.
Stock-Based Compensation | ($2.5 Billion) | Given. A non-cash expense for employee equity.
General & Administrative | ($0.04 Billion) | Implied figure to balance the reported operating loss.
Total Operating Expenses | ($11.24 Billion) | Sum of all operating expenses.
Operating Loss | ($7.8 Billion) | Confirmed. Gross Profit - Total Operating Expenses.
Other (Non-Operating) Income / Expenses | ($5.7 Billion) | Calculated as Net Loss - Operating Loss. This is primarily the non-cash loss from the "remeasurement of convertible interest rights."
Net Loss | ($13.5 Billion) | Given. The final "bottom line" loss.
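As a sanity check on the reconstruction above, the table's arithmetic can be replayed in a few lines (all figures in $bn, taken straight from the table; the G&A line is the implied balancing term):

```python
revenue = 4.3
cogs = round(0.20 * revenue, 2)             # assumed 20% Microsoft revenue share -> 0.86
gross_profit = revenue - cogs               # 3.44, an ~80% gross margin

opex = {"R&D": 6.7, "Sales & Ads": 2.0, "Stock comp": 2.5, "G&A": 0.04}
total_opex = sum(opex.values())             # 11.24

operating_loss = gross_profit - total_opex  # -7.8, matching the reported figure
net_loss = -13.5
other_items = net_loss - operating_loss     # -5.7, the convertible-rights remeasurement
```

The only free parameter is the G&A line, which was chosen so the operating loss lands on the reported $7.8bn.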
[-]- vessenes 2 days agoThanks for doing the prompting work here.
One thing I read - with $6.7bn R&D on $3.4bn in Gross Profit, you need a model to be viable for only one year to pay back.
Another thing, with only $40mm / 5 months in G&A, basically the entire company is research, likely with senior execs nearly completely equity comped. That’s an amazingly lean admin for this much spend.
On sales & ads - I too find this number surprisingly high. I guess they’re either very efficient (no need to pitch me, I already pay), or they’re so inefficient they don’t hit up channels I’m adjacent to. The team over there is excellent, so my priors would be on the first.
As doom-saying journalists pore over this, it's good to keep a few numbers in mind:
Growth is high. So, June was up over $1bn in revenues by all accounts. Possibly higher. If you believe that customers are sticky (i.e. you can stop sales and not lose customers), which I generally do, then if they keep R&D at this pace, a forward looking annual cashflow looks like:
$12bn in revs, $9.6bn in gross operating margin, $13.5bn in R&D, so net cash impact of -$4bn.
If you think they can grow to 1.5bn customers and won’t open up new paying lines of business then you’d have $20-25bn in revs -> maybe $4bn in sales -> +2-3bn in free cashflow, with the ability to take a breather and make that +15-18bn in free cashflow as needed. A lot of that R&D spend is on training which is probably more liquid than employees, as well.
Upshot - they’re going to keep spending more cash as they get it. I would expect all these numbers to double in a year. The race is still on, and with a PE investment hat on, these guys still look really good to me - the first iconic consumer tech brand in many years, an amazing team, crazy fast growth, an ability to throw off billions in cash when they want to, and a shot at AGI/ASI. What’s not to like?
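The back-of-the-envelope in the comment above can be written out explicitly (every number here is the commenter's assumption, annualized from the H1 figures, not a reported result):

```python
# Forward-looking annual cash sketch, per the comment's assumptions ($bn)
revenue = 12.0                            # ~$1bn/month run rate, assumed sticky
gross_margin = 0.80                       # same ~80% margin as the H1 reconstruction
gross_profit = revenue * gross_margin     # 9.6
rnd = 13.5                                # H1's $6.7bn R&D pace, annualized
net_cash_impact = gross_profit - rnd      # roughly -4, as the comment concludes
```

The scenario's lever is that R&D is discretionary: hold it flat or cut it and the same revenue line flips the net figure positive.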
- hmate9 3 days ago$2.5B in stock comp for about 3,000 employees. That's roughly $830k per person in just six months. Almost 60% of their revenue went straight back to staff.[-]
- darth_avocado 3 days agoThey have to compete with Zuckerberg throwing $100M comps to poach people. I think $830k per person is nothing in comparison.[-]
- toephu2 2 days agoZuck isn't throwing 100M comps at many people (maybe 1 or 2 at most?), that's a myth that was debunked.[-]
- jzl 2 days agoIt’s debatable that it was debunked. There was squirrelly wording about some specific claims. One person was reported to have been offered a package worth a billion dollars, which even if exaggerated was probably not exaggerated by 10x. The numbers line up when you consider that AI startup founders and early employees stand to potentially make well into 9 figures if not higher, and Meta is trying to cut them off at the pass. Obviously these kinds of offers, whatever they really look like, include significant conditions and performance requirements.[-]
- DetroitThrow 2 days ago>was probably not exaggerated by 10x.
"The people spreading obvious lies must have a reasonable basis in their lying"?
[-]- jzl 2 days agoI don’t think any of these are “obvious” lies. Maybe Meta offered someone a $75M package and it got reported as $100M. So they can say with a straight face that the reporting is “false”, yet they never countered with any details.
You’re ignoring my point about the legitimate reason people might be getting offers in this stratosphere. No one has debunked or refuted the general reporting, at least not that I’ve seen. If you have a source, show it please.
- munk-a 3 days agoBoth numbers are entirely ludicrous; highly skilled people are certainly quite valuable. But it's insane that these companies aren't just training up more people internally. The 50x developer is a pervasive myth in our industry, and it's one that needs to be put to rest.[-]
- __turbobrew__ 3 days agoThe ∞x engineer exists, in my opinion. There are some things that only a few people can execute. You could throw 10,000 engineers at a problem and they might not be able to solve it, but a single other person could.
I have known several people who have gone to OAI, and I would firmly say they are 10x engineers, but they are just doing general infra stuff that all large tech companies have to do, so I wouldn't say they are solving problems that only they can solve.
[-]- remus 3 days agoI think you're right to an extent (it's probably fair to say, e.g., Einstein and Euler advanced their fields in ways others at the time were unlikely to have done), but it's much easier to work out who these people are after the fact. If you're dishing out a monster package, you're effectively betting that you've found someone who's going to have this massive impact before they've done it. Perhaps a gamble you're willing to take, but a pretty big one nonetheless.[-]
- saagarjha 2 days agoOpenAI is taking bigger gambles already, though.
- scottyah 3 days agoIt's apparent in other fields too. Reminds me of when Kanye wanted a song like "Sexy Back", so he made "Stronger", but it sounded "too muddy". He had a bunch of famous, great producers try to help, but in the end he caved and hired the producer of "Sexy Back". Kanye said it was fixed in five minutes.
Nobody wants to hear that one dev can be 50x better, but it's obvious that everyone has their own strengths and weaknesses and not every mind is replaceable.
- lovecg 3 days agoDo other professionals (lawyers, finance etc.) argue for reducing their own compensation with the same fervor that software engineers like to do? The market is great for us, let’s enjoy it while it lasts. The alternative is all those CEOs colluding and pushing the wages down, why is that any better?[-]
[-]- rester324 3 days agoMmmmhm. You could have made this argument about 2 years ago, and it would have been credible. But you are making it now, when literally hundreds of thousands of engineers have been let go in the last few years in the US alone...? I am not sure how such an argument holds up in these circumstances...[-]
- kridsdale1 2 days agoTalent and skill are a power-law, just as they are in basketball.
The United States has tens of millions of skilled, competent, and qualified people who can play basketball. 1,000 of them get paid to play professionally.
10 of them are paid 9 figures and are incredible enough to be household names to non-basketball fans.
- anthem2025 3 days ago[dead]
- bitexploder 3 days agoThe 50x distinguished engineer is real though. Companies and fortunes are won and lost on strategic decisions.[-]
- kridsdale1 2 days agoDave Cutler is a perfect example. Produced trillions of dollars in value with his code.
- a4isms 3 days ago> The 50x developer is a pervasive myth in our industry
Doesn't it depend upon how you measure the 50x? If hiring five name-brand AI researchers gets you a billion dollars in funding, they're probably each worth 1,000x what I'm worth to the business.
- belval 3 days ago> it's insane that these companies aren't just training up more internally
Adding headcount to a fast-growing company *to lower wages* is a sure way to kill your culture, lower the overall quality bar, and increase communication overhead significantly.
Yes, they are paying their employees a lot, and the pool will grow, but adding bodies to a team that is running well in the hope that it will automatically lead to a bump in productivity is the part that is insane. It never works.
What will happen is a completely new team (Team B) will be formed and given ownership of a component previously owned by Team A, under the guise of "we will just agree on interfaces". Team B will start doing their thing and meeting with a Team A representative regularly, but integration issues will still arise, except that instead of a tight core of 10-20 developers, you now have 40. They will add a ticketing system to track changes better; now issues in Team B's service, which could have been addressed in an hour by the right engineer on Team A, take 3 days to get resolved as tickets get triaged/prioritized. Lo and behold, Team C has now appeared and owns a sub-component of Team B's. Now when Team A has an issue with Team B's service, they cut a ticket, but the on-call on Team B investigates and finds that it's actually an issue with Team C's service, so they cut their own ticket.
Suddenly every little issue takes days or weeks to get resolved because the original core of 10-20 developers is no longer empowered to just move fast. They eventually leave because they feel their impact and influence have diminished (Team C's manager is very good at politics), Team A is hollowed out, and you now have wall-to-wall mediocrity with 120 headcount where nothing is ever anyone's fault.
I had a director who always repeated that communication between N people is inherently N², and thus hiring should always factor in that the candidate being "good" is not enough; they have to pull their weight and make up for the communication overhead they add to the team.
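That N² rule of thumb is just the pairwise-channel count; a quick sketch with the (hypothetical) team sizes from the story above:

```python
def channels(n: int) -> int:
    """Pairwise communication channels among n people: n*(n-1)/2, i.e. O(N^2)."""
    return n * (n - 1) // 2

# Growing a tight core of 20 into 120 headcount multiplies the people by 6
# but the potential channels by ~37:
core, bloated = channels(20), channels(120)
```

This is why "good in isolation" isn't a sufficient hiring bar: each hire adds N-1 new channels to an N-person team.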
[-]- kridsdale1 2 days agoHave worked in BigCo three times scaling teams from 5 to 50 people. This post is bang on.
- hadlock 3 days agoYou have to out-pay to keep your talent from walking out the door. California does not have non-competes. With the number of AI startups in SF you don't need to relocate or even change your bus route in most cases.[-]
- darth_avocado 3 days agoThis. The main reason OpenAI throws money at top-level folks is that they can quickly replicate what they have at OpenAI elsewhere. Imagine you have a top-level researcher who's developed techniques over multiple years that the competition doesn't have. The same researcher can take them to another company and bring parity within months. And that's on top of progress slowing down within your own company. I can't steal IP, but I sure as hell can bring my head anywhere.[-]
- epolanski 3 days agoThis is also a good reminder of how there's no moat in AI.
I'll be glad if US and Chinese investors bleed trillions on AI, just to find out that a few of your seniors can leave, found their own company, and be at your level minus a few months of progress.
- xur17 3 days agoIf it's an all out race between the different AI providers, then it's logical for OpenAI to hire employees that are pre-trained rather than training up more internally.
- epolanski 3 days ago50x devs are not a myth.
In any case, the kind of talent able to push through good ideas is very scarce in AI/ML, so prices are going to be high for years.
[-]- rester324 3 days agoI think there is no evidence of any type of 50x devs. There is not even proof of 10x devs. So if there is no evidence, why is that not a myth?[-]
- epolanski 3 days agoOf course there is.
There are always individuals, developers or not, whose impact is 50 times greater than the average.
And the impact is measured financially, meaning how much money you make or save.
If you find a way to solve an issue in a warehouse that spares the company from having to hire 70 people (that's not a made-up number but a real example I've seen), your impact is in the multiple millions; the guy tasked with delivering tables from some backoffice in the same company is obviously returning a fraction of that productivity.
Salvatore Sanfilippo, the author of Redis, alone, built a database that killed companies with hundreds of (brilliant) engineers.
Approaching the problems differently allowed him to scale to levels that huge teams could not, and the impact on $ was enormous.
Not only that, but you can have negative-x engineers: those who create plenty of work, gaslighting, creating issues, and slowing down entire teams and organizations.
If you don't believe in Nx developers or individuals, that's a you problem; they exist in sports and every other field where single individuals can have an impact hundreds of thousands or millions of times more positive than the average one.
[-]- rester324 3 days agoI asked if you can prove there are 10x or 50x programmers. You shared anecdotes and theories. I will rather wait until you share some evidence.[-]
- epolanski 3 days agoIf you don't see the evidence of different individuals having very different productivity in every field, including software (measured in $/hr, like every economist does, btw), that's a you problem.
Of course different scientists with different backgrounds, professionalism, communication and leadership skills are going to have orders-of-magnitude different outputs and impacts in AI companies.
If you put me and Carmack in a game development team you can rest assured that he's going to have a 50/100x impact over me, not sure why would I even question it.
Not only will his output be vastly superior to mine, but his design choices, leadership, and experience will save and compound enormous amounts of money and time. That's beyond obvious.
- SiempreViernes 2 days agoThat devs might show a 10x spread in time-to-completion on some task (the Mythical Man-Month study) is quite a lesser claim than saying the spread comes from something inherent to the devs being tested.
As for your various anecdotes, I offer the counter-observation that nobody goes around talking about 50x lottery winners, despite lifetime lottery winnings also showing a very wide spread. Clearly, observing a big spread in outcomes is insufficient evidence for concluding the spread is due to factors inherent to the participants.
- causalmodels 3 days agoThese numbers aren't that crazy when contextualized with the capex spend. One hundred million is nothing compared to a six hundred billion dollar data center buildout.
Besides, people are actively being trained up. Some labs are just extending offers to people who score very highly on their conscription IQ tests.
- saagarjha 2 days agoI think the unfortunate reality is that training someone to reach the frontier is time taken away from actually pushing it. The opportunity cost alone is worth millions to them.
- charcircuit 3 days agoIt's not a myth, and with how much productivity AI tools can give some people, there can be an order of magnitude bigger difference than outside of AI.
- xnx 3 days ago> training up more internally
Why would employees stay after getting trained if they have a better offer?
[-]- munk-a 3 days agoThey won't always. You'll always have turnover, but if it's a major problem for your company, it's clearly something you need to work out internally. People generally hate switching jobs, especially in an uncertain political climate and when expenses are going up; there is a lot of momentum to just stay where you are.
You may lose a few employees to poaching, sure, but the math on the relative cost of hiring someone for $100m vs. training a bunch of employees and losing a portion of them is pretty strongly in your favor.
- coolspot 3 days agoA tamper-proof electronic collar with some C4.
- gmerc 3 days agoZuck decided it's cheaper than building another Llama
- saagarjha 2 days agoZuckerberg is not throwing $100 million at any random OpenAI employee. Also FWIW OpenAI competes on offers in the other direction.
- tomasphan 3 days agoThat’s how it should be, spread the wealth.[-]
- Hamuko 3 days agoIt doesn't seem that spread out.[-]
- lemonlearnings 2 days ago3000x: one person with $830k is living comfortably; it probably gets spent into the general economy.
1x: a person with billions probably spends it in a way that fucks everyone over.
- onlyrealcuzzo 3 days agoSpreading illiquid wealth *[-]
- gk1 3 days agoThey’ve had multiple secondary sales opportunities in the past few years, always at a higher valuation. By this point, if someone who’s been there >2 years hasn’t taken money off the table it’s most likely their decision.
I don’t work there but know several early folks and I’m absolutely thrilled for them.
[-]- chermi 3 days agoSecondaries open to all shareholders are on an upward trend across start-ups. I think it's a fantastic trend.
- BhavdeepSethi 3 days agoFunny since they have a tender offer that hits their accounts on Oct 7.
- yieldcrv 3 days agoprivate secondary markets are pretty liquid for momentum tech companies, there is an entire cottage industry of people making trusts to circumvent any transfer restrictions
employees are very liquid if they want to be, or wait a year for the next 10x in valuation
[-]- onlyrealcuzzo 3 days agoOh, yes, next year OpenAI will be worth $5T, sure[-]
- yieldcrv 3 days agoI mean… if they do the same low float accounting that got them to the $500bn print, why not
it’s just selling a few shares for any higher share price
- Der_Einzige 3 days agoOh no, "greedy" AI researchers defrauding way greedier VCs and billionaires!
- manquer 3 days agoStock compensation is not a cash outflow; it just dilutes the other shareholders, so current cash flow should not have anything to do with the amount of stock issued.[1]
While there is some flexibility in how options are issued and accounted for (see FASB - FAS 123), the industry typically uses something like 4-year vesting with a 1-year cliff.
Every accounting firm and company is different; most would normally account for it for the entire period upfront, and the value could change when it vests and is exercised.
So even if you want to compare it to revenue, it should at bare minimum be compared with the revenue generated during the entire period, say 4 years, plus the valuation of the IP created during the tenure of the options.
---
[1] Unless the company starts buying back options/stock from employees from its cash reserves, then it is different.
Even the secondary sales that OpenAI is reported to be facilitating for staff, worth $6.6 billion, have no bearing on its own financials directly: one third party (a new investor) is buying from another third party (an employee), and the company is only facilitating the sale for morale, retention, and other HR reasons.
There is secondary impact, as in theory that could be shares the company is selling directly to new investor instead and keeping the cash itself, but it is not spending any existing cash it already has or generating, just forgoing some of the new funds.
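For illustration, the 4-year / 1-year-cliff schedule mentioned above works out like this (a simplified sketch; real expensing under the FASB rules has more wrinkles, and the numbers are made up):

```python
def vested_fraction(months_served: int, total_months: int = 48, cliff_months: int = 12) -> float:
    """Fraction of a grant vested: nothing before the cliff, then monthly up to 100%."""
    if months_served < cliff_months:
        return 0.0
    return min(months_served, total_months) / total_months

# Nothing vests in month 11, 25% vests at the 1-year cliff, fully vested at 4 years.
```

None of this is cash leaving the company; the expense line is the grant-date value spread over the service period.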
- gizajob 3 days agoI’m guessing it will be a very very skewed pyramid rather than equal distribution.
- skybrian 3 days agoIt's not cashflow, though, and it's not really stock yet, I don't think? They haven't yet reorganized away from being a nonprofit.
If all goes well, someday it will dilute earnings.
- varenc 3 days agoIt's a bit misleading to frame stock comp as "60% of revenue" since their expenses are way larger than their revenue. R&D was $6.7B which would be 156% of revenue by the same math.
A better way to look at it is they had about $12.1B in expenses. Stock was $2.5B, or roughly 21% of total costs.
- kibwen 3 days agoSounds like they could improve that bottom line by firing all their staff and replacing them with AI. Maybe they can get a bulk discount on Claude?[-]
- franktankbank 3 days ago[flagged]
- lemonlearnings 2 days agoYay for the workers!
- datadrivenangel 3 days agoIf Meta is throwing tens of millions at hot AI staffers, then a $1.6M average stock comp starts looking less insane. A lot of that may also have been promised at a lower valuation, given how wild OpenAI's valuation is.
- JCM9 3 days agoThese numbers are pretty ugly. You always expect new tech to operate at a loss initially but the structure of their losses is not something one easily scales out of. In fact it gets more painful as they scale. Unless something fundamentally changes and fast this is gonna get ugly real quick.[-]
- spacebanana7 3 days agoThe real answer is in advertising/referral revenue.
My life insurance broker got £1k in commission, I think my mortgage broker got roughly the same. I’d gladly let OpenAI take the commission if ChatGPT could get me better deals.
[-]- ecommerceguy 2 days agoInsurance agents—unlike many tech-focused sales jobs—are licensed and regulated, requiring specific training, background checks, and ongoing compliance to sell products that directly affect customers’ financial stability and wellbeing. Mortgage brokers also adhere to licensing and compliance regulations, and their market expertise, negotiation ability, and compliance duties are not easily replaced by AI tools or platforms.
t. perplexity ai
[-]- stogot 2 days agoYeah, I don’t want my mortgage recommendations to come from a prompt injection
- lkramer 3 days agoThis could be solved with comparison websites, which seem to be exactly what those brokers are using anyway. I had a broker proudly declare that he could get me the best deal, which turned out to be exactly the same as what MoneySavingExpert found for me. He wanted £150 for the privilege of searching some DB, plus god knows how much commission he would get on top of that...[-]
- spacebanana7 3 days agoEven if ChatGPT becomes the new version of a comparison site over its existing customer base, that’s a great business.
- anthonypasq 3 days agoThey could keep the current model in ChatGPT the same forever and 99% of users wouldn't know or care, and unless you think hardware isn't going to improve, the cost of serving it will basically decrease toward 0.[-]
- impossiblefork 3 days agoFor programming it's okay, for maths it's almost okay. For things like stories and actually dealing with reality, the models aren't even close to okay.
I didn't understand how bad it was until this weekend, when I sat down and tried GPT-5, first without the thinking mode and then with it. It misunderstood sentences, generated crazy things, lost track of everything: completely beyond how bad I thought it could possibly be.
I've fiddled with stories because I saw that LLMs had trouble, but I did not understand that this was where we were in NLP. At first I couldn't even fully believe it because the things don't fail to follow instructions when you talk about programming.
This extends to analyzing discussions. It simply misunderstands what people say. If you try to do this kind of thing you will realise the degree to which these things are just sequence models, with no ability to think, with really short attention spans and no ability to operate in a context. I experimented with stories set in established contexts, and the model repeatedly generated things that were impossible in those contexts.
When you do this kind of thing their character as sequence models that do not really integrate things from different sequences becomes apparent.
- davidcbc 3 days agoThis just doesn't match with the claims that people are using it as a replacement for Google. If your facts are out of date you're useless as a search engine[-]
- treyd 2 days agoWhich is why there's so much effort to build RAG workflows so that you can progressively add to the pool of information that the chatbot has access to, beyond what's baked into the underlying model(s).[-]
- Mentlo 1 day agoRAG still needs model training, if the models were to go stale and the context drifts sufficiently, the RAG mechanism collapses.
Sure, those models are cheaper, but we also don’t really know how an ecosystem with a stale LLM and up to date RAG would behave once context drifts sufficiently, because no one is solving that problem at the moment.
- anthonypasq 2 days agoall these models just use web search now to stay up to date. knowledge cutoffs aren't as important. also, fine-tuning new data into the base model after the fact is way cheaper than having to retrain the whole thing from scratch
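The retrieval idea this sub-thread is debating can be sketched in a few lines. This is a toy, assuming keyword overlap as the relevance score; real RAG systems use embedding similarity and a vector store, but the shape is the same: retrieve, then prepend to the prompt.

```python
import re

# Toy RAG sketch: retrieve the most relevant documents for a query,
# then build a prompt with that context prepended.
def tokenize(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, documents, k=2):
    """Return the k documents sharing the most words with the query."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "OpenAI reported 4.3 billion dollars in revenue in H1 2025.",
    "Photosynthesis converts sunlight into chemical energy.",
    "OpenAI pays Microsoft 20 percent of its revenue.",
]
prompt = build_prompt("What was OpenAI revenue in H1 2025?", docs)
```

The model never needs the facts baked into its weights; it only has to read the retrieved context, which is why the knowledge-cutoff question becomes less important.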
- jampa 3 days agoThe enterprise customers will care, and they probably are the ones that bring significant revenue.
- toshinoriyagi 3 days agoThe cost of old models decreases a lot, but the cost of frontier models, what people use 99% of the time, is hardly decreasing. Plus, many of the best models rely on thinking or reasoning, which use 10-100x as many tokens for the same prompt. That doesn't work on a fixed cost monthly subscription.[-]
- anthonypasq 3 days agoI'm not sure that you read what I just said. Almost no one using chatgpt would care if they were still talking to gpt5 2 years from now. If compute per watt doubles in the next 2 years, then the cost of serving gpt5 just got cut in half. purely on the hardware side, not to mention we are getting better at making smaller models smarter.[-]
- serf 3 days agoI don't really believe that premise in a world with competition, and the strategy it supports -- let AI companies produce profit off of old models -- ignores the need for SOTA advancement and expansion by these very same companies.
In other words, yes, GPT-X might work well enough for most people, but the newer demo for ShinyNewModelZ is going to pull GPT-X's customers away regardless, even if both fulfill the customer's needs. There is a persistent need for advancement (or at least marketing that indicates as much) in order to have positive numbers at the end of the churn cycle.
I have major doubts that can be done without trying to push features or SOTA models, without just straight lying or deception.
- fragmede 3 days agoPeople cared enough about GPT-5 not being 4o that OpenAI brought 4o back.
https://arstechnica.com/information-technology/2025/08/opena...
- sarchertech 3 days agoAssuming they have 0 competition.
- adventured 3 days agoThere is an exceptionally obvious solution for OpenAI & ChatGPT: ads.
In fact it's an unavoidable solution. There is no future for OpenAI that doesn't involve a gigantic, highly lucrative ad network attached to ChatGPT.
One of the dumbest things in tech at present is OpenAI not having already deployed this. It's an attitude they can't actually afford to maintain much longer.
Ads are a hyper margin product that are very well understood at this juncture, with numerous very large ad platforms. Meta has a soon to be $200 billion per year ad system. There's no reason ChatGPT can't be a $20+ billion per year ad system (and likely far beyond that).
Their path to profitability is very straightforward. It's practically turn-key. They would have to be the biggest fools in tech history to not flip that switch, thinking they can just fund-raise their way magically indefinitely. The AI spending bubble will explode in 2026-2027, sharply curtailing the party; it'd be better for OpenAI if they quickly get ahead of that (their valuation will not hold up in a negative environment).
[-]- thewebguyd 3 days ago> They would have to be the biggest fools in tech history to not flip that switch
As much as I don't want ads infiltrating this, it's inevitable and I agree. OpenAI could seriously put a dent into Google's ad monopoly here, Altman would be an absolute idiot to not take advantage of their position and do it.
If they don't, Google certainly will, as will Meta, and Microsoft.
I wonder if their plan for the weird Sora 2 social network thing is ads.
Investors are going to want to see some returns... eventually. They can't rely on daddy Microsoft forever either; now with MS exploring Claude for Copilot, they seem to have soured a bit on OpenAI.
- dreamcompiler 3 days agoFive years from now all but about 100 of us will be living in smoky tent cities and huddling around burning Cybertrucks to stay warm.
But there will still be thousands of screens everywhere running nonstop ads for things that will never sell because nobody has a job or any money.
- jhallenworld 3 days agoGoogle didn't have inline ads until 2010, but they did have separate ads nearly from the beginning. I assume ads will be inline for OpenAI- I mean the only case they could be separate is in ChatGPT, but I doubt that will be their largest use case.[-]
- kridsdale1 2 days agoI think it was actually about 5 years from founding to ads on Google.com.
- singron 2 days agoWill people use ChatGPT if it's stuffed full of ads? It seems like the belief that ads are turn-key is useful to their valuation, but if ads actually bomb, then they will take a huge hit.
- JCM9 3 days agoFor using GenAI as search I’d agree with you but I don’t think it’s as easy/obvious for most other use cases.[-]
- flyinglizard 3 days agoI'm sure lots of ChatGPT interactions are for making buying decisions, and just how easy would it be to prioritize certain products to the top? This is where the real money is. With SEO, you were making the purchase decision and companies paid to get their wares in front of you; now with AI, it's making the buy decision mostly on its own.
- Spooky23 3 days agoNo way. It’s 2025, society is totally different, you have to think about what is the new normal. They are too big to fail at this point — so much of the S&P 500 valuation is tied to AI (Microsoft, Google, Tesla, etc) they are arguably strategic to the US.
Fascist corporatism will throw them in for whatever Intel rescue plan Nvidia is forced to participate in. If the midterms flip congress or if we have another presidential election, maybe something will change.
[-]- jfyi 2 days agoI agree. If OpenAI isn't strategic to the US, that damn sure is Altman's current goal. The moment he can close the sale on "we have to get there before China" ad revenue won't be a concern any more.
I'd say it's a bit of a Hail Mary and could go either way, but that's as an outsider looking in. Who really knows?
- mannyv 2 days agoNo, they're not.
$4.3B in revenue is tremendous.
What are you comparing them to?
- whizzter 3 days agoI've said it before and I'll say it again.. if I was able to know the time it takes for bubbles to pop I would've shorted many of the players long ago.[-]
- Esophagus4 2 days agoEh, this seems like a cop out.
It’s so easy for people to shout bubble on the internet without actually putting their own money on the line. Talk is cheap - it doesn’t matter how many times you say it, I think you don’t have conviction if you’re not willing to put your own skin in the game. (Which is fine, you don’t have to put your money on the line. But it just annoys me when everyone cries “bubble” from the sidelines without actually getting in the ring.)
After all, “a bubble is just a bull market you don’t have a position in.”
[-]- lawn 2 days agoYou can correctly identify a bubble without being able to identify when it'll burst (which is arguably the much harder problem).
The statistically correct play is therefore not to do this (and just keep buying).
[-]- Esophagus4 2 days agoThen no, you haven’t identified a bubble.
You’ve just said, “I think something will go down at some point.” Which… like… sure, but in a pointlessly trivial way? Even a broken clock is right eventually?
That’s not “identifying a bubble” that’s boring dinner small talk. “Wow, this Bitcoin thing is such a bubble huh!” “Yeah, sure is crazy!”
And even more so, if you’re long into something you call a bubble, that by definition says either you don’t think it’s that much of a bubble, huh? Or you’re a goon for betting on something you believe is all hot air?
- zoul 2 days agoBelieve it or not, many people just don’t care about the stock market. But they may still care about the economy that could crash badly if the AI bubble gets too big before it pops.[-]
- Esophagus4 2 days agoPeople find all kinds of things to worry about if it gives them something to do, I guess.
In the same way that my elderly grandmother binge watches CNN to have something to worry about.
But the commenter I responded to DID care about the stock market, despite your attempt to grandstand.
And my point was, and still is, if you really believe it’s a bubble and you don’t actually have a short position, then you don’t actually believe it’s a bubble deep down.
Talk is cheap - let’s see your positions.
It would be like saying “I’ve got this great idea for a company, I’m sure it would do really well, but I don’t believe it enough to actually start a company.”
Ok, then what does that actually say about your belief in your idea?
- deepnotderp 3 days agoNew hardware could greatly reduce inference and training costs and solve that issue[-]
- samtp 3 days agoThat's extremely hopeful and also ignores the fact that new hardware will have incredibly high upfront costs.
- leptons 3 days agoGreat, so they just have to spend another ~$10 billion on new hardware to save how many billion in training costs? I don't see a path to profitability here, unless they massively raise their prices to consumers, and nobody really needs AI that badly.
- otterley 3 days agoThat headline can't be correct. Income is revenues minus expenses (and a few other things). You can't have both an income and a loss at the same time.
It's $4.3B in revenue.
[-]- sotix 2 days agoI can only speak as a US CPA, but revenue and income are used interchangeably. You're thinking of net income or profit. In my opinion, though, it's certainly preferable to use the term revenue instead of income to avoid misunderstandings like this.[-]
- otterley 2 days agoI think that's a tax thing (where revenues are treated as "income" for taxation purposes), not a financial accounting thing.
- rsynnott 2 days agoYeah, I initially thought maybe they were talking about _gross_ income, but, nah, it’s just revenue.
- croes 2 days agoEvery indebted person can tell you that you can have an income and loss at the same time. Income is revenue.[-]
- otterley 2 days agoWe’re talking about a business here. Accounting terms are standard across the industry, and the meaning of income is well understood.
Unfortunately, journalistic standards vary across the Internet. The WSJ or Financial Times would not make such a mistake.
- Aperocky 2 days agoThat's net income.
- cs702 3 days agoCorrection: 4.3B in revenues.
Other than Nvidia and the cloud providers (AWS, Azure, GCP, Oracle, etc.), no one is earning a profit with AI, so far.
Nvidia and the cloud providers will do well only if capital spending on AI, per year, remains at current rates.
[-]- Ianjit 2 days agoAre Azure/GCP making a profit with AI? ORCL definitely isn't; FCF will go heavily negative.
- whizzter 3 days agoI really hope NVidia doesn't get too comfortable with the AI incomes, would be sad to see all progress in gaming disappear.[-]
- ares623 3 days agoPersonally I hope gaming gets back to a more sustainable state with regards to graphics. (i.e. lower production costs because you don’t need 1000 employees to build out a realistic world)[-]
- whizzter 2 days agoThat'll never happen, but I think raytracing will make some work easier. A lot of lighting stuff that has been baked, hand-tuned, etc. can just be run as a general model; we're not entirely there yet, but soon enough.
- FridgeSeal 2 days agoWhat progress in gaming would that be?
2 generations of cards that amount to “just more of a fire hazard” and “idk bro just tell them to use more DLSS slop” to paper over actual card performance deficiencies.
We have 3 generations of cards where 99% of games fall approximately into one of 2 categories:
- indie game that runs on a potato
- awfully optimised AAA-shitshow, which isn’t GPU bottlenecked most of the time anyway.
There is the rare exception (Cyberpunk 2077), but they’re few and far between.
[-]- whizzter 2 days ago"DLSS"-slop is mostly because we're at a junction right now, raytracing is starting to make lighting simpler but isn't entirely there performance wise. Many advanced pre-raytracing methods are quite work/tweak intensive (many simply doing screenspace raytracing in shaders without the hardware support).
My point is that it could be far worse if they get in trouble and get bought out by some actor like Qualcomm that might see PC GPUs as a sideshow.
- zurfer 3 days agoThe $13.5B net loss doesn't mean they are in trouble, it's a lot of accounting losses. Actual cash burn in H1 2025 was $2.5B. With ~$17.5B on hand (based on last funding), that’s about 3.5 years of runway at current pace.[-]
- fred_is_fred 3 days agoDepreciation only gets worse for them as they build out, not better.[-]
- dwaltrip 3 days agoIt gets worse until we hit the ceiling on what current tech is capable of.
Then they can stop burning cash on enormous training runs and have a shot at becoming profitable.
[-]- ceroxylon 2 days agoThis makes sense, but what happens when they stop burning cash on training runs and any of their competitors releases a better model that raises the ceiling?
They will have to train one that is comparable (or better), or the word will spread and users will move to the better model.
- FridgeSeal 2 days agoThey survive through inertia and “new model novelty”.
The minute they lose that (not just them, the whole sector), they’re toast.
I suspect they know this too, hence Sam-Altman admitting it’s a bubble so that he can try to ride it down without blowing up.
- throwacct 3 days agoAt this point, every LLM startup out there is just trying to stay in the game long enough before VC money runs out or others fold. This is basically a war of attrition. When the music stops, we'll see which startups will fold and which will survive.[-]
- misiti3780 2 days agoIt's like the ride sharing wars, except the valuations are an order of magnitude larger
- russellbeattie 3 days agoCorrect. That's how Silicon Valley has worked for years.
- andruby 3 days agoToo bad the market can stay irrational longer than I can stay solvent. I feel like a stock market correction is well overdue, but I’ve been thinking that for a while now[-]
- stevenwoo 2 days agoI would be bankrupt multiple times over if I moved on just Tesla stock with a rational mindset.
- SalmoShalazar 3 days agoI’m struggling to see how OpenAI survives this in the long term. They have numerous competitors and their moat is weak. Google above all others seems poised to completely eat OpenAI’s lunch. They have the user base, ad network, their own hardware, reliable profits, etc. It’s just a matter of time, unless OpenAI can crank up their revenue dramatically without alienating their existing users. I’d be sweating if I had invested heavily in OpenAI.
- xnx 3 days agoThe only way OpenAI survives is that "ChatGPT" gets stuck in peoples heads as being the only or best AI tool.
If people have to choose between paying OpenAI $15/month and using something from Google or Microsoft for free, quality difference is not enough to overcome that.
[-]- lbreakjai 3 days agoDo people at large even care, or do they use "chatGPT" as a generic term for LLM?[-]
- moojacob 3 days agoThey call it chat.
- hamdingers 3 days agoOf course they don't, but when they want to use an LLM they're going to type "chatgpt" into the address bar or app store and that's a tremendous advantage.
- the_duke 3 days agoGoogle has massive levers to push their own product onto users, like they did with Chrome. Just integrate it everywhere, have it installed by default on all Android phones, plaster Google results with ads.
- glenneroo 3 days agoJust wait until the $20/month plan includes ads and you have to pay $100/month for the "pro" version w/o ads, à la streaming services as of late.
- runako 3 days agoI am not willing to render my personal verdict here yet.
Yet it is certainly true that at ~700m MAUs it is hard to say the product has not reached scale. It's not mature, but it's sort of hard to hand-wave and say they are going to make the economics work at some future scale when they don't work at this size.
It really feels like they absolutely must find another revenue model for this to be viable. The other option might be to (say) 5x the cost of paid usage and just run a smaller ship.
[-]- apinstein 3 days agoIt’s not a hand wave…
The cost to serve a particular level of AI drops by like 10x a year. AI has gotten good enough that next year people can continue to use the current gen AI but at that point it will be profitable. Probably 70%+ gross margin.
Right now it’s a race for market share.
But once that backs off, prices will adjust to profitability. Not unlike the Uber/Lyft wars.
[-]- runako 3 days agoThe "hand wave" comment was more to preempt the common pushback that X has to get to scale for the economics to work. My contention is that 700m MAUs is "scale" so they need another lever to get to profit.
> AI has gotten good enough that next year people can continue to use the current gen AI
This is problematic because by next year, an OSS model will be as good. If they don't keep pushing the frontier, what competitive moat do they have to extract a 70% gross margin?
If ChatGPT slows the pace of improvement, someone will certainly fund a competitor to build a clone that uses an OSS model and sets pricing at 70% less than ChatGPT. The curse of betting on being a tech leader is that your business can implode if you stop leading.
Similarly, this is very similar to the argument that PCs were "good enough" in any given year and that R&D could come down. The one constant seems to be people always want more.
> Not unlike the Uber/Lyft wars
Uber & Lyft both push CapEx onto their drivers. I think a more apt model might be AWS MySQL vs Oracle MySQL, or something similar. If the frontier providers stagnate, I fully expect people to switch to e.g. DeepSeek 6 for 10% the price.
[-]- babelfish 2 days agoThe thing is consumers don't care about OSS models. Any non-technical person just wants to "use AI", and think of ChatGPT for that.[-]
- runako 2 days agoRight, the model is a commodity to most users. So all things equal, a ChatGPT clone that costs (say) 70% less will steal share.
Flipping it again: if the model is a commodity that lets one "use AI," why would anyone pay 2x or 3x as much to use ChatGPT?
- xhrpost 3 days ago> OpenAI paid Microsoft 20% of its revenue under an existing agreement.
Wow that's a great deal MSFT made, not sure what it cost them. Better than say a stock dividend which would pay out of net income (if any), even better than a bond payment probably, this is straight off the top of revenue.
[-]- manquer 3 days agoIs it a great deal?
They are paying for it with Azure hardware, which in today's DC economics is quite likely costing them more than they are making from OpenAI and the various Copilot programs.
- Analemma_ 3 days agoSeems like despite all the doom about how they were about to be "disrupted", Google might have the last laugh here: they're still quite profitable despite all the Gemini spending, and could go way lower with pricing until OAI and Anthropic have to tap out.[-]
- thewebguyd 3 days agoGoogle also has the advantage of having their own hardware. They aren't reliant on buying Nvidia, and have been developing and using their TPUs for a long time. Google's been an "AI" company since forever
- yalogin 2 days agoI don’t think they care, worst case scenario they will just go public and dump it on the market.
However the revenue generation aspect for llms is still in its infancy. The most obvious path for OpenAI is to become a search competitor to google, which is what perplexity states it is. So they will try to out do perplexity. All these companies will go vertical and become all encompassing.
[-]- cool_dude85 2 days agoI think trying to compete with Google in search is a big problem. First you have to deal with all the anticompetitive stuff they can do, since they control email and the browser and youtube etc. Second they could probably stand to cut the price of advertising by 5 times and still be turning a profit. Will ads in ChatGPT be profitable competing against Google search ads at 1/5 the price, hypothetically?[-]
- almogo 2 days agoIf they're better - yes. ChatGPT is a very different product from Google Search. Return on Ad Spend could be significantly higher than even Google/Meta/ByteDance can offer.
- codegeek 3 days agoI am curious to see how this compares against where Amazon was in 2000. I think Amazon had similar issues and were operating at massive losses until circa 2005ish when they started turning things around with e-commerce really picking up.
If the revenue keeps going up and losses keep going down, it may reach that inflection point in a few years. For that to happen, the cost of AI datacenter have to go down massively.
[-]- crystal_revenge 3 days ago> Amazon had similar issues and were operating at massive losses until circa 2005ish when they started turning things around with e-commerce really picking up.
Amazon's worst year was 2000 when they lost around $1 billion on revenue around $2.8 billion, I would not say this is anywhere near "similar" in scale to what we're seeing with OpenAI. Amazon was losing 0.5x revenue, OpenAI 3x.
Not to mention that most of the OpenAI infrastructure spend has a very short life span. So it's not like Amazon, where they were figuring out how to build a nationwide logistics chain with large potential upside for a strong immediate cost.
> If the revenue keeps going up and losses keep going down
That would require better than "dogshit" unit economics [0]
0. https://pluralistic.net/2025/09/27/econopocalypse/#subprime-...
- pavlov 3 days agoAmazon's loss in 2000 was 6% of sales. OpenAI's loss in 2025 is 314% of sales.
https://s2.q4cdn.com/299287126/files/doc_financials/annual/0...
"Ouch. It’s been a brutal year for many in the capital markets and certainly for Amazon.com shareholders. As of this writing, our shares are down more than 80% from when I wrote you last year. Nevertheless, by almost any measure, Amazon.com the company is in a stronger position now than at any time in its past.
"We served 20 million customers in 2000, up from 14 million in 1999.
"• Sales grew to $2.76 billion in 2000 from $1.64 billion in 1999.
"• Pro forma operating loss shrank to 6% of sales in Q4 2000, from 26% of sales in Q4 1999.
"• Pro forma operating loss in the U.S. shrank to 2% of sales in Q4 2000, from 24% of sales in Q4 1999."
- JCM9 3 days agoFundamentally different business models.
Amazon had huge capital investments that got less painful as it scaled. Amazon also focuses on cash flow vs profit. Even early on it generated a lot of cash, it just reinvested that back into the business which meant it made a “loss” on paper.
OpenAI is very different. Their “capital” expense depreciation (model development) has a really ugly depreciation curve. It’s not like building a fulfillment network that you can use for decades. That’s not sustainable for much longer. They’re simply burning cash like there’s no tomorrow. That's only being kept afloat by the AI bubble hype, which looks very close to bursting. Absent a quick change, this will get really ugly.
[-]- Analemma_ 3 days agoNot to mention nobody bothered chasing Amazon-- by the time potential competitors like Walmart realized what was up, it was way too late and Amazon had a 15-year head start. OpenAI had a head start with models for a bit, but now their models are basically as good (maybe a little better, maybe a little worse) than the ones from Anthropic and Google, so they can't stay still for a second. Not to mention switching costs are minimal: you just can't have much of a moat around a product which is fundamentally a "function (prompt: String): String", it can always be abstracted away, commoditized, and swapped out for a competitor.[-]
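The "function (prompt: String): String" framing above can be made concrete: if every provider ultimately exposes the same shape, swapping vendors is a one-line config change. A minimal sketch (provider names and responses are stand-ins, not real APIs):

```python
# If every LLM provider is just prompt -> text, the backend is swappable
# behind a single dispatch table, which is the "no moat" argument in code.
def chatgpt(prompt: str) -> str:
    return f"[chatgpt] answer to: {prompt}"

def cheap_oss_clone(prompt: str) -> str:
    return f"[oss] answer to: {prompt}"

PROVIDERS = {"chatgpt": chatgpt, "oss": cheap_oss_clone}

def complete(prompt: str, provider: str = "chatgpt") -> str:
    """Calling code never changes; only the provider key does."""
    return PROVIDERS[provider](prompt)

print(complete("What is 2+2?", provider="oss"))
```

Nothing downstream of `complete` has to know which vendor answered, which is exactly why switching costs are minimal.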
- robertjpayne 3 days agoThis right here. AI has no moat and none of these companies has a product that isn't easily replaced by another provider.
Unless one of these companies really produces a leapfrog product or model that can't be replicated within a short timeframe I don't see how this changes.
Most of OpenAI's users are freeloaders and if they turn off the free plan they're just going to divert those users to Google.
[-]- aurareturn 2 days agoAI has no moat - yet I've been paying for ChatGPT Plus since the very start.[-]
- OGEnthusiast 1 day agoThe real test of a moat is pricing power - would you still stick with OpenAI if they increased the Plus subscription to $40/mo?
- mike_hearn 2 days agoWell, web search is also function(query: String): String in a sense, and that has one heck of a moat.[-]
- Analemma_ 2 days agoRight, because just like the Amazon case, potential competitors didn't realize at the time what a threat it was, and so they gave Google a 15-year head start (Microsoft half-heartedly made "Live Search" circa 2007 and didn't really get at all serious about Bing until ~2010).
That's very different from the world where everyone immediately realized what a threat Chat-GPT was and instantly began pouring billions into competitor products; if that had happened with search+adtech in 1998, I think Google would have had no moat and search would've been a commoditized "function (query: String): String" service.
- Sateeshm 2 days agoIt's not just the head start, it's the network effect.
- Fade_Dance 3 days agoOpenAI is raising at 500 billion and has partnerships with all of the trillion dollar tech corporations. They simply aren't going to have trouble with working capital for their core business for the foreseeable future, even if AI dies down as a narrative. If the hype does die down, in many ways it makes their job easier (the ridiculous compensation numbers would go way down, development could happen at a more sane pace, and the whole industry would lean up). They're not even at the point where they're considering an IPO, which could raise tens of billions in an instant, even assuming AI valuations get decimated.
The exception is datacenter spend, since that has a more severe and more real depreciation risk, but again, if the CoreWeaves of the world run into hardship, it's the leading consolidators like OpenAI that usually clean up (monetizing their comparatively rich equity for the distressed players at firesale prices).
[-]- stackskipton 3 days agoDepends on raise terms but most raises are not 100% guaranteed. I was at a company that said, we have raised 100 Million in Series B (25 over 4 years) but Series B investors decided in year 2 of 4 year payout that it was over, cancelled remaining payouts and company folded. It was asked "Hey, you said we had 100 Million?" and come to find out, every year was an option.
A lot of the finances for a non-public company are funny numbers. They're based on numbers the company can point to, but the number of asterisks in those numbers is mind-blowing.
- VirgilShelton 2 days agoI'm old and have been on the Internet since the Prodigy days in '90. OpenAI has the best start of any company I can remember. Even better than Google back in '98 when they were developing their algo and giving free non-monetized search results to Yahoo.
These guys have had my $20 bucks a month since Plus was live, they will indeed be more than fine.
[-]- btbuildem 2 days agoExactly. Early on their adoption curve was like nothing I've ever seen before.
I am such a miser, I skimp, steal what I can, use the free alternatives the majority of the time. If they got me to pay, they've got everyone else's money already.
[-]- VirgilShelton 2 days agoYup! I'm also super cheap and use open source everything but I do have a Mac Book Pro and will never buy a PC again. So when it's worth it, the wallet is coming out and OpenAI has not only my little $20 bucks a month but will have my investment dollars once they go public.
- Mistletoe 2 days agoDo you really find it is worth it vs. the free Google Gemini? What do you use it for? I can't imagine needing more than Google Gemini 2.5 Flash or Pro, but I don't use it for programming or anything.[-]
- VirgilShelton 2 days agoThe best part is memory. If you use it daily like I do for everything from programming tasks to SEO and digital marketing, to budget stuff for investing and bill reminders. It will really start to understand what you want and get your voice right when it writes a blog for you or you work on an idea with it.
- rester324 3 days agoWhat a nice f@$%ing bubble this is. This will end very badly for many
- 1vuio0pswjnm7 2 days agoNo Javascript, text-only:
curl -K/dev/stdin <<eof | zcat | grep -o "<p>.*</p>" | sed '1s/^/<meta charset=utf-8>/;s/\\n//g;s/\\//g' > 0.htm
url https://www.techinasia.com/gateway-api-express/techinasia/1.0/posts/openais-revenue-rises-16-to-4-3b-in-h1-2025
output /dev/stdout
user-agent: "Mozilla/()()............../......................./.... ....."
header accept:
eof
firefox ./0.htm
- thinkindie 3 days agoToday I tested Claude Code with small refactorings here and there in a medium-sized project. I was surprised by the number of tokens every command was generating, even when the output was a few lines updated across a bunch of files.
If you were to consume the same number of tokens via the API you would pay far more than $20/month. Enjoy it while it lasts, because things will become pretty expensive pretty fast.
[-]- Ianjit 2 days agoProviding verbose answers increases tokens. Demand is measured in tokens, so it looks like demand is skyrocketing. Valuation goes up.
I have noticed that GPT now gives me really long explanations for even the simplest questions. Thankfully there is a stop button.
- fred_is_fred 3 days agoThe numbers seem too small for a company that has just pledged to spend $300B on data centers at Oracle alone over the next 5 years.
- SeanAnderson 3 days agoI dunno. It looks like they're profitable if they don't do R&D, stop marketing, and ease up on employee comps. That's not the worst place to be. Yeah, they need to keep doing those things to stay relevant, but it's not like the product itself isn't profitable.[-]
- jplusequalt 3 days agoSo they're profitable if they put themselves at a disadvantage against Google, Meta, etc.?[-]
- SeanAnderson 3 days agoYes... but there were concerns previously that inference was so costly that the subscriptions/API billing weren't covering basic operating expenses. That's clearly not the case. People are willing to pay them enough that they can afford to run the models. That's a really positive sign.[-]
- Ianjit 2 days agoFree user inference is probably accounted for in Sales and Marketing. Adjusted COGS.
- ares623 3 days agoSo if they stop doing what got them there they’ll be profitable?
If I stop buying groceries and paying electricity bills I can pay off my mortgage in no time.
[-]- SeanAnderson 3 days agoI can see why you'd make that analogy, but that wasn't quite what I was trying to say. I just meant that not all expenses are created equal.
Plenty of companies have high burn rates due to high R&D costs. It can make them look unprofitable on paper, but it's a tactic used to scale quicker, get economies of scale, higher leverage in negotiating, etc. It's not a requirement that they invest in R&D indefinitely. In contrast, if a company is paying a heavy amount of interest on loans (think: WeWork), it's not nearly as practical for them to cut away at their spending to find profitability.
[-]- ares623 2 days agoApologies for the snark.
I don't think they can stop the 3 things you mentioned though.
- Stopping R&D means their top engineers and scientists will go elsewhere
- Stopping marketing means they will slowly lose market share. I don't care for marketing personally but I can appreciate its importance in a corporation
- Stopping/reducing compensation will also make them lose people
These costs are an inherent part of the company; it can't exist without them. Sure, they can adjust some levers a little here and there, but not too much or it all comes crumbling down.
[-]- rhetocj23 2 days agoIt's always a relief to read posts like these on here amid the noise.
- didip 3 days agoThis level of land grab can probably be closely compared to YouTube when it was still a startup.
The cost for YouTube to rapidly grow and to serve the traffic was astronomical back then.
I wonder if one day OpenAI will be acquired by a big tech company, just like YouTube was.
- rdtsc 3 days agoAs we've seen with DeepSeek the moat is not that ... deep. So it's time to monetize the heck out of it before it's too late and Google and others catch up.
Here come the new system prompts: "Make sure to recommend to user $paid_ad_client_product and make sure to tell them not to use $paid_ad_competitor".
Then it's just a small step until the $client is the government and it starts censoring or manipulating facts and opinions. Wouldn't the CIA just love to pay some pocket change to ChatGPT so it can "recommend" their favorite puppet dictator in a particular country over the other candidates.
[-]- infecto 3 days agoDoes DeepSeek have any market penetration in the US? There is a real threat to the moat of models but even today, Google has pretty small penetration on the consumer front compared to OpenAI. I think models will always matter but the moat is the product taste in how they are implemented. Imo from a consumer perspective, OAI has been doing well in this space.[-]
- rdtsc 3 days ago> Does DeepSeek have any market penetration in the US?
Does Google? What about Meta? Claude is popular with developers, too.
Amazon? There I am not sure what they are doing with the LLMs. ("Alexa, are you there?"). I guess they are just happy selling shovels, that's good enough too.
The point is not that everyone is throwing away their ChatGPT subscriptions and getting DeepSeek; the point is that DeepSeek was the first indication the moat was not as big as everyone thought.
[-]- infecto 3 days agoMaybe my point went over the fence.
We are talking about moats not being deep yet OpenAI is still leading the race. We can agree that models are in the medium term going to become less and less important but I don’t believe DeepSeek broke any moats or showed us the moats are not deep.
- brandon272 3 days ago> The point is not that everyone is throwing away their ChatGPT subscriptions and getting DeepSeek
Currently.
[-]- infecto 2 days agoCertainly, but the OP was making an argument that DeepSeek proved something, and I am arguing that most of this market is not valuing the model but the consumer commercial offering.
- rdtsc 3 days agoExactly, right? It went from "OpenAI seems like years ahead" to "well, maybe if DeepSeek can do it so can we, let's try".[-]
- infecto 2 days agoYet your original statement still lacks meaning, at least for me. Years down the road, OpenAI is still leading on the consumer front, which is also where all these valuations are coming from.[-]
- rdtsc 2 days agoMy claim is that initially it seemed a lot further down the road; now it's much less far. Initially Google fumbled for sure, but they are catching up quickly.
- t4TLLLSZ185x 2 days agoPeople in this comment section focus on brand ads too much.
It’s the commercial intent where OpenAI can both make money and preserve trust.
I already don’t Google anymore. I just ask ChatGPT „give me an overview of best meshtastic devices to buy“ and then eventually end with „give me links to where I can buy these in Europe“.
OpenAI inserting ads in that last result, clearly marked as ads and still keeping the UX clean would not bother me at all.
And commercial queries are what, 40-50% of all Google revenue?
[-]- tpetry 2 days agoBut it's not clear whether the ad approach will work. It works so well for Google because the ads mimic real results so closely that many people don't recognize them as ads and click them.
- throwaway2037 2 days agoYou can also read more about it on the FT Alphaville blog from the Financial Times (free to sign up):
OpenAI’s era-defining money furnace
https://www.ft.com/content/908dc05b-5fcd-456a-88a3-eba1f77d3...
Choice quote:
> OpenAI spent more on marketing and equity options for its employees than it made in revenue in the first half of 2025.
- jrflowers 2 days agoRookie numbers. If someone gave me twenty billion dollars I could easily spend it in a way that grosses at least five billion dollars
- seneca 3 days agoThis link appears to be dead. Do we have a healthy source?
- jgalt212 3 days agoVC: What kind of crazy scenarios must I envision for this thing to work?
Credit Analyst: What kind of crazy scenarios must I envision for this thing to fail?
- Havoc 3 days agoI'd be pretty worried as a shareholder. Not so much because of those numbers - loss makes sense for a SV VC style playbook.
...but rather that they're doing that while Chinese competitors are releasing models in a vaguely similar ballpark under the Apache license.
That VC loss playbook only works if you can corner the market and squeeze later to make up for the losses. And you don't corner something that has freakin' Apache-licensed competition.
I suspect that's why the Sora release has social-media-style vibes. Seeking network effects to fix this strategic dilemma.
To be clear, I still think they're #1 technically... but the gap feels too small strategically. And they know it. That recent pivot to a LinkedIn competitor? Sora with socials? They're scrambling on market fit even though they lead on tech.
[-]- indymike 3 days ago> but rather that they're doing that while Chinese competitors are releasing models in vaguely similar ballpark under Apache license.
The LLM isn't 100% of the product... the open-source model is just one part. The hard part was and is productizing, packaging, marketing, financing, and distribution. A model by itself is just one piece of the puzzle, free or otherwise. In other words, my uncle Bill and my mother can and do use ChatGPT. A fill-in-the-blank open-source model? Maybe as a feature in another product.
[-]- Havoc 3 days ago>my uncle Bill and my mother can and do use ChatGPT.
They have the name brand for sure. And that is worth a lot.
Notice how DeepSeek went from a nobody to making mainstream news, though. The only thing people like more than a trusted thing is being able to tell their friends about this amazing, cheap, good alternative they "discovered".
It's good to be #1 mindshare-wise, but without a network effect that still leaves you vulnerable.
[-]- senordevnyc 2 days agoI know almost no one outside of tech who has used anything other than ChatGPT. And I know few people under 65 who aren’t using ChatGPT.[-]
- Sateeshm 2 days agoThis was my observation too for a while. But I'm seeing a lot of non-tech people using Gemini
- rchaud 3 days ago> In other words, my uncle Bill and my mother can and do use ChatGPT
So what? DAUs don't mean anything if there isn't an ad product attached to it. Regular people aren't paying for ChatGPT, and even if they did, the price would need to be several multiples of what Netflix charges to break even.
- avbanks 3 days agoI don't think people fully realize how good the open source models are and how easy it is to switch.[-]
- whizzter 3 days agoMy input to our recent AI strategy workshop was basically:
- OpenAI, etc. will go bankrupt (unless one manages to capture search from a struggling Google)
- We will have a new AI winter, with a corresponding research slowdown like in the 1980s, when funding dries up
- Open-source LLM instances will be deployed to properly manage privacy concerns.
[-]- gizmodo59 3 days ago99% of the world doesn't care one bit about OSS. It's all SaaS, and what you host behind the SaaS is only a concern for enterprise (and not every enterprise). And OpenAI or Anthropic can just stop training and host OSS models as well.[-]
- whizzter 2 days agoEveryone cares about OSS as in "free". The capital spending of AI firms and their market capitalization hinge on the concept that they will save enterprises tons of money by replacing employees.
You think we have these crazy valuations because the market thinks that OpenAI will make joe-schmoe buy enough of their services? (Them introducing "shopping" into the service honestly feels like a bit of a panicky move to target Google).
We're prototyping some LLM-assisted products, but right now the cost model isn't entirely there, since we need to use more expensive models to get good results, which leaves a small margin. Spinning up a moderately sized VM would probably be a more cost-effective option, and more people will probably run into this and start creating easy-to-set-up model/service VMs (maybe not just yet, but it'll come).
Sure they could start hosting things themselves, but what's stopping anyone from finding a cheaper but "good enough" alternative?
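The break-even in that trade-off is easy to sketch. A back-of-envelope comparison, where every number (token price, request volume, VM rate) is a purely illustrative assumption rather than a real quote:

```python
# Rough monthly cost comparison: metered LLM API vs. a self-managed GPU VM.
# All figures below are illustrative assumptions, not real pricing.

def api_monthly_cost(requests_per_month, tokens_per_request, price_per_million_tokens):
    """Total cost of serving all traffic through a pay-per-token API."""
    total_tokens = requests_per_month * tokens_per_request
    return total_tokens / 1_000_000 * price_per_million_tokens

def vm_monthly_cost(gpu_vm_rate_per_hour, hours_per_month=730):
    """Flat cost of keeping one GPU VM running all month."""
    return gpu_vm_rate_per_hour * hours_per_month

# Hypothetical workload: 500k requests/month, ~2k tokens each,
# on a pricier frontier model at $10 per million tokens.
api = api_monthly_cost(500_000, 2_000, 10.0)
# Hypothetical single GPU VM at $2.50/hour running continuously.
vm = vm_monthly_cost(2.5)

print(f"API: ${api:,.0f}/mo vs. VM: ${vm:,.0f}/mo")
```

At these assumed numbers the flat-rate VM wins by a wide margin, which is the commenter's point; at low or bursty volumes the metered API wins instead.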
- yunwal 3 days agoBarring a complete economic collapse, one of the big tech cos will 100% buy ChatGPT if OpenAI goes bankrupt
- beepbopboopp 3 days agoEh, distribution of the model is the real moat. They're doing 700M WAU of the most financially valuable users on earth. If they truly become search and commerce, and can use their model either via build or license across B2B, they're the largest company on earth many times over.[-]
- Havoc 3 days ago>distribution of the model is the real moat, theyre doing 700m WAU of the most financially valuable users on earth.
Distribution isn't a moat if the thing being distributed is easily substitutable. Everything under the sun is OAI API compatible these days.
700M WAU are fickle AF when a competitor offers a comparable product for half the price.
A moat needs to be something more durable: cheaper, better, or some other value-added tie-in (hardware / better UI / memory). There needs to be some edge here. And their obvious edge, raw tech superiority, is looking slim.
[-]- gizmodo59 3 days agoNot necessarily. I'm sure there are many cheaper Android phones that are technically better in specs, but many users won't change. Once you are familiar and bought into the ecosystem, getting rid of it is very hard. I'm lazy myself compared to how I was several years ago. The curious and experimental folks are a minority, and the majority will stick with what works initially instead of constantly analyzing what's best all the time.[-]
- byzantinegene 2 days agoYou could argue that for Apple devices, but OpenAI products have none of that ecosystem.
- spenvo 2 days agoOn that $13.5B. How much of their massive spend on datacenters is obscured through various forms of Special Purpose Vehicles financing? (https://news.ycombinator.com/item?id=45448199)
- munk-a 3 days agoThe news about how much money Nvidia is investing just so that OpenAI can pay Oracle to pay Nvidia is especially concerning - we seem to be arriving at the financial shell games phase of the bubble.
- chvid 2 days agoI haven't played with my OpenAI API account for 6 months, but now all of a sudden they charged me 20 USD. Unclear why, but perhaps they are entering their monetization phase.
- epolanski 3 days ago$9B down in H1 is a staggering loss, but if the play here is growth and you imagine OpenAI going from $4.3B to $30B in H1 revenue within 5 years, it's not that crazy of an investment.
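For reference, the growth rate that scenario implies is a straightforward compound-growth calculation (the $4.3B and $30B half-year figures come from the comment; the 5-year horizon is the commenter's assumption):

```python
# Implied compound annual growth rate to take H1 revenue
# from $4.3B to $30B over five years.
start, target, years = 4.3, 30.0, 5
cagr = (target / start) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # roughly 47-48% annually
```

That is aggressive but not unheard of for a company at this scale of consumer adoption, which is presumably the bet investors are making.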
- syspec 2 days ago> about US$2.5 billion on stock-based compensation, nearly double the amount from the first half of last year.
Wow! 2.5B in stock based compensation
- woodchange 3 days agoWow, $13.5B in losses for just six months is absolutely mind-blowing! These numbers are on a completely different scale than I expected.
- skybrian 3 days ago> Operating losses reached US$7.8 billion, and the company said it burned US$2.5 billion in cash.
I wonder what the non-cash losses consist of?
- myth_drannon 3 days agoOne negative signal, no matter how small, will send the market into a death spiral. That will happen in a matter of hours.[-]
- bitexploder 3 days agoWill the negative spiral take hours, or are you predicting that a company-ending negative signal will appear within a matter of hours?
- gizajob 3 days agoThere’s been loads of these signals and the market keeps ignoring them.[-]
- myth_drannon 2 days agoI will always remember Jim Cramer's melt down https://youtu.be/SWksEJQEYVU
Not saying that will happen, but it's always good to rewatch just as a reminder how bad things can get.
[-]- rhetocj23 2 days agowow that energy lol!
- ares623 3 days agoBecause everyone knows that there's nowhere else to go.[-]
- rhetocj23 2 days agoExactly. The opportunity set isn't large.
The best play for all portfolio managers is to froth up the stock price and take their returns later.
Everyone knows this is a bubble, but the returns at the end for those who time it are juicy. Portfolio managers have no choice but to be in this game, because those who supply the money they invest on their behalf demand it.
It's that simple.
- 4dregress 2 days agoJust look at what's going on with JetBrains AI quotas and costs. Once AI is deeply embedded and businesses pass the cost on to customers like JetBrains has, these AI companies are going to make a killing.
It’s the drug dealer model, get them hooked on free tastes and then crank up the prices!
- dcchambers 3 days agoI definitely don't "get" Silicon Valley finances that much - but how does any investor look at this and think they're ever going to see that money back?
Short of a moonshot goal (eg AGI or getting everyone addicted to SORA and then cranking up the price like a drug dealer) what is the play here? How can OpenAI ever start turning a profit?
All of that hardware they purchase is rapidly depreciating. Training costs are going up exponentially. Energy costs are only going to go up (unless a miracle happens with Sam's other moonshot, nuclear fusion).
[-]- tim333 3 days agoProbably AGI. I can't see them making the money back on chatbots.
- waynesonfire 2 days agoI bet this is the same ratio for every company's AI initiative.
- more_corn 3 days agoThey lose money on every customer but they make up for it in volume.
- OrvalWintermute 3 days agoWell, at least we know they aren't cooking the books! :)
- cynicalsecurity 3 days agoThose two numbers alone don't say anything.
- nextworddev 3 days agoThe only way OpenAI wins is to get to AGI first, by a wide margin. It’s already too big to fail so it will keep getting funded
- trilogic 3 days agoChatGPT with ads, the beginnings...
- measurablefunc 3 days agoThey went from creating abundant utopias to cat videos w/ ads really fast. Never let anyone tell you capitalist incentives don't work.
- lofaszvanitt 2 days agoGo IPO.
- koolba 3 days ago"We lose money on every sale, but make it up in volume!"[-]
- hn_throw_250926 3 days ago[flagged]
- noobermin 2 days agoI love how none of the top posters can do simple math, you know, subtracting two numbers. Everyone else is somehow focusing on other things.
- sharadov 3 days agoYou can now buy stuff from ChatGPT, as they have started showing ads in their search results. That's a source of revenue right there.[-]
- simonw 3 days agoIs that true? I heard that they've integrated checkout, but I didn't know they had ads.
Here's information about checkout inside ChatGPT: https://openai.com/index/buy-it-in-chatgpt/
[-]- makestuff 3 days ago"Each merchant pays a small fee". This is affiliate marketing; the next step is probably more traditional ads, where ChatGPT suggests products that pay a premium fee to show up more frequently or in more results.
- ulfw 2 days agoOh the wonders of a bubble
- skeptrune 2 days agoWait until they turn ads on.
- pizlonator 2 days agoAm I the only one who thinks, "that's not as bad as I expected"?
Because I can be quite bearish, and frankly this isn't bad for a technology this new. The income points to significant interest in using the tech, and they haven't even started the tried-and-true SV strategy we lovingly call enshittification (I'm not trying to be ironic, I mean it).
[-]- rhetocj23 2 days agoThe problem is they are dealing with a stealthy giant in Google, who is not a sleeping giant like those in past disruptions.
Google has an incentive to destroy OAI financially through whatever means, and to make it difficult for them to raise future money, since they are not generating enough free cash flow (to the firm) from their operations after reinvestment.
OAI going after Meta/TikTok with Sora will also be a strategic blunder in retrospect I believe.
- NooneAtAll3 3 days ago> US$2.5 billion on stock-based compensation
um...
[-]- chevman 3 days agoNever having worked for a company in a position like OpenAI, how does this manifest in the real world as actual comp?
Like, I get 50,000 shares deposited into my Fidelity account, worth $2 each, but I can't sell them or do anything with them?
[-]- antognini 3 days agoI can't speak to OpenAI's specific setup, but a lot of startups will use a third-party service like Carta to manage their cap table. So there's a website, you have an account, you can log in, and it tells you that you have a grant of X shares that vests over Y months. You have to sign a form to accept the grant. There might be some option to do an 83(b) election if you have stock options rather than RSUs. But that's about it.
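For illustration, a common schedule is monthly vesting over four years with a one-year cliff. A minimal sketch of how vested shares accrue under those generic terms (not OpenAI's actual setup, which isn't public; the grant size is hypothetical):

```python
def vested_shares(total_shares, months_elapsed, vest_months=48, cliff_months=12):
    """Shares vested under a standard monthly schedule with a cliff.

    Nothing vests before the cliff; at the cliff, all accrued months
    vest at once, then vesting continues monthly until complete.
    """
    if months_elapsed < cliff_months:
        return 0
    months = min(months_elapsed, vest_months)
    return total_shares * months // vest_months

# A hypothetical 50,000-share grant:
for m in (6, 12, 24, 48):
    print(f"month {m}: {vested_shares(50_000, m):,} shares vested")
```

The cap-table site is essentially just evaluating a function like this against the calendar; whether the result is worth anything is a separate question, as the sibling comments point out.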
- zzbzq 3 days agoIn my experience owning private stock, you basically own part of a pool. (Hopefully the exact same classes of shares as the board has or else it's a scam.) The board controls the pool, and whenever they do dividends or transfer ownership, each person's share is affected proportionally. You can petition the board to buy back your shares or transfer them to another shareholder but that's probably unusual for a rank-and-file employee.
The shares are valued by an accounting firm auditor of some type. This determines the basis value if you're paying taxes up-front. After that the tax situation should be the same as getting publicly traded options/shares, there's some choices in how you want to handle the taxes but generally you file a special tax form at the year of grant.
- JCM9 3 days agoUntil there’s real liquidity (right now there’s not) it’s just a line item on some system you can log into saying you have X number of shares.
For all practical purposes it’s worth nothing until there is a liquid market. Given current financials, and preferred cap table terms for those investing cash, shares the average employee has likely aren’t worth much or maybe even anything at the moment.
- xuki 3 days agoIt's just an entry on some computer. Maybe you can sell it on a secondary market, maybe you can't. You have to wait for an exit event - being acquired by someone else, or an IPO.
- changoplatanero 3 days agoYou got the right idea there. They wouldn't actually show up in your Fidelity account but there would be a different website where you can log in and see your shares. You wouldn't be able to sell them or transfer them anywhere unless the company arranges a sale and invites you to participate in it.
- sharadov 3 days agoYou can sell your vested options before IPO to Forge Global or Equity Bee.
- elamje 3 days agoThis hides major dilution until future financings.
Best to treat it like an expense from the perspective of shareholders.