GPT‑5.3‑Codex‑Spark (openai.com)
429 points by meetpateltech 5 hours ago | 190 comments
- beklein 4 hours ago
I love this! I use coding agents to generate web-based slide decks where “master slides” are just components, and we already have rules + assets to enforce corporate identity. With content + prompts, it’s straightforward to generate a clean, predefined presentation. What I’d really want on top is an “improv mode”: during the talk, I can branch off based on audience questions or small wording changes, and the system proposes (say) 3 candidate next slides in real time. I pick one, present it, then smoothly merge back into the main deck. Example: if I mention a recent news article / study / paper, it automatically generates a slide that includes a screenshot + a QR code link to the source, then routes me back to the original storyline. With realtime voice + realtime code generation, this could turn the boring old presenter view into something genuinely useful.
- sva_ 4 hours ago
I love the probabilistic nature of this. Presentations could be anywhere from extremely impressive to hilariously embarrassing.
- clickety_clack 3 hours ago
It would be so cool if it generated live in the presentation and adjusted live as you spoke, so you’d have to react to whatever popped up on screen!
- crystal_revenge 1 hour ago
There was a pre-LLM version of this called "battledecks" or "PowerPoint Karaoke" [0] where a presenter is given a deck of slides they've never seen and has to present it. With a group of good public speakers it can be loads of fun (and really impressive the degree to which some people can pull it off!)
- bsharper 1 hour agoThere is a Jackbox game called "Talking Points" that's like this: the players come up with random ideas for presentations, your "assistant" (one of the other players) picks what's on each slide while you present: https://www.youtube.com/watch?v=gKnprQpQONw
- nikcub 5 minutes agoand with neuralink it would generate slides of the audience naked
- Etheryte 2 hours agoSome consulting firms do this, one guy is giving the presentation live while others are in the next meeting room still banging out the slides.
- onionisafruit 2 hours ago
Every presentation becomes improv
- deepGem 2 hours ago
Isn't that a great outcome? No more robotic presentations. The best part is that you can now practice improv in the comfort of your home.
- mbreese 2 hours ago
And this product will work great for any industry... can I get a suggestion for an industry from the crowd?
Audience: Transportation... Education... Insurance...
Speaker: Great! I heard "Healthcare".
Right... as we can see from this slide, this product fits the "Healthcare" industry great because of ...
- lelandfe 42 minutes ago
Caro’s first LBJ biography tells of how the future president became a congressman in Texas in his 20s by carting around a “claque” of his friends to various stump speeches and having them ask him softball questions and applaud loudly after.
Well, hey, who needs friends?
- DonHopkins 2 hours ago
I had a butterfly take over my live DreamScape slide show demo at the 1995 WWDC.
- m_mueller 41 minutes ago
You're describing almost verbatim what we're building at Octigen [1]! Happy to provide a demo and/or give you free access to our alpha version, already online.
- deepGem 2 hours ago
I built something similar at a hackathon: a dynamic teleprompter that adjusts its speed based on speaker tonality and spoken wpm. I can see extending the same to an improv mode. This is a super cool idea.
- jorgenveisdal 3 hours ago
As an associate professor who spends a ridiculous amount of time preparing for lectures, I would love to try this in one of my courses.
- esafak 3 hours ago
Can you show one?
- beklein 2 hours ago
The end result would be a normal PPT presentation. Check https://sli.dev as an easy start: ask Codex/Claude/... to generate the slides using that framework with data from something.md. The interesting part here is generating these otherwise boring slide decks not with PowerPoint itself but with AI coding agents plus master slides and AGENTS.md context. I’ll be showing this to a small group (normally members only) at IPAI in Heilbronn, Germany on 03/03. If you’re in the area and would like to join, feel free to send me a message and I’ll squeeze you in.
- orochimaaru 4 hours ago
How do you handle the diagrams?
- beklein 4 hours ago
In my AGENTS.md file I have a _rule_ that tells the model to use Apache ECharts; the data comes from the prompt and normally .csv/.json files. A prompt would be like: "After slide 3 add a new content slide that shows a bar chart with data from @data/somefile.csv" ... works great, and these charts can even be interactive.
- orochimaaru 3 hours ago
What about other ad hoc diagrams like systems architecture, roadmaps, mind maps, etc.?
These are the bane of any staff engineer's life - lol. Because people above need to see a plan in art form.
So I'm seriously interested in how I can make it easier.
- beklein 2 hours ago
Not my normal use case, but you can always fall back and ask the AI coding agent to generate the diagram as SVG. For blocky but more complex content like your examples it works well and is still 100% text-based, so the AI coding agent (or you, manually) can fix/adjust any issues. An image-generation skill is a valid fallback, but in my opinion it's hard to change details (JSON-style image-creation prompts are possible but hard to do right) and you won't see changes nicely in the git history. In your use case you can ask the AI coding agent to run a script.js to get the newest dates for the project from a page/API, then have it update only the dates in the roadmap.svg file on slide x with the new data. This way you automagically have the newest numbers and can track everything within git in one prompt. Save this as a rule in AGENTS.md and run it every month to update your slides with one prompt.
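The "update only the dates in roadmap.svg" step can be sketched in a few lines. This is a hypothetical illustration: the `data-milestone` attribute and the file contents are invented, and in practice the new dates would come from a page/API rather than being hard-coded.

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"

def update_dates(svg: str, new_dates: dict[str, str]) -> str:
    """Rewrite <text> elements tagged with a data-milestone attribute,
    leaving the rest of the diagram untouched (so git diffs stay small)."""
    root = ET.fromstring(svg)
    for text in root.iter(f"{{{SVG_NS}}}text"):
        key = text.get("data-milestone")
        if key in new_dates:
            text.text = new_dates[key]
    return ET.tostring(root, encoding="unicode")

# Toy roadmap with one tagged milestone date.
roadmap = (
    f'<svg xmlns="{SVG_NS}">'
    '<text data-milestone="launch">2026-01-01</text>'
    "</svg>"
)
updated = update_dates(roadmap, {"launch": "2026-03-03"})
```

Because only text nodes change, the monthly refresh shows up as a one-line diff per date in git history, which is exactly the reviewability argument made above.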
- mcamac 3 hours ago
You could try something like Mermaid (or ASCII) -> Nano Banana. You can also go the other way and turn images into embedded diagrams (which can be interactive depending on how you're sharing the presentation).
- sleazebreeze 2 hours ago
Claude Code can output Excalidraw-format files which can be imported directly into the webapp. You can MCP it too if you want.
- turnsout 4 hours ago
I love the idea of a living slide deck. This feels like a product that needs to exist!
- postalcoder 3 hours ago
First thoughts using gpt-5.3-codex-spark in Codex CLI:
Blazing fast, but it definitely has a small-model feel.
It's tearing up bluey bench (my personal agent speed benchmark), a file-system benchmark where I have the agent generate transcripts for untitled episodes of a season of Bluey, perform a web search to find the episode descriptions, and then match the transcripts against the descriptions to generate file names and metadata for each episode.
Downsides:
- It has to be prompted to do actions in my media library AGENTS.md that the larger models adhere to without additional prompting.
- It's less careful with how it handles context, which means its actions are less context-efficient. Combine that with the smaller context window and I'm seeing frequent compactions.
Bluey Bench* (minus transcription time):
  Codex CLI: gpt-5.3-codex-spark low 20s; medium 41s; xhigh 1m 09s (1 compaction). gpt-5.3-codex low 1m 04s; medium 1m 50s. gpt-5.2 low 3m 04s; medium 5m 20s.
  Claude Code: opus-4.6 (no thinking) 1m 04s
  Antigravity: gemini-3-flash 1m 40s; gemini-3-pro low 3m 39s
*Season 2, 52 episodes
- alexdobrenko 2 hours ago
can we please make the bluey bench the gold standard for all models always
- mnicky 3 hours ago
Can you compare it to Opus 4.6 with thinking disabled? It seems to have very impressive benchmark scores. Could also be pretty fast.
- postalcoder 2 hours ago
Added a thinking-disabled Opus 4.6 timing. It took 1m 4s - coincidentally the same as 5.3-codex low.
- Squarex 3 hours ago
I wonder why they named it so similarly to the normal Codex model when it's much worse (cool as it is, of course).
- pjs_ 4 hours ago
I continue to believe that Cerebras is one of the most underrated companies of our time. It's a dinner-plate-sized chip. It actually works. It's actually much faster than anything else for real workloads. Amazing.
- onlyrealcuzzo 3 hours ago
Nvidia seems cooked.
Google is crushing them on inference. By TPUv9, they could be 4x more energy efficient and cheaper overall (even if Nvidia cuts their margins from 75% to 40%).
Cerebras will be substantially better for agentic workflows in terms of speed.
And if you don't care as much about speed and only cost and energy, Google will still crush Nvidia.
And Nvidia won't be cheaper for training new models either. The vast majority of chips will be used for inference by 2028 instead of training anyway.
Nvidia has no manufacturing reliability story. Anyone can buy TSMC's output.
Power is the bottleneck in the US (and everywhere besides China). By TPUv9 - Google is projected to be 4x more energy efficient. It's a no-brainer who you're going with starting with TPUv8 when Google lets you run on-prem.
These are GW scale data centers. You can't just build 4 large-scale nuclear power plants in a year in the US (or anywhere, even China). You can't just build 4 GW solar farms in a year in the US to power your less efficient data center. Maybe you could in China (if the economics were on your side, but they aren't). You sure as hell can't do it anywhere else (maybe India).
What am I missing? I don't understand how Nvidia could've been so far ahead and just let every part of the market slip away.
- sailingparrot 3 hours ago
> let every part of the market slip away.
Which part of the market has slipped away, exactly? Everything you wrote is supposition and extrapolation. Nvidia has a chokehold on the entire market. All other players still exist in the small pockets that Nvidia doesn’t have enough production capacity to serve. And their dev ecosystem is still so far ahead of anyone else's. Which provider gets chosen to equip a 100k-chip data center goes so far beyond raw chip power.
- onlyrealcuzzo 2 hours ago
> Nvidia has a chokehold on the entire market.
You're obviously not looking at expected forward orders for 2026 and 2027.
- louiereederson 36 minutes ago
I think most estimates have Nvidia at a more or less stable share of CoWoS capacity (around 60%), which is ~doubling in '26.
- mnicky 3 hours ago
> What am I missing?
Largest production capacity, maybe?
Also, market demand will be so high that every player's chips will be sold out.
- onlyrealcuzzo 2 hours ago
> Largest production capacity maybe?
Anyone can buy TSMC's output...
- Keyframe 2 hours ago
Can anyone buy TSMC though?
- louiereederson 31 minutes ago
No. TSMC will not take the risk of allocating capacity to just anyone, given the opportunity cost.
- wing-_-nuts 2 hours ago
Man, I hope someone drinks Nvidia's milkshake. They need to get humbled back to the point where they're desperate to sell GPUs to consumers again.
Only major roadblock is CUDA...
- whism 3 hours ago
I believe they licensed something from Groq.
- Handy-Man 2 hours ago
Well, they `acquired` Groq for a reason.
- zozbot234 4 hours ago
It's "dinner-plate sized" because it's just a full silicon wafer. It's nice to see wafer-scale integration being used for real work, but it's been researched for decades.
- arcanemachiner 4 hours ago
Just wish they weren't so insanely expensive...
- azinman2 3 hours ago
The bigger the chip, the worse the yield.
- speedgoose 3 hours ago
I suggest reading their website; they explain pretty well how they manage good yield. I'm not an expert in this field, but it does make sense, and I'd be surprised if they were caught lying.
- moralestapia 3 hours ago
This comment doesn't make sense.
- Sohcahtoa82 3 hours ago
One wafer will turn into multiple chips.
Defects are best measured on a per-wafer basis, not per-chip. So if your chips are huge and you can only put 4 chips on a wafer, 1 defect can cut your yield by 25%. If they're smaller and you fit 100 chips on a wafer, then 1 defect on the wafer only cuts yield by 1%. Of course, there's more to this when you start reading about "binning", fusing off cores, etc.
There's plenty of information out there about how CPU manufacturing works, why defects happen, and how they're handled. Suffice to say, the comment makes perfect sense.
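For intuition, the textbook Poisson yield model makes this concrete. The defect density and die areas below are illustrative assumptions for the sketch, not anyone's actual process data:

```python
import math

def poisson_yield(defects_per_cm2: float, die_area_cm2: float) -> float:
    """Poisson yield model: probability a die lands with zero defects."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

d = 0.1                           # assumed defects per cm^2
small = poisson_yield(d, 7.0)     # one of ~100 small 7 cm^2 dies on a wafer
big = poisson_yield(d, 700.0)     # one wafer-scale 700 cm^2 die

# small ~ 0.50: roughly half the small dies come out perfect.
# big is astronomically close to zero: a defect-free wafer-scale die
# essentially never happens, which is why a wafer-scale design must
# build in redundancy and route around bad regions rather than
# demanding a perfect wafer.
```

This is the quantitative version of the 4-chips-vs-100-chips argument above, and also why the defect-tolerance claims discussed below matter so much for Cerebras.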
- snovv_crash 2 hours ago
That's why you typically fuse off defective sub-units and just have a slightly slower chip. GPU and CPU manufacturers have done this for at least 15 years now, that I'm aware of.
- azinman2 3 hours ago
Sure it does. If it's many small dies on a wafer, then imperfections don't ruin the entire batch; you just bin those components. If the entire wafer is a single die, you have much less tolerance for errors.
- dekhn 3 hours ago
Although, IIUC, Cerebras expects some amount of imperfection and can adjust the hardware (or maybe the software) to avoid those components after they're detected. https://www.cerebras.ai/blog/100x-defect-tolerance-how-cereb...
- pertymcpert 3 hours ago
You can just do dynamic binning.
- louiereederson 29 minutes ago
You say this with such confidence and then ask if smaller chips require smaller wafers.
- DocJade 3 hours ago
Bigger chip = more surface area = higher chance for somewhere in the chip to have a manufacturing defect.
Yields on silicon are great, but not perfect.
- moralestapia 2 hours ago
Does that mean smaller chips are made from smaller wafers?
- Sohcahtoa82 4 minutes ago
Nope. They use the same size wafers and just put more chips on a wafer.
- dalemhurley 2 hours ago
Yet investors keep backing NVIDIA.
- vimda 1 hour ago
At this point, tech investment and analysis is so divorced from any kind of reality that it's more akin to lemmings on a cliff than careful analysis of fundamentals.
- latchkey 3 hours ago
Not for what they are using it for. It is $1m+/chip and they can fit 1 of them in a rack. Rack space in DCs is a premium asset. The density isn't there. AI models need tons of memory (this product announcement is a case in point) and they don't have it, nor do they have a way to get it, since they are last in line at the fabs.
Their only chance is an acquihire, but Nvidia just spent $20b on Groq instead. Dead man walking.
- p1esk 3 hours ago
The real question is: what's their perf/dollar vs Nvidia?
- zozbot234 3 hours ago
I guess it depends what you mean by "perf". If you optimize everything for the absolute lowest latency given your power budget, your throughput is going to suck - and vice versa. Throughput is ultimately what matters when everything about AI is so clearly power-constrained; latency is a distraction. So TPU-like custom chips are likely the better choice.
- p1esk 3 hours ago
By perf I mean how much it costs to serve a 1T model to 1M users at 50 tokens/sec.
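For scale, here is a back-of-envelope sketch of what that question implies. Every number below other than the stated 1M users and 50 tok/s is an illustrative assumption (the per-node throughput and cost are invented), not a vendor figure:

```python
# Aggregate demand stated in the question.
users = 1_000_000
tokens_per_user_per_s = 50
total_tok_s = users * tokens_per_user_per_s    # aggregate tokens/sec

# Assumed serving-node figures, purely for illustration:
# one node sustains 10,000 tok/s on the model and costs $300k all-in.
node_tok_s = 10_000
node_cost_usd = 300_000

nodes = total_tok_s // node_tok_s              # nodes required
capex_usd = nodes * node_cost_usd              # fleet cost
```

Under those assumptions the question is a 50M tokens/sec, multi-thousand-node, ~$1.5B-capex problem, which is why per-token cost and power, not peak single-stream speed, dominate the comparison.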
- zozbot234 2 hours ago
All 1T models are not equal. E.g. how many active parameters? What's the native quantization? How long is the max context? Also, it's quite likely that some smaller models in common use are even sub-1T. If your model is light enough, the lower throughput doesn't necessarily hurt you all that much and you can enjoy the lightning-fast speed.
- p1esk 2 hours ago
Just pick some reasonable values. Also, keep in mind that this hardware must still be useful 3 years from now. What's going to happen to Cerebras in 3 years? What about Nvidia? Which one is a safer bet?
On the other hand, competition is good - Nvidia can't have the whole pie forever.
- zozbot234 2 hours ago
> Just pick some reasonable values.
And that's the point - what's "reasonable" depends on the hardware and is far from fixed. Some users here are saying that this model is "blazing fast" but a bit weaker than expected, and one might've guessed as much.
> On the other hand, competition is good - nvidia can't have the whole pie forever.
Sure, but arguably the closest thing to competition for Nvidia is TPUs and future custom ASICs that will likely save a lot on energy used per model inference, while not focusing all that much on being super fast.
- latchkey 1 hour ago
AMD
- fragmede 2 hours ago
> Throughput is ultimately what matters
I disagree. Yes, it does matter, but because the popular interface is chat, streaming the results of inference feels better to the squishy messy gross human operating the chat, even if it ends up taking longer. You can give all the benchmark results you want; humans aren't robots. They aren't data-driven, they have feelings, and they're going to go with what feels better. That isn't true for all uses, but time to first byte is ridiculously important for human-computer interaction.
- zozbot234 2 hours ago
You just have to change the "popular interface" to something else. Chat is OK for trivia or genuinely time-sensitive questions; everything else goes through email or some sort of webmail-like interface where requests are submitted and replies come back asynchronously. (This is already how batch APIs work, but they only offer a 50% discount compared to interactive, which is not enough to really make a good case for them - especially not for agentic workloads.)
- xnx 3 hours ago
Or Google TPUs.
- latchkey 3 hours ago
TPUs don't have enough memory either, but they have really great interconnects, so they can build a nice high-density cluster.
Compare the photos of a Cerebras deployment to a TPU deployment.
https://www.nextplatform.com/wp-content/uploads/2023/07/cere...
https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iOLs2FEQxQv...
The difference is striking.
- p1esk 3 hours ago
Oh wow, the cabling in the first link is really sloppy!
- latchkey 3 hours ago
Exactly. They won't ever tell you. It is never published.
Let's not forget that the CEO is an SEC felon who got caught trying to pull a fast one.
- spwa4 3 hours ago
Oh, don't worry. Ever since the power issue started developing, rack space is no longer at a premium. Or at least, it's no longer the limiting factor. Power is.
- latchkey 3 hours ago
The dirty secret is that there is plenty of power. But it isn't all in one place, and it is often stranded in DCs that can't do the density needed for AI compute.
Training models needs everything in one DC; inference doesn't.
- femiagbabiaka 3 hours ago
yep
- xnx 3 hours ago
Cerebras is a bit of a stunt, like "datacenters in spaaaaace".
Terrible yield: one defect can ruin a whole wafer instead of just a chip region. Poor perf/cost (see above). Difficult to program. Little space for RAM.
- the_duke 3 hours ago
They claim the opposite, though, saying the chip is designed to tolerate many defects and work around them.
- perdomon 1 hour ago
This has been the industry standard for the last 20 minutes. I can't believe people are still using GPT-5.3-Codex.
- sam_goody 48 minutes ago
I read this headline and was like, "Ah look, an announcement by GPT!! That means that Google or Anthropic must have had a release today!"
And, yup, there is Gemini in item 3!
- simonw 2 hours ago
My stupid pelican benchmark proves to be genuinely quite useful here; you get a visual representation of the quality difference between GPT-5.3-Codex-Spark and full GPT-5.3-Codex: https://simonwillison.net/2026/Feb/12/codex-spark/
- lacoolj 2 hours ago
These are the ones I look for every time a new model is released. Incorporates so many things into one single benchmark.
Also your blog is tops. Keep it up, love the work.
- jryio 5 hours ago
This is interesting for offloading "tiered" workloads / a priority queue with coding agents.
If 60% of the work is "edit this file with this content" or "refactor according to this abstraction", then low-latency, high-token-rate inference seems like a needed improvement.
Recently someone made a Claude plugin to offload low-priority work to the Anthropic Batch API [1].
Also I expect both Nvidia and Google to deploy custom silicon for inference [2]
1: https://github.com/s2-streamstore/claude-batch-toolkit/blob/...
2: https://www.tomshardware.com/tech-industry/semiconductors/nv...
- zozbot234 4 hours ago
Note that batch APIs are significantly higher latency than normal AI agent use. They're mostly intended for bulk work where time constraints are not essential. Also, GPT "Codex" models (and most of the "Pro" models too) are currently not available under OpenAI's own batch API. So you would have to use non-agentic models for these tasks, and it's not clear how well they would cope.
(Overall, batches do have quite a bit of potential for agentic work as-is, but you have to cope with them taking potentially up to 24h for just a single roundtrip with your local agent harness.)
- dehugger 5 hours ago
I built something similar using an MCP that allows Claude to "outsource" development to GLM 4.7 on Cerebras (or a different model, but GLM is what I use). The tool allows Claude to set the system prompt and instructions, specify the output file to write to, and crucially allows it to list which additional files (or subsections of files) should be included as context for the prompt.
I've had great success with it, and it rapidly speeds up development time at fairly minimal cost.
- cheema33 4 hours ago
Why use MCP instead of an agent skill for something like this, when MCP is typically context-inefficient?
- pertymcpert 2 hours ago
MCP is fine if your tool definition is small. If it's something like a sub-agent harness that's used very often, it's probably actually more context-efficient, because the tools are already loaded in context and the model doesn't have to spend a few turns deciding to load the skill, thinking about it, and then invoking another tool/script to invoke the subagent.
- wahnfrieden 4 hours ago
Models haven't been trained enough on using skills yet, so they typically ignore them.
- andai 4 hours ago
Is that true? I had tool use working with GPT-4 in 2023, before function calling or structured outputs were even a thing. My tool instructions were only half a page though. Maybe the long prompts are causing problems?
- pertymcpert 3 hours ago
They're talking about "skills", which are not the same thing as tools. Most models haven't been trained on the open SKILL spec, and therefore aren't tuned to invoke them reliably when the need occurs.
- nikkwong 4 hours ago
> Our latest frontier models have shown particular strengths in their ability to do long-running tasks, working autonomously for hours, days or weeks without intervention.
I have yet to see this produce anything actually useful.
- simonw 4 hours ago
How hard have you tried?
I've been finding that the Opus 4.5/4.6 and GPT-5.2/5.3 models really have represented a step-change in how good they are at running long tasks.
I can one-shot prompt all sorts of useful coding challenges now that previously I would have expected to need multiple follow-ups to fix mistakes the agents made.
I got all of this from a single prompt, for example: https://github.com/simonw/research/tree/main/cysqlite-wasm-w... - including this demo page: https://simonw.github.io/research/cysqlite-wasm-wheel/demo.h... - using this single prompt: https://github.com/simonw/research/pull/79
- aeyes 4 hours ago
What do you mean? The generated script just downloads the sources and runs pyodide: https://github.com/simonw/research/blob/main/cysqlite-wasm-w...
There are maybe 5 relevant lines in the script and nothing complex at all that would need to run for days.
- andai 3 hours ago
Maybe so, but I did once spend 12 hours straight debugging an Emscripten C++ compiler bug! (After spending the first day of the jam setting up Emscripten, and the second day getting Raylib to compile in it. Had like an hour left to make the actual game, hahah.)
I am a bit thick with such things, but just wanted to provide the context that Emscripten can be a fickle beast :)
I sure am glad I can now deploy Infinite Mechanized Autistic Persistence to such soul-crushing tasks, and go make a sandwich or something.
(The bug turned out to be that if I included a boolean in a class member, the whole game crashed, but only the Emscripten version. Sad. Ended up switching back to JS, which you basically need anyway for most serious web game dev.)
- simonw 4 hours ago
No, not for days - but it churned away on that one for about ten minutes.
I don't think I've got any examples of multi-hour or multi-day sessions that ran completely uninterrupted. This one back in December took 4.5 hours, but I had to prompt it to keep going a few times along the way: https://simonwillison.net/2025/Dec/15/porting-justhtml/
- basilgohar 4 hours ago
Can you share any examples of these one-shot prompts? I've not gotten to the point where I can get those kinds of results yet.
- simonw 3 hours ago
If you look through the commit logs on simonw/research and simonw/tools on GitHub, most commits either list the prompt, link to a PR with the prompt, or link to a session transcript.
- gamegoblin 4 hours ago
I routinely leave Codex running for a few hours overnight to debug stuff.
If you have a deterministic unit test that can reproduce the bug through your app's front door, but you have no idea how the bug is actually happening, having a coding agent just grind through the slog of sticking debug prints everywhere, testing hypotheses, etc. - it's an ideal use case.
- nikkwong 4 hours ago
I have a hard time understanding how that would work. For me, I typically interface with coding agents through Cursor. The flow is like this: ask it something -> it works for a minute or two -> I verify and fix by asking it again; etc., until we're at a happy place with the code. How do you keep it from going down a bad path and never pulling itself out?
The important role for me, as a SWE, in the process is to verify that the code does what we actually want it to do. If you remove yourself from the process by letting it run on its own overnight, how does it know it's doing what you actually want?
Or is it more like your use case - you can say "here's a failing test; do whatever you can to fix it and don't stop until you do". I could see that limited case working.
- woah 3 hours ago
For some reason, setting up agents in a loop with a solid prompt and new context each iteration seems to result in higher-quality work for larger or more difficult tasks than the chat interface. It's like the agent doesn't have to spend half its time trying to guess what you want.
- gamegoblin 37 minutes ago
I use Codex CLI or Claude Code.
I don't even necessarily ask it to fix the bug - just identify the bug.
Like if I've made a change that is causing some unit test to fail, it can just run off and figure out where I made an off-by-one error or whatever in my change.
- zem 1 hour ago
It's more like "this function is crashing with an inconsistent file format error. Can you figure out how a file with the wrong format got this far into the pipeline?" In cases like that, the fix is usually pretty easy once you have the one code path out of several thousand nailed down.
- p1esk 3 hours ago
“here's a failing test—do whatever you can to fix it”
Bad idea. It can modify the code so that the test passes but everything else is now broken.
- SatvikBeri 24 minutes ago
I've heard this said a lot but never had this problem. Claude has been decent at debugging tests since 4.0 in my experience (and much better since 4.5).
- vel0city 3 hours ago
You do things like Ralph loops.
https://github.com/snarktank/ralph
It's constantly restarting itself, looking at the current state of things, re-reading what was requested and what it did and failed at in the past (at a higher level), and trying again and again.
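The restart-until-done structure described above can be sketched in a few lines. This is a hypothetical illustration, not the actual ralph implementation: `run_agent` stands in for shelling out to an agent CLI (e.g. something like `subprocess.run(["codex", "exec", prompt])`), and the marker file is an invented convention for the agent signaling completion.

```python
from pathlib import Path

def ralph_loop(run_agent, prompt: str, done: Path, max_attempts: int = 20) -> int:
    """Re-invoke the agent until `done` exists; return the attempts used.
    Each invocation starts with fresh context - only the fixed prompt and
    the state of the repo persist between passes."""
    for attempt in range(max_attempts):
        if done.exists():
            return attempt
        run_agent(prompt)
    return max_attempts
```

The key design choice is that nothing from a failed pass's context carries over: the next pass re-derives the current state from the repo itself, which is why this tends to recover from bad paths that a single long chat session gets stuck in.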
- addaon 4 hours ago
> it's an ideal usecase
This is impressive; you've completely mitigated the risk of learning or understanding.
- arcanemachiner 4 hours ago
Or, they have freed up time for more useful endeavours that may otherwise have been spent on drudgery.
I don't discount the value of blood, sweat and tears spent on debugging those hard issues, and the lessons learned from doing so, but there is a certain point where it's OK to take a pass and just let the robots figure it out.
- XCSme 4 hours ago
Their ability to burn through tokens non-stop for hours, days or weeks without intervention.
- raw_anon_1111 4 hours ago
You’re mixing up OpenAI with Anthropic.
Anthropic is actually somewhat concerned with not burning through cash and charging people a reasonable price. OpenAI doesn’t care. I can use Codex CLI all day and not approach any quotas with just my $20 a month ChatGPT subscription.
I treat coding agents like junior developers and never take my hand off the wheel except for boilerplate refactoring.
- TheMuenster 2 hours ago
Can I just say how funny this metric is?
"Our model is so slow and our tokens/second is so low that these tasks can take hours!" is not the advertising they think it is.
- johnfn 4 hours ago
The other day I got Codex to one-shot an upgrade to Vite 8 at my day job (a real website with revenue). It worked on this for over 3 hours without intervention (I went to sleep). This is now in production.
- wahnfrieden 4 hours ago
It worked for me several times.
It's easy to say that these increasingly popular tools can only produce useless junk. You haven't tried, or you haven't "closed the loop" so that the agent can evaluate its own progress toward acceptance criteria, or you're going by feeds from users who use them incompetently.
- nikkwong 4 hours ago
I'm definitely bullish on LLMs for coding. It sounds to me as though getting one to run on its own for hours and produce something usable requires more careful thought and setup than just throwing a prompt at it and wishing for the best - but I haven't seen many examples in the wild yet.
- foobar10000 3 hours ago
It needs a closed loop.
Strategy -> [ Plan -> [Execute -> FastVerify -> SlowVerify] -> Benchmark -> Learn lessons ] -> back to strategy for the next big step.
Claude teams and a Ralph Wiggum loop can do it - or really any reasonable agent. But usually it all falls apart on either brittle Verify or Benchmark steps. What's important is to record positive lessons into a store that survives git resets, machine blowups, etc. Any Telegram bot channel will do :)
The entire setup is usually a pain: Docker for verification, Docker for benchmarking, etc. You need the ability to run the thing quickly, the ability for the loop itself to add things, the ability to do this in worktrees simultaneously for faster exploration - and god help you if you need hardware to do this. For example, such a loop can be used to tune and custom-fuse CUDA kernels, which means a model evaluator, a big box, etc.
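A stripped-down sketch of one pass of that inner loop. The callables and the lessons path are hypothetical stand-ins; the one structural point taken from the comment above is that lessons go to a store outside the worktree, so they survive git resets and machine blowups:

```python
from pathlib import Path

def record_lesson(lessons: Path, text: str) -> None:
    """Append a lesson to a store that outlives the repo's worktree."""
    with lessons.open("a") as f:
        f.write(f"- {text}\n")

def iteration(execute, fast_verify, slow_verify, benchmark, lessons: Path) -> bool:
    """One Execute -> FastVerify -> SlowVerify -> Benchmark pass."""
    execute()
    if not fast_verify():          # cheap gates first: lint, unit tests
        record_lesson(lessons, "fast verify failed; plan was too loose")
        return False
    if not slow_verify():          # expensive gates: integration, e2e
        record_lesson(lessons, "slow verify failed; fast checks missed it")
        return False
    record_lesson(lessons, f"benchmark: {benchmark()}")
    return True
```

Ordering cheap checks before expensive ones is what keeps a loop like this affordable; the lessons file is re-read at the Plan step of the next pass, which is how the loop "learns" across resets.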
- wahnfrieden 2 hours ago
I do it easily just by asking Codex.
- rcarmo 3 hours ago
Well, you can start with https://github.com/rcarmo/go-textile, https://github.com/rcarmo/go-rdp, https://github.com/rcarmo/go-ooxml, and https://github.com/rcarmo/go-busybox (still WIP). All of these are essentially SPEC- and test-driven, and they all work for me (save a couple of bugs in go-rdp I need to fix myself, and some gaps in the ECMA specs for go-ooxml that require me to provide actual manually created documents for further testing).
I am currently porting pyte to Go through a similar approach (feeding the LLM a core SPEC and two VT100/VT220 test suites). It's chugging along quite nicely.
- bitwize 3 hours ago
PEBKAC
- raahelb 2 hours ago
Interesting to note that the reduced latency is not just due to improved model speed but also to improvements made to the harness itself:
> "As we trained Codex-Spark, it became apparent that model speed was just part of the equation for real-time collaboration—we also needed to reduce latency across the full request-response pipeline. We implemented end-to-end latency improvements in our harness that will benefit all models [...] Through the introduction of a persistent WebSocket connection and targeted optimizations inside of Responses API, we reduced overhead per client/server roundtrip by 80%, per-token overhead by 30%, and time-to-first-token by 50%. The WebSocket path is enabled for Codex-Spark by default and will become the default for all models soon."
I wonder if all other harnesses (Claude Code, OpenCode, Cursor, etc.) can make similar improvements to reduce latency. I've been vibe coding (or doing agentic engineering) with Claude Code a lot for the last few days and I've had some tasks take as long as 30 minutes.
[-]- 2001zhaozhao 1 hour agoThis might actually be hard for open source agents (e.g. Opencode) to replicate, barring a standardized WebSocket LLM API being widely adopted.
- kachapopopow 4 hours agoIs this the first time one of the big 3 has used Cerebras? I've been waiting for this day...[-]
- arisAlexis 4 hours agoThey were wary of the untested tech, but it looks like a leap in speed now[-]
- rvz 4 hours agoThis is nonsense. What do you mean? Mistral uses Cerebras for their LLMs as well. [0]
It's certainly not "untested".
[-]- lemming 4 hours agoTested at Mistral’s scale is a very different thing to tested at OpenAI’s scale.[-]
- rvz 3 hours agoThe scale of being "tested" clearly convinced Meta (beyond OpenAI's scale) [0], HuggingFace [1], Perplexity [2], and unsurprisingly many others in the AI industry [3] that require more compute than GPUs can deliver.
So labelling it "untested", even at Meta's scale as a customer (which exceeds OpenAI's), is quite nonsensical and frankly an uninformed take.
[0] https://www.cerebras.ai/customer-spotlights/meta
[1] https://www.cerebras.ai/news/hugging-face-partners-with-cere...
[2] https://www.cerebras.ai/press-release/cerebras-powers-perple...
- mudkipdev 5 hours agoOff topic but how is it always this HN user sharing model releases within a couple of minutes of their announcement?[-]
- casefields 4 hours agoThe account isn’t a normal user. They literally only post stuff like this. Their comments are just official links back to said announcements.
- lacoolj 2 hours agoGoogle Alerts
- pdeva1 5 hours agoThis is closer to 5.1 mini it seems and tied to Pro account. GLM 4.7 is available on-demand on Cerebras today [1] and performs better and cheaper... [1] https://www.cerebras.ai/blog/glm-4-7[-]
- ehzb2827 4 hours agoGLM 4.7 scores 41.0% on Terminal Bench 2.0 [1] compared to 58.4% for GPT-5.3-Codex-Spark [2].
[1] https://z.ai/blog/glm-4.7 [2] https://openai.com/index/introducing-gpt-5-3-codex-spark/
- mbm 1 hour agoWorks pretty well as a general-purpose computer. The speed is really enjoyable. Could replace some of my Claude Code use actually. For coding, set to xhigh and use it for personal tools or small projects.
Example repo that Codex with spark made in about 15 minutes for me since `claude --resume` has been finicky lately: https://github.com/mzxrai/claude-sessions
- ttul 2 hours agoGreat move by OpenAI. With coding agents, if you have access to a fast and cheap model, you can afford to let it rip, making lots of mistakes, and iterate until it gets things right. With the right scaffolding (AGENTS.md, SKILLS.md, etc.), a fast and light model can do great things. And when it's done, you can still have the heavyweight model come in to clean up any messes.
- jbellis 29 minutes agoreally too bad that the codex models are so tightly coupled to the codex harness as to be useless for everything else
- alecco 1 hour agoThis could probably work amazingly with an orchestrator on 5.3-high and coding agents with Spark. But it would need some decent instructions for both.
- antirez 5 hours agoThe search for speed is vain. Often Claude Code Opus 4.6, on hard enough problems, can give the impression of acting fast without really making progress because of a lack of focus on what matters. Then you spin up the much slower GPT 5.3-Codex and it fixes everything in 3 minutes of doing the right thing.[-]
- mickeyp 5 hours agoI disagree. This is great for bulk tasks: renaming, finding and searching for things, etc[-]
- ghosty141 47 minutes agoWhat Codex often does for this is write a small Python script and execute it - to bulk rename, for example.
I agree that there is use for fast "simpler" models - there are many tasks where the regular codex-5.3 is not necessary - but I think it's rarely worth the extra friction of switching from regular 5.3 to 5.3-spark.
- Aurornis 4 hours agoI will always take more speed. My use of LLMs always comes back to doing something manually, from reviewing code to testing it to changing direction. The faster I can get the LLM part of the back-and-forth to complete, the more I can stay focused on my part.
- jusgu 4 hours agodisagree. while intelligence is important, speed is especially important when productionizing AI. it’s difficult to formalize the increase in user experience per increase in TPS but it most definitely exists.
- capevace 4 hours agoSeems like the industry is moving further towards having low-latency/high-speed models for direct interaction, and having slow, long thinking models for longer tasks / deeper thinking.
Quick/Instant LLMs for human use (think UI). Slow, deep thinking LLMs for autonomous agents.
[-]- gaigalas 4 hours agoYou always want faster feedback. If not a human leveraging the fast cycles, another automated system (eg CI).
Slow, deep tasks are mostly for flashy one-shot demos that have little to no practical use in the real world.
[-]- foobar10000 3 hours agoI mean, yes, one always does want faster feedback - cannot argue with that!
But some of the longer stuff - automating kernel fusion, etc, are just hard problems. And a small model - or even most bigger ones, will not get the direction right…
[-]- gaigalas 2 hours agoFrom my experience, larger models also don't get the direction right a surprising number of times. You just take more time to notice when it happens, or you start to be defensive (over-specing) to account for the longer waits. Even the simplest task can appear "hard" with that over-spec'd approach (like building a React app).
Iterating with a faster model is, from my perspective, the superior approach. Doesn't matter the task complexity, the quick feedback more than compensates for it.
- varispeed 4 hours agoAre they really thinking or are they sprinkling them with Sleep(x)?
- storus 3 hours agoAnyone using OpenClaw to manage a bunch of coding agents so that you only set the high-level vision and leave all the prompting, testing, debugging, forking to agents? If yes, how did you glue it all together? Are you using local models? What is the SOTA for what I can run locally with a 512GB M3 Ultra, 2x DGX Spark, 2x RTX Pro 6000 Max-Q in one machine and 1x RTX Pro 6000 WS in another machine?
- OsrsNeedsf2P 5 hours agoNo hint on pricing. I'm curious if faster is more expensive, given a slight trade-off in accuracy[-]
- sauwan 3 hours agoIt's either more expensive or dumber.
- wxw 4 hours agoGreat stuff. People are getting used to agents as the interface for everything, even work as simple as "change label X to label Y". More speed on that front is welcome. The Codex "blended mode" they refer to will be useful (similar to Claude Code bouncing between haiku and opus).
I imagine it's a win-win. This could significantly help their tokenomics.
The example showing a plan being generated instantaneously is interesting. Human understanding will end up as the last, true bottleneck.
- dalemhurley 2 hours agoThis is a win for agents, speed and intelligence is crucial to the loop. If the time and token cost is small you can iterate many times to correct mistakes.
Got to wonder why Wall Street is dumping NVIDIA.
[-]- SamDc73 2 hours agoI mean, they are only running a small version of Codex - can they run the full one? Or is the technology not there yet?
- mynti 3 hours agoWith the rough numbers from the blog post at ~1k tokens a second on Cerebras, it should be right at about the same size as GLM 4.7, which is also available at 1k tokens a second. And they say that it is a smaller model than the normal Codex model.
- Aeroi 57 minutes agoopen ai naming is a meme at this point
- hchak 3 hours agoCerebras out here catching dubs. Does anyone know if Groq is running DGX Cloud inference or am I tripping?
- cjbarber 5 hours agoIt'll be nice when there's smarter routing between models, or easier routing, so some things get sent to the fast model, some get sent to the cheap model, some get sent to the smart model, etc.
- rprend 3 hours agoDamn, this is the first thing to make me decide to try Codex, as a loyal Claude Code user.
- jannniii 2 hours agoThis would be interesting if it was an open weights model.
- alexhans 5 hours agoWhen I saw Spark my mind went to Apache Spark and wondered if we were learning all the lessons in orchestration of driver/worker and data shuffling from that space.
- Computer0 40 minutes ago128k context window!
- modeless 4 hours agoWhy are they obscuring the price? It must be outrageously expensive.[-]
- chaos_emergent 3 hours agoI think it's a beta so they're trying to figure out pricing by deploying it.
- throwup238 5 hours agoYour move, Anthropic.
(Yes, I know they released /fast last week, but I'm loving the constant one-upmanship)
[-]- bearjaws 3 hours ago/fast is insanely expensive.
Last night it got stuck in a loop (in plan mode, I use vanilla CC) and burnt through $22 in 15 minutes.
- dude250711 5 hours agoThey asked Google to cover them this time. They will owe them a reciprocal favour.
- rvz 4 hours ago
- anonzzzies 4 hours agoBeen using glm 4.7 for this with opencode. Works really well.
- system2 3 hours agoI stopped using OpenAI tools recently after they increased the censorship. I can't even tell it to read a screencapture software I am building because it thinks I might use it for evil purposes.
- nusl 4 hours agoThese graphs are really weird. One only shows 30-60% range with the model(s) close to 60%, the other shows 80% but the top model is at 77%.[-]
- desireco42 2 hours agoIs it not available in Codex? I think this is fantastic and can't wait to try it. This is exactly the use case I need: something fast that performs based on my instructions.
Cerebras is a winner here.
[-]- arpinum 2 hours agoupdate codex, it's there.
- tsss 4 hours agoDoes anyone want this? Speed has never been the problem for me, in fact, higher latency means less work for me as a replaceable corporate employee. What I need is the most intelligence possible; I don't care if I have to wait a day for an answer if the answer is perfect. Small code edits, like they are presented as the use case here, I can do much better myself than trying to explain to some AI what exactly I want done.[-]
- vessenes 3 hours agoYes, we want this.
- cjbarber 5 hours agoFor a bit, waiting for LLMs was like waiting for code to compile: https://xkcd.com/303/
> more than 1000 tokens per second
Perhaps, no more?
(Not to mention, if you're waiting for one LLM, sometimes it makes sense to multi-table. I think Boris from Anthropic says he runs 5 CC instances in his terminal and another 5-10 in his browser on CC web.)
- deskithere 4 hours agoAnyway token eaters are upgrading their consumption capabilities.
- allisdust 5 hours agoNormal Codex itself is subpar compared to Opus. This might be even worse.
- cactusplant7374 3 hours agoI was really hoping it would support codex xhigh first.
- jauntywundrkind 5 hours agoWasn't aware there was an effort to move to websockets. Is there any standards work for this, or is this just happening purely within the walled OpenAI garden?
> Under the hood, we streamlined how responses stream from client to server and back, rewrote key pieces of our inference stack, and reworked how sessions are initialized so that the first visible token appears sooner and Codex stays responsive as you iterate. Through the introduction of a persistent WebSocket connection and targeted optimizations inside of Responses API, we reduced overhead per client/server roundtrip by 80%, per-token overhead by 30%, and time-to-first-token by 50%. The WebSocket path is enabled for Codex-Spark by default and will become the default for all models soon.
- behnamoh 5 hours agoIn my opinion, they solved the wrong problem. The main issue I have with Codex is that the best model is insanely slow, except at nights and weekends when Silicon Valley goes to bed. I don't want a faster, smaller model (already have that with GLM and MiniMax). I want a faster, better model (at least as fast as Opus).
When they partnered with Cerebras, I kind of had a gut feeling that they wouldn't be able to use their technology for larger models because Cerebras doesn't have a track record of serving models larger than GLM.
It pains me that five days before my Codex subscription ends, I have to switch to Anthropic because despite getting less quota compared to Codex, at least I'll be able to use my quota _and_ stay in the flow.
But even Codex's slowness aside, it's just not as good of an "agentic" model as Opus: here's what drove me crazy: https://x.com/OrganicGPT/status/2021462447341830582?s=20. The Codex model (gpt-5.3-xhigh) has no idea about how to call agents smh
[-]- properbrew 4 hours agoI was using a custom skill to spawn subagents, but it looks like the `/experimental` feature in codex-cli has the SubAgent setting (https://github.com/openai/codex/issues/2604#issuecomment-387...)[-]
- behnamoh 4 hours agoYes, I was using that. But the prompt given to the agents is not correct. Codex sends a prompt to the first agent and then sends the second prompt to the second agent, but in the second prompt it references the first prompt, which is completely incorrect.
- kachapopopow 4 hours agoThat's why I built oh-my-singularity (based on oh-my-pi - see the front page from can.ac): https://share.us-east-1.gotservers.com/v/EAqb7_Wt/cAlknb6xz0...
video is pretty outdated now, this was a PoC - working on a dependency free version.
- cjbarber 5 hours ago> In my opinion, they solved the wrong problem. The main issue I have with Codex is that the best model is insanely slow, except at nights and weekends when Silicon Valley goes to bed. I don't want a faster, smaller model (already have that with GLM and MiniMax). I want a faster, better model (at least as fast as Opus).
It's entirely possible that this is the first step and that they will also do faster better models, too.
[-]- behnamoh 5 hours agoI doubt it; there's a limit on model size that can be supported by Cerebras tech. GPT-5.3 is supposedly +1T parameters...[-]
- joshuastuden 1 hour agoUm, no. There's no limit on model size for Cerebras hardware. Where do you come up with this stuff?
- re-thc 5 hours ago> In my opinion, they solved the wrong problem
> I don't want a faster, smaller model. I want a faster, better model
Will you pay 10x the price? They didn't solve the "wrong problem". They did what they could with the resources they have.
- cowpig 3 hours ago> Today, we’re releasing
Releasing for real? Is it an open model?
- rvz 4 hours ago> Today, we’re releasing a research preview of GPT‑5.3-Codex-Spark, a smaller version of GPT‑5.3-Codex, and our first model designed for real-time coding. Codex-Spark marks the first milestone in our partnership with Cerebras, which we announced in January .
Nevermind. [0]