Making WebAssembly a first-class language on the Web (hacks.mozilla.org)

281 points by mikece 16 hours ago | 116 comments

  • mananaysiempre 4 hours ago
    This (appears as though it) all could have happened half a decade ago had the interface-types people not abandoned[1,2] their initial problem statement of WebIDL support in WebAssembly in favour of building Yet Another IDL while declaring[3] the lack of DOM access a non-issue. (I understand the market realities that led to this, I think. This wasn’t a whim or pure NIH. Yet I still cannot help but lament the lost time.)

    Better late than never I guess.

    [1] https://github.com/WebAssembly/interface-types/commit/f8ba0d...

    [2] https://wingolog.org/archives/2023/10/19/requiem-for-a-strin...

    [3] https://queue.acm.org/detail.cfm?id=3746174

    • eqrion 4 hours ago
      I worked on the original interface-types proposal a little bit before it became the component model. Two goals that were added were:

        1. Support non-Web APIs
        2. Support limited cross language interop
      
      WebIDL is the union of JS and Web APIs, and while expressive, it has many concepts that conflict with those goals. Component interfaces take more of an intersection approach that isn't as expressive, but is much more portable.

      I personally have always cared about DOM access, but the Wasm CG has been really busy with higher priority things. Writing this post was sort of a way to say that at least some people haven't forgotten about this, and still plan on working on this.

      • mananaysiempre 3 hours ago
        > Two goals that were added were: 1. Support non-Web APIs. 2. Support limited cross-language interop.

        I mean, surely it does not come as a surprise to anyone that either of these is a huge deal, let alone both. It seems clear that non-Web runtimes have had a huge influence on the development priorities of WebAssembly. Not inherently a bad thing, but in this case it came at the expense of the actual Web.

        > WebIDL is the union of JS and Web API's, and while expressive, has many concepts that conflict with those goals.

        Yes, another part of the problem, unrelated to the WIT story, seems to have been the abandonment of the idea that <script> could be something other than JavaScript and that the APIs should try to accommodate that, which had endured for a good while based on pure idealism. That sure would have come in useful here when other languages became relevant again.

        (Now with the amputation of XSLT as the final straw, it is truly difficult to feel any sort of idealism from the browser side, even if in reality some of the developers likely retain it. Thank you for caring and persisting in this instance.)

    • davexunit 3 hours ago
      I really want stringref to make a comeback.
      • spankalee 12 minutes ago
        My god yes.

        I'm building a new Wasm GC-based language and I'm trying to make binaries as small as possible to target use cases like a module-per-UI-component, and strings are the biggest hindrance to that. Both for the code size and the slow JS interop.

  • ilaksh 1 hour ago
    I love WebAssembly components and that's great progress. But I feel like everyone is missing a golden opportunity here to take apart the giant OS-sized web API and break some of it out into smaller standard or subscribable subsets that also don't try to mix information presentation and applications in a forced way.

    Example subsets:

    - (mainly textual) information sharing

    - media sharing

    - application sharing, with a small standard interface like WASI 2 or, better yet, one including some graphics

    - complex application sharing with networking

    Smaller subsets of the giant web API would make for a better security situation and most importantly make it feasible for small groups to build out "browser" alternatives for information sharing, media or application sharing.

    This is likely to not be pursued though because the extreme size of the web API (and CSS etc.) is one of the main things that protects browser monopolies.

    Even further, create a standard webassembly registry and maybe allow people to easily combine components without necessarily implementing full subsets.

    Do webassembly components track all of their dependencies? Will they assume some giant monolithic API like the DOM will be available?

    What you're doing is essentially creating a distributed operating system definition (which is what the web essentially is). It can be designed in such a way that people can create clients for it without implementing massive APIs themselves.

  • steve_adams_86 4 hours ago
    The WASM cliff is very real. Every time I go to use it, because of the complexity of the tool chain and process of going from zero to anything at all, I feel like I'm already paying a cognitive tax. I worry that I should update my tooling, look into the latest and greatest, understand the tooling better, etc... It would be incredible to see that improved.

    The difference in perf without glue is crazy. But not surprising at all. This is one of the things I almost always warn people about, because it's such a glaring foot gun when trying to do cool stuff with WASM.

    The thing with components that might be addressed (maybe I missed it) is how we'd avoid introducing new complexity with them. Looking through the various examples of implementing them with different languages, I get a little spooked by how messy I can see this becoming. Given that these are early days and there's no clearly defined standard, I guess it's fair that things aren't tightened up yet.

    The Go example (https://component-model.bytecodealliance.org/language-suppor...) is kind of insane once you generate the files. For the consumer the experience should be better, but as a component developer, I'd hope the tooling and outputs were eventually far easier to reason about. And this is a happy path, without any kind of DOM glue or interaction with Web APIs. How complex will that get?

    I suppose I could sum up the concern as shifting complexity rather than eliminating it.

    • eqrion 4 hours ago
      I agree that a lot of the tooling is still in its early days. There has also been a lot of churn as the wasm component spec has changed. We personally have a goal that in most cases web developers won't need to write WIT and can just use Web APIs as if they were a library. But it's early days.
      • davexunit 3 hours ago
        I am excited by the prospect of booting Wasm binaries without any JS glue, but when I've looked at the documentation for the component model and WIT it says that resources are references passed using a borrow checking model. That would be a serious downgrade compared to the GC-managed reference passing I can do today with Wasm GC. Do you know if there are any plans to resolve this mismatch?
      • j45 1 hour ago
        The tooling has been in its early days for a long time. As quickly as that can improve, so will the uptake. The technology itself is quite capable.
  • ventuss_ovo 1 hour ago
    The phrase "first-class" matters here because most developers do not reject a platform over peak performance; they reject it over friction. If the happy path still requires language-specific glue, generated shims, and a mental model of two runtimes, then WebAssembly remains something you reach for only when the pain is already extreme.

    What would really change perception is not just better benchmarks, but making the boring path easy: compile with the normal toolchain, import a Web API naturally, and not have to become a part-time binding engineer to build an ordinary web app.

  • koenschipper 2 hours ago
    This article perfectly captures the frustration of the "WebAssembly wall." Writing and maintaining the JS glue code—or relying on opaque generation tools—feels like a massive step backward when you just want to ship a performant module.

    The 45% overhead reduction in the Dodrio experiment by skipping the JS glue is massive. But I'm curious about the memory management implications of the WebAssembly Component Model when interacting directly with Web APIs like the DOM.

    If a Wasm Component bypasses JS entirely to manipulate the DOM, how does the garbage collection boundary work? Does the Component Model rely on the recently added Wasm GC proposal to keep DOM references alive, or does it still implicitly trigger the JS engine's garbage collector under the hood?

    Really excited to see this standardize so we can finally treat Wasm as a true first-class citizen.

    • hinkley 25 minutes ago
      I’m wondering if the recent improvements in sending objects through sendMessage in v8 and Bun change the math here enough to be good enough.

      SendMessage itself is frustratingly dumb. Your options are either excessively bit-fiddly or obnoxiously slow. I think for data you absolutely know you're sending over a port there should be an arena allocator so you can do single-copy sends, versus whatever we have now (3 copies? Four?). It's enough to frustrate the use of worker threads for offloading things from the event loop. It's an IPC wall, not a WASM wall.

      Instead of sending bytes you should transfer a page of memory, or several.

  • koolala 4 hours ago
    Every new standard today doesn't care about being clean and simple to use. They all maximize the JS boilerplate needed to make a basic example work. Everything is designed today for 'engineers' and not 'authors' without any friendly default workflow. I'm glad they still care about this.
  • lich_king 3 hours ago
    The web is fascinating: we started with a seemingly insane proposition that we could let anyone run complex programs on your machine without causing profound security issues. And it turned out that this was insane: we endured 20 years of serious browser security bugs caused chiefly by JavaScript. I'm not saying it wasn't worth it, but it was also crazy.

    And now that we're getting close to having the right design principles and mitigations in place, and 0-days in JS engines are getting expensive and rare... we're set on ripping it all out and replacing it with a new and even riskier execution paradigm.

    I'm not mad, it's kind of beautiful.

    • leptons 2 hours ago
      >20 years of serious browser security bugs caused chiefly by JavaScript

      I think you may be confusing Javascript the language, with browser APIs. Javascript itself is not insecure and hasn't been for a very long time, it's typically the things it interfaces with that cause the security holes. Quite a lot of people still seem to confuse Javascript with the rest of the stuff around it, like DOM, browser APIs, etc.

    • traderj0e 3 hours ago
      I only got mad when people wanted to add browser features that clearly break sandboxing like WebUSB. How does wasm break this?
    • Retr0id 3 hours ago
      What makes WASM execution riskier than JS?
      • observationist 2 hours ago
        Novelty - JS has had more time and effort spent in hardening it across the browsers; WASM isn't as thoroughly battle-tested, so there will be novel attacks and exploits.
        • JoshTriplett 2 hours ago
          That would be more true if WebAssembly didn't share so much sandboxing infrastructure with JS. If anything, I'd argue that WebAssembly is a much smaller surface area than JavaScript, and I think that will still be true even when the DOM is directly exposed to WebAssembly.
          • lich_king 2 hours ago
            I don't think it's "much smaller" once you aim for feature parity (DOM). It might be more regular than an implementation of a higher-level language, but we're not getting rid of JS.

            By the same token, was Java or Flash more dangerous than JS? On paper, no - all the same, just three virtual machines. But having all three in a browser made things fun back in the early 2000s.

            • cogman10 1 hour ago
              It is much smaller.

              WASM today has no access to anything that isn't given to it from JS. That means that the only possible places to exploit are bugs in the JIT, something that exists as well for JavaScript.

              Even if WASM gets bindings to the DOM, its surface area is still smaller, as JavaScript has access to a bunch more APIs that aren't the DOM. For example, WebUSB.

              And even if WASM gets feature parity with Javascript, it will only be as dangerous as Javascript itself. The main actual risk for WASM would be the host language having memory safety bugs (such as C++).

              So why were Java and Flash (and ActiveX, NaCl) dangerous in the browser?

              The answer is quite simple. Those VMs had dangerous components in them. Both Java and Flash had the ability to reach out and scribble on a random DLL in the operating system, or to upload a random file from the user folder. Java relied HEAVILY on the security manager stopping you from doing that; IDK what Flash used. JavaScript has no such capability (well, at least it didn't when Flash and Java were in the browser, IDK about now).

              For Java, you were running in a full JVM, which means a single exploit gave you the power to do whatever the JVM was capable of doing. For JavaScript, an exploit still bound you to the JavaScript sandbox. That mostly meant that you might expose information for the current webpage.

        • kccqzy 2 hours ago
          There is very significant overlap between browsers' implementations of JS and WASM. For example, in V8 the TurboFan compiler works for both JS and WASM. Compilation aside, all the sandboxing work done on JS applies to WASM too. This isn't NaCl.
        • embedding-shape 2 hours ago
          > Novelty - JS has had more time and effort spent in hardening it

          Taking this argument to its extreme, does this mean that introducing new technology always decreases technology? Because even if the technology would be more secure, just the fact that it's new makes it less secure in your mind, so then the only favorable move is to never adopt anything new?

          Presumably you have to be aware of some inherent weakness in WASM to feel like it isn't worth introducing; otherwise, shouldn't we try to adopt more safe and secure technologies?

          • seangrogg 46 minutes ago
            > Taking this argument to its extreme, does this mean that introducing new technology always decreases technology?

            I assume you mean "decreases security" by context. And in that case - purely from a security standpoint - generally speaking the answer is yes. This is why security can often be a PITA when you're trying to adopt new things and innovate, meanwhile by default security wants things that have been demonstrated to work well. It's a known catch-22.

          • fenykep 2 hours ago
            To be fair, I think this could be true for certain industries/applications. And while I obviously don't agree with the extreme example, any new technology, especially if it brings a new paradigm, has more unknown unknowns, which carry potential vulnerabilities.
        • Retr0id 2 hours ago
          On one hand, yes, new attack surface is new attack surface. But WASM has been in browsers for almost a decade now.
          • lich_king 2 hours ago
            Without the bindings this talks about, so it really couldn't do nearly as much.
  • thefounder 4 hours ago
    This is the right direction. Another important bit, I think, is the GC integration. Many languages such as Go and C# don't do well on wasm due to the GC. They have to ship a GC as well due to the lack of various GC features (e.g. interior pointers).
    • traderj0e 3 hours ago
      Probably needs to be fixed by bundling runtimes for things like Go, or bringing back cross-website caching in some secure way if that's possible
      • JoshTriplett 2 hours ago
        That's an orthogonal problem. First it needs to be possible and straightforward to write GCed languages in the sandbox. Second, GCed languages need to be willing to fit with the web/WASM GC model, which may not exactly match their own GC and which won't use their own GC. And after that, languages with runtimes could start trying to figure out how they might reduce the overhead of having a runtime.
        • cogman10 1 hour ago
          > Second, GCed languages need to be willing to fit with the web/WASM GC model

          I think most languages could pretty easily use WASM GC. The main issue comes around FFI. That's where things get nasty.

          • pjmlp 18 minutes ago
            WasmGC doesn't support interior pointers and is quite primitive in its available set of operations. This is quite relevant if you care about performance, as it would be a regression in many languages; hence it has largely been ignored, other than by the runtimes that were part of the announcement.
  • swiftcoder 5 hours ago
    Nice to see momentum here. Even outside of direct access to WebAPIs, having the ability to specify interfaces for WASM modules is a big deal, and unlocks all sort of cool options, like sandboxed WASM plugins for native apps...
  • skybrian 4 hours ago
    At a high level this sounds great. But looking into the details about how the component model will be implemented, it looks very complicated due to concurrency:

    https://github.com/WebAssembly/component-model/blob/main/des...

    • eqrion 4 hours ago
      The concurrency part of the C-M is complicated (I think for inherent reasons), but won't be exposed to end users. It's basically defining an API that language toolchains can use to coordinate concurrency.

      For end users, they should just see their language's native concurrency primitives (if any). So if you're running Go, it'll be goroutines; JS would use promises; Rust would have futures.

    • phickey 4 hours ago
      Real programs, whether native JavaScript or in any other language that targets Wasm, have concurrency. Would you rather the component model exclude all concurrent programs, and fail to interact with concurrent JavaScript? The component model is meeting the web and programmers where they're at. Unless you're one of the few people implementing the low-level bindings between components and guest or host languages, you don't have to ever read the CM spec or care about the minutiae of how it gets implemented.
  • jjcm 2 hours ago
    This is a great step, if only because it enforces more convention for the "right" way to do things by providing a simpler mechanism for this.

    WRT WebAssembly Components though, I do wish they'd have gone with a different name, as its definition becomes cloudy when Web Components exist, which have a very different purpose. Group naming for open source is, unfortunately, very hard. Everyone has different usages of words and understandings of the wider terms being used, so this kind of overlap happens often.

    I'd be curious if this will get better with LLM overseers of specs, who have wider view of the overall ecosystem.

  • exabrial 3 hours ago
    I'd really like to be able to run _any_ language in the browser. WASM is a great first step.
    • joshuaissac 2 hours ago
      Internet Explorer used to support any language that Windows Script Host could run. By default, that was JScript and VBScript, but there were third-party engines for Python, Perl, Ruby, Lua, and many others.

      Possibly disabled now, as they announced VBScript would be disabled in 2019.

  • bikamonki 51 minutes ago
    Do programmers actually write wasm, or do automatic tools port/compile other languages to wasm?
    • ivanjermakov 41 minutes ago
      The ratio of web developers writing wasm is even lower than the ratio of systems developers writing asm.
  • hinkley 43 minutes ago
    Gretchen. Stop trying to make DOMless WASM happen. It’s not going to happen.
  • haberman 3 hours ago
    > Thankfully, there is the esm-integration proposal, which is already implemented in bundlers today and which we are actively implementing in Firefox.

    From the code sample, it looks like this proposal also lets you load WASM code synchronously. If so, that would address one issue I've run into when trying to replace JS code with WASM: the ability to load and run code synchronously, during page load. Currently WASM code can only be loaded async.

    • bvisness 3 hours ago
      This is not strictly true; there are synchronous APIs for compiling Wasm (`new WebAssembly.Module()` and `new WebAssembly.Instance()`) and you can directly embed the bytecode in your source file using a typed array or base64-encoded string. Of course, this is not as pleasant as simply importing a module :)
  • lasgawe 3 hours ago
    Agree with the points. But when reading this, it seems much more complicated than using JavaScript on the web when developing real-world applications. However I think that will not be an issue because of AI.
  • ngrilly 3 hours ago
    We could finally write programs for the browser in any language that compiles to WebAssembly. And even mix and match multiple languages. It would be amazing.
  • throwaway2027 4 hours ago
    Great to see it happening finally. Can we also get compute shaders with WebGL2 now? I don't want to move everything to WebGPU just for compute shaders and I don't know why they kept rejecting the proposals.
  • Tepix 3 hours ago
    WASM with DOM support will be great. Unfortunately it will also be great for obfuscation and malware.
    • Retr0id 3 hours ago
      You can already compile malware to obfuscated asm.js. If anything, WASM blobs are easier to reverse engineer than obfuscated JS - good luck writing a Ghidra plugin for JS source.
      • throwaway12pol 2 hours ago
        What about obfuscated WASM blobs? At least obfuscated JS is still basically source code being interpreted, with WASM we will be running proprietary obfuscated binaries in the browser.
        • Retr0id 2 hours ago
          I'd rather deal with an obfuscated WASM blob than obfuscated JS.
          • throwaway12pol 1 hour ago
            Why is that? With obfuscated JS you can instantly create ASTs, easily patch and preview the results. We have codemodding tools for mass patching and analysis. With WASM, you can theoretically have a future anti-tamper corporation with a solution to actively obfuscate binaries and antagonize reverse engineers, like we have today with desktop binaries.
            • Retr0id 1 hour ago
              ASTs are pretty useless once it's been through a control-flow flattening obfuscation pass. At the end of the day it's just one representation vs another, but there are a lot more existing tools for dealing with binary reverse engineering.
  • barelysapient 3 hours ago
    Wow. We need this so bad.
  • shevy-java 2 hours ago
    > Yet, it still feels like something is missing that’s holding WebAssembly back from wider adoption on the Web.

    > There are multiple reasons for this, but the core issue is that WebAssembly is a second-class language on the web

    It would be nice if WebAssembly would really succeed, but I have to be honest: I gave up thinking that it ever will. Too many things are unsolved here. HTML, CSS and JavaScript were a success story. WebAssembly is not; it is a niche thing and getting out of that niche is now super-hard.

  • dana321 1 hour ago
    This is a brilliant idea for webassembly, implementing the core browser features as libraries - they should do it.

    (though i do like the open code nature of the internet even if a lot of the javascript source code is unreadable and/or obfuscated)

  • csmantle 4 hours ago
    Another important aspect is that, without an external library like `wabt`, I can't just open Notepad, write some inline WASM/WAT in HTML and preview it in a browser, in the same way that HTML+CSS+JS works. Having to obtain a full working toolchain is not very friendly for quick prototyping and demonstrative needs.
    • phickey 3 hours ago
      WebAssembly is a compiler target, not a human-authored language. There is exactly one audience of people for writing wat by hand: spec and tutorial authors and readers. Anyone actually developing an application they want to use will use a compiler to produce WebAssembly. Prove me wrong and write Roller Coaster Tycoon in raw wasm if you want, but having written and maintained wasm specs and toolchains for nearly a decade, I will never write any wat outside of a spec or tutorial.
      • JoshTriplett 2 hours ago
        There is exactly one case where I'd like to write "raw wat" (and for that matter "raw wasm bytecode"): I'd love to do something like the "bootstrappable builds" project for wasm, starting with a simple wat-to-bytecode parser/translator written in raw bytecode, then some tools written in raw wat for bootstrapping into other languages. :)
    • saghm 3 hours ago
      The same limitation exists with "non-web" assembly. It turns out that having languages that compile to assembly makes a lot more sense for almost every real-world use case than writing it by hand.
  • patchnull 3 hours ago
    [flagged]
    • armchairhacker 2 hours ago
      Another example of a top comment that was definitely written by an LLM.

      And to be clear, style isn't the only problem. This comment can be summarized as "WebAssembly can now interact with the DOM directly instead of through JavaScript, making it the better choice for more types of problems". One sentence instead of a paragraph of cliches ("...change how people think about this...chicken-and-egg loop..."), uncanny phrases ("...the hot-path optimization niche"), and inaccurate claims ("...the only viable use cases were compute-heavy workloads like codecs and crypto").

      (For anyone who doesn't believe me, check the user's comment history)

      • etaioinshrdlu 1 hour ago
        What do you think is the incentive to LLM-post on HN (or any site)?
        • homebrewer 1 hour ago
          The usual answer to this question is building out realistic-looking accounts for later spam and/or astroturfing.
        • aaroninsf 1 hour ago
          The incentive of the human who deployed it, at one remove or another, would require knowing more. But the more likely cases are easy to guess at, e.g., someone is playing with OpenClaw. I'd guess "someone is playing with OpenClaw and intends to write something about it to boost their brand; could be a Show HN, could be a LinkedIn screed they hope goes viral."

          Could be for fun. I remember fun.

      • hypeatei 2 hours ago
        Woah, you're right. The account was created a day ago and writes walls of text everywhere (sometimes multiple on the same thread!)
    • tcfhgj 2 hours ago
      > no DOM access meant the only viable use cases were compute-heavy workloads like codecs and crypto,

      no, it didn't mean that, because the overhead is not a deal breaker:

      1) you don't have to do the glue code (libs can do it for you)

      2) there's overhead due to glue, but the overhead is so small that WASM web frameworks can easily compete with fast JS frameworks in DOM-heavy scenarios.

      Source: analysis by the creator of Leptos (a web framework based on WASM): https://www.youtube.com/watch?v=4KtotxNAwME

    • flohofwoe 2 hours ago
      > but the end state where you import a browser API like any other library in your language is genuinely simpler than the current JS FFI dance.

      Tbf, Emscripten solved this problem long ago - I don't quite understand what the problem is for other language ecosystems.

      The JS shim is still there, but you don't need to deal with it, you just include a C header and "link with a library".

      Some of the Emscripten-specific C APIs are also much saner than their web counterparts, which is an important aspect that would be lost with an automatic binding approach. And EM_JS (e.g. directly embedding JS code into C/C++ files) is just pure bliss, because it allows to easily write 'non-standard' binding layers that go beyond a simple 1:1 mapping.

      Those features won't go away of course, I just feel like the work could be spent on solutions that provide more 'bang for the buck' (yeah, I've never been a fan of the component model to begin with).

    • nikeee 2 hours ago
      > the only viable use cases were compute-heavy workloads like codecs and crypto,

      I tried using it for crypto, but WASM does not have instructions for crypto. So it basically falls back to being non-hw-accelerated. I tried to find out why, and the explanation seems to be that it's not needed because JS has a `crypto` API which uses hw intrinsics.

    • embedding-shape 2 hours ago
      > meant the only viable use cases were compute-heavy workloads like codecs and crypto

      And games: the web is now a viable platform for a huge range of them, albeit not the top of the range, AAA and all that (yet?). Also some new graphical editors are taking advantage of it, Figma probably being the most famous example so far.

    • usefulposter 2 hours ago
      Generated comments are not welcome here. Please respect the guidelines of the community.

      https://news.ycombinator.com/newsguidelines.html#generated

      >Don't post generated comments or AI-edited comments. HN is for conversation between humans.

    • shevy-java 2 hours ago
      It will take a LOT more to make WebAssembly win now.

      People have the impression that WebAssembly has failed. After so many years, I sort of agree with that notion. WebAssembly is soon 10 years old by the way:

      https://en.wikipedia.org/wiki/WebAssembly

  • mitchbob 15 hours ago
    Discussed 12 days ago (13 comments):

    https://news.ycombinator.com/item?id=47167944

    • tomhow 5 hours ago
      We've decided to give it another try as it didn't get much front page time or discussion.
      • flohofwoe 4 hours ago
        It's still not a great idea IMHO ;)

        (there was also some more recent discussion in here: https://news.ycombinator.com/item?id=47295837)

        E.g. it feels like a lot of over-engineering just to get 2x faster string marshalling, and this is only important for exactly one use case: for creating a 1:1 mapping of the DOM API to WASM. Most other web APIs are by far not as 'granular' and string heavy as the DOM.

        E.g. if I mainly work with web APIs like WebGL2, WebGPU or WebAudio, I seriously doubt that the component model approach will cause a 2x speedup; the time spent in the JS shim is already negligible compared to the time spent inside the API implementations, and I don't see how the component model can help with the actually serious problems (like WebGPU mapping GPU buffers into separate ArrayBuffer objects which need to be copied in and out of the WASM heap).

        It would be nice to see some benchmarks for WebGL2 and WebGPU with tens-of-thousands of draw calls, I seriously doubt there will be any significant speedup.

        • eqrion 4 hours ago
          I agree there are some cases that won't see a huge boost, but also DOM performance is a big deal and bottleneck for a lot of applications.

          And besides performance, I think there are developer experience improvements we could get with native wasm component support (problems 1-3). TBH, I think developer experience is one of the most important things to improve for wasm right now. It's just so hard to get started or integrate with existing code. Once you've learned the tricks, you're fine. But we really shouldn't be requiring everyone to become an expert to benefit from wasm.

        • glenstein 3 hours ago
          With Google now pushing developer certification, and Android and iOS practically being mandatory for certain basic functions like accessing your bank or certain government services, WebAssembly would make web apps first-class citizens that aren't subject to mobile operating system lockdown.

          Being able to compete on efficiency with native apps is an incredible example of purposeful vision driving a significant standard, exactly the kind of thing I want for the future of the web and an example of why we need more stewards like Mozilla.

          [-]
          • flohofwoe 2 hours ago
            FWIW my home computer emulators [1] already run at about the same performance (give or take 5..10% depending on CPU type) in WASM versus their natively compiled counterparts.

            Performance is already as good as it gets for "raw" WASM, the proposed component model integration will only help when trying to use the DOM API from WASM. But I think there must be less complex solutions to accelerate this specific use case.

            [1] https://floooh.github.io/tiny8bit/

          • feznyng 2 hours ago
            How does WASM solve the platform lockdown problem? That WASM will still run in a third-party app that is subject to those restrictions. The system interface exposed within that runtime is still going to be limited, in the same way a native app can't get real access to the filesystem, etc.
          • pjmlp 16 minutes ago
            Except that the Web is basically ChromeOS Platform nowadays, thanks to all those folks targeting only Chrome, complaining about Safari, and shipping Electron crap.
          • tadfisher 2 hours ago
            Removing JS glue doesn't enable anything you couldn't do before. Those banks and governments still need to write the web apps, and they need to uncheck the security box which requires a hardware-attested environment.
        • JoshTriplett 2 hours ago
          > just to get 2x faster string marshalling

          That is a useful benefit, not the only benefit. I think the biggest benefit is not needing glue, which means languages don't need to agree on any common set of JS glue, they can just directly talk DOM.

        • dbdr 3 hours ago
          If it "only" speeds up DOM access, that's massive in itself. DOM is obviously a crucial element when running inside a browser.
  • devwastaken 3 hours ago
    [flagged]
  • zb3 2 hours ago
    No no no, wasm has shitty speed if you want to emulate something (it doesn't even support JIT); the problem is in its architecture (tons of restrictions like no self-modifying code, no jumps).. this can't be fixed, we need something real, something like WebKVM.
    [-]
    • titzer 2 hours ago
      On the web you can dynamically create new Wasm modules and use JS APIs to load them, though there are ergonomic issues. There are per-module costs and systems like CheerpJ and CheerpX currently do batching of multiple functions into a module to mitigate the per-module costs.

      I've created a proposal to add a fine-grained JIT interface: https://github.com/webassembly/jit-interface

      It allows generating new code one function at a time, and provides a robust way to control what the new code can access within the generating module.
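      As a sketch of the per-module path that already works today (without the proposed fine-grained JIT interface), JS can assemble a module's bytes at runtime and instantiate it synchronously; the bytes below hand-encode a single exported `add` function:

```javascript
// Runs in any browser or Node: build a tiny wasm module at runtime.
// The bytes encode a module exporting one function, add(i32, i32) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // \0asm magic, version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
// instance.exports.add(2, 3) → 5
```

      This is exactly the path with the per-module costs mentioned above that systems like CheerpJ batch around.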

  • hexo 4 hours ago
    [flagged]
    [-]
    • dang 1 hour ago
      Could you please stop posting unsubstantive comments? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

      If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.

  • pizlonator 5 hours ago
    It's simple.

    JavaScript is the right abstraction for running untrusted apps in a browser.

    WebAssembly is the wrong abstraction for running untrusted apps in a browser.

    Browser engines evolve independently of one another, and the same web app must be able to run in many versions of the same browser and also in different browsers. Dynamic typing is ideal for this. JavaScript has dynamic typing.

    Browser engines deal in objects. Each part of the web page is an object. JavaScript is object oriented.

    WebAssembly is statically typed and its most fundamental abstraction is linear memory. It's a poor fit for the web.

    Sure, modern WebAssembly has GC'd objects, but that breaks WebAssembly's main feature: the ability to have native compilers target it.

    I think WebAssembly is doomed to be a second-class citizen on the web indefinitely.

    [-]
    • eqrion 4 hours ago
      I'm not sure I follow this.

      > WebAssembly is the wrong abstraction for running untrusted apps in a browser

      WebAssembly is a better fit for a platform running untrusted apps than JS. WebAssembly has a sandbox and was designed for untrusted code. It's almost impossible to statically reason about JS code, and so browsers need a ton of error prone dynamic security infrastructure to protect themselves from guest JS code.

      > Browser engines evolve independently of one another, and the same web app must be able to run in many versions of the same browser and also in different browsers. Dynamic typing is ideal for this. JavaScript has dynamic typing.

      There are dynamic languages, like JS/Python that can compile to wasm. Also I don't see how dynamic typing is required to have API evolution and compat. Plenty of platforms have static typed languages and evolve their API's in backwards compatible ways.

      > Browser engines deal in objects. Each part of the web page is an object. JavaScript is object oriented

      The first major language for WebAssembly was C++, which is object oriented.

      To be fair, there are a lot of challenges to making WebAssembly first class on the Web. I just don't think these issues get to the heart of the problem.

      [-]
      • perfmode 4 hours ago
        There's something real in the impedance mismatch argument that I think the replies here are too quick to dismiss. The browser's programming model is fundamentally about a graph of objects with identity, managed by a GC, mutated through a rich API surface. Linear memory is genuinely a poor match for that, and the history of FFI across mismatched memory models (JNI, ctypes, etc.) tells us this kind of boundary is where bugs and performance problems tend to concentrate. You're right to point at that.

        Where I think the argument goes wrong is in treating "most websites don't use WASM" as evidence that WASM is a bad fit for the web. Most websites also don't use WebGL, WebAudio, or SharedArrayBuffer. The web isn't one thing. There's a huge population of sites that are essentially documents with some interactivity, and JS is obviously correct for those. Then there's a smaller but economically significant set of applications (Figma, Google Earth, Photoshop, game engines) where WASM is already the only viable path because JS can't get close on compute performance.

        The component model proposal isn't trying to replace JS for the document-web. It's trying to lower the cost of the glue layer for that second category of application, where today you end up maintaining a parallel JS shim that does nothing but shuttle data across the boundary. Whether the component model is the right design for that is a fair question. But "JS is the right abstraction" and "WASM is the wrong abstraction" aren't really in tension, because they're serving different parts of the same platform.

        The analogy I'd reach for is GPU compute. Nobody argues that shaders should replace CPU code for most application logic, but that doesn't make the GPU a "dud" or a second-class citizen. It means the platform has two execution models optimized for different workloads, and the interesting engineering problem is making the boundary between them less painful.

        [-]
        • saghm 3 hours ago
          > The browser's programming model is fundamentally about a graph of objects with identity, managed by a GC, mutated through a rich API surface.

          Even more to the point, for the past couple of decades the browser's programming model has just been "write JavaScript". Of course it's going to fit JavaScript better than something else right now! That's an emergent property though, not something inherent about the web in the abstract.

          There's an argument to be made that we shouldn't bother trying to change this, but it's not the same as arguing that the web can't possibly evolve to support other things as well. In other words, the current model for web programming we have is a local optimum, but statements like the one at the root of this comment chain talk like it's a global one, and I don't think that's self-evident. Without addressing whether they're opposed to the concept or the amount of work it would take, it's hard to have a meaningful discussion.

      • pizlonator 4 hours ago
        > WebAssembly has a sandbox and was designed for untrusted code.

        So does JavaScript.

        > It's almost impossible to statically reason about JS code, and so browsers need a ton of error prone dynamic security infrastructure to protect themselves from guest JS code.

        They have that infrastructure because JS has access to the browser's API.

        If you tried to redesign all of the web APIs in a way that exposes them to WebAssembly, you'd have an even harder time than exposing those APIs to JS, because:

        - You'd still have all of the security troubles. The security troubles come from having to expose API that can be called adversarially and can pass you adversarial data.

        - You'd also have the impedance mismatch that the browser is reasoning in terms of objects in a DOM, and WebAssembly is a bunch of integers.

        > There are dynamic languages, like JS/Python that can compile to wasm.

        If you compile them to linear memory wasm instead of just running directly in JS then you lose the ability to do coordinated garbage collection with the DOM.

        If you compile them to GC wasm instead of running directly in JS then you're just adding unnecessary overheads for no upside.

        > Also I don't see how dynamic typing is required to have API evolution and compat.

        Because for example if a browser changes the type of something that happens to be unused, or removes something that happens to be unused, it only breaks actual users at time of use, not potential users at time of load.

        > Plenty of platforms have static typed languages and evolve their API's in backwards compatible ways.

        We're talking about the browser, which is a particular platform. Not all platforms are the same.

        The largest comparable platform is OSes based on C ABI, which rely on a "kind" of dynamic typing (stringly typed, basically: function names in a global namespace plus argument-passing ABIs that allow you to mismatch function signatures and get away with it).

        > The first major language for WebAssembly was C++, which is object oriented.

        But the object orientation is lost once you compile to wasm. Wasm's object model when you compile C++ to it is an array of bytes.

        > To be fair, there are a lot of challenges to making WebAssembly first class on the Web. I just don't think these issues get to the heart of the problem.

        Then what's your excuse for why wasm, despite years of investment, is a dud on the web?

        [-]
        • eqrion 4 hours ago
          > If you compile them to GC wasm instead of running directly in JS then you're just adding unnecessary overheads for no upside

          Language portability is a big feature. There's a lot of code that's not JS out there. And JS isn't a great compilation target for a lot of languages. Google switched to compiling Java to Wasm-GC instead of JS and got a lot of memory/speed improvements.

          > Because for example if a browser changes the type of something that happens to be unused, or removes something that happens to be unused, it only breaks actual users at time of use, not potential users at time of load.

          > The largest comparable platform is OSes based on C ABI, which rely on a "kind" of dynamic typing (stringly typed, basically - function names in a global namespace plus argument passing ABIs that allow you to mismatch function signature and get away with it.

          I don't think any Web API exposed directly to Wasm would have a single fixed ABI for that reason. We'd need to have the user request a type signature (through the import), and have the browser maximally try and satisfy the import using coercions that respect API evolution and compat. This is what Web IDL/JS does, and I don't see why we couldn't have that in Wasm too.

          > Then what's your excuse for why wasm, despite years of investment, is a dud on the web?

          Wasm is not a dud on the web. Almost 6% of page loads use wasm [1]. It's used in a bunch of major applications and libraries.

          [1] https://chromestatus.com/metrics/feature/timeline/popularity...

          I still think we can do better though. Wasm is way too complicated to use today. So users of wasm today are experts who either (a) really need the performance or (b) really need cross platform code. So much that they're willing to put up with the rough edges.

          And so far, most investment has been to improve the performance or bootstrap new languages. Which is great, but if the devex isn't improved, there won't be mass adoption.

          [-]
          • pizlonator 3 hours ago
            > Language portability is a big feature.

            It's a big feature of JS. JS's dynamism makes it super easy to target for basically any language.

            > Google switched to compiling Java to Wasm-GC instead of JS and got a lot of memory/speed improvements.

            That's cool. But that's one giant player getting success out of a project that likely required massive investment and codesign with their browser team.

            Think about how sad it is that these are the kinds of successes you have to cite for a technology that has had as much investment as wasm!

            > Almost 6% of page loads use wasm

            You can disable wasm and successfully load more than 94% of websites.

            A lot of that 6% is malicious ads running bitcoin mining.

            > Wasm is way too complicated to use today.

            I'm articulating why it's complicated. I think that for those same reasons, it will continue to be complicated.

        • swiftcoder 4 hours ago
          > Then what's your excuse for why wasm, despite years of investment, is a dud on the web?

          It's not really a dud on the web. It sees a ton of use in bringing heavier experiences to the browser (i.e Figma, the Unity player, and so on).

          Where it is currently fairly painful is in writing traditional websites, given all the glue code required to interact with the DOM - exactly what these folks are trying to solve.

          [-]
          • pizlonator 4 hours ago
            Figma is one site. There are also a handful of other sites that use wasm. But most of the web does not use wasm.

            > Where it is currently fairly painful is in writing traditional websites, given all the glue code required to interact with the DOM - exactly what these folks are trying to solve.

            I don't think they will succeed at solving the pain, for the reasons I have enumerated in this thread.

            [-]
            • swiftcoder 4 hours ago
              I mean, you are obviously entitled to your opinion, but folks have been solving this stuff the hard, glue-based way for ages now, and are using WASM wherever there is an advantage to do so. Getting rid of the glue layer and the associated performance problems can only accelerate those efforts.
              [-]
              • pizlonator 4 hours ago
                > I mean, you are obviously entitled to your opinion

                I'm trying to explain to you why attempts to make wasm mainstream have failed so far, and are likely to continue to fail.

                I'm not expressing an "opinion"; I'm giving you the inside baseball as a browser engineer.

                > Getting rid of the glue layer

                I'm trying to elucidate why that glue layer is inherent, and why JS is the language that has ended up dominating web development, despite the fact that lots of "obviously better" languages have gone head to head with it (Java, Dart sort of, and now wasm).

                Just like Java is a fantastic language anywhere but the web, wasm seems to be a fantastic sandboxing platform in lots of places other than the web. I'm not trying to troll you folks; I'm just sharing the insight of why wasm hasn't worked out so far in browsers and why that's likely to continue

                [-]
                • swiftcoder 3 hours ago
                  > why JS is the language that has ended up dominating web development

                  JS was dominating web development long before WASM gained steam. This isn't the same situation as "JS beating Java/ActiveX for control of the web" (if I follow the thrust of your argument correctly).

                  WASM has had less than a decade of widespread browser support, terrible no-good DevEx for basically the whole time, and it's still steadily making its way into more and more of the web.

                  [-]
                  • pizlonator 3 hours ago
                    WebAssembly has had extraordinary levels of investment from browser devs and the broader community.

                    > terrible no-good DevEx for basically the whole time

                    I'm telling you why.

                    > still steadily making its way into more and more of the web.

                    It is, but you can still browse the web without it just fine, despite so much investment and (judging by how HN reacts to it) overwhelming enthusiasm from devs.

        • saghm 3 hours ago
          > Because for example if a browser changes the type of something that happens to be unused, or removes something that happens to be unused, it only breaks actual users at time of use, not potential users at time of load.

          I don't understand this objection. If you compile code that doesn't call a function, and then put that artifact on a server and send it to a browser, how is it broken when that function is removed?

    • saghm 3 hours ago
      I'm not convinced JavaScript is a great abstraction for the browser as much as we've forced the web into a shape that fits JavaScript because of a lack of viable alternatives. I'd argue that the popularity of TypeScript implies that dynamic typing is not a universal ideal. Browser engines deal in objects because they're currently all built on top of JavaScript only; that doesn't demonstrate anything fundamental about the web that implies object oriented is the only reasonable representation.

      If it gets stuck as a second-class citizen like you're predicting, it sounds a lot more like it's due to inflexibility to consider alternatives than anything objectively better about JavaScript.

    • flohofwoe 4 hours ago
      That's just like your opinion man ;)

      (I'm not a fan of the WASM component model either, but your generalized points are mostly just wrong)

      [-]
      • pizlonator 4 hours ago
        Then give me a counterargument instead of just saying that I'm wrong.

        My points are validated by the reality that most of the web is JavaScript, to the point that you'd have a hard time observing degradation of experience if you disabled the wasm engine.

        [-]
        • flohofwoe 4 hours ago
          I created and maintain a couple of WASM projects and have not experienced the problems you describe:

          - https://floooh.github.io/tiny8bit/

          - https://floooh.github.io/sokol-webgpu/

          - https://floooh.github.io/visualz80remix/

          - https://floooh.github.io/doom-sokol/

          All those projects also compile into native Windows/Linux/macOS/Android/iOS executables without any code changes, but compiling to WASM and running in web browsers is the most painless way to get this stuff to users.

          Dealing with minor differences of web APIs in different browsers is a rare thing and can be dealt with in WASM just the same as in JS: a simple if-else will do the job, no dynamic type system needed (apart from that, WASM doesn't have a "type system" in the first place, just like CPU instruction sets don't have one - unless you count integer and float types as a "type system"). Alternatively it's trivial to call out into JavaScript. In Emscripten you can even mix C/C++ and JavaScript in the same source file.
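          As a hypothetical sketch of that if-else approach (`pickGraphicsApi` is an invented name, not an Emscripten API), the JS side of such a shim might look like:

```javascript
// Hypothetical feature-detection helper a wasm app could call out to via
// its JS shim; returns the best available graphics API for this browser.
function pickGraphicsApi(globalObj) {
  // WebGPU exposes navigator.gpu; WebGL2 exposes a global constructor
  if (globalObj.navigator && "gpu" in globalObj.navigator) return "webgpu";
  if (typeof globalObj.WebGL2RenderingContext !== "undefined") return "webgl2";
  return "none";
}
```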

          E.g. for me, WASM is already a '1st class citizen of the web', no WASM component model needed.

          [-]
          • pizlonator 4 hours ago
            The fact that you made some webassembly things isn't an answer to the question of why webassembly is not used by the overwhelming majority of websites.
            [-]
            • saghm 3 hours ago
              That's a fairly arbitrary metric. The overwhelming majority of code running outside of the browser on my laptop isn't in Python, but it's hard to argue that's evidence of it being "doomed to being a second-class citizen on my desktop indefinitely".
            • flohofwoe 4 hours ago
              > why webassembly is not used by the overwhelming majority of websites

              This is such a bizarre take that I don't know whether it's just a trolling attempt or serious...

              Why should web-devs switch to WASM unless they have a specific problem to solve where WASM is the better alternative to JS? The two technologies live side by side, each with specific advantages and disadvantages, they are not competing with each other.

              [-]
              • pizlonator 4 hours ago
                > This is such a bizarre take that I don't know whether it's just a trolling attempt or serious...

                I'm being serious.

                > Why should web-devs switch to WASM unless they have a specific problem to solve where WASM is the better alternative to JS?

                They mostly shouldn't. There are very few problems where wasm is better.

                If you want to understand why wasm is not better, see my other posts in this thread.

          • skybrian 4 hours ago
            What toolchain do you use to build your apps?
            [-]
            • flohofwoe 4 hours ago
              Vanilla Emscripten. Most of the higher level platform abstraction (e.g. window system glue, 3D APIs or audio APIs) happens via the sokol headers though:

              https://github.com/floooh/sokol