Vouch(github.com/mitchellh)

454 points by chwtutha 19 hours ago | 200 comments

  • femto113 2 hours ago
    Users already proven to be trustworthy in one project can automatically be assumed trustworthy in another project, and so on.

    I get the spirit of this project is to increase safety, but if the above social contract actually becomes prevalent this seems like a net loss. It establishes an exploitable path for supply-chain attacks: attacker "proves" themselves trustworthy on any project by behaving in an entirely helpful and innocuous manner, then leverages that to gain trust in target project (possibly through multiple intermediary projects). If this sort of cross project trust ever becomes automated then any account that was ever trusted anywhere suddenly becomes an attractive target for account takeover attacks. I think a pure distrust list would be a much safer place to start.

    [-]
    • mitchellh 13 minutes ago
      I think this fear is overblown. What Vouch protects against is ultimately up to the downstream, but generally it's simply gated access to participate at all. It doesn't give you the right to push code or anything; normal review processes exist after. It's just gating the privilege to even request a code review.

      It's just a layer to minimize noise.

    • tgsovlerkhgsel 2 hours ago
      Based on the description, I suspect the main goal isn't "trust" in the security sense, it's essentially a spam filter against low quality AI "contributions" that would consume all available review resources without providing corresponding net-positive value.
    • theshrike79 21 minutes ago
      And then they become distrusted and BOOM trust goes away from every project that subscribed to the same source.

      Think of this like a spam filter, not a "I met this person live and we signed each other's PGP keys" -level of trust.

      It's not there to prevent long-con supply chain attacks by state level actors, it's there to keep Mr Slopinator 9000 from creating thousands of overly verbose useless pull requests on projects.

    • stavros 1 hour ago
      It's just an example of what you can do, not a global feature that will be mandatory. If I trust someone on one of my projects, why wouldn't I want to trust them on others?
  • brikym 20 minutes ago
    It seems like dating apps to me. You have a large population of highly motivated undesirables to filter out. I think we'll see the same patterns: pay to play, location filtering, identity verification, social credit scores (Elo, etc.).

    I even see people hopping on chat servers begging to 'contribute' just to get github clout. It's really annoying.

  • andai 2 hours ago
    It should just be $1 to submit PR.

    If PR is good, maintainer refunds you ;)

    I noticed the same thing in communication. Communication is now so frictionless, that almost all the communication I receive is low quality. If it cost more to communicate, the quality would increase.

    But the value of low quality communication is not zero: it is actively harmful, because it eats your time.

    [-]
    • Bewelge 53 minutes ago
      > But the value of low quality communication is not zero: it is actively harmful, because it eats your time.

      But a non-zero cost of communication can obviously also have negative effects. It's interesting to think about where the sweet spot would be. But it's probably very context specific. I'm okay with close people engaging in "low quality" communication with me. I'd love, on the other hand, if politicians would stop communicating via Twitter.

      [-]
      • lelandbatey 35 minutes ago
        The idea is that establishing a new line of communication would have a slight cost that quickly drops to zero, so sustained and recurring communication ends up costing nothing.

        A poorly thought out hypothetical, just to illustrate: Make a connection at a dinner party? Sure, technically it costs 10¢ to make that initial text message/phone call, then the next 5 messages are 1¢ each, but thereafter all the messages are free. Existing relationships: free. New relationships: extremely cheap. Spamming at scale: more expensive.

        I have no idea if that's a good idea or not, but I think that's an ok representation of the idea.
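
        A minimal sketch of that fee schedule in Python, with the tiers taken straight from the numbers in the hypothetical above (nothing more than that):

```python
def message_cost(n_prior_messages: int) -> float:
    """Hypothetical decaying fee schedule: 10 cents for the first message
    to a new contact, 1 cent for the next five, free thereafter."""
    if n_prior_messages == 0:
        return 0.10
    if n_prior_messages < 6:
        return 0.01
    return 0.0

# Spamming 1000 strangers costs $100; messaging an old friend costs nothing.
spam_cost = sum(message_cost(0) for _ in range(1000))
friend_cost = sum(message_cost(n) for n in range(100, 110))
```

        The point of the shape is the asymmetry: the cost lands almost entirely on whoever opens many new lines of communication at once.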

        [-]
        • Bewelge 25 minutes ago
          Haha yea, I almost didn't post my comment, since the original submission is about contributors, where a one-time "introduction fee" would solve these problems.

          I was specifically thinking about general communication. Comparing the quality of communication in physical letters (from a time when that was the only affordable way to communicate) to messages we send each other nowadays.

    • _puk 22 minutes ago
      It's externalisation of cost.

      We've seen it everywhere, in communication, in globalised manufacturing, now in code generation.

      It takes nothing to throw something out there now; we're at a scale that there's no longer even a cost to personal reputation - everyone does it.

    • k8sToGo 2 hours ago
      If you want me to read your comment, please pay me $1 first... if I find your comment interesting I might refund.
      [-]
      • hermanb 1 hour ago
        I had this idea / pet project once where I did exactly this for email. Emails would immediately bounce with a payment link and an explanation. If you paid, you got credit on a ledger per email address; only then would the mail go through.

        You can also integrate it in clients by adding payment/reward claim headers.

        [-]
        • Fnoord 3 minutes ago
          [delayed]
      • jt2190 1 hour ago
        The market currently values your reading of HN comments at $0.
    • ramon156 29 minutes ago
      Sorry, but this seems like a privileged solution.

      Let's say you're a one-of-a-kind kid who is already making useful contributions, but $1 is a lot of money for you. Then suddenly your work becomes useless?

      It feels weird to pay for providing work anyway. Even if it's LLM gunk, you're paying to work (let alone paying for your LLM).

      [-]
  • Halan 3 hours ago
    How does a potential positive contributor pierce through if they are not already contributing to something and are not in the network with other contributors? They might be an SME on the subject and legitimately have something to bring to the table, but have only worked on private source.

    I get that AI is creating a ton of toil to maintainers but this is not the solution.

    [-]
    • arcologies1985 3 hours ago
      In my OSS projects I appreciate if someone opens an issue or discussion with their idea first rather than starting with a PR. PRs often put me in an awkward position of saying "this code works, but doesn't align with other directions I'm taking this project" (e.g. API design, or a change making it harder to reach longer term goals)
    • buovjaga 2 hours ago
      One solution is to have a screensharing call with the contributor and have them explain their patch. We have already caught a couple of scammers who were applying for a FOSS internship this way. If they have not yet submitted anything non-trivial, they could showcase personal projects in the same way.

      FOSS has turned into an exercise in scammer hunting.

      [-]
      • swordsith 1 hour ago
        I'm not sure I follow: are the PRs legitimate and just being made to buff their resumes, or are the PRs malicious?
    • lelandbatey 20 minutes ago
      It seems like it depends on how the authors have configured Vouch. They might completely close the project except to those on the vouch list (other than viewing the repo, which seems always implied).

      Alternatively they might keep some things open (issues, discussions) while requiring a vouch for PRs. Then, if folks want to get vouched, they can ask for that in discussions. Or maybe you need to ask via email. Or contact maintainers via Discord. It could be anything. Linux isn't developed on GitHub, so how do you submit changes there? Well you do so by following the norms and channels which the project makes visible. Same with Vouch.

    • qmarchi 3 hours ago
      Looking at this, it looks like it's intended to handle that by only denying certain code paths.

      Think denying access to production. But allowing changes to staging. Prove yourself in the lower environments (other repos, unlocked code paths) in order to get access to higher envs.

      Hell, we already do this in the ops world.

      [-]
      • Halan 3 hours ago
        So basically we are back at tagging stuff as good for first contributors like we have been doing since the dawn of GitHub
  • adeebshihadeh 4 hours ago
    "Open source has always worked on a system of trust and verify"

    Not sure about the trust part. Ideally, you can evaluate the change on its own.

    In my experience, I immediately know whether I want to close or merge a PR within a few seconds, and the hard part is writing the response to close it such that they don't come back again with the same stuff.

    (I review a lot of PRs for openpilot - https://github.com/commaai/openpilot)

    [-]
    • jgauth 1 hour ago
      Cool to see you here on HN! I just discovered the openpilot repository a few days ago and am having a great time digging through the codebase to learn how it all works. Msgq/cereal, Params, visionipc, the whole log message system in general. Some very interesting stuff in there.
    • ngcazz 3 hours ago
      When there's time, you review; when there isn't, you trust...
      [-]
      • 999900000999 2 hours ago
        That's the issue here.

        Even if I trust you, I still need to review your work before merging it.

        Good people still make mistakes.

        [-]
        • stavros 1 hour ago
          What is the definition of trust if you still have to verify? How does "trust" differ from "untrust" in that scenario?
      • adeebshihadeh 1 hour ago
        What's the rush? Building good things takes time.
    • rafram 3 hours ago
      [flagged]
      [-]
      • BowBun 3 hours ago
        Why? I don't appreciate comments that cast doubt on decent technical contributors without any substance to back it up. It's a cheap shot from anonymity.
        [-]
        • 8n4vidtmkvmk 3 hours ago
          I'm not the parent but if you know you want to merge a PR "within a few seconds" then you're likely to be merging in bad changes.

          If you had left it at knowing you want to reject a PR within a few seconds, that'd be fine.

          Although with safety critical systems I'd probably want each contributor to have some experience in the field too.

          [-]
          • colinmcdermott 2 hours ago
            Sounds like you misunderstood. They didn't say they are merging PRs after a few seconds, just that the difference between a good one and a bad one is often obvious after a few seconds. Edit: typos
            [-]
            • adeebshihadeh 1 hour ago
              Exactly, every PR starts with:

              1. What’s the goal of this PR and how does it further our project’s goals?

              2. Is this vaguely the correct implementation?

              Evaluating those two takes a few seconds. Beyond that, yes it takes a while to review and merge even a few line diff.

            • stavros 1 hour ago
              I'm not sure there are many ways to interpret "I know whether I want to merge a PR within a few seconds".
              [-]
              • jeremyjh 31 minutes ago
                Yet I also agree with GP.
          • theshrike79 16 minutes ago
            "*WANT* to close or *WANT* to merge". Not WILL close or WILL merge.

            You look at the PR and you know just by looking at it for a few seconds if it looks off or not.

            Looks off -> "Want to close"

            Write a polite response and close the PR.

            Doesn't look off -> "Want to merge"

            If we want to merge it, then of course you look at it more closely. Or label it and move on with the triage.

      • latency-guy2 3 hours ago
        What kind of things would you like to hear? The default is you hear nothing. Most black boxes work this way. And you similarly have no say in the matter.
  • stephantul 17 hours ago
    IMO: trust-based systems only work if they carry risk. Your own score should be linked to the people you "vouch for" or "denounce".

    This is similar to real life: if you vouch for someone (in business, for example) and they scam people, your own reputation suffers. So vouching carries risk. Similarly, if you go around saying someone is unreliable, but people find out they actually aren't, your reputation also suffers. If vouching or denouncing becomes free, it will become too easy to weaponize.

    Then again, if this is the case, why would you risk your own reputation to vouch for anyone anyway.

    [-]
    • ashton314 16 hours ago
      > Then again, if this is the case, why would you risk your own reputation to vouch for anyone anyway.

      Good reason to be careful. Maybe there's a bit of an upside too: if you vouch for someone who does good work, then you get a little boost as well. It's how personal relationships work anyway.

      ----------

      I'm pretty skeptical of all things cryptocurrency, but I've wondered if something like this would be an actually good use case of blockchain tech…

      [-]
      • joecool1029 3 hours ago
        > I'm pretty skeptical of all things cryptocurrency, but I've wondered if something like this would be an actually good use case of blockchain tech…

        So the really funny thing here is that the first bitcoin exchange had a Web of Trust system, and while it had its flaws, IT WORKED PRETTY WELL. It used GPG and later on bitcoin signatures. Nobody talks about it unless they were there, but the system is still online. Keep in mind, this was used before centralized exchanges and regulation. It did not use a blockchain to store ratings.

        As a new trader, you basically could not do trades in their OTC channel without going through traders that specialized in new people coming in. Sock accounts could rate each other, but when you checked to see whether one of those scammers was trustworthy, they would have no level-2 trust, since none of the regular traders had positive ratings of them.

        Here's a link to the system: https://bitcoin-otc.com/trust.php (on IRC, you would use a bot called gribble to authenticate)
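
        A toy illustration of that level-2 idea (not the actual bitcoin-otc/gribble algorithm, just the structure described above, with made-up account names):

```python
def level2_trusted(ratings, trusted_roots, target):
    """target counts as level-2 trusted only if someone who is positively
    rated by one of your trusted roots has in turn rated them.

    ratings maps each rater to the set of accounts they rated positively."""
    level1 = set()
    for root in trusted_roots:
        level1 |= ratings.get(root, set())
    return any(target in ratings.get(rater, set()) for rater in level1)

# Sock puppets rating each other have no rating path from regular traders:
ratings = {
    "regular_trader": {"newcomer"},
    "newcomer": {"friend"},
    "sock1": {"sock2"},
    "sock2": {"sock1"},
}
```

        Here, "friend" is reachable through "newcomer", but neither sock account is, no matter how highly they rate each other.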

        [-]
        • buckle8017 3 hours ago
          Biggest issue was always the fiat transfers.
      • HumanOstrich 15 hours ago
        If we want to make it extremely complex, wasteful, and unusable for 99% of people, then sure, put it on the blockchain. Then we can write tooling and agents in Rust with sandboxes created via Nix to have LLMs maintain the web of trust by writing Haskell and OCaml.
        [-]
        • tempaccount420 3 hours ago
          Well done, you managed to tie Rust, Nix, Haskell and OCaml to "extremely complex, wasteful, and unusable"
        • refulgentis 1 hour ago
          Zig can fix this, I'm sure.
      • nine_k 3 hours ago
        I don't think that trust is easily transferable between projects. Tracking "karma" or "reputation" as a simple number in this file would be technically easy, but how much should the "karma" value change from different actions? It's really hard to formalize efficiently. The web of trust, with all its intricacies, fits well into participants' heads in small communities. This tool is definitely for reasonably small "core" communities handling a larger stream of drive-by / infrequent contributors.
        [-]
        • JoshTriplett 2 hours ago
          > I don't think that trust is easily transferable between projects

          Not easily, but I could imagine a project deciding to trust (to some degree) people vouched for by another project whose judgement they trust. Or, conversely, denouncing those endorsed by a project whose judgement they don't trust.

          In general, it seems like a web of trust could cross projects in various ways.

      • drewstiff 13 hours ago
        Ethos is already building something similar, but starting with a focus on reputation within the crypto ecosystem (which I think most can agree is an understandable place to begin)

        https://www.ethos.network/

      • refulgentis 2 hours ago
        I'm unconvinced. To my possibly-undercaffeinated mind, the string of 3 posts reads like this:

        - a problem already solved in TFA (you vouching for someone eventually denounced doesn't prevent you from being denounced, you can totally do it)

        - a per-repo, or worse, global, blockchain to solve incrementing and decrementing integers (vouch vs. denounce)

        - a lack of understanding that automated global scoring systems are an abuse vector and something people will avoid. (c.f. Black Mirror and social credit scores in China)

      • atmosx 2 hours ago
        Sounds like a black mirror episode.
        [-]
        • moodyScarf 1 hour ago
          Isn't that like literally the plot in one of the episodes? Where they get an x-out-of-5 rating that is always visible.
      • smoyer 16 hours ago
        Look at ERC-8004
    • mlinsey 3 hours ago
      > Then again, if this is the case, why would you risk your own reputation to vouch for anyone anyway.

      The same as when you vouch for a candidate at your company: because you will benefit from their help.

      I think your suggestion is a good one.

    • __turbobrew__ 16 hours ago
      > Then again, if this is the case, why would you risk your own reputation to vouch for anyone anyway.

      Maybe your own vouch score goes up when someone you vouched for contributes to a project?

    • skeptic_ai 13 hours ago
      Think Epstein, but in code. Everyone would vouch for him as he's hyper-connected, so he'd get a free pass all the way. Until it all blows up in our faces and everyone that vouched for him gets flagged. The main issue is that it can take 10-20 years to blow up.

      Then you have introverts that can be good but have no connections and won’t be able to get in.

      So you’re kind of selecting for connected and good people.

      [-]
      • dzink 2 hours ago
        Excellent point. Currently HN accounts get much higher scores if they contribute content than if they make valuable comments. Those should be two separate scores. Instead, accounts with really good advice have lower scores than accounts that have just automated re-posting of content from elsewhere to HN.
      • zbentley 3 hours ago
        Fair (and you’re basically describing the xz hack; vouching is done for online identities and not the people behind them).

        Even with that risk I think a reputation based WoT is preferable to most alternatives. Put another way: in the current Wild West, there’s no way to identify, or track, or impose opportunity costs on transacting with (committing or using commits by) “Epstein but in code”.

      • pphysch 2 hours ago
        But the blowback is still there. The Epstein saga has and will continue to fragment and discipline the elite. Most people probably do genuinely regret associating with him. Noam Chomsky's credibility and legacy is permanently marred, for example.
    • JumpCrisscross 2 hours ago
      > trust-based systems only work if they carry risk. Your own score should be linked to the people you "vouch for" or "denounce"

      This is a graph search. If the person you’re evaluating vouches for people those you vouch for denounce, then even if they aren’t denounced per se, you have gained information about how trustworthy you would find that person. (Same in reverse. If they vouch for people who your vouchers vouch for, that indirectly suggests trust even if they aren’t directly vouched for.)
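
      A toy version of that indirect signal, simplified to direct vouch/denounce sets (just to illustrate the overlap idea; nothing suggests Vouch computes anything like this automatically):

```python
def indirect_signal(my_vouches, my_denounces, their_vouches):
    """Score a stranger by comparing who they vouch for against my own
    lists: overlap with my vouches is weak evidence for them, overlap
    with my denouncements is weak evidence against them."""
    return len(their_vouches & my_vouches) - len(their_vouches & my_denounces)

# They vouch for two people I vouch for and one person I denounce:
signal = indirect_signal({"ann", "bob", "cat"}, {"mal"}, {"ann", "bob", "mal"})
```

      The full graph-search version would follow vouch edges transitively instead of comparing direct sets, but the information gained is the same in kind.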

    • ares623 14 hours ago
    I've been thinking in a similar space lately, about what a "parallel web" could look like.

    One of my (admittedly half-baked) ideas was a vouching system with real-world or physical incentives. Basically, signing up requires someone vouching for you, similar to this one, except with actual physical interaction between the two. But I want to take it even further -- when you sign up, your real-life details are "escrowed" in the system (somehow), and when you do something bad enough for a permaban+, you get doxxed.

  • dom96 4 hours ago
    Initially I liked the idea, but the more I think about it the more this feels like it just boils down to: only allow contributions from a list of trusted people.
    [-]
    • 3371 4 hours ago
      Well, a lot of useful things are useful not because they are innovative, but because they are well designed and executed.
    • ramses0 4 hours ago
      It's similar to old Usenet "killfiles" - https://en.wikipedia.org/wiki/Kill_file

      ...or spam "RBL" lists which were often shared. https://en.wikipedia.org/wiki/Domain_Name_System_blocklist

    • rvz 4 hours ago
      This makes a lot more sense for large scale and high profile projects, and it eliminates low quality slop PRs by default with the contributors having to earn the trust of the core maintainers to contribute directly to the project.
      [-]
      • verdverm 3 hours ago
        it also increases the barrier to new adopters

        why not use ai to help with the ai problem, why prefer this extra coordination effort and implementation?

        [-]
        • tristan957 30 minutes ago
          The barrier in the Ghostty project is to simply open a discussion. It's not really hard.
        • Rumple22Stilk 3 hours ago
          That's the whole point. There are many new adopters and few competent ones.
          [-]
          • verdverm 3 hours ago
            I mean the barrier to well-meaning contributors. I understand the goal of Vouch; I just think it goes too far and you'll turn off said well-meaning contributors.

            I certainly have dropped off when projects have burdensome rules, even before the AI slop fest.

  • otterley 41 minutes ago
    I'm reminded of the old Usenet responses to people claiming to solve the spam problem, so I can't help myself:

        Your solution advocates a
        ( ) technical (X) social ( ) policy-based ( ) forge-based
        approach to solving AI-generated pull requests to open source projects. Your idea will not work. Here is why it won't work. (One or more of the following may apply to your particular idea, and it may have other flaws.)
        
        ( ) PR spammers can easily use AI to adapt to detection methods
        ( ) Legitimate non-native English speakers' contributions would be affected
        ( ) Legitimate users of AI coding assistants would be affected
        ( ) It is defenseless against determined bad actors
        ( ) It will stop AI slop for two weeks and then we'll be stuck with it
        (X) Project maintainers don't have time to implement it
        (X) Requires immediate total cooperation from maintainers at once
        (X) False positives would drive away genuine new contributors
        
        Specifically, your plan fails to account for
        (X) Ease of creating new GitHub accounts
        (X) Script kiddies and reputation farmers
        ( ) Armies of LLM-assisted coding tools in legitimate use
        (X) Eternal arms race involved in all detection approaches
        ( ) Extreme pressure on developers to use AI tools
        (X) Maintainer burnout that is unaffected by automated filtering
        ( ) Graduate students trying to pad their CVs
        ( ) The fact that AI will only get better at mimicking humans
        
        and the following philosophical objections may also apply:
        (X) Ideas similar to yours are easy to come up with, yet none have ever
        been shown practical
        (X) Allowlists exclude new contributors
        (X) Blocklists are circumvented in minutes
        ( ) We should be able to use AI tools without being censored
        (X) Countermeasures must work if phased in gradually across projects
        ( ) Contributing to open source should be free and open
        (X) Feel-good measures do nothing to solve the problem
        (X) This will just make maintainer burnout worse
        
        Furthermore, this is what I think about you:
        (X) Sorry dude, but I don't think it would work.
        ( ) This is a stupid idea, and you're a stupid person for suggesting it.
        ( ) Nice try, assh0le! I'm going to find out what project you maintain and
        send you 50 AI-generated PRs!
  • HiPhish 4 hours ago
    Not sure about this one. I understand the need and the idea behind it is well-intentioned, but I can easily see denouncelists turn into a weapon against wrongthinkers. Said something double-plus-ungood on Twitter? Denounced. Accepted a contribution from someone on a prominent denouncelist? Denounced. Not that it was not possible to create such lists before, but it was all informal.

    The real problem is reputation-farmers. They open hundreds of low-effort PRs on GitHub in the hope that some of them get merged. This will increase the reputation of their accounts, which they hope will help them stand out when applying for a job. So the solution would be for GitHub to implement a system to punish bad PRs. Here is my idea:

    - The owner of a repo can close a PR either neutrally (e.g. an earnest but misguided effort was made), positively (a valuable contribution was made) or negatively (worthless slop)

    - Depending on how the PR was closed the reputation rises or drops

    - Reputation can only be raised or lowered when interacting with another repo

    The last point should prevent brigading: I have to make contact with someone before he can judge me, and he can only judge me once per interaction. People could still farm reputation by making lots of quality PRs, but that's actually a good thing. The only bad way I can see this being gamed is if a bunch of buddies get together and merge each other's garbage PRs, but people can already do that sort of thing. Maybe the reputation should not be a total sum, but per project? Anyway, the idea is for there to be some negative consequences for opening junk PRs.
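
    As a sketch, the proposed rules might look like this (purely hypothetical; GitHub exposes no such mechanism):

```python
def close_pr(rep, judged, repo, pr_id, author, outcome):
    """Reputation changes only when a maintainer closes a PR, and each PR
    (one 'interaction') can be judged at most once, which limits brigading."""
    if (repo, pr_id) in judged:
        return  # this interaction was already judged
    judged.add((repo, pr_id))
    delta = {"positive": 1, "neutral": 0, "negative": -1}[outcome]
    rep[author] = rep.get(author, 0) + delta

rep, judged = {}, set()
close_pr(rep, judged, "proj", 1, "alice", "positive")
close_pr(rep, judged, "proj", 1, "alice", "positive")  # ignored: same PR
close_pr(rep, judged, "proj", 2, "bob", "negative")
```

    A per-project variant would simply key `rep` on (repo, author) instead of author alone.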

    [-]
    • pwdisswordfishs 1 hour ago
      > The real problem is reputation-farmers. They open hundreds of low-effort PRs on GitHub in the hope that some of them get merged. This will increase the reputation of their accounts, which they hope will help them stand out when applying for a job. So the solution would be for GitHub to implement a system to punish bad PRs.

      GitHub customers really are willing to do anything besides coming to terms with the reality confronting them: that it might be GitHub (and the GitHub community/userbase) that's the problem.

      To the point that they'll openly claim the whole reason to stay with GitHub over modern alternatives is the community, and then turn around and implement and/or ally themselves with stuff like Vouch: a Contributor Management System explicitly designed to keep the unwashed masses away.

      Just set up a Bugzilla instance and a cgit frontend to a push-over-ssh server already, geez.

      [-]
      • stavros 60 minutes ago
        I mean, "everyone already has an account" is already a very good reason. That doesn't mean "I automatically accept contributions from everyone", it might be "I want to make the process of contribution as easy as possible for the people I want as contributors".
        [-]
        • pwdisswordfishs 34 minutes ago
          Hatching a reputation-based scheme around a "Contributor Management System" and getting "the people you want as contributors" to go along with it is easier than getting them to fill in a 1/username 2/password 3/confirm-password form? Choosing to believe that is pure motivated reasoning.
          [-]
          • stavros 28 minutes ago
            People aren't on Github just to implement reputation-based management, though.
            [-]
            • pwdisswordfishs 3 minutes ago
              What does that observation have to do with the topic under the microscope?
    • pixl97 2 hours ago
      >The only bad way I can see this being gamed is if a bunch of buddies get together and merge each other's garbage PR

      Ya, I'm just wondering how this system avoids a 51% attack. Simply put, there are a fixed number of human contributors, but effectively an infinite number of bot contributors.

    • zozbot234 4 hours ago
      GitHub needs to implement eBay-like feedback for contributors. With not only reputation scores, but explanatory comments like "AAAAAAAAAAAAAA++++++++++++ VERY GOOD CONTRIBUTIONS AND EASY TO WORK WITH. WOULD DEFINITELY MERGE THEIR WORK AGAIN!"
      [-]
      • zbentley 3 hours ago
        I know this is a joke, but pretending for a moment that it isn’t: this would immediately result in the rep system being gamed the same way it is on eBay: scam sellers can purchase feedback on cheap or self-shipping auctions and then pivot into defrauding people on high-dollar sales before being banned, rinse, and repeat.
      • Loughla 3 hours ago
        The ones I've never understood are: Prompt payment. Great buyer.

        I can't check out unless I pay. How is that feedback?

      • HiPhish 3 hours ago
        I think merged PRs should be automatically upvoted (if it was bad, why did you merge it?) and closed unmerged PRs should not be able to get upvoted (if it was good, why did you not merge it?).
        [-]
        • MarkusQ 3 hours ago
          Intrinsically good, but in conflict with some larger, out-of-band concern that the contributor could have no way to know about? Upvote to take the sting out of rejection, along with a note along the lines of "Well done, and we would merge if it weren't for our commitment to support xxx systems which are not compatible with yyy. Perhaps refactor as a plugin?"

          Also, upvotes and merge decisions may well come from different people, who happen to disagree. This is in fact healthy sometimes.

  • moogly 7 hours ago
    So you're screwed if you don't have any connections. In that way it's just like meat space.
    [-]
    • tristan957 29 minutes ago
      Nobody is screwed in the Ghostty project. Simply open a discussion to discuss your idea.
    • eightnoteight 2 hours ago
      exactly this, verification should always be on the code

      if someone fresh wants to contribute, now they will have to network before they can write code

      honestly i don't see myself networking just so that i can push my code

      I think there are valid ways to improve the outcome, like open source projects codifying the focus areas for each month, or verifying the PRs, or making PRs show proof of working, etc... many ways to deter folks who don't want to meaningfully contribute and simply AI-generate and push the effort down to the real contributors

  • tmvnty 10 hours ago
    Are we seeing forum moderations (e.g., Discourse trust levels^[1]) coming to source code repositories?

    [1]: https://blog.discourse.org/2018/06/understanding-discourse-t...

  • someone_jain_ 19 hours ago
    Hope github can natively integrate something in the platform, a relevant discussion I saw on official forums: https://github.com/orgs/community/discussions/185387
    [-]
    • matthewisabel 18 hours ago
      We'll ship some initial changes here next week to provide maintainers the ability to configure PR access as discussed above.

      After that ships, we'll continue doing a lot of rapid exploration, given there are still a lot of ways to improve here. We also just shipped some issues-related features like comment pinning and +1 comment steering [1] to help cut through some noise.

      Interested though to see what else emerges like this in the community, I expect we'll see continued experimentation and that's good for OSS.

      [1] https://github.blog/changelog/2026-02-05-pinned-comments-on-...

  • mijoharas 1 hour ago
    > The idea is based on the already successful system used by @badlogicgames in Pi. Thank you Mario.

    This is from the twitter post referenced above, and he says the same thing in the ghostty issue. Can anyone link to discussion on that or elaborate?

    (I briefly looked at the pi repo, and have looked around in the past but don't see any references to this vouching system.)

  • ctoth 59 minutes ago
    Ah, we have converted a technical problem into a social problem. Historically those are vastly easier to solve, right?

    Spam filters exist. Why do we need to bring politics into it? Reminds me of the whole CoC mess a few years back.

    Every time somebody talks about a new AI thing the lament here goes:

    > BUT THINK OF THE JUNIORS!

    How do you expect this system to treat juniors? How do juniors ever gain experience committing to open source? Who vouches for them?

    This is a permanent social structure for a transient technical problem.

  • nmstoker 2 hours ago
    Interesting idea.

    It spreads the effort of maintaining the list of trusted people, which is helpful. However, I still see a potential firehose of randoms requesting to be vouched for. There are various ways one might manage that, perhaps even a modest-effort preceding step that demonstrates understanding of the project / willingness to help, such as A/B triaging of several pairs of issues, kind of like a directed, project-relevant CAPTCHA.

  • nabilsaikaly 1 hour ago
    I believe interviewing devs before allowing them to contribute is a good strategy for the upcoming years. Let's treat future OSS contributors the same way companies/startups treat devs they want to hire.
    • tedk-42 1 hour ago
      This adds friction, disincentivizes legitimate and high-quality code contributions, and consumes even more human time.
      • otterley 53 minutes ago
        The entire point is to add friction. Accepting code into public projects used to involve a lot of friction. RMS and Linus Torvalds weren't just accepting anyone's code when they developed GNU and Linux; to even be considered, you had to submit patches in the right way to a mailing list. And you had to write the code yourself!

        GitHub and LLMs have reduced the friction to the point where it's overwhelming human reviewers. Removing that friction would be nice if it didn't cause problems of its own. It turns out that friction had some useful benefits, and that's why you're seeing the pendulum swing the other way.

  • smileson2 21 minutes ago
    feels very micromanagement-ish
  • 1a527dd5 3 hours ago
    I think denouncing is an incredibly bad idea, especially as the foundation of Vouch seems to be a web of trust.

    If you get denounced on a popular repo, everyone who "inherits" that repo as a source of trust inherits the denouncement (think email providers: Google decides you are bad, good luck).

    Couple that with the fact that new contributors usually take some time to find their feet.

    I've only been at this game (SWE) for ~10 years, so not a long time. But I can tell you my first few contributions were clumsy and perhaps would have earned me a denouncement.

    I'm not sure I would have contributed to the AWS SDK, Sendgrid, NUnit, or New Relic (easily my best experience), and my attempted contribution to Npgsql (easily my worst experience) would have definitely earned me a denouncement.

    The concept is good, but I would omit denouncement entirely.

    • acjohnson55 3 hours ago
      I'm guessing denounce is for bad faith behavior, not just low quality contributions. I think it's actually critical to have a way to represent this in a reputation system. It can be abused, but abuse of denouncement is grounds for denouncement, and being denounced by someone who is denounced by trusted people should carry little weight.
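
      A minimal sketch of that weighting idea (hypothetical logic, not anything Vouch actually implements): score each vouch or denouncement by the actor's own standing, so that a denouncement from an account the maintainers have themselves denounced carries essentially no weight. The "roots" set here stands in for the maintainers whose opinions count fully.

```python
# Hypothetical two-pass reputation sketch; none of these names come
# from Vouch itself. "roots" are maintainers whose opinions count fully.

def standing(user, vouched_by, denounced_by, roots):
    def first_order(u):
        # Standing derived only from direct root opinions.
        if u in roots:
            return 1.0
        s = sum(1.0 for v in vouched_by.get(u, ()) if v in roots)
        s -= sum(1.0 for d in denounced_by.get(u, ()) if d in roots)
        return max(0.0, min(1.0, s))

    # Weight every vouch/denouncement by the actor's own standing, so a
    # denouncement from a denounced account carries ~zero weight.
    s = sum(first_order(v) for v in vouched_by.get(user, ()))
    s -= sum(first_order(d) for d in denounced_by.get(user, ()))
    return max(0.0, min(1.0, s))

vouched_by = {"alice": ["maintainer"], "bob": ["alice"]}
denounced_by = {"troll": ["maintainer"], "bob": ["troll"]}
# The troll's denouncement of bob is ignored, because the troll's
# own standing (denounced by a maintainer) is zero.
```

      Two passes keep the sketch simple and avoid recursive trust loops; a real system would need cycle handling and decay.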
      • ncr100 3 hours ago
        IDK about this implementation ...

        OVER-Denouncing ought to be tracked, too, for a user's trustworthiness profile.

    • Rapzid 3 hours ago
      Off topic but why was contributing to Npgsql a bad experience for you? I've contributed, admittedly minor stuff, to that ecosystem and it was pretty smooth.
    • mjr00 3 hours ago
      What value would this provide without the denouncement feature? The core purpose of the project, from what I can tell, is being able to stop the flood of AI slop coming from particular accounts, and the means to accomplish that is denouncing those accounts. Without denouncement you go from three states (vouched, neutral, denounced) to two (vouched and neutral). You could just make everyone who isn't vouched be put into the same bucket, but that seems counterproductive.
  • abracos 5 hours ago
    Isn't this an extremely difficult problem? It seems very easy to game: vouch for one entity that then invites lots of bad actors.
    • mjr00 4 hours ago
      At a technical level it's straightforward. Repo maintainers maintain their own vouch/denounce lists. Your maintainers are assumed to be good actors who can vouch for new contributors. If your maintainers aren't good actors, that's a whole other problem. From reading the docs, you can delegate vouching to newly vouched users as well, but this isn't a requirement.

      The problem is at the social level. People will not want to maintain their own vouch/denounce lists because they're lazy. Which means that if this takes off, there will be centrally maintained vouchlists. Which, if you've been on the internet for any amount of time, you can instantly imagine will lead to the formation of cliques and vouchlist drama.
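
      The gating side of that is simple to sketch (hypothetical list layout and names, not Vouch's real schema): a local denounce overrides anything imported, a local or imported vouch admits the PR to normal review, and everyone else falls into a neutral bucket.

```python
# Hypothetical CI-style gate; list contents and keys are illustrative,
# not Vouch's actual file format.

def merge_lists(local, imported):
    # Local opinions win: a local denounce overrides an imported vouch,
    # and a local vouch overrides an imported denounce.
    vouched = set(imported["vouched"]) | set(local["vouched"])
    denounced = set(imported["denounced"]) | set(local["denounced"])
    vouched -= set(local["denounced"])
    denounced -= set(local["vouched"])
    return vouched, denounced

def pr_gate(author, vouched, denounced):
    if author in denounced:
        return "close"        # known bad actor
    if author in vouched:
        return "review"       # proceeds to normal code review
    return "quarantine"       # neutral: needs a vouch first

local = {"vouched": ["alice"], "denounced": ["spambot"]}
imported = {"vouched": ["bob", "spambot"], "denounced": []}
vouched, denounced = merge_lists(local, imported)
```

      Note the gate only controls who gets a human's attention; as mitchellh says elsewhere in the thread, normal review still happens afterward.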

    • speps 4 hours ago
      The usual way of solving this is to also hold the voucher responsible if someone they vouched for gets banned as a bad actor. That gives the voucher skin in the game.
      • supriyo-biswas 4 hours ago
        A practical example of this can be seen in Lobsters' invite system, where if too many of an inviter's invitees post spam, the inviter is also banned.
        • iugtmkbdfil834 3 hours ago
          I think this is the inevitable reality for future FOSS. GitHub will degrade, but any real development will move behind closed doors and invite-only walls.
      • bsimpson 3 hours ago
        That's putting weight on the other end of the scale. Why would you want to stake your reputation on an internet stranger based on a few PRs?
        • 63stack 3 hours ago
          You are not supposed to vouch for strangers, system working as intended.
    • dboon 4 hours ago
      You can't get perfection. The constraints / stakes are softer with what Mitchell is trying to solve i.e. it's not a big deal if one slips through. That being said, it's not hard to denounce the tree of folks rooted at the original bad actor.
      • anupamchugh 4 hours ago
        The interesting failure mode isn't just "one bad actor slips through", it's provenance: if you want to "denounce the tree rooted at a bad actor", you need to record where a vouch came from (maintainer X, imported list Y, date, reason), otherwise revocation turns into manual whack-a-mole.

        Keeping the file format minimal is good, but I'd want at least optional provenance in the details field (or a sidecar) so you can do bulk revocations and audits.
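
        That provenance-plus-bulk-revocation idea could look something like this (hypothetical record shape, not Vouch's actual schema): each vouch records who added it, and revoking a bad actor sweeps up everyone transitively vouched through them.

```python
# Hypothetical vouch record with provenance; field names are
# illustrative, not Vouch's actual format.
from dataclasses import dataclass

@dataclass
class VouchRecord:
    user: str
    vouched_by: str   # provenance: the maintainer or user who vouched
    source: str       # e.g. "local" or "imported:somelist"
    date: str

def revoke_tree(records, bad_actor):
    """Revoke the bad actor and everyone transitively vouched by them."""
    revoked = {bad_actor}
    changed = True
    while changed:  # fixed-point sweep until nothing new is revoked
        changed = False
        for r in records:
            if r.vouched_by in revoked and r.user not in revoked:
                revoked.add(r.user)
                changed = True
    return revoked

records = [
    VouchRecord("alice", "maintainer", "local", "2026-01-02"),
    VouchRecord("eve", "mallory", "local", "2026-01-05"),
    VouchRecord("trent", "eve", "local", "2026-01-09"),
]
```

        Without the `vouched_by` field, this bulk revocation is exactly the manual whack-a-mole the comment describes.
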
    • DJBunnies 5 hours ago
      Indeed, it's relatively impossible without ties to real world identity.
      • mjr00 4 hours ago
        > Indeed, it's relatively impossible without ties to real world identity.

        I don't think that's true? The goal of vouch isn't to say "@linus_torvalds is Linus Torvalds", it's to say "@linus_torvalds is a legitimate contributor and not an AI slopper/spammer". It's not vouching for their real world identity, or that they're a good person, or that they'll never add malware to their repositories. It's just vouching for the most basic level of "when this person puts out a PR, it's not AI slop".

        • DJBunnies 3 hours ago
          That’s not the point.

          Point is: when @lt100, @lt101, … , @lt999 all vouch for something, it’s worthless.

          • jen20 2 hours ago
            But surely then a maintainer notices what has happened, and resolves the problem?
    • hobofan 4 hours ago
      Then you would just un-vouch them? I don't see how it's easy to game on that front.
    • smotched 5 hours ago
      you can't really build a perfect system, the goal would be to limit bad actors as much as possible.
  • skeptrune 14 hours ago
    I have a hard time trying to poke holes in this. Seems objectively good and like it, or some very similar version of it, will work long term.
  • davidkwast 19 hours ago
    I think LLMs are accelerating us toward a Dune-like universe, where humans come before AI.
    • sph 15 hours ago
      You say that as if it’s a bad thing. The bad thing is that to get there we’ll have to go through the bloody revolution to topple the AI that have been put before the humans. That is, unless the machines prevail.

      You might think this is science fiction, but the companies that brought you LLMs had the goal to pursue AGI and all its consequences. They failed today, but that has always been the end game.

    • ashton314 16 hours ago
      Got to go through the Butlerian Jihad first… not looking forward to that bit.

      (EDIT: Thanks sparky_z for the correction of my spelling!)

  • alexjurkiewicz 18 hours ago
    The Web of Trust failed for PGP 30 years ago. Why will it work here?

    For a single organisation, a list of vouched users sounds great. GitHub permissions already support this.

    My concern is with the "web" part. Once you have orgs trusting the vouch lists of other orgs, you end up with the classic problems of decentralised trust:

    1. The level of trust is only as high as the most lax person in your network

    2. Nobody is particularly interested in vetting new users

    3. Updating trust rarely happens

    There _is_ a problem with AI Slop overrunning public repositories. But WoT has failed once, we don't need to try it again.

    • Animats 15 hours ago
      > The Web of Trust failed for PGP 30 years ago. Why will it work here?

      It didn't work for links as reputation for search once "SEO" people started creating link farms. It's worse now. With LLMs, you can create fake identities with plausible backstories.

      This idea won't work with anonymity. It's been tried.

      • ibrahima 4 hours ago
        I guess this is why Sam Altman wants to scan everyone's eyeballs.
    • chickensong 2 hours ago
      I'm not convinced that just because something didn't work 30 years ago, there's no point in revisiting it.

      There's likely no perfect solution, only layers and data points. Even if one of the layers only provides a level of trust as high as the most lax person in the network, it's still a signal of something. The internet will continue to evolve and fracture into segments with different requirements IMHO.

    • javascripthater 17 hours ago
      Web of Trust failed? If you saw that a close friend had signed someone else's PGP key, you would be pretty sure it was really that person.
      • BugsJustFindMe 5 hours ago
        Identity is a lot easier than forward trustworthiness. It can succeed for the former and fail for the latter.
  • ashton314 16 hours ago
    Reminds me of the reputation system that the ITA in Anathem by Neal Stephenson seems to have. One character (Sammann) needs access to essentially a private BBS and has to get validated.

    “After we left Samble I began trying to obtain access to certain reticules,” Sammann explained. “Normally these would have been closed to me, but I thought I might be able to get in if I explained what I was doing. It took a little while for my request to be considered. The people who control these were probably searching the Reticulum to obtain corroboration for my story.”

    “How would that work?” I asked.

    Sammann was not happy that I’d inquired. Maybe he was tired of explaining such things to me; or maybe he still wished to preserve a little bit of respect for the Discipline that we had so flagrantly been violating. “Let’s suppose there’s a speelycaptor at the mess hall in that hellhole town where we bought snow tires.”

    “Norslof,” I said.

    “Whatever. This speelycaptor is there as a security measure. It sees us walking to the till to pay for our terrible food. That information goes on some reticule or other. Someone who studies the images can see that I was there on such-and-such a date with three other people. Then they can use other such techniques to figure out who those people are. One turns out to be Fraa Erasmas from Saunt Edhar. Thus the story I’m telling is corroborated.”

    “Okay, but how—”

    “Never mind.” Then, as if he’d grown weary of using that phrase, he caught himself short, closed his eyes for a moment, and tried again. “If you must know, they probably ran an asamocra on me.”

    “Asamocra?”

    “Asynchronous, symmetrically anonymized, moderated open-cry repute auction. Don’t even bother trying to parse that. The acronym is pre-Reconstitution. There hasn’t been a true asamocra for 3600 years. Instead we do other things that serve the same purpose and we call them by the old name. In most cases, it takes a few days for a provably irreversible phase transition to occur in the reputon glass—never mind—and another day after that to make sure you aren’t just being spoofed by ephemeral stochastic nucleation. The point being, I was not granted the access I wanted until recently.” He smiled and a hunk of ice fell off his whiskers and landed on the control panel of his jeejah. “I was going to say ‘until today’ but this damned day never ends.”

    “Fine. I don’t really understand anything you said but maybe we can save that for later.”

    “That would be good. The point is that I was trying to get information about that rocket launch you glimpsed on the speely.”*

    • igor47 14 hours ago
      Man, I'm a huge fan of Anathem (and Stephenson in general) but this short excerpt really reminded me of https://xkcd.com/483/
      • ashton314 30 minutes ago
        Oh for sure. To be fair, that excerpt I posted is probably the worst in the entire book since Sammann is explaining something using a bunch of ITA ~~jargon~~ bulshytt and it’s meant to be incomprehensible to even the POV character Erasmas.
      • renewiltord 6 hours ago
        Spoilers for Anathem and His Dark Materials below

        Xkcd 483 is directly referencing Anathem, so that should be unsurprising, but I think in both His Dark Materials (e.g. anbaric power) and in Anathem it is explained in-universe. The isomorphism between that world and our world is explicitly relevant to the plot. It’s the obvious foreshadowing for what’s about to happen.

        The worlds are similar with different names because they’re parallel universes about to collide.

        • CamperBob2 4 hours ago
          I wonder how effective that might be as a language-learning tool. Imagine a popular novel in the US market, maybe 80000-100000 words long but whose vocabulary consists of only a few thousand unique words. The first few pages are in English, but as you progress through the book, more and more of the words appear in Chinese or German or whatever the target language is. By the end of the book you are reading the second language, having absorbed it more or less through osmosis.

          Someone who reads A Clockwork Orange will unavoidably pick up a few words of vaguely-Russian extraction by the end of it, so maybe it's possible to take advantage of that. The main problem I can see is that the new language's sentence grammar will also have to be blended in, and that won't go as smoothly.

  • arjie 13 hours ago
    The return of the Web of Trust, I suppose. Interesting that if you look at the way Linux is developed (people have trees that they try to get into the inner circle maintainers who then submit their stuff to Linus's tree) vs. this, it's sort of like path compression in a union-find data structure. Rather than validating a specific piece of code, you validate the person themselves.

    Another thing that is amusing is that Sam Altman invented this whole human validation device (Worldcoin) but it can't actually serve a useful purpose here because it's not enough to say you are who you are. You need someone to say you're a worthwhile person to listen to.
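
    For readers unfamiliar with the analogy: path compression in a union-find means that after walking a chain of parents once, every node on the path is re-pointed directly at the root, so later lookups skip the chain. A standard textbook sketch (illustrating the analogy, not anything Vouch implements):

```python
# Textbook union-find with path compression.
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        # Walk up to the root.
        root = x
        while self.parent.get(root, root) != root:
            root = self.parent[root]
        # Path compression: point every node on the path at the root.
        while self.parent.get(x, x) != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

uf = UnionFind()
uf.union("patch-author", "subsystem-maintainer")
uf.union("subsystem-maintainer", "linus")
uf.find("patch-author")  # after this, the author points straight at the root
```

    The analogy: instead of re-validating each patch up the whole maintainer chain, vouching collapses the chain so the person is trusted directly.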

  • bmitch3020 9 hours ago
    I could see this becoming useful for denouncing contributors: "This user is malicious, a troll, contributes LLM slop, etc." It could become a distributed block list and discourage some of the bad behavior I've been seeing on GitHub, assuming the denounce entries are reviewed rather than automatically accepted.

    But using this to vouch for others as a way to indicate trust is going to be dangerous. Accounts can be compromised, people make mistakes, and different people have different levels of trust.

    I'd like to see more attention placed on verifying released content. That verification should be a combination of code scans for vulnerabilities, detection of changes in capabilities, and reproducible builds of the generated artifacts. That would not only detect bad contributions, but also bad maintainers.

  • vips7L 4 hours ago
    Love seeing some nushell usage!
  • mehdibl 2 hours ago
    Why in nushell? Not in go?

    But I like the idea and principle. OSS needs this, and it's treated very lightly.

    • tristan957 20 minutes ago
      Mitchell has really enjoyed Nu essentially. If it is implemented in a shell script, it probably also means that general shell tooling can work with the format.
  • canada_dry 19 hours ago
    An interesting approach to the worsening signal-to-noise ratio OSS projects are experiencing.

    However, it's not hard to envision a future where the exact opposite occurs: a few key AI tools/models become specialized and better at coding/testing on various platforms than humans, and they ignore or de-prioritize our input.

  • kfogel 2 hours ago
    Can't believe they didn't call it VouchDB.
  • cedws 19 hours ago
    I think this project is motivated by the same concern I have: that open source (particularly on GitHub) is going to devolve into a slop fest as LLMs lower the barrier to entry. For every principled developer who takes personal responsibility for what they ship, regardless of whether it was LLM-generated, there are 10 others who don't care and will pollute the public domain with broken, low-quality projects. In other words, I foresee open source devolving from a high-trust society to a low-trust one.
  • readitalready 30 minutes ago
    Is this social credit?
  • amadeuspagel 11 hours ago
    Why isn't the link directly to the github repository[1]?

    [1]: https://github.com/mitchellh/vouch

  • rorylaitila 3 hours ago
    I don't know if this is the right solution, but I appreciate the direction. It's clear that AI slop is trading on people's good names and network reputation. Poisoning the well. The dead internet is here. In multiple domains people are looking for a solution to "are you someone/something worthy of my emotional investment." I don't think code can be held to be fully AI-free, but we need a way to check that they are empathy-full.
    • the_biot 2 hours ago
      That's what I thought of right away as well. We may end up with a blacklist of "known AI slop peddlers".
  • sunir 3 hours ago
    Reminds me fondly of advogato.
  • sanufar 19 hours ago
    Makes sense, it feels like this just codifies a lot of implicit standards wrt OSS contribution which is great to see. I do wonder if we'll ever see a tangible "reputation" metric used for contribs, or if it'd even be useful at all. Seems like the core tension now is just the ease of pumping out slop vs the responsibility of ownership of code/consideration for project maintainers.
  • pyrolistical 17 hours ago
    Another way to solve this is how Linux organizes. Tree structure where lower branches vet patches and forward them up when ready
  • jemfinch 16 hours ago
    Is this the return of Advogato?
  • whalesalad 15 hours ago
    We got social credit on GitHub before GTA 6.
  • aatd86 4 hours ago
    Does it overlap with Contributor License Agreements?
  • danilocesar 2 hours ago
    Wait until he finds out about GPG signing parties in the early 2000s.
  • quotemstr 4 hours ago
    Fortunately, as long as software is open sourced, forking will remain a viable way to escape overzealous gatekeeping.
  • baq 2 hours ago
    Central karma database next, please. Vouch = upvote, denounce = downvote
  • treeshateorcs 3 hours ago
    this wouldn't have helped against the xz attack
    • jen20 2 hours ago
      It's not intended to, though? It's supposed to address the issue of low-effort slop wasting maintainer time, not a well-planned attack.
  • IshKebab 4 hours ago
    > Who and how someone is vouched or denounced is left entirely up to the project integrating the system.

    Feels like making a messaging app but "how messages are delivered and to whom is left to the user to implement".

    I think "who and how someone is vouched" is like 99.99% of the problem and they haven't tried to solve it so it's hard to see how much value there is here. (And tbh I doubt you really can solve this problem in a way that doesn't suck.)

    • skeeter2020 4 hours ago
      Agree! Real people are not static sets of characteristics, and without an immutable real-world identity this is even harder. It feels like we've just moved the problem from "evaluate code one time" to "continually evaluate a persona that could change owners"
    • vscode-rest 4 hours ago
      [dead]
  • a-dub 56 minutes ago
    this highlights the saddest thing about this whole generative ai thing. beforehand, there was opportunity to learn, deliver and prove oneself outside of classical social organization. now that's all going to go away and everyone is going to fall back on credentials and social standing. what an incredible shame for social mobility and those who for one reason or another don't fit in with traditional structures.
    • patcon 4 minutes ago
      I feel this is a bit too pessimistic. For example, people can make tutorials that auto-certify in vouch. Or others can write agent skills that share etiquette, which agents must demonstrate usage of before PRs can be created.

      Yes, there's room for deception, but this is mostly about superhuman skills and newcomer ignorance and a new eternal September that we'll surely figure out

    • senko 10 minutes ago
      The origin of the problems with low-quality drive-by requests is github's social nature[0]. AI doesn't help, but it's not the cause.

      I've seen my share of zero-effort drive-by "contributions" so people can pad their GH profile, long before AI, on tiny obscure projects I have published there: larger and more prominent projects have always been spammed.

      If anything, the AI-enabled flood will force the reckoning that has been a long time coming.

      [0] https://news.ycombinator.com/item?id=46731646

    • boltzmann-brain 34 minutes ago
      Vouch is a good quick fix, but it has some properties that can lead to collapsed states, discussed in the article linked here: https://news.ycombinator.com/item?id=46938811
      • a-dub 26 minutes ago
        it's also going to kill the open web. nobody is going to want to share their ideas or code publicly anymore. with the natural barriers gone, the incentives to share will go to zero. everything will happen behind closed doors.
        • tolerance 9 minutes ago
          You could argue that this could increase output to the open web: outsiders still need a place to clout chase.
          • boltzmann-brain 6 minutes ago
            GitHub has never been a good method of clout chasing. in decades of being in this industry, I've seen < 1% of potential employers care about FLOSS contributions, as long as you have some stuff on your GH.
    • potsandpans 39 minutes ago
      > that's all going to go away and everyone is going to fall back on credentials and social standing.

      Only if you allow people like this to normalize it.

    • cyanydeez 14 minutes ago
      arguably, in the years 2015-2020, we should have gone back to social standing.
    • yencabulator 22 minutes ago
      .. all revolving around a proprietary Microsoft service.

      Support Microsoft or be socially shunned?

      • mitchellh 15 minutes ago
        Vouch is forge-agnostic. See the 2nd paragraph in the README:

        > The implementation is generic and can be used by any project on any code forge, but we provide GitHub integration out of the box via GitHub actions and the CLI.

        And then see the trust format, which allows for a platform tag. There isn't even a default-GitHub approach; the GitHub actions just default to GitHub via the `--default-platform` flag (which makes sense because they're being invoked ON GITHUB).

        • yencabulator 8 minutes ago
          Define "platform".

          So I can choose from github, gitlab or maybe codeberg? What about self-hosters, with project-specific forges? What about the fact that I have an account on multiple forges, that are all me?

          This seems to be overly biased toward centralized services, which means it's just serving to further re-enforce Microsoft's dominance.

          • mitchellh 3 minutes ago
            It's a text string; platform can be anything you want, then use the vouch CLI (or parse it yourself) to do whatever you want. We don't do identity mapping, because cross-forge projects are rare, maintaining that would centralize the system, and it's not what we're trying to do. The whole thing is explicitly decentralized, with tiny, community-specific networks that you build up.
    • bicx 42 minutes ago
      I guess you could say the same about a lot of craft- or skill-based professions that ultimately got heavily automated.
    • siva7 14 minutes ago
      It also marks the end of the open source movement as the value of source code has lost any meaning with vibe coding and ai.
  • skeeter2020 4 hours ago
    Doesn't this just shift the same hard problem from code to people? It may seem easier to assess the "quality" of a person, but I think there are all sorts of complex social dynamics at play, plus far more change over time. Leave it to us nerds to try and solve a human problem with a technical solution...
    • mjr00 4 hours ago
      > Leave it to us nerds to try and solve a human problem with a technical solution...

      Honestly, my view is that this is a technical solution for a cultural problem. Particularly in the last ~10 years, open source has really been pushed into a "corporate dress rehearsal" culture. All communication is expected to be highly professional. Talk to everyone who opens an issue or PR with the respect you would give a coworker. Say nothing that might offend anyone anywhere; keep it PG-13. Even Linus had to pull back on his famously vitriolic responses to shitty code in PRs.

      Being open and inclusive is great, but bad actors have really exploited this. The proper response to an obviously AI-generated slop PR should be "fuck off", closing the PR, and banning them from the repo. But maintainers are uncomfortable with doing this directly since it violates the corporate dress rehearsal kayfabe, so vouch is a roundabout way of accomplishing this.

      • zbentley 3 hours ago
        What on earth makes you think that denouncing a bot PR with stronger language would deter it? The bot does not and cannot care.

        If that worked, then there would be an epidemic of phone scammers or email phishers having epiphanies and changing careers when their victims reply with (well deserved) angry screeds.

        • mjr00 2 hours ago
          I didn't mean the "fuck off" part to be quite verbatim... this ghostty PR[0] is a good example of how this stuff should be handled. Notably: there's no attempt to review or provide feedback--it's instantly recognized as a slop PR--and it's an instant ban from repo.

          This is the level of response these PRs deserve. What people shouldn't be doing is treating these as good-faith requests and trying to provide feedback or asking them to refactor, like they're mentoring a junior dev. It'll just fall on deaf ears.

          [0] https://github.com/ghostty-org/ghostty/pull/10588

          • zozbot234 2 hours ago
            Sure, but that pull request is blatantly unreviewable because of how it bundles dozens of entirely unrelated commits together. Just say that and move on: it only takes a one-line comment and it informs potential contributors about what to avoid if any of them is lurking the repo.
            • jack_pp 2 hours ago
              One problem with giving any feedback is that it can automatically be used by an agent to make another PR.
              • zozbot234 1 hour ago
                If they immediately make another low-quality PR that's when you ban them because they're clearly behaving like a bad actor. But providing even trivial, boilerplate feedback like that is an easy way of drawing a bright line for contributors: you're not going to review contributions that are blatantly low-quality, and that's why they must refrain from trying to post raw AI slop.
            • mjr00 1 hour ago
              Sounds like we're largely saying the same thing. Open source maintainers should feel empowered to say "nope, this is slop, not reading, bye" and ban you from the repo, without worrying if that seems unprofessional.
              • zozbot234 1 hour ago
                If you explicitly say "this is unreviewable junk, kthxbye" there's nothing unprofessional about it. But just blaming "AI slop" runs into the obvious issue that most people may be quite unaware that AI will generate unreviewable junk by default, unless it's being very carefully directed by an expert user.
      • verdverm 3 hours ago
        > Particularly in the last ~10 years ...

        This is maturation, open source being professional is a good sign for the future

      • zozbot234 3 hours ago
        I disagree. The problem with AI slop is not so much that it's from AI, but that it's pretty much always completely unreadable and unmaintainable code. So just tell the contributor that their work is not up to standard, and if they persist they will get banned from contributing further. It's their job to refactor the contribution so that it's as easy as possible to review, and if AI is not up to the task this will obviously require human effort.
        • mjr00 3 hours ago
          You're giving way too much credit to the people spamming these slop PRs. These are not good faith contributions by people trying to help. They are people trying to get pull requests merged for selfish reasons, whether that's a free shirt or something to put on their resume. Even on the first page of closed ghostty PRs I was able to find some prime slop[0]. It is a huge waste of time for a maintainer to nicely tell people like this they need to refactor. They're not going to listen.

          edit: and just to be totally clear, this isn't an anti-AI statement. You can still make valid, even good, PRs with AI. Mitchell just posted about using AI himself recently[1]. This is about AI making it easy for people to spam low-quality slop in what is essentially a DoS attack on maintainers' attention.

          [0] https://github.com/ghostty-org/ghostty/pull/10588

          [1] https://mitchellh.com/writing/my-ai-adoption-journey

          [-]
          • zozbot234 3 hours ago
            If you can immediately tell "this is just AI slop" that's all the review and "attention" you need; you can close the PR and append a boilerplate message that tells the contributor what to do if they want to turn this into a productive contribution. Whether they're "good faith contributors trying to help" or not is immaterial if this is their first interaction. If they don't get the point and spam the repo again then sure, treat them as bad actors.
            [-]
            • michaelt 3 hours ago
              The thing is, the person will use their AI to respond to your boilerplate.

              That means you, like John Henry, are competing against a machine at the thing that machine was designed to do.

        • bpavuk 3 hours ago
          ...and waste valuable time reviewing AI slop? It looks surprisingly plausible but never integrates with the bigger picture.
  • WhereIsTheTruth 2 hours ago
    Replacing merit with social signaling.. ..sigh..

    The enshittification of GitHub continues

  • rvz 3 hours ago
    This makes sense for large-scale and widely used projects such as Ghostty.

    It also addresses the issue of unchecked or seemingly plausible slop PRs from outside contributors getting merged in easily. By default, they are all untrusted.

    This social issue has been made worse by vibe-coded PRs, and untrusted outside contributors should instead earn 'vouched' access from the core maintainers rather than being allowed a wild west of slop PRs.

    A great deal.

  • BiteCode_dev 4 hours ago
    Illegal in Europe. You are not allowed to keep a blacklist of people, with the exception of certain criminal situations or addiction.
    [-]
    • jen20 2 hours ago
      Can you cite the law that says you may not do this?

      There are obvious cases in Europe (well, in the EU, if that's what you mean) where there need not be criminal behaviour to maintain a list of people that no pub landlord in a town will allow in, for example.

      [-]
      • BiteCode_dev 1 hour ago
        Under the EU’s GDPR, any processing of personal data (name, contact details, identifiers, etc.) generally requires a legal basis (e.g., consent, legitimate interest, contractual necessity), a clear purpose, data minimisation, and appropriate protection. Doing so without a lawful basis is unlawful.

        It is not a cookie banner law. The american seems to keep forgetting that it's about personal data, consent, and the ability to take it down. The sharing of said data is particularly restricted.

        And of course, this applies to blacklists, including those for fraud.

        Regulators have enforced this in practice. In the Netherlands, for example, the tax authority was fined for operating a “fraud blacklist” without a statutory basis, i.e., illegal processing under the GDPR: https://www.autoriteitpersoonsgegevens.nl/en/current/tax-adm...

        The fact is that many such lists exist without being punished, your landlord list for example. That doesn't make them legal; they just haven't been shut down yet.

        There is no legal basis for it unless, again, people have committed an illegal act (such as destroying pub property). It's also quite difficult to get people to consent to being on a blacklist. And once they are on one, they can ask for their data to be taken down, which you cannot refuse.

        [-]
        • jen20 14 minutes ago
          > The american seems to keep forgetting that it's about personal data, consent, and the ability to take it down.

          I am European, nice try though.

          It is very unclear that this example falls foul of the GDPR. By that logic, Git _itself_ would be in breach, and no reasonable court will find that to be the case.

  • archagon 5 hours ago
    However good (or bad) this idea may be, you are shooting yourself in the foot by announcing it on Twitter. Half the devs I know won’t touch that site with a ten foot pole.
  • returnInfinity 16 hours ago
    Easy for the Koreans to game this.
  • rcakebread 3 hours ago
    Who trusts people who still use X?
    [-]
    • jimmaswell 3 hours ago
      I still prefer it to Wayland for various reasons, and I don't think Wayland would work properly on my mid 2010 Macbook anyway.
      [-]
  • mijoharas 58 minutes ago
    Oh, and one other thing I was curious about: did Mitchell comment on why he wrote it in Nushell? I've not really messed around with that myself yet.

    Would people recommend it? I feel like I have such huge inertia for changing shells at this point that I've rarely seriously considered it.

    [-]
    • yencabulator 15 minutes ago
      Nushell has great sugar coating, but it mishandles basics: it will eat errors and get into impossible code paths on Ctrl-C. I have given up on it.