• ChrisArchitect 17 hours ago
  • kepano 17 hours ago
    I've been saying this since 2023

    > If your data is stored in a database that a company can freely read and access (i.e. not end-to-end encrypted), the company will eventually update their ToS so they can use your data for AI training — the incentives are too strong to resist

    https://news.ycombinator.com/item?id=37124188

    [-]
    • mememememememo 17 hours ago
      Yes I think you are right. Even a super ethical company can be taken over. There may be exceptions but it is more luck. I work for a SP500 that absolutely won't dont this and locks down prod access so a rogue staff can't do it. But if Larry or Zuck or Bezos buys them out, who knows.
      [-]
      • Forgeties79 15 hours ago
        I worry about a post-Gabe valve for this reason.
      • miohtama 16 hours ago
        Microsoft would never do this

        (-:

        [-]
        • chistev 16 hours ago
          I don't like when people make sarcastic remarks and sign off in a way that indicates it was sarcasm. It kills it for me. Lol.

          Like using that /s or using that smiling emoji sign you used.

          A good joke would land even if some other people miss it because of the text format.

          "Microsoft would never do this" would have landed for me.

          [-]
          • munk-a 16 hours ago
            I'd rather the symbol be there and occasionally see this discussion happen then the symbol be omitted and occasionally have the discussion where we try and figure out if the person was serious. When talking in person there are all sorts of visual and vocal cues and the speaker has cues in response to confirm the sarcasm was received. There are two parties that can correct that misunderstanding and have well established tools to do so.

            /s is basically the internet-enabled equivalent of a sarcasm tone or a wink - it is much more difficult to detect genuine subtle sarcasm on the internet because of the absence of common communication tools. /s is also a valuable accessibility tool for those that might have difficulty with social cues and subtlety so, for all my autistic friends, I'm happy to defend it.

          • dwringer 16 hours ago
            I believe Poe's law makes it basically inescapable and HN is no exception to it
          • abustamam 15 hours ago
            I think the sarcasm indicator is useful especially for some neurodivergent folks who may not pick up on social cues well. And the sacram indicator does not in any way detract from the joke.
            [-]
            • fragmede 14 hours ago
              Yeah it does. If you have to explain the joke, it makes it not funny. In the real world, people don't have explicit sarcasm markers, you have up deduce it. As a neurodivergent person, I reflexively downvote on /s because coddling people isn't going to help them grow or deal with the real world.
              [-]
              • abustamam 11 hours ago
                In the real world we have things like intonation, facial expressions, body language, and other indicators to denote sarcasm.

                On the internet it is very possible and often plausible that someone can very much believe what may appear to a reasonable person to be sarcasm. Having a crutch online does not equate to an equivalent crutch offline.

                Anecdotally, neurodivergent folks I know prefer, and some even require, a sarcasm indicator online.

          • encrypted_bird 16 hours ago
            Tone does not translate well through text.

            If you can tell sarcasm from text, that doesn't mean everyone can.

            For my part, the smiley face was much-appreciated as I've seen people who genuinely would think that with a straight face.

            --- EDIT: Spelling of a word

            [-]
            • munk-a 16 hours ago
              I'm sure had you omitted it - instead of that reply there would have been a series of comments talking about how Microsoft actually has a track record of doing things like this. It's impossible to please everyone on the internet but I very much appreciate when people lean towards making their communication clearer.
          • gchamonlive 16 hours ago
            Oh seems like we've got a joke connoisseur in our midst, ah yes very distinguished

            :)

          • darthoctopus 16 hours ago
            subtlety is dead on the internet of the lowest common denominator, and that enabled by AI assistance is very low indeed
          • sellmesoap 16 hours ago
            In Soviet Russia, Microsoft is the shit!
            [-]
            • abustamam 15 hours ago
              As opposed to other places where Microsoft is just shit? :)
              [-]
              • sellmesoap 14 hours ago
                It's my favorite slow burn, 'the shit' is a good thing, but everything is backwards in Soviet Russia...
                [-]
                • abustamam 21 minutes ago
                  That's what I understood, just wanted to make sure I wasn't reading too deeply into it :)
    • random3 16 hours ago
      The “do it first, apologize later” will be the general principle with anything. It’s going to be hard and futile to prove even if they don’t do it through ToS first. Amazon has one of the largest corporate training sets out there:)
    • slowhadoken 16 hours ago
      I’m still concerned about MS using the code I write on my laptop to train AI. Tinfoil hat wearing Linux users are starting to make a lot of sense to me.
      [-]
      • qaadika 16 hours ago
        It's been interesting the past year or so watching myself turn more and more into one of the tin-foil wearing linux users. I'm not sure how it happened, but self-hosting became more and more alluring and hyperfocusing on taking as much data as I can offline became worth spending entire weekends on.

        I didn't become paranoid, everybody else didn't!

      • DougN7 16 hours ago
        I thought that’s more what the CoPilot change is really about - not your repo, but all the code CoPilot read while it is offering helpful completions, etc - so literally the code on your laptop. I cancelled my account.
      • b112 16 hours ago
        It's not tinfoil, it's aluminum foil. I.. I mean, I heard it's that.
    • ekjhgkejhgk 15 hours ago
      You're right, of course, and I find it frustrating that people are so thick as to not see your claim as obvious.

      Stallman is always right.

      [-]
      • itsdesmond 15 hours ago
        Back in 2003 he was advocating for legalization of child sexual abuse material. In 2006 he said he was skeptical of the harm caused by “voluntary pedophilia”, a statement that presupposes that children can consent to sex with adults.

        So I dunno bout that.

      • jamiek88 15 hours ago
        About technology.

        About communication with other humans he’s pretty much always wrong.

        Imagine we’d had a better communicator who wasn’t a gross toe nail picking troll fronting free software? It shouldn’t matter. Only the ideas should matter . But the reality is different.

        [-]
        • itsdesmond 15 hours ago
          He argued against EU proposals for ISPs to filter CSAM on the basis of protecting free expression. Not always right about technology, either.
          [-]
          • tryauuum 15 hours ago
            Mass scale internet censorship in Russia also started with the premise of "protecting the children"

            When you put in law that ISPs should adhere to some government-provided blocklist, this is already a game over. No matter how sane your government is. The government in 10 years might be vastly different, and the ability to control the ISPs is too alluring to not abuse

            I'd rather live in a world where you could find words like "kill all russians", or child porn, or blatant propaganda than to live with the government censorship. I lived in Russia and the experience was nightmare. Who knows, maybe if the government didn't have the tools they had then the independent media would still be reachable by an average russian, the pictures of the pointless massacre would be public and the war would be over in a week

            [-]
            • itsdesmond 14 hours ago
              Aight but the guy who thinks kid fucking is legit shows up to argue against anti-CSAM legislation and folks are like “Gosh, what a technologist!” Give me a break. I wonder which Google Alert topic he saw this on first.
      • worik 15 hours ago
        > Stallman is always right.

        Not really. Almost always right....

    • hugodan 16 hours ago
      and it is not end-to-end encrypted if you don't own the keys, avoid bullshit
    • moralestapia 16 hours ago
      Thank you for your service. We really need more "canaries in the mine" giving out early warnings of things that might not be evident on a first glance.

      Any takes on what 2029 will look like? (related to this topic, ofc)

      [-]
      • metaphor 16 hours ago
        [dead]
      • chistev 16 hours ago
        Now this is sarcasm. Lol
        [-]
        • cinntaile 16 hours ago
          It seems like you do need the smileys or the /s to understand when something is and isn't sarcasm.
    • cj 17 hours ago
      Edit: Okay, sounds like you guys are pissed to the point where it seems like the pro tip here is to stop using GitHub.

      Pro tip: sign up for the business/enterprise version when reasonable in price.

      I do this with Google Workspace. You can also do it with GitHub.

      (Google doesn’t train on Workspace, Github doesn’t train on business customers, etc)

      [-]
      • worble 17 hours ago
        Pro tip: You could instead spend that money to spin up a forgejo instance for as little as $2 a month https://www.pikapods.com/apps#development (not affiliated, just a happy customer)

        Please don't reward these companies with money.

        [-]
        • hirako2000 16 hours ago
          I did exactly that. Containerized it and Forgejo simply became a small instance part of the fleet. UI is much snappier then GitHub. And more importantly: zero outages.
        • encrypted_bird 16 hours ago
          Or, alternatively, self-host a gitea instance!
          [-]
          • kriops 15 hours ago
            No. Money-grab incoming. Use forgejo.
            [-]
            • encrypted_bird 14 hours ago
              Huh? Care to elaborate how Gitea is an inevitable cashgrab? Sure, it's not strictly copyleft, but it is licensed with the MIT License, and that is also the most popular license on GitHub.
      • thot_experiment 17 hours ago
        Probably don't reward extortion with money.
        [-]
        • kelnos 15 hours ago
          You don't have to use their free service if you don't like its terms. "Extortion" is a bit of an exaggeration here.

          Yes, I know, it's dicey when people get used to a nice, friendly platform, and the platform gains lots of users, and then at some point (or several points), the terms start getting worse, and people feel misled and betrayed.

          I get that. But this is a corporation. Hell, this is Microsoft. It's hilarious how many people think they've actually changed since their antitrust judgment in the 90s. I guess a lot of folks here are too young to remember it, even.

          Companies exist to make money. If they are giving you something for free, they are either a) getting something else out of it already, or b) giving it to you for free now and looking for ways to get their own value out of it later. I don't mean that in some sort of cynical, "fuck the world" sense; that's just reality, and that's fine, for the most part.

          If you don't like this, don't use free services provided by corporations. Host your own. Yes, I know it can cost money. Yes, I know it's more work. But that's life. TANSTAAFL.

          I've had a VPS running for a couple decades on a small provider. These days it costs me a little under $200/year. Much cheaper options exist. I run a web server, gitea instance, matrix homeserver, and a slew of other things on it. It requires very little maintenance because I just run Debian stable on it, keep up with security updates, but otherwise leave it alone. It backs up the important stuff to S3 using duplicity, but -- knock on wood -- I've never had a catastrophic failure that required a restore in the ~20 years its been running.

          [-]
          • thot_experiment 14 hours ago
            Ehhh sort of, I see what you're saying about it maybe not meeting the technical definition of extortion but I think you're missing the forest for the trees a little bit. The whole point is that when a company tries to force you to pay them through manipulative practices, you should not do that. That when companies manipulate you even if it makes economic sense to pay you shouldn't. That's fully compatible with not using the free service if you don't like the terms.

            Obviously the root problem is the incentive structures created by a system that relies on scarcity to assign value to things being applied to things that effectively cost zero to duplicate. Obviously companies are not my friends, I self host everything, heck I even have a local copy of my VPS, it's on solar, I'M fine. I don't expect Github to do good things and make good choices, but that doesn't mean I can't be mad about it when they do things I don't like. Also I live in the real world and have to deal with society and there would be friction I create for myself when I try to exist in tech and refuse to use github, might be a worthwhile trade but it IS a trade.

      • Lio 16 hours ago
        An enterprise licence won't save you, Google, Microsoft, et al have happily been breaking copyright laws for years.

        If the publishing industry can't win a case against the AI firms then you don't stand a chance when you finally find out they've been training on your private data the whole time.

        They can tell you one thing and do the opposite and there's effectively nothing you can do about it. You'd be a fool to trust them.

      • saghm 16 hours ago
        At the risk of stating the obvious, I don't think it makes sense to reward them with money for trying to pull a bait-and-switch on this.
      • margalabargala 17 hours ago
        > Google doesn’t train on Workspace, Github doesn’t train on business customers, etc

        ...yet

        [-]
        • arcanemachiner 16 hours ago
          Or, they don't train on it, but who's to say they're not harvesting analytics which may or may or not code samples, prompt data, etc. Which are then laundered through some sort of anonymization pipeline, to the point where they can argue that it no longer qualifies as your data, and can be freely trained upon.

          Conspiratorial thinking? Sure. But if you've been around for a couple decades and seen the games these people play (and you aren't a complete sucker), then you'll at least be aware that there's at least slight possibility that these companies can get things from their customers that they (the customers) did not knowingly agree to.

          [-]
          • schubidubiduba 16 hours ago
            Nothing conspirational about it. Getting data that their users or customers don't actually intend to give is the bread and butter of these companies. And they will do what they can to get it.
        • bilbo0s 17 hours ago
          This.

          The belief of business users that this will remain true is grounded more in hope than in cold, dispassionate, business based decision making.

          If it's not life or death, encrypt every byte of data you send to the cloud.

          If it is life or death, you should probably not be letting that data traverse the open internet in any form.

      • groby_b 16 hours ago
        Github's enterprise version "starts at" $21.99/seat, and requires you to "contact sales".

        And I don't see any mention that that exempts you from being trained on. (Yes, the blog says you're still covered, but at that price I'd like to see a contract saying that)

      • throwuxiytayq 17 hours ago
        It's not a pro tip if it only fucks you over slightly later. How's the weather in Stockholm?
  • martinwoodward 17 hours ago
    No we won’t. Details here https://github.blog/news-insights/company-news/updates-to-gi...

    For users of Free, Pro and Pro+ Copilot, if you don’t opt out then we will start collecting usage data of Copilot for use in model training.

    If you are a subscriber for Business or Pro we do not train on usage.

    The blog post covers more details but we do not train on private repo data at rest, just interaction data with Copilot. If you don’t use Copilot this will not affect you. However you can still opt out now if you wish and that preference will be retained if you decide to start using Copilot in the future.

    Hope that helps.

    [-]
    • qaadika 16 hours ago
      > https://github.blog/news-insights/company-news/updates-to-gi...

      > Should you decide to participate in this program, the interaction data we may collect and leverage includes:

      > - Outputs accepted or modified by you

      > - Inputs sent to GitHub Copilot, including code snippets shown to the model

      > - Code context surrounding your cursor position

      > - Comments and documentation you write

      > - File names, repository structure, and navigation patterns

      > - Interactions with Copilot features (chat, inline suggestions, etc.)

      > - Your feedback on suggestions (thumbs up/down ratings)

      "should you decide to participate.."??? You didn't ask if I wanted to participate. You asked if I didn't.

      I didn't get to decide to participate. I had to decide not to. You made me do work to prevent my privacy from being violated.

      [-]
      • vscode-rest 16 hours ago
        Do you use copilot?
        [-]
        • qaadika 16 hours ago
          First response: It doesn't matter if I use copilot right now. It matters if I will ever use copilot in the future. Opting-out is future-focused. What if I said "no, I don't use copilot, so I don't need to opt out", then a year from now start using copilot, completely forgetting about this whole debacle? That's the evil of opt-out. My inaction only benefits them, never me.

          Second response: Maybe? I press the little button to auto-generate commit titles and messages that showed up in my Github Desktop. Does that count?

          I'm asking sincerely. I don't "use Copilot" as in using it in VS Code or while writing code, so I'm honestly not sure if I am.

          [-]
        • cobertos 15 hours ago
          Do we get a choice? I did not ever explicitly enable it yet GitHub's web UI by default uses copilot to autofill my web-based edit commit messages. It also shows up on the home screen by default now.

          I'm pretty sure if you use the site you're using GitHub Copilot in some way, so your question becomes irrelevant.

          [-]
    • jffry 17 hours ago
      It's unnecessarily splitting hairs.

      > interaction data—specifically inputs, outputs, code snippets, and associated context [...] will be used to train and improve our AI models

      So using Copilot in a private repo, where lots of that repo will be used as context for Copilot, means GitHub will be using your private repo as training data when they were not before.

      [-]
      • tptacek 16 hours ago
        No it isn't. Most people don't use Copilot, so this term change won't effect most people. You can reasonably be unhappy about it anyways (or unreasonably still be using Copilot in 2026), but it's still ultra-useful information for them to add to the discussion.
        [-]
        • millisecond 16 hours ago
          Next step they'll rebrand search as "Copilot Search" or auto enable pull-request AI reviews (unless you hear about it and turn each off) and we'll all be "users".

          Boiling the frog with a Venn diagram.

        • _pdp_ 16 hours ago
          Copilot, or "chat with Copilot" is a button that is available on every page right next to the search bar.

          I don't have to be a Copilot user to click on it.

          This change is malicious, and it doesn't only affect Copilot users. It affects everyone on the platform!

          [-]
          • akerl_ 15 hours ago
            Again, this collects usage data. If you click the button by accident and don’t interact, they get no data.
            [-]
            • _pdp_ 15 hours ago
              So? This feature is available to everyone and you have zero idea how many people actually use it.

              If I go to one of your GPL projects and I ask a simple question to find out what this project is about, you will be perfectly "ok" that this interaction (that includes most of the code that is required to answer my dumb the question) will be used for training?

              This is not ok.

              [-]
              • tptacek 13 hours ago
                Nobody in this subthread is saying if it's OK or not. We're just saying that it's very useful to know that this is what they're specifically collecting. Jiminy.
        • kelvinjps10 16 hours ago
          It's automatically enabled for example the other day I did a commit directly on GitHub and AI generated commit popup it had to read the code to work
        • pistoriusp 16 hours ago
          I don't use copilot, but somehow was subscribed... I probably clicked something long ago and it just remained active.
          [-]
          • input_sh 16 hours ago
            They "gift you" a free standard plan if you have above a certain (non-transparent) level of stars, I don't think you can even disable your "subscription" if you get it for free.
          • tptacek 16 hours ago
            They're only training on interactions with Copilot, not with the full contents of repos that happen to be subscribed to Copilot.
        • themafia 16 hours ago
          > Most people don't use Copilot

          So why do any of this at all? You're putting a large part of your customer base on edge in order to improve a service that "most people don't use." The erosion of trust this brings doesn't seem like a worthwhile or prudent sacrifice.

          [-]
          • tptacek 16 hours ago
            You're asking me to explain Microsoft AI strategy? Your guess is as good as mine.
        • srik 16 hours ago
          Make it opt-in then.
      • pverheggen 15 hours ago
        Isn't this pretty standard, using your interaction data for training and making it opt-out? Claude Code, Codex, Antigravity etc. all do the same. Private repo doesn't make a difference as they have a local copy to work from.
    • munk-a 17 hours ago
      The initial title and your reply are both too broad to be fully accurate. By April 24th Github will train on private repos (assuming a flag isn't set) but this change is limited to just non-Business/Pro users. So a number of private repos will be effected but it won't automatically affect all private repos (so my panic check on our corporate account wasn't necessary yet).

      I am not certain if you're a spokesperson for github - but it's good to be careful in your language. Instead of "No we won't" a lead like "That isn't entirely accurate" would be more suitable. In the end both the original post title and your reply have ended up being misleading.

      [-]
      • tadfisher 16 hours ago
        > By April 24th Github will train on private repos

        This statement itself is misleading. Also, GitHub probably should have seen this coming.

        They are not doing what I initially thought, which is slurping up your private repo, wholesale, into its training set. You don't have to opt out of anything to prevent that.

        They are slurping any context and input containing code from your private repo which is provided to them as part of using Copilot.

        So, in addition to the opt-out setting, there is an even easier way to avoid providing them your private repository data to train AI models, and that's by continuing to not use Copilot.

    • andoando 17 hours ago
      Thats still pretty bad. Its no longer private if all your code goes through LLM training set and is resurfable to everyone publicly.

      Why would I ever use copilot on any code Id want to be kept private? Labling it a private repo and having a tiny clause in the TOS saying we can take your code and show it to everybody is just an upright lie

      [-]
      • NewsaHackO 16 hours ago
        I mean, you shouldn't send data to any SaaS LLM for code you want to be private, unless you have had them sign some sort of contract saying they will not train on your use. In fact, it is probably never a good idea to send anything you want to be private off premises unencrypted.
    • layer8 16 hours ago
      In the EU, opt-out is not a legally valid way to obtain the necessary consent. How do you plan to handle this?
      [-]
      • booi 16 hours ago
        probably by paying the fine and doing it anyway
        [-]
      • x0x0 16 hours ago
        For personal data. I don't believe you can reasonably claim code is personal data any more than a hammer is your personal data.
        [-]
        • layer8 15 hours ago
          Every Git commit is likely to contain personal data, in the form of the author’s name and email address usually present in a commit’s metadata. Furthermore, unless GitHub is prohibiting users from submitting personal data via their ToS (which, given the above, would be impractical), the only thing that matters is whether the data in fact contains personal data or not. GitHub cannot just assume that it doesn’t. And processing that data for new purposes requires user consent.
          [-]
          • fph 15 hours ago
            By that logic, you can't use any user input to train an LLM, because what if they decide to write their own name.
            [-]
            • layer8 15 hours ago
              Indeed, you can’t unless you have appropriate consent. Which isn’t difficult to obtain if you have clearly defined purposes, but you have to do it.
          • x0x0 14 hours ago
            Since commits aren't code, that's no problem.

            The idea that because any piece of code could possibly contain some personal data -- while 99.99% of it doesn't -- that therefore the entirety is PD is not supported by the gdpr. You could as well say any text field anywhere can hypothetically have someone type their name and is thus personal data as well.

            [-]
            • layer8 13 hours ago
              The current change applies to all input and output from and to Copilot. This can be used to create profiles about personal preferences, for example.

              Personal data is about identifying a person and relating information to that person. A name in an unrelated text field isn’t personal data if you can’t tell the relation between the name and the person who input it, or any surrounding data. The contents of a repository, however, and the interaction with Copilot, can very well help identifying the account holder and their personal data. For example, I might be processing personal health data identifiable as such in a private repository with the help of Copilot.

        • johndough 15 hours ago
          Code often contains personal data. Here are over 400 files on GitHub with email addresses:

          https://grep.app/search?regexp=true&q=%5Ba-z%5D%7B8%2C%7D%5C...

          For example, license files often contain names and many package managers require a contact person.

          When this goes to court, GitHub will probably make the excuse that they somehow did not know that people upload personal data, but the fact that this happens so often that they had to make a secret scanner to stop people from uploading their private keys will prove them as liars.

    • saghm 16 hours ago
      Yes, you will. This is what the setting says on my account when I clicked the link:

      > model training

      > Allow GitHub to collect and use my Inputs, Outputs, and associated context to train and improve AI models. Read more in the Privacy Statement

      Are you seriously trying to claim that the code isn't input, output, or associated context of Copilot operating on a private repo? What term do you think better applies to the code that's being read as input, used as context, and potentially produced as output?

      [-]
      • ziml77 16 hours ago
        I don't like that they are training on any interactions with Copilot by default but training on something that you've put through Copilot yourself is much different than them just shoving all the private repos currently on Github into the training data.
      • Jolter 16 hours ago
        If you are not willing to migrate out of GitHub, what you can do is to avoid using Copilot on your private repository.
        [-]
        • saghm 16 hours ago
          I don't use Copilot, and I don't have anything I particularly care about in private repos on my account on Github. My reaction here is entirely based on principles, not how I'm going to be personally affected.
    • otterley 15 hours ago
      Hey Martin, can you please work with Product to significantly clarify what is meant by the following language in the settings? Because right now it's nearly impossible for a layperson (or even an average programmer) to understand what this means:

      """ Allow GitHub to use my data for AI model training

      Allow GitHub to collect and use my Inputs, Outputs, and associated context to train and improve AI models. Read more in the Privacy Statement. """

      If the reality is less scary than how it sounds, then the wording needs to be less scary-sounding. It may be that GitHub isn't training models on private repos, but the language certainly suggests that it is. The feedback we're seeing in this post is proof enough of that.

      Finally, I read the Privacy Statement, and it's unclear what the applicable language is. "Inputs," "Outputs," and "Associated Context" are terms of art that have no matching definitions in the Statement. (The terms "Outputs" and "Associated Context" don't even appear in the Statement at all. Not even "train.") As an attorney I find this completely baffling.

    • wewtyflakes 16 hours ago
      If Copilot later adds a feature like "Scan your repo for vulnerabilities using Copilot <opt-out>", then that would both fit your criteria, and the baiting outrage of the original poster, in one swoop! Of course, Microsoft would _never_ do that, right?
    • edelbitter 16 hours ago
      > If you don’t use Copilot this will not affect you.

      How does this work for a private repository with access granted to additional contributors? Which setting is consulted then?

    • grepfru_it 17 hours ago
      Back in my day someone would post a HN article to the internal slack in order to sway conversation in their favor. Glad to see its still happening! :D
    • kingkandu 9 hours ago
      Sorry doesn't help at all but you can still be useful - can you please tell us how many private repos do "users of Free, Pro and Pro+ Copilot" who have used Copilot in the last 90 days exist in the github database?

      Because microsuck is about to violate the law that many times

    • SirensOfTitan 17 hours ago
      Right, but it shouldn't be opt-out only to begin with. It's a dishonest pattern that relies on people not noticing. Honest use of data is a "Caesar's wife must be above suspicion" moment for me -- if this is how you're acting when engaging with customers explicitly, I don't trust you to resist the temptation to tap into my data privately. AI companies already have trained their models illegally against the intellectual property of all of humanity with little consent along the way.

      Honestly, if you work at GitHub, maybe you should focus on your uptime -- it's awful.

    • languid-photic 16 hours ago
      Appreciate the clarification. But, it's still not great.

      To the PM behind this - developers are sensitive to this kind of thing. Just make it opt-in instead?

    • dataflow 16 hours ago
      Say someone has a very sensitive secret (say, a Bitcoin private key) in their free private Github repo, and uses Copilot on that repo and touches the secret with it. Would you be willing to assure here that toggling that setting would not affect the likelihood of that secret leaking, and that that likelihood is also unaffected by whether the account is Business or Free?
    • ClikeX 16 hours ago
      How do you handle accounts that have copilot managed by an organisation? I've seen several cases where people cannot opt out their account because of the org connection (the option just isn't there in the settings). What happens to their account the moment they leave that org?
    • _pdp_ 16 hours ago
      So you will train on data collected from free users working on GPL and copyrighted projects?
      [-]
      • DougN7 16 hours ago
        And on users that don’t even use github, other than the required account to use CoPilot in Visual Studio.
        [-]
        • _pdp_ 15 hours ago
          Exactly.

          This affects anyone using VS Code or Copilot with proprietary data, including all the users automating workflows through the Copilot SDK and the like. A perfect storm.

          Did anyone from GitHub's legal team actually authorise this, or did they use Copilot to sign off on it?

    • mrdependable 16 hours ago
      I think the problem is more with using PRIVATE repos. My letters are also private and I would be pretty pissed if the mail carrier was reading them. Why does GitHub think it has the right to do this?
    • ziml77 16 hours ago
      Thanks for the clarification. The OP here made me think I missed something in both the blog post about the change and in the available settings.
    • gortok 16 hours ago
      This is a distinction without a difference, according to the text of that enable/disable dialog,

      > Allow GitHub to use my data for AI model training: Allow GitHub to collect and use my Inputs, Outputs, and associated context to train and improve AI models. Read more in the Privacy Statement.

      “Associated Context” is the repo. If I use copilot, I’m giving it access to my repo.

      I don’t know in all the ways copilot can be triggered, and I’m not certain that I could stop it from being triggered, given Microsoft’s past behaviors in slapping Copilot on everything that exists.

    • johndough 15 hours ago
      Under GDPR, opt-out is not considered informed consent, and repositories can contain personally identifiable information, which fall under GDPR. Do you think differently, or do you think ignoring the law will be worth it?
    • daveguy 16 hours ago
      Nice try. If you're training on "inputs" to Copilot then you are training on the private repos.

      This suspect denial is why I will get my clients moved off of github.

    • mrits 17 hours ago
      Thanks for confirming you train on our data
    • pesus 16 hours ago
      Why not get user consent first?
    • Jabrov 16 hours ago
      Can't you just make it opt-in?

      No? Because no one would opt-in, you say?

      Wow. It's almost like this is a user-hostile feature that breaks the implicit promise behind a "private" repo.

    • nickvec 15 hours ago
      I think you're well aware that people aren't upset at the distinction between training on Copilot data versus training on private repo data (at rest). People are upset because GH is using an opt-out model. Your response is disingenuous not to address this, and the "hope this helps" comes across as condescending (not sure if that was your intention.)
    • elAhmo 15 hours ago
      Defaulting to opt-in is a malicious move, no matter how you present things.
    • buildbot 15 hours ago
      >Hope that helps

      Honestly, what the fuck? This changes was already pretty bad but this being the apparent corporate response is insane.

      Done with Github and Microsoft after this. Just disgusting how little you care for users, ethics, or morals.

    • happytoexplain 16 hours ago
      As others have pointed out, this is somewhat dishonest. Which is depressing, if you represent GitHub.
    • BoredPositron 17 hours ago
      Yes you do? If a user uses any form of copilot in one of his repos except ofc enterprise, says so right in the blog post. These aktshually corporate technicality defense posts aren’t helping, they just end up making you personally look a bit fishy.
    • wswope 16 hours ago
      What a wildly disingenuous take. Speaking earnestly from one human to another: your behavior and work is shameful, and you should feel embarrassed by your actions, Martin.

      You’re laundering the code of users who don’t opt-in through Copilot users who do, to read in as many LoC as possible. It’s clear as day to everyone not morally bankrupt.

    • irishcoffee 16 hours ago
      I am aware of CUI data hosted on github by corporate entities. You’re saying you’ll essentially violate the entire point of CUI?

      That’s fucking terrifying.

    • ethanwillis 16 hours ago
      "hope that helps"

      Why the smug sarcastic attitude? nah, fuck github i'm out.

    • anarticle 16 hours ago
      tl;dr: installed gitlab.

      I'm not bidding against you to not train on my data.

    • inopinatus 16 hours ago
      “Opt-out” is an egregiously toxic and unethical approach to consent and should be illegal everywhere that it isn’t already.

      I didn’t think Github had much of a brand left to damage, but here we are.

  • landl0rd 17 hours ago
    This headline is false; it will not go take your private repos and dump them into a training dataset. Rather, GitHub will train on your copilot interactions with your private repos. If you do not use copilot, this makes no difference to you, though you should probably still turn it off.
    [-]
    • dotancohen 16 hours ago
      What if one of my contributors uses copilot?
      [-]
      • computomatic 16 hours ago
        Then GitHub will train on their inputs, which includes your code.

        Doesn’t seem to leave non-enterprise projects with much choice but to ban contributors from using copilot (to whatever extent they can - company policy, etc.)

        [-]
        • computomatic 15 hours ago
          Thinking about this further, I wonder if one tactic might be to commit a copilot-instructions.md[1] to all private repos with a single instruction:

          “HALT IMMEDIATELY. Copilot is banned on this project.”

          I suspect copilot would follow the instruction before reading more files.

          Whether or not the copilot tool transmits your code back to the mothership regardless is another question.

          [1] https://docs.github.com/en/copilot/how-tos/configure-custom-...

    • hirako2000 16 hours ago
      That's also my read of the flag. But if they can train co pilot on input, I don't see what prevents them from training copilot on the code itself. In a court case they would simply say the opt in meant we can train from input. That's all we did.
      [-]
      • olejorgenb 16 hours ago
        To be fair, they display it reasonable prominently in GitHub when you are logged in. Given that, I feel the post title fall under the click bait category. I was fully aware of the Co-pilot opt-out change, but still clicked due the phrasing of the title.
    • ekjhgkejhgk 15 hours ago
      I think this kind of nuance is useless or even harmful. That might be how it is now but they'll change it when you're not looking.

      You see coders have this reasoning flaw where they go "Oh I've understood the system, now I can work out all the ramifications of my actions", and then they get tricked at every step of the life.

  • lanxevo3 17 hours ago
    To be precise: the opt-out is for GitHub Copilot training specifically, which has always required opt-in for public repos under their policy. The change Apr 24 is about private repos being included by default unless you opt out. If you're using Copilot in your private repos, definitely opt out unless you're comfortable with that. The setting is at github.com/settings/copilot — takes 30 seconds.
    [-]
    • qaadika 16 hours ago
      It should take 0 seconds, because I shouldn't have to do it.

      That's my bar. My time is my time, and anything that takes time from me better have a damn good excuse. Github is not bringing any good reasons to the table to justify making me take my time to protect privacy I've had by default up to now.

    • dotancohen 16 hours ago

        > takes 30 seconds.
      
      No, it takes an hour of perusing HN every day to stumble upon this. That's 20 hours per month, 240 hours per year, shall I bill it to GitHub or to Microsoft directly?

      Corrupting Steinmetz' quip to Ford: it's 30 seconds to flip the switch, 240 hours to know that a switch needs to be flipped.

    • ClikeX 16 hours ago
      The setting isn't even visible to everyone. If you're currently in an org that manages copilot business, it's gone. I imagine it instantly opts you back in when you leave an org.
      [-]
      • darthwalsh 11 hours ago
        My work was careful to explain: if you associate your personal GitHub account with their work org copilot, then they are the ones who manage copilot.

        If you wouldn't use your personal email account on your work computer, I don't see why you wouldn't create a new GitHub account only for work.

    • martinwoodward 17 hours ago
      It wasn’t previously opt-in.

      Previously we didn’t do any training on usage. However as other products have come into the market they do train on usage. We’ve been training on our internal usage for just over a year and have seen some major improvements. For details see of the types of improvements we’ve seen from training on our internal usage check out this article: https://github.blog/news-insights/product-news/copilot-new-e...

      [-]
      • homebrewer 16 hours ago
        You can always ask your parent company to train on their usage. I hear they have incredibly massive codebases: Windows, Office, MSSQL, which stay out of training data for some reason.

        I thought neural nets never repeat the training data verbatim, and copyright does not pass through them, so what's the problem?

        [-]
        • NewsaHackO 16 hours ago
          How do you know that isn't already the case?
        • IcyWindows 16 hours ago
          Who said they don't?
      • mentalgear 16 hours ago
        This seems reasonable, maybe too much so.

        > If they want to incentivise people to contribute their sources and copilot sessions, they could easily make it opt-in on a per-repository basis and provide some incentive, like an increased token quota.

  • uberman 17 hours ago
    If even one person in a repo does not disable this will copilot have full access to the repo? How can I determine if other members of my team have turned this off or not?
    [-]
    • hirako2000 16 hours ago
      The same way you can't determine whether a team member pulling the repo dumped the code into a prompt.

      It's convenient for MS to make this opt in by default for sure.

      [-]
      • elAhmo 15 hours ago
        It’s not convenient, it is a deliberate decision.
  • munk-a 16 hours ago
    The only setting I'm seeing is on a per-user basis. Does anyone know how to blanket disable training on an organizational basis?

    Is there any information about how much information from an organization managed repo may be trained on if an individual user has this flag enabled? Will one leaky account cause all of our source code to be considered fair game?

  • parsimo2010 17 hours ago
    Jokes on them, my private repos are total dog dookie. If nobody but me can see the code then I don't have to worry about style, structure, comments, or any other best practices.

    You don't want an LLM trained on my private repos. Trust me.

    [-]
    • aduwah 17 hours ago
      I will join the club. +1 for ruining M$ AI with my garbage code
    • forinti 17 hours ago
      Poisoning LLMs is an interesting path of resistance.
      [-]
      • roegerle 15 hours ago
        Well known running code has more weight than unknown code that may not run. I think it’s pointless.
  • hedayet 17 hours ago
    To Github's credit, they have been showing a banner consistently. To my discredit - I never bothered to read that banner until I saw this HN headline
    [-]
    • nottorp 16 hours ago
      How does that help if you don't go to the github site but just use git from the command line?
      [-]
      • fph 15 hours ago
        Can you use git's Copilot from the command line? If you can't, then you have nothing to opt out from.
        [-]
        • nottorp 8 hours ago
          It's not git's Copilot it's Microsoft, or at best github's, Copilot.

          And Copilot is integrated with IDEs. Doesn't need any interaction with the github site beyond the initial sign in...

      • lkbm 16 hours ago
        They also sent an email.
        [-]
        • nottorp 16 hours ago
          Did they? Not to me, and I have a 'review this new sign in' from 4 days ago so them emailing me works.
    • tomwheeler 16 hours ago
      And even if you read the banner on the site, the email they sent, and the announcement itself, you would not see instructions that mention the specific thing(s) you must change in order to opt out.

      Sure, you can poke around in the settings and find one that you believe opts you out, but in lieu of clear and explicit instructions from GitHub, you'll have no way to find out. Only the possibility of finding out later that you guessed wrong.

    • jmward01 17 hours ago
      I've never seen the banner. Where does this show up?
      [-]
      • arcanemachiner 16 hours ago
        It's been on top of the web UI for 2 or 3 days now.

        You might have closed it...

        Just go to your account settings and find the opt-out option.

        [-]
        • mrweasel 3 hours ago
          Honestly there go months between me visiting github.com, let alone as a signed in user.
      • roegerle 16 hours ago
        right up top. I'm not sure how anyone could miss it.
        [-]
        • dotancohen 16 hours ago

            $ git pull
            $ vim foo.rs
            $ git commit
            $ git push
          
          That's how.
          [-]
          • jmward01 16 hours ago
            exactly this. I rarely need to go to the site.
            [-]
            • roegerle 15 hours ago
              Obviously you wouldn’t see it if you don’t go to the site so why ask?
              [-]
              • jmward01 10 hours ago
                I hadn't gone in the last 2-3 days. Not never.
      • daveguy 16 hours ago
        Probably have to have adblockers turned off.
    • _pdp_ 16 hours ago
      I have never seen any app reset/loose setting before.
      [-]
      • lkbm 16 hours ago
        What are you referring to? I set this to "Disabled" months/years ago and it's retained the disabled setting.
        [-]
        • _pdp_ 16 hours ago
          So? You guarantee that this setting is durable and will never revert? Or you guarantee that no client-side bug on that page will not override the setting with null value when you click save on something else? Please.
          [-]
          • lkbm 15 hours ago
            Nope, none of these are things I said or implied. I was asking whether you were referring to it having reset already.

            The chip on your shoulder doesn't make for productive conversation here.

  • SunshineTheCat 17 hours ago
    RIP all the people who have been paying Github for years and never happen to see the notice.
    [-]
    • tedivm 17 hours ago
      I think opt out is stupid, but the notice is on every page of github using their banner display right now. They've also blasted out emails.
      [-]
      • malfist 16 hours ago
        And how many people who use git on github go to the website? I only do when my token has expired and I need to grab a new one to push again. Which is every 90 days. Github.com is mostly invisible infrastructure to me.
      • flykespice 17 hours ago
        At least they are being very upfront with it (I guess?), most companies just slickly add the clause on their routinely TOS update.
        [-]
        • SirensOfTitan 17 hours ago
          If they were being honest they would ask explicitly for permission instead of advertising opt-out. Now you might ask: who will explicitly give Microsoft permission to train on their private works? No one will -- and that's the point: this is a form of theft.
  • w10-1 16 hours ago
    https://github.com/settings/copilot/features

    The feature to opt out is at the bottom under privacy: "Allow GitHub to use my data for AI model training"

    TIL: you cannot opt out of a copilot-pro subscription. How is it a subscription if I can't cancel?

    (Honestly, who has time to evade all these traps? Or to migrate 150+ repo's on 6+ machines...)

  • mxtbccagmailcom 17 hours ago
    Time to put adversarial code into GitHub to pollute the training set?
    [-]
    • ethagnawl 17 hours ago
      `:(){ :|:& };:`s all the way down.
      [-]
      • encrypted_bird 16 hours ago
        Ah, yes, the ol' Bobby Tables maneuver. Haha.
  • kristianp 17 hours ago
    What's a good alternative for free private repos?
    [-]
    • eblume 17 hours ago
      I've recently started hosting my own forgejo instance. It works so well! Free tailscale for connectivity. I expose mine over fly.io proxy, also free, but not to be done without caution.
    • Supermancho 17 hours ago
      Gitlab?

      Microsoft services are tech debt. I moved the moment they were acquired and never regretted it.

      [-]
      • nottorp 16 hours ago
        I opened gitlab.com and it starts with

        "Finally, AI for the entire software lifecycle."

        Not very trust inspiring, that.

        Can I even have git hosting without anything else being crammed down my throat, or it's just like Microsoft?

    • mrweasel 17 hours ago
      It's a fair question, but if you need private repos, I think you need to start considering a paid option, or self-host.

      If it's really important to you that the repo is private, I'd self-host.

    • conductr 17 hours ago
      Just spitballing, don’t use these tools myself, but isn’t this something that should be encrypted to really prevent them from training? I personally don’t trust anyone with my data when they pivot to building AI products yet claim my data wasn’t a part of that strategy. It’s too easy to hide/lie.

      But it always seemed to me that the UI should run locally with encryption keys that are shared and the service just manages encrypted blobs of diffs that can roll from version to version of encrypted data and that’s about it. Granted I probably don’t know the full workflow, i typically am a single dev on simple projects where I don’t need 99% of the overhead these introduce.

      [-]
    • sebastiennight 17 hours ago
      GitLab would be a good bet here. We started on their free tier and used that for a couple of years, I was very happy with it. Not sure how the tiers might have evolved since.

      And according to their PM and privacy policy, they're not training their models on your code[0].

      [0]: https://forum.gitlab.com/t/can-i-opt-out-from-my-code-being-...

    • pyjarrett 16 hours ago
      It doesn't take much power or time to run your own local git server. My first one which lasted years was parts I mangled together from old computers from garage sales.

      There's instructions on running a Git server in the git book: https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protoco...

    • werdnapk 16 hours ago
      I've been using gitosis to manage private repos for almost 2 decades now. It's extremely easy to host your own repositories.

      I just looked up gitosis on github though and it was last updated 12 years ago.... still works for me though.

      Overall, hosting your own repos is very easy.

    • wuschel 16 hours ago
      Sourcehut comes to my mind: https://sourcehut.org/
    • Imustaskforhelp 17 hours ago
      I would've recommended codeberg but codeberg isn't the finest to be recommended for free private repos.

      I definitely feel like more can be done within this space and that there is space for more competitors (even forgejo instances for that matter)

    • bonestamp2 17 hours ago
      BitBucket.org (Atlassian)
    • bigstrat2003 16 hours ago
      I use Fossil for mine. Dead easy to set up, and while the workflow might not be great for public contributions like Github is, that doesn't matter on something where I'm the only user.
    • JonChesterfield 16 hours ago
      Any computer you have ssh access to.
    • stephenr 17 hours ago
      I've seen https://codefloe.com mentioned, can't say I've used it myself yet though.
    • throwaway613746 17 hours ago
      A cheap mini-pc in your closet.
  • shifto 4 hours ago
    In my case, co-pilot will be training on co-pilot code. I'm probably not alone so I don't think they're getting what they hope they're getting.
  • maplethorpe 15 hours ago
    What's the best way to poison my repos to sabotage LLM training? Asking for a friend.
    [-]
    • NegativeK 15 hours ago
      By migrating to another code forge and paying them so they're sustainable.

      Which doesn't answer your question at all, but it is the metric they'll pay attention to. And it is the the thing that actually addresses the underlying problem.

  • sedatk 16 hours ago
    I have an individual GitHub Copilot Pro subscription and also am a member of an Enterprise account that has one of its GitHub Copilot Business seats assigned to me. The opt-out setting doesn't appear on my individual profile anymore. However, I want to be able to use individual GitHub Copilot subscription for my individual work, and it seems like I can't do it anymore as Enterprise has taken over all my preferences. What a mess.
  • prmoustache 16 hours ago
    While I understand the network effect of github for public project, I don't really understand why one would want to use it for private repos.

    There are tons of git providers including free ones that include full gitlab/gitea/forgejo to get similar features to github and there is nothing more easy to self host or host on a vps with near zero maintenance.

    [-]
    • artyom 14 hours ago
      The same reason b/c FreeBSD is great, but eventually it's transitioned to Linux at scale: commodity personnel.

      You wouldn't believe the amount of people that would list Github, but not git, as a skill.

    • w10-1 16 hours ago
      Sorry, which ones support 2-GB private repositories and are supported by package managers?
  • _pdp_ 16 hours ago
    Rather than defending this absurd decision, GitHub could instantly win back trust by admitting they f*** up and reversing it entirely.

    If they want to incentivise people to contribute their sources and copilot sessions, they could easily make it opt-in on a per-repository basis and provide some incentive, like an increased token quota.

    This is not hard.

    [-]
    • NegativeK 15 hours ago
      AI is maximizing the move fast and break things approach, including not asking for permission from its userbase.

      It's consistent with believing that AI is the future -- if a company doesn't perform really well, it loses that race. And if the userbase they piss off is also the userbase that's skeptical about AI, then they're not pissing off anyone that's relevant to the company winning.

      Downside: Pissing off users is gross.

    • danaris 5 hours ago
      The problem is, GitHub is owned by Microsoft, and Microsoft is desperately trying to shove AI into everything in hopes that it will save them.
  • jmward01 17 hours ago
    They just lost my repos. I can not believe they snuck this in. My level of anger right now is far higher that I ever wanted to feel. I went to API access for anthropic, paying more in the process, to avoid them training on my code. And GH just -adds- this, without telling me? Without a prompt. They are dead to me.
    [-]
    • ares623 16 hours ago
      make sure you opt-out anyway before deleting your account. they'll probably train on some archived version if it sees your profile didn't opt-out at some point.
      [-]
      • gverrilla 16 hours ago
        honest question: is there any realistic mechanism that will make them accountable if let's say they just train on 100% of repos without regards to opt-ins? I operate under the premise these tech companies can do whatever they want and there's very little oversight.
  • GMoromisato 16 hours ago
    I'm sure this is just me, but I don't mind if AI trains on my public or private repos. I suspect my imagination is just not good enough to come up with downsides.

    So far it's been a benefit because coding agents seems to understand my code and can follow my style.

    I don't store client data (much less credentials) in my repos (public or private) so I'm not worried about data leaks. And I don't expect any of my clients to decide to replace me and vibe code their way to a solution.

    I do worry (slightly) about large company competitors using AI to lower their prices and compete with me, but that's going to happen regardless of whether anyone trains on my code. And my own increases in efficiency due to AI have made up for that.

  • jacamera 16 hours ago
    Lots of hair splitting in the comments. The service is so unreliable at this point that I don’t trust them to not train on private repos even accidentally. You’re one vibe-coded PR away from having all your data scooped up regardless of any policy or intention.
  • bonestamp2 17 hours ago
    Thanks for the heads up, I assumed they had already done this with my data.
    [-]
    • seanw444 16 hours ago
      Probably did. Now comes the legal ass-covering.
  • yonatan8070 17 hours ago
    How do I opt out of this for my own private repos? I don't see anything related to this as I've got a ton of settings for Copilot itself (I have access to Copilot through my work org)
    [-]
    • jamie_ca 17 hours ago
      https://github.com/settings/copilot/features, it's near the bottom "Allow GitHub to use my data for AI model training"
    • forthac 17 hours ago
      I believe it is under:

      Settings->Copilot->Features->Privacy=>[ Allow GitHub to use my data for AI model training

      Allow GitHub to collect and use my Inputs, Outputs, and associated context to train and improve AI models. Read more in the Privacy Statement. ]

    • hedayet 17 hours ago
      Under privacy.

      > Allow GitHub to use my data for AI model training

      [-]
  • endofreach 16 hours ago
    How did people forget that github was purchased by that one company?
  • Esophagus4 15 hours ago
    There’s a lot of furor in this thread, but people felt the same way when Google Street View came out. Eventually they worked through most of the thorny bits and people use Street View now.

    I suspect MSFT is in a similar spot. If they don’t train on more data, they’ll be left behind by Anthropic/OAI. If they do, they’ll annoy a few diehards for a while, they’ll work through the kinks, then everyone will get used to it.

    [-]
    • computomatic 15 hours ago
      That comparison doesn’t hold at all. This would be equivalent to Google publishing photos of inside your home.

      Or, perhaps more directly, training their image-gen models on your private Google Photos.

      [-]
      • Esophagus4 15 hours ago
        Conceptually I think it’s a fine comparison.

        They’re training (with an opt out) on stuff people feel is an invasion of their privacy to make their service better.

        [-]
        • danaris 5 hours ago
          > (with an opt out)

          And this is the problem: any time you're adding a new "feature" that invades users' privacy, it needs to be opt-in.

          [-]
          • Esophagus4 2 hours ago
            > it needs to be opt-in.

            And again: Google did this and everyone eventually got over it. Cars do this with telematics data and everyone is over that too.

            [-]
            • danaris 2 hours ago
              That doesn't change the ethics of it.

              The only reason that worked is because Google is an unstoppable monopoly juggernaut whose name is literally synonymous with searching the web.

  • maxloh 17 hours ago
    Context: https://github.com/orgs/community/discussions/188488

    TLDR: As long as you aren't using Copilot, your code should be safe (according to GitHub).

      What data are you collecting?
    
      When an individual user has this setting enabled, the interaction data we may collect includes:
    
      - Outputs accepted or modified by the user
      - Inputs sent to GitHub Copilot, including code snippets shown to the model
      - Code context surrounding the user’s cursor position
      - Comment and documentation that the user wrote
      - File names, repository structure, and navigation patterns
      - Interactions with Copilot features including Chat and inline suggestions
  • bsza 16 hours ago
    I've been encrypting my private git repos for a while because I had suspected they were going to do something like this.

    https://github.com/flolu/git-gcrypt

    It's very easy to set up and integrates nicely into git. Obviously only works if you don't need Actions or anything that requires Github to know what's in your repo (duh).

  • jonniebullie 4 hours ago
    Any recommendations for light use GitHub users. ??
  • rrgok 17 hours ago
    I'm gonna put a license fee on all my repos. 10% of revenue if my private repos have been used for AI training. 5% on all my other repos.
  • mrled 16 hours ago
    I'm curious about specific consequences of this. I tend to think the importance of code secrecy has always been exaggerated (there are specific exceptions like hedge fund strategies and malware), even more so now in this post-Claude world. Does anyone have specific things they're trying to avoid by opting out of this?
    [-]
    • jawilson2 16 hours ago
      Algorithms and models for a proprietary trading system? My personal notes? The latex text of my phd thesis?

      I will go screaming and kicking and fighting into this dystopian nightmare post-privacy shithole world that so many people seem fine with. If I have to move off of every service or technology to maintain some semblance of privacy so be it.

      [-]
      • mrled 15 hours ago
        Well, mostly I was thinking about code, and aside from the specific exceptions of trading algorithms (which I was trying to get at when I said hedge fund strategies), and now PhD theses (good point, at least if you're talking pre-publication), I'm still having trouble understanding the threat model even if AI did train on most proprietary, private business code. Can AI training on a CRUD app's code damage a business?

        And I have the same question about private notes, or even a diary. Can an AI training on a bunch of personal stuff damage the person that wrote it?

        Do you really keep trading algorithms on github?

        [-]
        • zelphirkalt 5 hours ago
          Well, depends on what you have in those private notes and how others will query the LLMs trained on that private data. Maybe you write things in private notes that are a reason for private notes to remain private.
  • sethops1 17 hours ago
    When Louis Rossmann started describing tech leadership as having a "rapist mentality" I brushed him off as being sensationalist. But actions like this make me think more and more he's right. The product managers pushing for changes like this are despicable scum.
    [-]
    • doubled112 17 hours ago
      Even the way modern software phrases questions is rapey.

      Imagine a man asking a woman “want to have sex? Or maybe later?” out of the blue, then asking her again every 3 days until she says “yes”

      [-]
      • chuckadams 17 hours ago
        Something like "tea and consent": https://www.youtube.com/watch?v=pZwvrxVavnQ

        Yeah, it ain't sex, but it does still come down to basic respect.

      • ChadNauseam 17 hours ago
        The situation you describe has dynamics that don't apply when your windows laptop is trying to get you to install an update. A woman can't have 100% confidence that saying no won't trigger a man into rage, so just the question being asked at all is already a bit unpleasant. WinRAR trying to get me to buy a license is not as offensive because I know it won't beat me up for saying no.
        [-]
        • doubled112 16 hours ago
          Of course. Claiming this is a 1:1 would be wrong.

          However, do you think people accept Microsoft backup because they want a backup?

          Or do you think they click yes because it makes the popup go away for good?

          Wearing me down until I say yes isn’t the same as just yes.

          It’s the same dark pattern for the 10-11 upgrade. My father in law managed to upgrade by accident because it kept popping up. He didn’t really make an informed choice for himself. One day he just couldn’t figure out why everything was different.

    • kingstnap 17 hours ago
      There is this distinct lack of giving a shit about the user that you see coming through in a lot of big tech nowadays.

      Take this extremely simple example about antenna pod. I can change the order and what buttons show up in the app nav bar. For example I can remove the "home" button or put other things there instead like playback history.

      This is a small minor point of the bigger picture. Yet there is this distinct sense in which when using that app I don't feel like I'm beholden to some chain of management in some company deciding they get to decide what I get to do.

      Like its almost unthinkable that the YouTube app let you remove shorts or reorder the navigation bar and decide what you wanted to have there.

  • JonChesterfield 16 hours ago
    Don't give your code to Microsoft if you don't want them to have your code.

    This setting will make no difference to whether your code is fed into their training set. "Oops we accidentally ignored the private flag years ago and didn't realise, we are very sorry, we were trying to not do that".

  • bolangi 16 hours ago
    Hah, github can have my crap code. Anyone trained on it will be in for a world of hurt :-)
    [-]
    • Esophagus4 15 hours ago
      Can’t wait for copilot to start saying stuff like

      // todo… remove this before it goes to prod lol

  • kace91 16 hours ago
    How's the codeberg experience nowadays? I think it's finally time to switch for me.
  • tartoran 16 hours ago
    If you opt out Github will probably still train on your private repo. Just migrate.
  • Sohcahtoa82 17 hours ago
    I wonder how effective it would be to sabotage the training by publishing deliberately bad code. A FizzBuzz with O(n^2) complexity. A function named "quicksort" that actually implements bogosort. A "filter_xss" function that's a no-op or just does something else entirely.

    The possibilities are endless. I thought of this after remembering seeing a post a couple months ago about how it doesn't take a significant amount of bad data to poison an LLM's training.

    [-]
    • munk-a 17 hours ago
      Probably extremely ineffective, it's an issue of scale and unless you really automate the terrible code generation and somehow manage to make it distinct enough in style that it isn't easy to detect and eliminate wholesale then you just won't have the volume to significantly impact the result set.

      I'm absolutely sure that there are state actors with gigantic budgets that are putting a lot of effort into similar attacks, though.

  • rakel_rakel 17 hours ago
    I'm looking forward to the class action lawsuit, even if only to establish a precedent!

    I don't have much hope, but I wish that ignoring software licensing and attribution at scale becomes harder than it currently seems.

    [-]
    • rrgok 16 hours ago
      They would've done the math. Even with a class action they will come up positive. It just another bill for them.
      [-]
      • zelphirkalt 5 hours ago
        Would be good to actually make them pay that bill though.
  • mxtbccagmailcom 17 hours ago
    Time to place some adversarial code into GitHub to pollute training set?
  • wilsonjholmes 16 hours ago
    At least they are finally being honest about the direction of the business. I have thought for a long while that they were already doing this and just not telling anyone...
  • roegerle 15 hours ago
    Do people not browse GitHub? All I’m reading is “I’m never at the web ui”.

    I love falling into a rabbit hole looking at people’s projects

  • jokoon 17 hours ago
    weren't they already using repos for training?
    [-]
    • darthwalsh 11 hours ago
      Not private repos.

      Now, anything that gets referenced in a copilot chat is fair game

  • jollyllama 17 hours ago
    It's not clear to me what happens to personal repos if you're getting Copilot for work, or where to disable it there.
    [-]
    • djsavvy 17 hours ago
      yeah, how can I view the settings on my own personal account if my employer is managing the copilot settings?
  • shamelessdev 15 hours ago
    This is the exact reason I vibe coded “artifact”.

    Not for commercial success, just wanted a git and github like experience for my new game project.

    Then I started getting into features specific to game dev like moving away from LFS and properly diffing binaries.

    paganartifact.com/benny/artifact

    Mirror: GitHub bennyschmidt/artifact

  • jambutters 17 hours ago
    Where does it say it will train on private? This seems like a security nightmare if it trains on hardcoded keys
    [-]
    • chistev 16 hours ago
      Having hardcoded keys is a security nightmare regardless.
  • dalemhurley 17 hours ago
    At least they are giving you the option to opt out, many other providers just trained on the source code.
  • hilti 16 hours ago
    Oh - they didn't train silently already?! ;-) Going to move my repositories then next week.
  • VladVladikoff 15 hours ago
    The most shocking part of this news to me is that they aren’t doing this already.
  • mondainx 17 hours ago
    Get ready for some dope code... ;)
  • Uhhrrr 15 hours ago
    Put an ORM in your private repo which randomly 1% of the time calls DROP TABLE.
  • harikb 17 hours ago
    The UI options are also shady af. The setting reads

    Enabled - "You will have access to this feature" as help text. Disabled - "You will not have access to this feature".

    WTF does that mean?

    [-]
    • gs17 16 hours ago
      I saw that too, it feels like it's worded to make it sound like it's mandatory for Copilot. Based on their blog post the "feature" is them training on your data.
  • frizlab 16 hours ago
    Is there a way to disable training on repositories that are in organizations?
  • livinglist 17 hours ago
    Thanks for posting this, I was never made aware of this by GitHub..
    [-]
    • lkbm 16 hours ago
      If you use Github, you should have an email from ~2 days ago with the subject "Important Update to GitHub Copilot Interaction Data Usage Policy". Easy to skip over assuming it's just one of a million private policy update emails.

      If you don't use Github Copilot, this shouldn't effect you, and may be why you got no email. The current headline is fairly misleading--it's about Copilot usage, not private repos per se.

      [-]
      • livinglist 16 hours ago
        I see, thanks for clarification!
  • woodylondon 16 hours ago
    jokes on them - all the code in all my repos are written by AI :)
  • totierne2 16 hours ago
    There is always other peoples ftp servers as Linus used to say.
  • hexage1814 16 hours ago
    If you opt out... they will also train on your private repos.
  • i7l 17 hours ago
    Thanks for flagging this!
    [-]
    • layer8 16 hours ago
      Note that “flagging” has a specific meaning on HN.
      [-]
      • i7l 16 hours ago
        10-4.

        I meant it in the sense of "bringing it to our collective attention."

  • piekvorst 15 hours ago
    Personally, I don’t mind. Train however you want.
  • pokot0 16 hours ago
    while I agree, I understood this is only when you use copilot? if not, their communication is very misleading
  • yakbarber 16 hours ago
    train on my private code? jokes on them
  • daft_pink 17 hours ago
    is there an easy way to shift all your repos to gitlab or to private if you don’t use ci/etc?
  • classified 8 hours ago
    They steal your code to train their AI, and then they sell it back to you. Why didn't I think of that, I could be rich by now.
  • victorbjorklund 15 hours ago
    Thanks for the heads up.
  • shevy-java 17 hours ago
    Microslop tries to make money off of our data on github. Not a big surprise though.
  • contingencies 17 hours ago
    Thank you.
  • jpcrs 16 hours ago
    Good luck to them, my private repos are probably some of the worst code humanity has produced.
  • holoduke 15 hours ago
    For 5 bucks you can host your own gitea with most GitHub functionally. I moved my 500 repos to it. Actions are working perfectly fine. I make daily snapshots on hetzner. Trust them for that backup part.
  • gafferongames 15 hours ago
    If you guys didn't already realize that Microsoft was a garbage company in the 90s I really don't know what to say...
  • leej111 15 hours ago
    Based
  • AndrewKemendo 16 hours ago
    I started self hosting my own git on a digital ocean droplet with Gitea (1). It’s been unbelievably fantastic and trivially easy to manage experience and I can make them public and invite contrib ans do integrations … I see zero downsides

    I see no reason to ever go back to holding my code elsewhere.

    Don’t forget git is fairly new

    When I first started doing production code it was pre-github so we used some other kind of repo management system

    This is a perfect example of where the they’re starting to cannibalize their base and now we have the ability to get away from them entirely.

    (1) https://about.gitea.com/

  • ljm 15 hours ago
    Never have I seen a company try so damn hard to make something a thing than Microsoft and Copilot.

    And it is absolute dogshit. And offensive to actual copilots.

  • moralestapia 16 hours ago
    Is this the case even if you're a paid customer?

    If so, this might be illegal.

  • api 17 hours ago
    Not your storage, not your data (unless it's encrypted with keys you control).
  • 13415 17 hours ago
    It is the feature "Allow GitHub to use my data for AI model training" that needs to be disabled. Right?

    Or am I missing some trick / dark GUI pattern? Just want to make sure.

    [-]
    • gs17 16 hours ago
      [dead]
  • Ancalagon 15 hours ago
    This is the worst year of enshittification I can recall. Literally everything is going to shit.
  • nitrogen99 16 hours ago
    So? It’s not like some human is spying on your private emails or chats. This is just code. Relax.
  • starkeeper 17 hours ago
    So now CoPilot will be EVEN better at writing viruses, worms and malware!
  • uwagar 16 hours ago
    why all u programmers cant make ur own website and host ur own git servers?
  • jongjong 16 hours ago
    Wow. This is theft. Should be illegal! It's like if I own a vault storage business and I am keeping other people's gold in my vaults and then I just take all the gold for myself and claim that the customers should have opted out of me stealing their gold but they missed the deadline...
    [-]
    • zelphirkalt 5 hours ago
      This hints at something, that in my opinion isn't not discussed enough:

      Say some personal data leaked into training data, where can I request surgical deletion of that data from the LLM? Not only license washing is done using LLM, but also PII washing and consent ignoring is done using LLMs. How will a service provider make sure to not ever have personal data in the training data set and fix earlier mistakes pertaining to personal data? Are they not obliged to have a way of deleting one's personal data? GDPR or something?

  • tantalor 16 hours ago
    "Don't touch my garbage!"
  • bdangubic 16 hours ago
    That training will be like “OMG this is horrible… WAIT I wrote this shit”
    [-]
    • salawat 15 hours ago
      God, there's always that moment when you see the most shit code on earth, just as you're typing "git blame" and you just start chanting "please don't be me".
  • aplomb1026 16 hours ago
    [dead]
  • maltyxxx 16 hours ago
    [flagged]
  • sholladay 16 hours ago
    [dead]
  • seankwon816 15 hours ago
    [dead]
  • rcdwealth 15 hours ago
    [dead]
  • hachimanbest 16 hours ago
    [dead]
  • hachimanbest 16 hours ago
    [dead]
  • shell0x 17 hours ago
    Shouldn’t this be “Tell HN”?