27 comments

  • Arcuru 42 minutes ago
    Personally, I think that the human directing the agent owns the copyright for whatever is produced, but the ability for the agent to build it in the first place is based on stolen IP.

    I'm concerned about the copyright 'washing' this enables though, especially in OSS, and I think the right thing for OSS devs to do is to try to publish the resulting code with the strongest copyleft licensing that they are comfortable with: https://jackson.dev/post/moral-ai-licensing/
    • nadermx 8 minutes ago
      Funny how the copyright industry was able to spin copyright infringement into the pejorative "stealing". If you still have the item, what was stolen?

      Dowling v. United States, 473 U.S. 207 (1985): the Supreme Court ruled that the unauthorized sale of phonorecords of copyrighted musical compositions does not constitute "stolen, converted or taken by fraud" goods under the National Stolen Property Act.
  • jugg1es 6 hours ago
    I want this question to have an interesting answer, but everyone knows that if this question ever goes to the courts, ownership will go to the people with the money. The idea that Anthropic may not own Claude Code just because Claude wrote it is wishful thinking.
    • embedding-shape 6 hours ago
      Best part is, it's likely to have a different answer in every country. Who knows what'll happen; not every country implicitly sides with whoever has the most money.
      • MarsIronPI 31 minutes ago
        Well, eventually it'll probably be added to the Berne Convention agreement or some such.
        • LawnGnome 18 minutes ago
          That's my feeling on the endgame too, but it'll probably be a decade before we get anywhere near it.
      • adrianN 1 hour ago
        Depends on where they pay their taxes, generally.
    • beej71 4 hours ago
      I love that genAI art will not be copyrightable and genAI code will be. The power of the Almighty Dollar at work.
    • senaevren 5 hours ago
      The work-for-hire doctrine actually supports your intuition more than the AI authorship question does. The reason Anthropic likely owns Claude Code has little to do with whether Claude wrote it and everything to do with the employment contracts of the engineers who directed it. The DMCA takedown question is genuinely interesting, though, because the DMCA requires the claimant to assert copyright ownership in good faith. If a court later found the codebase was predominantly AI-authored and therefore not copyrightable, the 8,000 takedowns could be challenged as bad faith DMCA claims. That is a different and more tractable legal question than the ownership one.
      • gpm 23 minutes ago
        I have trouble believing that the DMCA claims would be found to be in bad faith when they were made at a time when the question of what degree of human input is required to acquire copyright on AI-generated code hasn't been resolved at all.

        It doesn't seem like bad faith to think that copyright is stronger than the courts end up thinking; that's just being mistaken.
      • rasz 4 hours ago
        The work-for-hire doctrine doesn't automagically absolve you from IP law. Microsoft and Intel already learned this in the nineties when they paid the San Francisco Canyon Company to steal Apple code.

        https://en.wikipedia.org/wiki/San_Francisco_Canyon_Company

        LLMs are just code stealers; they will gladly generate Carmack's inverse square root for you, original comments included.
        • senaevren 4 minutes ago
          The San Francisco Canyon case is a good example of exactly the right distinction. Work-for-hire determines who owns the output, but if the process of creating that output involved copying protected material, the infringement claim runs separately. The piece makes this point in the open source contamination section: owning the output and having a clean chain of title to the output are different questions. You can own AI-generated code and still have a copyleft problem in it.
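For reference, "Carmack's inverse" presumably means the famous fast inverse square root from Quake III Arena. A close paraphrase follows (the magic constant is the real one; memcpy replaces the original pointer-punning cast, which is undefined behavior under C's strict aliasing rules):

```c
#include <stdint.h>
#include <string.h>

/* Quake III's fast inverse square root, lightly modernized. */
float Q_rsqrt(float number) {
    float x2 = number * 0.5f;
    float y = number;
    uint32_t i;
    memcpy(&i, &y, sizeof i);      /* reinterpret the float's bits as an integer */
    i = 0x5f3759df - (i >> 1);     /* the "magic" initial guess */
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - x2 * y * y);   /* one Newton-Raphson refinement step */
    return y;                      /* roughly 1/sqrt(number), error under ~0.2% */
}
```

An LLM reproducing this verbatim, original comments and all, is exactly the provenance problem described above: the output is memorized expression, not a fresh derivation.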
    • conartist6 6 hours ago
      It's not wishful thinking, and ownership isn't a foregone conclusion.

      Sure, the courts could mint a communist society with a few weird decisions about property rights, but this being the US, do you really suppose that's likely?

      There's really no legal question that models aren't people and therefore cannot own property (and also cannot enter into the legal contracts that would be required to reassign the intellectual property they don't and can't own).
      • wongarsu 6 hours ago
        The catch-22 is that the fact that models aren't people is only relevant if you treat them like a person, as the US Copyright Office's opinion does by treating the model like a freelancer. If you treat the LLM as a machine similar to a camera, with the author expressing their existing intent through the tools of this machine, ownership is back on the table and works more or less how it did before LLMs.
        • conartist6 6 hours ago
          Well, if the camera, in addition to choosing autoexposure, also decided how to frame the shots, which lens to use, where to stand, and everything else salient to the artistry of photography -- all without direct human intervention -- then I would think the situation would again be analogous. If the camera could do all that because an intern was holding it, the intern would still own the shots even if their employer gave them the assignment.

          That's why the intern signs an employment contract that reassigns their rights to their employer!
    • helterskelter 12 minutes ago
      I'm not sure Anthropic would appreciate the liability that ownership would imply.
  • qsera 1 hour ago
    The more interesting question is "Who wants to own it?"...

    The answer is probably "Nobody"!
    • nine_k 1 hour ago
      Depends on the scale. If you ask Claude to one-shot an app from a nebulous description, you get a prototype whose code you would understandably loathe to own. If you plan carefully and limit the scope, you get code that you understand, can approve of, and are okay owning further down the line.
      • burnte 1 hour ago
        I spent two and a half hours writing up a detailed outline for a small webapp. Claude popped it out in one shot, 100% working. I added features after, but the time you spend on a good outline saves hours later.
    • onlyrealcuzzo 1 hour ago
      Presumably, every company that has non-LGPL CC code in production wants to own it...
      • nine_k 1 hour ago
        "Own" as in "be responsible for". Nobody is too keen to own a pile of semi-working trash, and extensive vibe-coding can produce such piles easily.
        • curt15 1 hour ago
          Not sure why this is being downvoted. Outsourcing work doesn't also outsource accountability.
        • qsera 54 minutes ago
          Yeah, that is how I meant it.
  • p0w3n3d 6 hours ago
    That's quite an impressive approach from the companies' perspective: let's first use Claude Code, and then we'll think about who the code belongs to.

    I think the gold-rush approach happening right now around me (my company's EMs forcing me to work with Claude as fast as possible) shows the real short-sightedness of all the management people.

    First, I lose my understanding of the code base by relying too much on Claude Code.

    Second, we drop all the good coding practices (like XP, code review etc.) because Claude is reviewing Claude's code.

    Third, we just take a big smelly dump on the teamwork: it's easier and cheaper to let one developer drive the whole change from backend to frontend, despite there being (or having been) two different teams, one for FE, one for BE.

    Fourth, code commenting was passé, as the code is its own documentation... unless there is a problem with the context (which there is). So when people were writing the code, not understanding over-engineered code was considered their own fault. But now we take a step back for our beloved Claude because it has a small context... It's unfair treatment.

    I could go on and on. And all those cultural changes are because of money. So I dub this a "gold rush", open my popcorn, and see what happens next.
    • nicoburns 6 hours ago
      > Third - we just take a big smelly dump on the teamwork - it's easier and cheaper to let one developer drive the whole change from backend to frontend, despite there are (or were) two different teams - one for FE, one for BE.

      Agree with your other points, but IMO this one has always been better. You often need to design the backend and frontend to work with each other, and that requires a lot more coordination when it's separate teams.
    • senaevren 5 hours ago
      The fourth point about code commenting is the one that connects directly to the ownership question. When developers write comments to explain intent, those comments are evidence of human creative direction. When Claude writes the code and the comments, and the developer merges without adding their own explanation of the architectural decisions, the record of human authorship disappears along with the institutional knowledge. The documentation problem and the copyright problem are the same problem.
    • sebastianconcpt 6 hours ago
      Also, it's supremely easy to commit to the wrong long-term abstractions and to premature internal designs that will start to starve for lack of human mental modeling, which is what lets you explain, with accountability, how things work and what the plans are when an incident happens. And if the wrong generalizations are introduced, coded correctly, and reviewed and approved by AIs, then who's even driving, really?
    • bearjaws 6 hours ago
      I rarely see #3 yield better solutions; it's usually better to collaborate as a team on requirements and gotchas, but let one person own the implementation.
  • ottah 44 minutes ago
    In my opinion, copyright has mattered very little in the corporate world. Copyright is effectively meaningless with SaaS, and the compiled software run on your machine is protected more by technical controls and EULAs. A world where copyright didn't exist for software would look nearly the same for the commercial world. Trade secrets, NDAs, and employment contracts bind workers more than copyright. The only place where the question of copyright has real-world impact is open source, and even then only for more restrictive licenses such as the GPL.
  • zuzululu 45 minutes ago
    I think it's pretty clear cut: whoever is paying for your agentic coding tool subscription is part of the litmus test.

    If I use my own computer, pay for my own subscription, and build my own open source projects, then the code belongs to me.

    If I use my company's computer, they pay for my subscription, and we work on the company's projects, then the code belongs to the company.

    If at any step of the way some copyleft or other exotic open source license is violated, who pays for discovery? Is it someone in Russia who created a popular OSS library and is now owed something? How will it be enforced?
  • metalcrow 1 hour ago
    "if Claude was trained on the LGPL-licensed codebase and its output reflects patterns learned from that code, can the output be treated as license-free? The emerging legal consensus is probably not, and assuming it can creates significant liability for anyone shipping that code commercially."

    Is there any citation for this "legal consensus"? I was not aware of any evidence-backed stance on this topic as of yet.
    • onlyrealcuzzo 1 hour ago
      This sounds like a problem that's pretty easy to get around.

      CC does not *need* LGPL code. There's more than enough BSD and Apache code to go around.

      And they can generate synthetic data that is better than LGPL code for their training.

      It's also a problem that does not seem feasible to meaningfully enforce.

      It's easy to generate CC code and lie and say you didn't. It would be hard to *prove* that you did, especially if you took any precautions to make proving it even slightly difficult.
      • adrian_b 49 minutes ago
        Unlike the GPL, the BSD and Apache licenses do not claim to also cover your non-AI-generated code that only invokes the AI-generated code.

        However, even if BSD/Apache/MIT-licensed code can be incorporated freely in your application, you still have no right to remove the copyright notices from it and/or to claim that you own the copyright for it.

        Therefore, unless the AI model has been trained only on non-copyrighted public-domain code, incorporating the generated code in your application means that you have removed the copyright notices from it, which is not allowed by the original licenses.

        There is absolutely no doubt that using an AI coding assistant works around the copyright laws, but it is still equivalent to copying and pasting fragments from copyrighted works into your source code.

        I consider that copyright should not be applicable to program sources, at least not in its current form, so reusing parts from other programs should be fair use, but only if human programmers would be allowed to do the same.
  • _flux 6 hours ago
    I think it should be pretty clear that if you provided the tool the specification for the code you want, you have already provided creative input.

    After all, is this not what happens with compilers as well? LLM agents are just quite advanced compilers that don't require the specification to be as detailed as with traditional compilers.
    • senaevren 5 hours ago
      The compiler analogy is the right one to reach for, and the Copyright Office addressed it directly: the question is not whether you provided input, it is whether the creative expression in the output reflects human authorship. With a traditional compiler, the programmer authors every expression in the source. With an LLM, the programmer authors the intent and the model makes the expressive decisions about structure, naming, pattern, and implementation. Whether that distinction matters legally is what Allen v. Perlmutter is working through right now. The summary judgment briefing completed in early 2026 and it may be the next landmark ruling on exactly this question.
    • yodon 6 hours ago
      > it should be pretty clear that if you provided the tool the specification for the code you want, you have already provided creative input.

      If you provided a human contractor with the specifications for the code you want, the courts have repeatedly made clear that you have not provided the creative input from a copyright perspective, and the contractor needs to explicitly assign those rights to you if you want to own the copyright on the code.
      • _flux 3 hours ago
        Let's say we didn't have assemblers, but instead had three professions:

        - Specifiers, who make the specification for the system
        - Programmers, who write C code
        - Machine encoders, who take that C code and write machine code for a CPU

        Would the copyright then belong to the programmers, if no other explicit assignments were made?

        ---

        Thinking about it, probably yes: copyright of the spec belongs to the specifiers, copyright of the C belongs to the programmers, and copyright of the machine code to the machine encoders. Or would it depend on the amount of optimization the machine encoders did, i.e. is it creative or not? And how does this relate to the copyrightability of C compiler output, where optimizations can sometimes surprise the developer?
    • hypercube33 6 hours ago
      To me this is like asking who owns the binary files a compiler generates.
  • jhbadger 6 hours ago
    This is of course assuming you take AI-generated code unchanged. But you don't, in my experience. And that generates a new work, fully copyrightable even if the original wasn't. Just like the fad a decade or so ago of taking Tolstoy and Jane Austen works and adding new elements: "Android Karenina" and "Sense and Sensibility and Sea Monsters" are copyrighted works even if the majority of the text in them came from public domain sources.
    • FartyMcFarter 6 hours ago
      The article addresses this explicitly:

      > Works predominantly generated by AI without meaningful human authorship are not eligible for copyright protection

      Note the word "predominantly", and the discussion that follows in the article about what the courts and the copyright office said.
      • wongarsu 6 hours ago
        Skimming over the article, it's a lot about what the copyright office said and very little about what courts said. But the opinion of the copyright office doesn't have any legal force. Regulations passed by the copyright office would be binding, but their opinions are just opinions. We will have to wait until the relevant court cases reach a conclusion. And so far the ongoing litigation isn't even about that question; it's about infringing the rights of works that are in the training data.
    • Luker88 6 hours ago
      No such assumption is made in the article, nor does it give a single answer.

      Mere prompting is still not enough for copyright, and it is an unsolved problem how much contribution a human needs to make to the generated code.

      In the case of generated images, copyright has been assigned only to the human-modified parts.

      Even worse, it will be slightly different in other nations. The only one that accepts copyright for the unchanged output of a prompt is China.
      • ModernMech 6 hours ago
        Here's a question I have: if the AI-generated image is of a character whose IP you own, don't you have protections based on the character regardless of who gets copyright protection from authorship of the image?
        • sarchertech 5 hours ago
          Yeah, if you have a copyright on the character, the AI-generated image doesn't change that. It doesn't give you more or less protection than you already had.
          • beej71 4 hours ago
            IANAL but this sounds more like trademark territory.
            • sarchertech 2 hours ago
              You can also trademark a character if it's used as a brand identifier in commerce.

              There are far more characters protected by copyright than by trademark.
    • conartist6 6 hours ago
      I'm sure it's not quite that simple. Only the parts of those knock-off works that aren't public domain could be copyrightable. If you only own the copyright to ten lines in a 10k-line codebase, then it's probably fair use for someone else to just take the whole thing.

      Plus, what if Anna Karenina were GPL?
      • d1sxeyes 3 minutes ago
        Anna Karenina is public domain, assuming you're talking about the original? If you translate it then maybe you could release it under the GPL, but that would be a bit odd.
    • brianwawok 6 hours ago
      You use humans to edit AI code? When you level up you are just using AI to write, AI to review, AI to edit, AI to test. Not a lot of steps left for meat bags.
      • mathgeek 6 hours ago
        You're forgetting that you need coffee/tea/mate to fuel the button pushers. The Jetsons predicted this decades ago.
      • gchamonlive 6 hours ago
        AI for review is terrible, and by no fault of its own. It's our job to specify and document intention, domain, and the right problems to solve, and that is just hard to do. No getting around it. That's job security for us meat bags.
      • ModernMech 5 hours ago
        AI to write - code is buggy and not what I asked for

        AI to review - shallow minutiae and bikeshedding

        AI to edit - wrote duplicated functions that already existed

        AI to test - special-casing and disabling code to pass the narrow tests it wrote

        AI report - "Everything looks good, ship it!"
    • throwatdem12311 6 hours ago
      OK, what about all the Anthropic engineers who say they don't write code at all and that it's 100% AI-generated?
    • gchamonlive 6 hours ago
      > This is of course assuming you take AI-generated code unchanged.

      How much code do you need to change in order for it to be original? One line? 10%? More than 50%?

      That's arbitrary and quite an unproductive convo, to be honest.
      • ninkendo 6 hours ago
        > That's arbitrary and quite unproductive convo to be honest.

        Yeah, but that's what the legal system ostensibly *does*. Splitting fine hairs over whether a derived work is "transformative" is something lawyers and judges have been arguing and deciding for centuries. Just because it's hard to define a bright red line doesn't mean the decision is arbitrary. Courts will mull over whether a dotted quarter note on the fourth bar of a melody constitutes an independent work all day long. It seems absurd, but deciding blurry lines is what courts are built to handle.
        • gchamonlive 4 hours ago
          EDIT: I changed my argument completely.

          That makes no sense, because what if you refactor your code ad infinitum using AI? You spin up a working implementation, then read through the code, catalog the changes you want (interface, docs, code quality, patterns), and delegate to the AI to write what you would have written.

          It's 100% AI code and it's 100% human code. That distinction is what's counterproductive.
        • stvltvs 4 hours ago
          Because at the end of the day someone has to own the code, so some lines have to be drawn no matter how arbitrary they seem.
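As an aside on the percentage question: no court uses a mechanical ratio, but the raw "how much changed" figure the thread is debating is trivial to compute. A toy sketch with Python's difflib (the function name is made up for illustration):

```python
import difflib

def fraction_changed(ai_draft: str, final: str) -> float:
    """Rough share of the final text not shared with the AI draft.

    Illustration only: legal "transformativeness" is about creative
    expression, not a character-level diff ratio.
    """
    matcher = difflib.SequenceMatcher(None, ai_draft, final)
    return 1.0 - matcher.ratio()
```

An unmodified merge scores 0.0 and a full rewrite approaches 1.0; everything the lawyers argue about lives in between.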
    • exe34 1 hour ago
      > This is of course assuming you take AI-generated code unchanged. But you don't, in my experience. And that generates a new work fully copyrightable even if the original wasn't.

      That's not how copyright works. The modified version is derivative. You can't just take the Linux kernel, make some changes, and slap a new license on it.
    • mzl 4 hours ago
      If you modify the work, that creates a derived work subject to whatever copyright the original work has, not a new work that is fully copyrightable.

      As the article says in the TL;DR at the top, the code may be contaminated by open source licenses:

      > Agentic coding tools like Claude Code, Cursor, and Codex generate code that may be uncopyrightable, owned by your employer, or contaminated by open source licenses you cannot see
    • 6stringmerc 5 hours ago
      Wrong. This territory was heavily covered in music before this code concept: it has to be "transformative" in the eyes of the law. Even going in and cleaning up code or adding 10-25% new code won't pass this threshold. Don't bother arguing with me on this; just accept reality and deal with it.
      • jhbadger 3 hours ago
        My copy of "Sense and Sensibility and Sea Monsters" is explicitly listed as being copyrighted by Ben H. Winters in 2009, despite the majority of the words being Austen's, though. Perhaps music has different rules compared to text. I suspect Winters and his publisher have investigated the legality of this more than either of us have.
        • acdha 50 minutes ago
          Jane Austen died long enough ago that her works are in the public domain, so Winters did not need a license to use them. That does not mean that he gained rights to her work: if he tried to sue someone for use of anything which appeared in the original, he would lose in court because it's easy to show that copies made before he was born had the same text. This is also how they prevent people from trying to extend copyright by making minor changes to an existing work: the new copyright only covers the additions.

          There's a very accessible summary of the United States rules here:

          https://www.copyright.gov/circs/circ14.pdf
  • daishi55 6 hours ago
    I'm no lawyer, but I feel that Meta, my employer, wouldn't be letting us go hog-wild with Claude Code if they weren't completely confident that they fully owned the outputs, whether we change them or not.
    • senaevren 5 hours ago
      Meta's confidence almost certainly rests on the employment contracts and IP assignment clauses, not on a legal theory that AI output is inherently copyrightable. The enterprise agreement with Anthropic assigns outputs to the licensee. The employment contract assigns work product to Meta. Those two documents together give Meta a defensible ownership position regardless of the authorship question. The interesting gap is for developers using personal accounts or consumer plans on side projects, where neither of those documents exists.
      • beej71 3 hours ago
        I don't understand how a company can have IP copyright rights in code that is inherently uncopyrightable (in the unlikely event SCOTUS rules that way).
    • sarchertech 4 hours ago
      There's so much FOMO right now around AI that no one is thinking clearly. I wouldn't be so confident in your company.
      • user342832 6 minutes ago
        To evaluate the legal risks of using AI-generated code, let's consider how many lawsuits there have been over these concerns.

        Inadvertent copyleft license violations: probably 0 lawsuits.

        Competitor copied your software and you could not defend your rights in court because it was made with AI: probably also 0.

        Users of agentic AI for software development: >10 million.

        The thinking here seems pretty clear to me.
  • bko 6 hours ago
    This is all well and good as an intellectual exercise, but in real life none of this matters. Almost no one thinks their code is copyrightable or seriously thinks their code is a moat. I've written the same chunks of code for a number of employers, as has every engineer. We've all taken chunks from Stack Overflow and other places without carefully considering attribution.

    This comes up in a few places as a kind of vindictive battle. One example is Oracle suing Google for too closely mimicking their API in Android. Here is an example:

        private static void rangeCheck(int arrayLen, int fromIndex, int toIndex) {
            if (fromIndex > toIndex)
                throw new IllegalArgumentException("fromIndex(" + fromIndex + ") > toIndex(" + toIndex + ")");
            if (fromIndex < 0)
                throw new ArrayIndexOutOfBoundsException(fromIndex);
            if (toIndex > arrayLen)
                throw new ArrayIndexOutOfBoundsException(toIndex);
        }

    And it was deemed fair use by the Supreme Court. Other times, high-frequency hedge funds have sued exiting employees, sometimes successfully. In America, anyone can sue you for any reason, so sure, you'll have Ellison take a feud up with Page and Brin all the way to the Supreme Court.

    In 99.9% of instances none of this matters. Sure, there's the technical letter of the law, but in practice, and especially now, none of this matters.

    https://www.supremecourt.gov/opinions/20pdf/18-956_d18f.pdf
    • freedomben 6 hours ago
      > Almost no one thinks their code is copyrightable or seriously thinks their code is a moat.

      You'd be surprised! Among non-software management types, they often think of the code as extremely valuable IP and a trade secret. I'm a CTO and I've made comments before to non- or less-technical peers about how the code (generally speaking) isn't that big of a secret, and I routinely get shocked expressions. In one case the company almost passed on a big contract because it required disclosure of the source code (with an NDA). When I told them that was a silly reason and explained why, they got it, but the old way of thinking still permeates and is a hard habit to break.

      Edit: Fixed errant copy-pasta error. Glad that wasn't a password :-)
      • bko 6 hours ago
        You're right, I guess maybe I mean in any serious actionable way. Senior, non-technical people leave plenty of money on the table by thinking they're protecting something valuable or have some kind of secret sauce. It's all silly, is what I meant to say, and digging into the technicalities of whether your code is truly copyrightable is kind of pointless. It's all vibes.
        • senaevren 5 hours ago
          The place where it concretely matters is M&A due diligence. Acquirers are now routinely asking about AI tool usage in development and running license scans as a condition of closing. A codebase that cannot demonstrate human authorship over its core IP, or that contains GPL contamination, creates a representation and warranty problem in the purchase agreement. For most companies day to day you are right. For the companies that get acquired or raise institutional capital, the question becomes very concrete very quickly.
          • freedomben 9 minutes ago
            Very interesting, I had no idea. That's probably going to be a very painful lesson learned by all the startups that have been pumping out AI code. I know of several just among my peer groups that will be shocked and dismayed by this. Thanks for sharing that!
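Mechanically, the license scans mentioned in this subthread are pattern searches over the source tree. A deliberately naive sketch (real diligence uses dedicated scanners such as ScanCode Toolkit or Black Duck; `scan_tree` is a made-up name):

```python
import pathlib
import re

# Naive illustration of what a license scanner looks for. Real tools
# match full license texts and track provenance, not just markers.
MARKERS = re.compile(
    r"SPDX-License-Identifier:\s*[\w.+-]+"
    r"|GNU (?:Lesser |Affero )?General Public License",
    re.IGNORECASE,
)

SOURCE_SUFFIXES = {".py", ".c", ".h", ".js", ".ts", ".go", ".java"}

def scan_tree(root: str) -> dict:
    """Map each source file under root to the first license marker found in it."""
    hits = {}
    for path in pathlib.Path(root).rglob("*"):
        if path.suffix in SOURCE_SUFFIXES and path.is_file():
            match = MARKERS.search(path.read_text(errors="ignore"))
            if match:
                hits[str(path)] = match.group(0)
    return hits
```

Any hit in a file the team claims was written in-house is the kind of finding that turns into a representation-and-warranty question during closing.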
      • hackingonempty 5 hours ago
        Maybe LLM coding agents change the equation by making it much easier to adapt and use foreign and probably incomplete code, getting you closer to competing with the original authors in a shorter amount of time than writing new code from scratch.
    • conartist6 6 hours ago
      Nobody ever talks about convergence.

      You, right now, are talking about convergence.

      If there is no artwork, there can be no copyright. If every character of the code you write is basically predetermined by the APIs you need to call, there is no artwork and no copyright.

      Build a novel new API, and you'll be protected, though.
    • sarchertech 4 hours ago
      > Almost no one thinks their code is copyrightable

      Every open source license is built on the premise that code is copyrightable.
      • adrian_b 38 minutes ago
        No. It is based on the premise that if the proprietary licenses are valid, then the open source licenses are also valid.

        So what is held as true is only the implication stated above, not the truth value of the claim that either kind of license is valid. If the proprietary licenses are not valid, then it does not matter that the open source licenses are also not valid.

        The open source licenses are intended as defenses against the people who would otherwise attempt to claim ownership of that code and apply a proprietary license to it, i.e. exactly what Anthropic and the like have now done, together with their corporate customers.

        Of course, if it is accepted that the code generated by an AI coding assistant is not copyrightable, then using it would not really be a violation of the original open source licenses. The problem is that even if this principle is the one accepted legally, at least for now both Anthropic and their corporate customers appear to assume that they own the copyright for code that should have been either non-copyrightable or governed by the original licenses of the code used for training.
    • Rietty 6 hours ago
      Why were the HFT firms suing employees?
    • Nursie 6 hours ago
      > Almost no one thinks their code is copyrightable

      I think this is an unusual opinion.

      Code may not be copyrightable in chunks as small as the one you posted, but in terms of larger pieces, I think companies and individuals very often labour under the belief that code is intellectual property under copyright law.

      If code isn't copyrightable, from where comes the GPL?

      And why does anyone care if (for instance) some Microsoft code might have accidentally ended up in ReactOS, causing that project to need to go into a locked-down review mode for months or years? For that matter, why do employers assert that they own the copyright in contracts?

      I think it's the opposite: almost everyone thinks their code is copyrightable, outside of APIs and interop stuff, or things so simple as to be trivial.
    • croes 6 hours ago
      > Almost no one thinks their code is copyrightable

      Then why does reverse-engineered code need to be a clean-room implementation?

      Ask any emulator developer, or the developers of ReactOS:

      https://reactos.org/forum/viewtopic.php?t=21740
  • TheFirstNubian5 hours ago
    The elephant in the room, of course, is what constitutes “meaningful human authorship.” However, I cannot shake off the feeling that all user interactions with these AI models are being logged. Perhaps this may turn out to be the bigger concern in a potential legal battle than code authorship.
  • palata5 hours ago
    One question I have is this: if an employee produces code that is predominantly generated by AI, then that code is not copyrightable. Does that mean that the employee can take that code and publish it on the Internet?<p>Or is it still IP even if it is not copyrightable? That would feel weird: if it&#x27;s in the public domain, then it&#x27;s not IP, is it?
    • senaevren5 hours ago
      That is exactly the right question and the answer is genuinely strange. Uncopyrightable work falls into the public domain, which means anyone can use it, copy it, or build on it freely. The employer can still call it a trade secret and protect it through confidentiality obligations in employment contracts, but that protection is contractual rather than property-based. A trade secret loses protection the moment it is disclosed. So the employer&#x27;s claim over purely AI-generated code is essentially: &quot;you cannot share this&quot; rather than &quot;we own this.&quot; Those are meaningfully different legal positions, and most companies have not thought through which one they actually have.
      • zvr3 hours ago
        Yes, and if the same code ends up in someone else&#x27;s hands, they can state &quot;we didn&#x27;t steal it, a GenAI generated it for us, the same as it did for you&quot;. Given the non-deterministic operation of current GenAI systems (a major difference from compilers), it would probably be hard to prove either position.
      • palata4 hours ago
        So employees are not allowed to distribute the code, but if it leaks, then it is public and the company cannot do anything about it. Correct? That&#x27;s what happened to Anthropic I think?
    • BlackFly5 hours ago
      A recipe isn&#x27;t copyrightable but is still protected under trade secret law. I imagine that the same would apply. I think the major difference with software copyright is that I can just decompile your binary or copy a binary and give it to other people. For SAAS companies that don&#x27;t distribute binaries, I imagine they basically have the same protections against rogue employees.
    • cillian645 hours ago
      To look at it another way, just because some code I work on at my job is derived from open source MIT-licensed code doesn&#x27;t mean I personally have the right to distribute it if my company doesn&#x27;t want me to. I&#x27;d guess this comes under some generic &quot;confidential information&quot; clause in the employment contract.
      • palata4 hours ago
        Hmm, your example is different: if you manually write code, there is a copyright for it whether it is derived from MIT-licensed code or not. If you don&#x27;t own that copyright (because your employer does), then you don&#x27;t have the right to distribute it, because it is not your code.<p>If you generate the same code with AI, now it does not have a copyright. If it depends on an MIT library, then the MIT library has a copyright and you have to honour the licence. But the code you produced does not have a copyright (because it was generated by an AI). And therefore nobody &quot;owns&quot; it. My question is: can your employer prevent you from distributing something they don&#x27;t own?
    • ModernMech5 hours ago
      Presumably company policy would be implicated here, not copyright law. Whether or not it&#x27;s copyrightable, what you create using AI is work product.
  • threepts37 minutes ago
    Whoever pays for the tokens.
  • joshka5 hours ago
    If you want to go much deeper, <a href="https:&#x2F;&#x2F;www.copyright.gov&#x2F;ai&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.copyright.gov&#x2F;ai&#x2F;</a> is particularly good at least on the side of comprehensiveness.
  • hackingonempty5 hours ago
    Nobody disputes that I own the copyright in a sound recording I made just by pushing the red button on my recorder. So it is a mystery to me that copyright in any sort of human-conditioned machine generation is in dispute.
    • senaevren5 hours ago
      The sound recording analogy breaks down at the point where the recorder makes no creative decisions. Pressing record captures what is already there. Prompting Claude generates something that did not exist, through decisions the model makes about structure, naming, pattern, and implementation. The closer analogy is hiring a session musician and telling them the key and tempo. You own the recording under work-for-hire if they signed the right contract, but the creative expression in the performance is theirs unless explicitly assigned. The button you push to start the model is not the same button as the one on the recorder.
      • CamperBob255 minutes ago
        Fourier theory says that any sound, however complex, can be synthesized by summing sines and cosines. That&#x27;s what an LLM does, if you twist the metaphor enough. It synthesizes complex outputs from simpler basis functions that are, or should be, uncopyrightable.<p>The fact that it inferred those basis functions from studying copyrighted works doesn&#x27;t seem relevant. Nor does the fact that the &quot;Fourier sums&quot; sometimes coincide with larger fragments of works that are copyrighted. How weird would it be if that <i>didn&#x27;t</i> happen?
  • e12e6 hours ago
    Seems to gloss over other kinds of contamination beyond GPL code: code from pirated text books, the problem of the entire language model being trained on copyrighted data, and the possibility of the training data containing various copyrighted code.
    • embedding-shape6 hours ago
      &gt; Code from pirated text books<p>Anthropic &quot;solved&quot; this by intermingling the texts extracted from pirated books (illegal) with texts extracted from the physical books they bought and destroyed (legal), so no one can clearly say if the copyrighted material it spits out came from a legal source or not. Everyone rejoiced.
      • senaevren5 hours ago
        The intermingling argument is actually central to the Bartz settlement structure. The settlement required destruction of the pirated dataset specifically because commingled training data creates an unresolvable provenance problem. For deployers building on Claude, EDPB Opinion 28&#x2F;2024 requires a documented assessment of the foundation model&#x27;s training data legal basis before deployment. &quot;We cannot tell which outputs came from which source&quot; is not a satisfactory answer to a regulator running that assessment. I wrote about it before here: <a href="https:&#x2F;&#x2F;legallayer.substack.com&#x2F;p&#x2F;i-read-every-edpb-document-on-llm" rel="nofollow">https:&#x2F;&#x2F;legallayer.substack.com&#x2F;p&#x2F;i-read-every-edpb-document...</a>
      • e12e2 hours ago
        &gt; books they bought and destroyed (legal)<p>They&#x27;re only legal if training is fair use - and even then I don&#x27;t think it&#x27;s immediately clear what the legal status would be of verbatim regurgitation of copyrighted code, or code protected by patents.<p>AFAIK I (as a human developer) can&#x27;t assume that I can go and copy code out of a text book, and then assume copyright and charge for a license to it?
        • embedding-shape2 hours ago
          &gt; They&#x27;re only legal if training is fair use<p>The judge seems to have said it&#x27;s because they &quot;transformed&quot; the books (destroying them after digitalizing) in the process, that made it legal.<p>&gt; Ultimately, Judge William Alsup ruled that this destructive scanning operation qualified as fair use—but only because Anthropic had legally purchased the books first, destroyed each print copy after scanning, and kept the digital files internally rather than distributing them. The judge compared the process to “conserv[ing] space” through format conversion and found it transformative. - <a href="https:&#x2F;&#x2F;arstechnica.com&#x2F;ai&#x2F;2025&#x2F;06&#x2F;anthropic-destroyed-millions-of-print-books-to-build-its-ai-models&#x2F;" rel="nofollow">https:&#x2F;&#x2F;arstechnica.com&#x2F;ai&#x2F;2025&#x2F;06&#x2F;anthropic-destroyed-milli...</a>
  • tommy29tmar5 hours ago
    Maybe the useful test is not “who wrote this line?” but “can you show how it went from requirement&#x2F;prompt&#x2F;context to diff to human review&#x2F;tests?” If you can’t, ownership is only one issue. You also can’t tell what was accepted as engineering work versus just copied output.
    • senaevren5 hours ago
      This is actually closer to how the Copyright Office thinks about it than the article makes clear. The registration guidance that emerged from the Thaler proceedings specifically asks applicants to describe the human creative contributions and how the AI was used. A documented workflow showing requirement, architectural decision, rejection of AI output, human restructuring, and review creates a paper trail that maps directly onto what the Office looks for. The &quot;can you show how it got here&quot; test you are describing is the practical version of the legal standard.
  • skadge6 hours ago
    This seems to be grounded in US law. Does anyone know if the same rules would apply in eg EU law?
    • zvr3 hours ago
      Most of this is based on Copyright legal framework, which is surprisingly homogeneous around the world. The discussions about ownership of AI-generated material are exactly the same in EU.
    • senaevren5 hours ago
      [dead]
  • bearjaws6 hours ago
    The article is incredibly fear-mongering.<p>Twice in my career the owners of a company have wanted to sue competitors for stealing their &quot;product&quot; after poaching our staff.<p>Each time, the lawyers came in and basically told us that suing them for copyright is suicide, will inevitably be nearly impossible to prove, and the money would be better spent in many other areas.<p>In fact, we ended up suing them (and they settled) for stealing our copyrighted clinical content, which they copied so blatantly they left our own typos and customer support phone number in it.<p>Go ahead, try to sue over your copyrighted code; 10 years and 100M later you will end up like Google v Oracle. What if the code is even 5% different? What about elements dictated by external constraints - hardware, industry standards, common programming practices? These aren&#x27;t copyrightable.<p>Then you have merger doctrine: how many ways can we really represent the same basic functions?<p>Same goes for the copyleft argument. &quot;Code resembling copyleft&quot; is incredibly vague; it would need to be the code verbatim, not resembling it. Then you have the history of copyleft: there have been many abuses of copyleft and only ~10 notable lawsuits. Now, because AI wrote it (which makes it _even harder_ to enforce), we will see a sudden outburst of copyleft cases? I doubt it.<p>Ultimately anyone can sue you for any reason; nothing is stopping anyone right now from suing you claiming AI stole their copyleft code.
  • kouru22548 minutes ago
    IMO this is the greatest argument against AI as technofascism. The general public seems to believe that AI will usher in technofascism by claiming corporate ownership of AI output: the independent entrepreneur will be unable to compete against the corporations&#x27; compute, every piece of data about you will be stolen and monetized by AI, and you will own nothing.<p>But AI might in fact do the exact opposite and reverse the privatization trend that the West has been going through for the last 400 years. All of our copyright laws rely on the idea that there is a human consciousness behind the copyright. The more input AI has, the less we can claim ownership. If AI returns everything to the commons, then it results in a much more egalitarian world.<p>Hilariously, many people, especially artists, see the return of the commons as an assault against them. They’re so captured by copyright that they assume any infringement on their copyright is inherently fascist. It’s ridiculous. Copyright is a corporation&#x27;s number 1 weapon when it comes to creating a moat and keeping the masses out.<p>The original intent of copyright, in fact, was an incentive to return an idea to the commons. Experts used to hide their discoveries in order to keep them for themselves. Copyright provided an opportunity to release this knowledge and still profit. There were even several cases where it was established that those who claimed copyright could retain it even if the idea had been previously discovered. This created a huge incentive: release the knowledge or risk having your process copyrighted by the opposition. But that system worked because copyright could only exist for so long (14 years, doubled if they filed again).<p>Now copyright is a lifelong sentence at almost 100 years. The entire purpose of it has been undermined. Corporations own all of your childhood, and by the time you can profit off of it, it’s outdated.<p>A world where the mainstream is primarily a commons seems to me like an egalitarian world. I’d like to live in that world.
    • senaevren3 minutes ago
      The original bargain you describe, limited term in exchange for public disclosure, is exactly what makes the current situation strange. If AI-generated output falls into the public domain immediately, that is actually closer to the original intent of copyright than 95-year terms. The legal question is whether that outcome happens by design or by accident, and what it means for the people building products on top of AI-generated codebases right now.
  • padmabushan6 hours ago
    First answer who owns the model built with public data
    • senaevren5 hours ago
      The model ownership question and the output ownership question run on separate legal tracks and the piece focuses on the second deliberately. On the first: the model weights are owned by Anthropic under work-for-hire from their engineers regardless of what the training data contained. Training data copyright infringement is a separate tort claim against Anthropic, not a basis for anyone else to claim ownership of the model. The Bartz settlement resolved the pirated books claim without disturbing Anthropic&#x27;s ownership of the weights. Owning the training data does not give you ownership of the model trained on it, any more than owning the paint gives you ownership of the painting.
  • smashed6 hours ago
    The &quot;if you generated the code at work using company tools, it&#x27;s owned by your employer&quot; assertion in the article makes no sense to me.<p>If computer-generated code is not copyrightable, ownership cannot be reassigned either.
    • conartist66 hours ago
      It is copyrightable. A *human* can copyright code they wrote.
      • smashed6 hours ago
        I meant in the sense that the &quot;tool&quot; is an LLM and the &quot;work&quot; was vibe coded.<p>If vibe coded work is not copyrightable, it cannot be reassigned to the employer and become copyright protected.
        • senaevren5 hours ago
          This is the sharpest point in the thread. You are right if the output has no copyright to begin with, there is nothing to assign. The employer&#x27;s contractual claim over purely AI-generated code is not a copyright claim, it is a trade secret and confidentiality claim. Those are weaker protections: they require the information to remain secret, they do not survive disclosure, and they cannot be enforced against independent creation of the same code. Most IP assignment clauses in employment contracts were not drafted with this scenario in mind and may be claiming rights that do not legally exist.
        • conartist66 hours ago
          correct
    • croes6 hours ago
      How does this work for human developers now, if the company tool is a cloud tool and not running on company servers?
  • mensetmanusman6 hours ago
    It’s the same as photography. No photographer built the multibillion-dollar supply chain for the optical train in a camera, nor did they build the cityscape they are enjoying as a background; they simply set the stage and push a button.
  • DeathArrow6 hours ago
    I have a wood cutting machine and some wood. Who owns the timber?
    • bell-cot6 hours ago
      Sadly, IP &quot;ownership&quot; and copyright law are <i>vastly</i> more complex than ownership of physical stuff.<p>Or were you planning to reproduce the (say) Ford Motor Company&#x27;s trademarked symbol in wood? If so, you&#x27;re right back in the stinkin&#x27; swamp.
    • croes6 hours ago
      What is the wood in your example?<p>This is like a machine you ask for timber and you get timber but you didn’t need to provide any wood
  • senaevren6 hours ago
    [dead]