There's No Moral Objection to AI Art
"Pirate" training of generative AI is fair use and in the public interest
Property is essentially coercive: property rights exclude others from use of the “owned” good. But there are obvious reasons why property is nonetheless a socially valuable institution (especially for “rival” goods, like a sandwich, that cannot be shared without loss). At least in a sufficiently just society, you can often justify excluding others from use of your stuff, when their use of it would otherwise exclude you.
Digital goods, unlike material ones, are non-rival: free copying means that sharing leaves the original holder no poorer. This makes intellectual property gratuitously exclusionary in a way that is presumptively bad (at least to some extent), and calls out for justification. It’s not just about excluding others from the use of the particular digital token that you’re using; it’s about preventing others from making copies of the same type — copies that they could make, and enjoy, at no cost to you. There is surely no “natural right” to such gratuitous exclusion.
Nonetheless, we can justify (limited forms of) intellectual property on the pragmatic grounds that it helps to incentivize valuable innovation and creative works. Restricting access allows creators to profit from their creations, which incentivizes more creative acts than would otherwise occur. If the value of the extra innovation thereby gained outweighs the disvalue of restricting access to past goods, then the IP restrictions are overall worth it: a necessary evil to spur valuable innovation.
Still, it’s important to recognize that “necessary evils” are evils to be minimized as far as possible, not intrinsic goods to be cherished for their own sakes. If we can get the same instrumental benefits with fewer restrictions (e.g. shorter copyright terms, more liberal “fair use” exemptions, etc.), then we should leap at the opportunity. More innovation with less restriction would be ideal, which gives us reason to explore alternative incentive structures like publicly-funded prizes as an alternative to patents for medical or other scientific innovations.
This pragmatic stance on intellectual property (as a “necessary evil”) contrasts with the propertarian view that creators have a natural right to control how others interact with their creations. The latter view treats it as presumptively wrong to benefit from another’s work without their permission.
This is, in other words, the distinction between "free culture" and "permission culture".
Free vs. Permission Culture
As Lessig, in Free Culture, characterizes the opposing view:
Creative work has value; whenever I use, or take, or build upon the creative work of others, I am taking from them something of value. Whenever I take something of value from someone else, I should have their permission. The taking of something of value from someone else without permission is wrong. It is a form of piracy.
He adds:
[This] is the perspective that led a composers’ rights organization, ASCAP, to sue the Girl Scouts for failing to pay for the songs that girls sang around Girl Scout campfires. There was “value” (the songs) so there must have been a “right” — even against the Girl Scouts.
The propertarian fear that someone, somewhere might obtain value from others without permission is not new. Lessig describes how, when the camera was first invented, the reactionary propertarians of the time argued that photographers should not be allowed to obtain free value by taking images of others without permission. Thankfully, the courts rejected propertarian extremism back then, and instead ruled in favor of the “pirates”. Thus we now enjoy photography without the burden of legal regulations that would effectively put this technology out of reach of ordinary citizens. Things could have turned out very differently.
Imagine how much emptier our lives would be if all value were privately “owned” and locked down by default. Even thoughts could constitute piracy: You may not hum a tune, even silently within your head, without paying extra to the artist for the mental licensing. You may never think any valuable thoughts inspired by another without their permission. You may not creatively build on their work, or parody them, or create original “mash-ups” from unoriginal parts; nothing without permission. Given how powerfully transaction costs can deter action, the end result here would be vastly less value in the world.
In prohibiting action by default, permission culture is deeply illiberal. (And clearly dystopian when taken to extremes.) This should make people very, very cautious of valorizing “permission-seeking” as a general obligation. Except for where there’s a clear utilitarian case for cultural regulation, our default presumption should be to favor freedom.
The Presumption of Liberty
We may grudgingly grant the need for (limited) intellectual property laws to incentivize production, much as we grant (limited) material property rights to encourage stability and investment (while still being subject to taxation). But the idea that creators have a presumptive moral right to restrict how others engage with their work—regardless of the social cost—is just abominable.
Imagine an artist in a patriarchal society complaining when women are allowed into the art museum for the first time: “I never gave permission for women to view my art!” This artist has no legitimate moral complaint, I’d say, because he has no moral right to make his work accessible only to men. Likewise, artists have no moral right to make their work accessible only to humans. They have no legitimate complaint if an AI trains on the work they post online, any more than they can complain about a young human artist “training on” (or learning from) their work. In claiming such veto powers—demanding that their permission be sought before such actions are undertaken—the artists are claiming power over others without adequate basis.
Generalizing from these examples, we may conclude that people do not have rights of restriction or exclusion by default. To seek to restrict others is (to some extent) bad in itself, and so stands in need of pragmatic justification in order to qualify as a “right”. Sometimes such justifications exist, so we do turn out to have certain rights of restriction: We may, for example, choose what we wish to make public at all; the things we keep private should not then be publicized without our consent. There are excellent utilitarian justifications for such rights to privacy. The same cannot be said for rights to share with the public but for a certain demographic, or but for training generative AI. There is no utilitarian justification for those rights, and they are not the sorts of rights that could exist without a utilitarian justification.
By contrast, note that training—as a form of education or capability-enhancement that tends towards innovation—is presumptively good. It’s the sort of thing we should favor by default, pending countervailing reasons.
“AI is theft” is the modern-day version of “taxation is theft”: a slogan steeped in shallow propertarian ideology, revealing a failure to understand that property is an inherently coercive institution that (i) can only be justified instrumentally, and (ii) should thus allow every exception that would better serve the greater good.
Redistributive taxation that successfully serves the greater good is not any kind of “theft” worth worrying about, and nor is AI training (on existing cultural products) that successfully serves the greater good. In either case, there are serious debates to be had about how the empirical costs and benefits balance out: what we hope will do good may turn out otherwise. But moralizing about “theft” is a sideshow.
Below the paywall:
As per the above, there’s something inherently shitty about paywalls. I’d much prefer a situation in which (i) everyone who would be willing to pay to read my posts continues to do so even without a paywall, and (ii) I make everything available for free to everyone. Alas, in the absence of condition (i), it seems prudent to occasionally compromise on (ii) a bit, so that my massive time investment in blogging is at least slightly rewarded (plus it allows me to do more good via donating more).
Since paywalls are, at best, a “necessary evil”, I’m generally supportive of efforts to get around them so long as it doesn’t undermine the author’s incentives to produce valuable content in the first place. (For example, subscribers should feel free to forward a full unlocked email to select individuals who might just be interested in that particular post.) Given the value ordering:
Paid ride > Free ride > No ride
Anything that (i) can help people to get a free ride who would otherwise not ride at all, (ii) without reducing the numbers of paying patrons (or having other ill effects), is overall to the good.
Below this particular paywall, as a bonus for my paid subscribers, I offer additional sections exploring:
(1) Why I think neo-Luddite data poisoning is morally vicious and anti-social; and
(2) Further thoughts on the general moral orientation that leads people into the error of demanding permission as a prerequisite to AI training. I think a lot of people don’t intuitively grok how deeply appalling this moral perspective is, so I’ll try my best to bring this out more clearly. (It intersects in interesting ways with some of my broader philosophical themes.)