Amazon has new ‘AI declaration’ rules, currently only being applied to ebooks…
We define AI-generated content as text, images, or translations created by an AI-based tool. If you used an AI-based tool to create the actual content (whether text, images, or translations), it is considered “AI-generated,” even if you applied substantial edits afterwards.
Book creators must declare any such use of generative AI, even if the output is later heavily edited by a human.
Thus it seems important not to use book covers that include any AI-created elements. Even if you use a stock AI-created backdrop, and it makes up only 20% of the final cover, Amazon requires the whole book to be labelled “AI generated”.
If not labelled, there seems to be a real risk the book will be pulled from the store. This will become especially relevant once AI watermarking is rolled out, as Amazon’s bots will then be able to auto-detect the AI. In the meantime there is also a risk with content that might attract the attention of activists of either the right or the left, seeking a way to have it ‘cancelled’. They might pounce on an undeclared use of AI.
AI translation is also covered. Thus if a scholar uses an AI-powered translation service to translate just one required quote (from Latin, say), then presumably the whole book again has to be labelled “AI generated”. AI-made abstracts, tables of contents, cover blurbs (and eventually AI-generated back-of-the-book indexes) could also fall foul of the new rules. Even if heavily edited by a human.
And you might say… how will they tell? Ah, well… AI output from the main corporate tools is set to be invisibly watermarked, with Google already rolling out its version of watermarking last week. Nvidia has just signed up to watermarking too, raising the prospect of embedding at the graphics-card level. Steganography… look it up.
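To give a feel for what ‘invisible’ means here, the toy sketch below hides a short label in the least-significant bits of an image’s pixel values. This is a minimal, assumption-laden illustration of basic steganography (the helper names and the fake pixel data are mine), not how Google’s or Nvidia’s production watermarks actually work; those are built to survive cropping, compression and editing. But it shows how a tag can sit inside a picture with no visible change.

```python
# Toy illustration of invisible watermarking via least-significant-bit (LSB)
# steganography. NOT a real production scheme -- just a sketch of the idea
# that information can hide in an image without any visible difference.

def embed(pixels: list[int], message: bytes) -> list[int]:
    """Hide `message` in the lowest bit of each 8-bit pixel value."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite only the least significant bit
    return out

def extract(pixels: list[int], length: int) -> bytes:
    """Read back `length` bytes from the low bits of the pixel values."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

if __name__ == "__main__":
    cover = list(range(256)) * 10            # stand-in for greyscale pixel data
    tagged = embed(cover, b"AI-generated")
    # Every pixel value changes by at most 1 -- invisible to the eye...
    assert all(abs(a - b) <= 1 for a, b in zip(cover, tagged))
    # ...yet the label is trivially recoverable by anyone who knows the scheme.
    print(extract(tagged, len(b"AI-generated")))   # b'AI-generated'
```

The point: a human sees nothing, but any detector that knows the scheme reads the label straight back out, which is what would let a store’s bots flag undeclared AI content automatically.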
And where such labelling leads is very uncertain. For instance, having your book labelled “AI generated” might soon mean it doesn’t appear in search, or can only be found on the Amazon store with difficulty. You may even find it blocked by some third-party Web browser add-on, cooked up by an AI-hater.
An example is DeviantArt’s AI declaration, required of people posting pictures. This seemed benign at first… until it wasn’t. Some weeks later, users found they could block all images tagged “AI”. Those who had been honest and trusting of the company suddenly found their work being automatically ‘disappeared’.