
I received the following email last week:
Ms. Hoy:
I have a question for you. I hate to burden you with it, but I have nobody else to turn to.
Last week I finished a story and sent it to (a well-known business magazine). Almost immediately, I received an email from the editor saying that what I had done was unprofessional and unethical. Further, she referenced a previously published piece someone else wrote for Business Insider. I looked at the piece she referenced, and it was just a bunch of pictures with some 2-3 line captions.
My article was over 1,400 words long. I’m not sure how you plagiarize a few captions into a 1,400-word article, but somehow I guess I did it.
To prove my point, I ran my article through Copyscape, and it came back clean. No copying at all. I even sent the editor a photo of the results. I think the editor cut me off because I never heard from her again, so I sent the managing editor an email explaining the situation.
Today, he responded that they believed my article had appeared somewhere. Whether or not it was generated using AI was unknown to them.
How do you prove you haven’t done something? You might also be interested to know that I have never worked with AI and have no idea how to use it. Thoughts?
Unfortunately, we’re going to be hearing about this more and more.
If a writer has previously published an article (or a similar one), the AI systems can pick that up. If your article was similar to one written by someone else, same thing. I really can’t explain the caption issue, but it seems they’re using a very stringent AI detector (that’s not necessarily a good thing), and it probably came back with a link to that piece of photos with captions.
Once you pointed that out to the editor, she may have gotten embarrassed (because she didn’t do her job right!), and that’s why she stopped responding. The managing editor’s response was ridiculous.
The problem, which University of Tennessee English Professor and Professional Editor Clayton Jones explained in Episode 6 of the WritersWeekly podcast, is that AI pulls data from the Internet, duplicates it, puts it back out there…and then pulls it again, duplicates it again, and so on. Clayton likened it to making a Xerox copy of a Xerox copy of a Xerox copy. In the end, you have a watered-down, poor-quality piece of paper. Where AI is concerned, you have watered-down duplications of data.
You’re more apt to see this with non-fiction because there is so much information on pretty much every subject on the Internet.
What’s worse? Authors who upload their books, articles, or any writing at all to AI tools, thinking AI can make their work better, are FEEDING THEIR OWN WORK into those AI systems, where their words will likely exist forever. The problem with that is those writers will eventually get accused of plagiarism themselves, just because they shared their work with an AI program.
These are the AI detectors we use (not that they’re innocent of doing what I described above!):
https://quillbot.com/ai-content-detector
https://www.grammarly.com/ai-detector
For Grammarly, you can only use it once a day for free. If you try to use it twice, they’ll hit you up for $$.
RELATED
- Have YOUR Books Been Used to Train AI? – by James M. Walsh, Esq. – April 2025
- All’s Fair in Love and…Hey! Wait a Minute! That’s Mine! A Brief Discussion on FAIR USE – by Neil Wilkinson
- AI: How to Help Students Avoid the New Plagiarism – by Rickey Pittman
- Generative vs. Assistive AI…and When Writers Need To Disclose – by K.M. Robinson
- Beware the (False!) AI Sniff Test – by Diona L. Reeves
- The Impact of AI on Screenwriting in Hollywood – by Mark Heidelberger