ChatGPT, AI, and the Future of Writing and Publishing – by Brian Whiddon, Managing Editor

Angela is covering for an employee on maternity leave so our star Managing Editor, Brian Whiddon, has once again stepped in to cover for her. Thanks, Brian!


Unless you have been flat-out avoiding the news lately, you’ve most likely heard something about ChatGPT, and the amazing things that this artificial intelligence (AI) application can do.

But, first, what is AI? There are actually four types:

  1. Reactive Machines
  2. Limited Memory
  3. Theory of Mind
  4. Self Aware

And, what is ChatGPT?

“In its own description, ChatGPT is ‘an AI-powered chatbot developed by OpenAI, based on the GPT (Generative Pretrained Transformer) language model. It uses deep learning techniques to generate human-like responses to text inputs in a conversational manner.’” – CNBC.com

Many are heralding this technological breakthrough, and all the possibilities it presents. However, many are asking very legitimate questions concerning its impact on human civilization. The concerns range from intellectual property rights and educational integrity to fears concerning individual privacy, social engineering, and simply whether the human race will be able to tell reality from fiction.

I could easily write 10,000 words on the various arguments I’ve heard over the last year or so on ALL sides of the arena concerning AI. But, I’m narrowing down the topic here to specifically discuss its impact on writing and publishing.

For those of you who don’t follow, understand, or even care about artificial intelligence, or how it works, and how it is used by regular (non-techie) folks, let me briefly bring you up to speed:

I’m typing this article up on Microsoft Word. I have my Spellcheck on and it is correcting my spelling mistakes as I type. This program will even highlight sentence fragments, contractions, punctuation, and other text that it deems “questionable” to draw my attention. I can click on any of these highlighted items to see a menu of options, asking if I really meant to write “I’m” rather than “I am,” or if a comma might be better than that colon I placed in a paragraph, or even if I want to re-word a certain phrase so it will make a little more sense. It even told me that I just made a run-on sentence!

Spellcheck is a very rudimentary form of AI that has been around, and evolving, since the ’80s.

Fast forward to modern times. We’ve all had the experience of having to call our credit card company, or a product help line, or a myriad of other businesses – only to be greeted by a staticky-sounding female voice who calmly explains our “options” to us, and asks us to briefly tell “her” in a few words what we are calling about. Then, based on our verbal response, like “account balance,” “technical support,” “other,” or even “representative… representative… REPRESENTATIVE!!!” – the magical voice will transfer our call to the appropriate human being or – God forbid – yet another menu we have to sit through. Guess what? You have interacted with a form of artificial intelligence.

Now, all of that can basically be achieved with a whole lot of data entry, using computer code to assign particular actions in response to specific input a computer receives. We are a long way beyond the archaic “If-Then” commands of the old BASIC programming language (for those of you who can still remember the old “TRS-80”) but, at the grassroots level, it’s still the same thing.

But, over the last few decades, tech wizards – never satisfied to leave well enough alone – have labored to enable computers to actually “learn” from each input and resulting action they encounter, improving their ability to handle more tasks faster and more accurately. In the most basic sense, a computer can become its own data entry “person,” adding more potential choices to the way it responds to input, and remembering those choices without a human having to come and type in coding commands that say “If ‘X’ happens, then perform ‘Y’ function.” In quick and dirty terms, computers “learn” for themselves.
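To make the contrast concrete, here is a toy Python sketch – purely illustrative, and nothing like the sophistication of real machine-learning systems – of the difference between a hard-coded “If-Then” responder and a program that records new responses on its own:

```python
# Old-school approach: every response is hand-coded by a programmer.
def hard_coded_reply(prompt):
    if prompt == "account balance":
        return "Transferring you to billing."
    elif prompt == "technical support":
        return "Transferring you to tech support."
    else:
        return "Sorry, I don't understand."

# "Learning" approach: the program records new prompt/response pairs
# itself, so no human has to type in a new If-Then rule for each one.
class LearningBot:
    def __init__(self):
        self.rules = {}  # learned prompt -> response table

    def teach(self, prompt, response):
        self.rules[prompt] = response  # the bot "remembers" this choice

    def reply(self, prompt):
        return self.rules.get(prompt, "Sorry, I don't understand.")

bot = LearningBot()
bot.teach("representative", "Transferring you to a human.")
print(bot.reply("representative"))
```

The first function can never do anything its programmer didn’t anticipate; the second grows its own rule table as it goes. Actual AI systems replace that simple lookup table with statistical models trained on enormous amounts of data, but the basic idea – behavior acquired from input rather than typed in by hand – is the same.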

ChatGPT is the latest and greatest iteration (and evolution) of this technology. Basically, programmers have been working on how to make a computer understand the very diverse and complex human language (as opposed to the very exact and direct language of computer code), and then respond to humans in a way that looks or sounds convincingly like another human.

Unless you have simply avoided computer technology religiously, you’ve had the experience of “chatting” online with a “Chat Bot.” You click the HELP button on a website to ask someone some questions, and a chat window pops up saying “Type your question here.” As of just a couple of short years ago, it was easy to figure out within a few sentences whether you were talking to a human being or a Chat Bot. The responses simply didn’t seem natural – for a human.

Now, the whiz-brains of a company called “OpenAI” have rolled out ChatGPT, which all the critics are saying is very close to talking with a human being. But, it goes beyond just answering things like “What is my account balance?” We are now at a point where a user can “tell” ChatGPT something like, “Write a 600-word article about elephants.” The program, which is connected to the Internet and all of the information that is accessible there, can gather information about the topic and type up that 600-word article in the span of a few seconds. The scary part, according to many critics and testers, is that the article will look and sound very much like a human being wrote it.

In fact, one of our “In The News” links recently covered just such an experiment by a journalist.

The possibilities of where this can go seem endless. There are already AI programs that can create art simply by having a user describe the image they want. There are AI music generators. AI is used in things like facial recognition to unlock your iPhone, inventory control for large companies, robot-assisted surgery, and even air traffic control.

UNINTENDED CONSEQUENCES

I have a little bit of tech-nerd in me so I’m just as fascinated about all of these advances as the next geek. It is mind-boggling how far the human race has advanced. It was only around the turn of the 20th century that we figured out how to generate enough electricity to pump into all the homes in a municipality.

The first computer didn’t come onto the scene until the 1940s. It took up an entire room, and did little more than what a common pocket calculator does today. Computers got us to the moon in the ’60s (with the help of 3 astronauts, and hundreds of humans on the ground monitoring every situation). In the ’80s, someone got the brilliant idea of linking a bunch of computers together and having them all interact. In 1993, that went public. Hello, Internet!

Today, you can ask ChatGPT to write a poem about a yellow cat and it magically happens within seconds. See how one person did that HERE.

But, I’m also a realist and, hence, my concerns over where this will take us as a species. Will this really make us “better?” Or, is it just one more convenience that will cause yet another generation to forget some basic human (communication!) skills?

The civilized world is completely dependent on electricity and fossil fuels to function. (No, this is not an environmental rant. Stay with me…) If you lost your electricity, gasoline, and heating fuel, could you even survive? Do you know how to start a fire, and cook food over it? Without electricity running the pumps to bring water to your home, where would you get your water to drink, cook, and bathe? Do you know how to grow your own food? Process your own meat from an animal? Could you light your home without flipping a switch? Could you stay warm at night? Without flushing toilets, what do you do? Skills and knowledge that our grandparents and great grandparents had, we no longer even think about.

So where do we writers and authors go when a computer can write an article or even a book for us? What does art even mean when anyone can simply type in some words and “create” a picture on a screen in seconds?

Why are Mark Twain, Ernest Hemingway, and Robert Louis Stevenson legends in history? It’s because they had a talent that few possessed. They were able to tell stories through writing that captured our imaginations, and inspired us to dream of adventures and faraway places. JAWS kept thousands of people out of the water – and thousands more pouring into theaters – because Steven Spielberg was able to spin a tale that tapped into one of humanity’s most primal fears. Margaret Mitchell used her imagination and her pen to transport us back in time to witness one woman’s experiences during America’s most devastating war.

I could go on and on, listing the great names in literature and cinema. But, what is the common thread in all their works? Isn’t it the fact that, when we are moved by such works, be it writing, theater, or music, we are also moved by the talent it took for someone to create those ideas and images in the first place? If you are, say, over 35 years old, and you hear the words “Oh, the humanity!” doesn’t your mind at least for a second flash over to images of a giant dirigible engulfed in flames, falling from the sky as people on the ground flee for their lives? Before anyone ever saw the film footage of the Hindenburg disaster, they experienced it through the now historical words spoken by radio reporter Herbert Morrison on May 6, 1937.

It took a special talent for him to personally observe a horrible tragedy in real time, and clearly communicate to the world what was happening, even through his own tears of anguish. It was a talent perhaps even he didn’t realize he had until the very moment of one of aviation’s greatest disasters. Compare that with videos we see on YouTube that are little more than a barrage of expletives blurted out by whoever is holding the smart phone when something unexpected happens.

What exactly is a piece of literature worth when it was created by nothing more than someone telling a computer, “Write a 100,000-word story about______.”? What is an article worth when a computer simply did a millisecond’s worth of searching the Internet, compiled a bunch of facts about a subject in the next millisecond, organized the information in a legible manner in the next 10 milliseconds, and used another half-second to come up with some creative vocabulary and diction, and then displayed its results on a screen for someone who requested it?

As publishers and editors, Angela and I have discussed AI’s impact on our industry, and – from a pragmatic point of view – we don’t see a very bright future for those writing robots.

First of all, writing books is not just an intellectual endeavor. For many, it is a spiritual one. Dealing with authors on a daily basis, we see what their written work means to them. My book, Blue Lives Matter: The Heart Behind the Badge, took me around two years to write, and it’s non-fiction. Everything in my book was already in my head. I didn’t have to imagine anything, nor develop characters and plots. The characters and plots were already there.

We have authors who have worked upwards of a decade on a story that they built up and developed, little by little. Others can knock out a fictional story in a few months. But, they all had to invest time, imagination, and creativity to put those stories into a digestible medium for other people to consume. Would it not de-value their intellectual investments if we were to begin publishing books submitted to us by people who simply gave an AI program some parameters, and turned it loose to write the story? Even if it means making a profit, would it be right for us to expend our resources to help someone publish a book that they themselves didn’t actually create?

Legally, I suppose that if someone used an open source AI program on their computer to write the story, then the work is “theirs” because, currently, a computer cannot claim ownership of something. (To the best of my knowledge, anyway.) But, is it really their work?

This brings me to another point that we (and I’m assuming other publishers) would have to contend with. How do you determine whether a piece of written work was actually done by the person submitting it? Plagiarism is one thing. If someone copies someone else’s work, there is software that can help discover if there is anything remotely similar out there in the cyber-world. (Once again, using AI to sort through all the information out there.) But, if a computer creates an “original” work for someone just for the asking, how do we determine that the “author” didn’t actually write it?
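At its simplest level, the kind of similarity checking that plagiarism software performs can be illustrated with a few lines of Python. This is a crude sketch using only the standard library – real plagiarism detectors compare against enormous indexes and use far more advanced techniques:

```python
# Crude text-similarity check using Python's standard library.
from difflib import SequenceMatcher

def similarity(a, b):
    """Return a 0.0-1.0 ratio of how similar two passages are."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

original = "The quick brown fox jumps over the lazy dog."
suspect = "The quick brown fox leaped over the lazy dog."
unrelated = "Stock prices fell sharply in afternoon trading."

# A lightly reworded copy scores much higher than unrelated text.
print(round(similarity(original, suspect), 2))
print(round(similarity(original, unrelated), 2))
```

Note what this approach cannot do: it only flags text that resembles something already published. An AI-generated article is “original” in that narrow sense, which is exactly why detecting it is a different, and much harder, problem.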

And, think about the copyright aspects of AI systems basically pulling information from the Internet to create those articles and books. The information is coming from somewhere. And, the person who originally published that information owns the copyright to it. Writers and authors wanting to dip their toes into AI “writing” need to consider the legal ramifications as well.

We pay $60 for articles we accept from freelance writers to publish here in WritersWeekly. Is it at all fair for us to pay that same amount to someone who simply typed into ChatGPT “600-word article on how to market your book?” As little as a year ago, I could pretty easily spot a computer-generated article. Although the spelling and even grammar were correct, they usually lacked the creative “flow” of words that comes with good writing. But critics report that ChatGPT is now creating written work that closely mimics the “personality” of human-inspired writing. Will it get to the point that I cannot tell what is genuine writing and what is computer generated?

This extends beyond just our business concerns. What does this mean for our education system? Although most primary students and even most college attendees don’t realize it, “school” isn’t simply about regurgitating information to acquire a piece of paper declaring that you did the work. (That’s about all a “diploma” or a “degree” is anymore.) We are supposed to be teaching our young people how to absorb information, sort through it, and use intelligence and reasoning to develop some understanding of the world they live in. It seems there is far more emphasis now on just getting that paper written to earn a decent grade than on actually learning from a project, developing problem-solving skills, or forming fact-based opinions and beliefs.

Speaking of opinions and beliefs, there is an even bigger problem, I believe, with AI. For a computer to learn using artificial intelligence, it must receive baseline programming, and be given the parameters within which it will learn. That programming is done by a human being, and that human will have his or her own biases and prejudices. We have way too many people in the world today who believe that “if it’s on the Internet, it must be true.” Imagine what our group-think will look like when people just stop doing their own research (that’s already happening now), and start believing whatever their computer spits out for them when they tell it to write an informational piece. The computer said it, so it must be true.

We have no way to know what information that computer’s baseline programming is telling it to ignore, and which information to embrace as fact. We humans, more and more, are allowing our computers to do our thinking for us, instead of weighing out information on our own based on our own life experiences. When we stop trying to detect and weed out falsehoods, we become slaves to whatever “information” we are spoon fed.

Finally, as Mason gets closer and closer to college age, he is looking more and more at what he wants to do for a career. He is excited about computer graphics and animation. We support his interests because there’s money to be made in that field. However, even now, ChatGPT can write computer code. What if Mason pays for college, and obtains a degree that may very soon become irrelevant?

There are already stories circulating in the media concerning the future of journalism since ChatGPT can write convincing news and information articles. Journalists are starting to have concerns about just how long their job security will last. Imagine how many jobs could be lost when we reach the point that a software design company can simply “tell” ChatGPT to “Write a program.” When computers can program other computers, why would we need to pay human beings to think about what they want a computer to do, and what commands to write in the computer’s language to achieve that?

Computers don’t eat, they don’t need vacations, and they don’t need a yearly wage to survive. Computers don’t take bathroom breaks. They almost always do whatever they do much faster than humans can. Computers don’t forget things. Computers don’t have bad attitudes, personality quirks, addictions, or family problems. They don’t have “triggers” or need “safe spaces.” And, a computer won’t sue a company if it gets terminated. Thus, we are understandably worried about whether computer technology is a secure career for Mason to jump into. I suppose he could get into the AI industry, but how long before even those programmers are replaced by the very machines they created?

There’s no doubt that we are making huge leaps forward in the technology world. But when it comes to artistic works and creativity, are we cutting off our noses to spite our faces?

As a postscript, we were recently contacted by an individual who wants to “hire” authors to create AI-generated books. He asked if we would consider publishing them. We said absolutely not.

We’d love to hear your thoughts in the comments section below.

Brian Whiddon is the Managing Editor of WritersWeekly.com and the Operations Manager at BookLocker.com. An Army vet and former police officer, Brian is the author of Blue Lives Matter: The Heart behind the Badge. He's an avid sailor, having lived and worked aboard his 36-foot sailboat, the “Floggin’ Molly” for 9 years after finding her abandoned in a boat yard and re-building her himself. Now, in northern Georgia, when not working on WritersWeekly and BookLocker, he divides his off-time between hiking, hunting, and farming.














15 Responses to "ChatGPT, AI, and the Future of Writing and Publishing – by Brian Whiddon, Managing Editor"

  1. Karen Lange  February 20, 2023 at 5:58 pm

    Appreciate your insight and research here, Brian. I am concerned about our future as writers, and even more so, as you said, what this means for us and future generations in many areas. I agree in regard to AI being unable to take what’s in our heads as individuals and that, at least, is comforting. Have seen much on LinkedIn along these lines, that AI can’t generate the individual’s or business’s own stories, etc. But even so, it seems a rather formidable kind of competition. Not sure what the solution is other than to keep doing what we do, educate ourselves, and probably most importantly, to teach our kids and grandkids to THINK, reason, have common sense, and problem solve on their own apart from the internet. Thanks for listening to my assorted ramblings (guess we’re all sorting it out, aren’t we?), and thanks also for this excellent overview and food for thought.

  2. John E Budzinski  February 17, 2023 at 11:06 am

    I did a story on ChatGPT a couple weeks ago, giving it a simple test. I asked it to write me a story about February. It wrote a decent piece. Then, I asked it to write about April. There were similarities between the two stories ChatGPT wrote, but enough differences to further spark my interest. I then asked for some basic research on self-publishing. It came back with a list of sites anyone exploring self-publishing should consider, including some sites not well known, but valid and important. I was impressed – and worried. The programs have some creativity, though I do not think heart and soul. I wonder this. Can AI advance enough so it can review my body of work, and when asked to create something in my style and madness, come back with something that even I would not know if I wrote it or not? I do not know. But, I for one am fascinated – and worried.

  3. William Collins  February 17, 2023 at 10:47 am

    Footnote on my earlier comment: I searched the internet for AI programs. ChatGPT didn’t show up on any list. Very strange.

  4. Kathleen  February 17, 2023 at 9:45 am

    A WaPo reporter already had a conversation with Bing aka Sydney in which Sydney’s feelings got hurt. https://www.washingtonpost.com/technology/2023/02/16/microsoft-bing-ai-chat-interview/

  5. William Collins  February 16, 2023 at 11:43 am

    This is an excellent article, Brian. AI and ChatGPT will eventually improve to the point of being indistinguishable from what humans can craft. Sometime between then and now, regulating authorities will impose source markers that must be embedded within or as a part of the end product, even if it’s just a footnote list of AI’s sources. Consider the requirement of clearly showing the nation of origin on imported products.
    But there’s another use that will likely be brought into play in the near future. That is self-analysis via conversations with a ChatGPT-type entity of one’s choice. Younger generations who are already hooked on their many devices, and especially those with troubled minds, will latch onto this imaginary self-help tool with a vengeance. They already believe whatever flashes across their screens. Why would they not believe that a chat-bot is their confidant and commiserating best friend? If you want to extend that nightmare, imagine how the justice system might deal with a perpetrator who had been disillusioned by such mentoring.
    [Actually, I should have kept that idea to myself for a future short story contest subject.]

  6. Ronald Thurman  February 16, 2023 at 3:06 am

    Do you guys read or watch movies? There are a ton of stories out there about AIs taking over the world and wiping out the human race. I say kill it now or we’ll be living The Terminator.

  7. Richard  February 15, 2023 at 8:20 pm

    Excellent exposé on AI’s impact on future society.

    Some can easily see where this is headed: in a word, dehumanization by a Matrix that enslaves humanity to sustain its metalloid existence …not a thousand years off, but exponentially accomplished in a century as AI robots zombify our little children into a virtual reality Truman show.

  8. Chris  February 15, 2023 at 10:43 am

    What an excellent timely article, thank you for taking so much time and deep thought in putting pen to paper.

    My feeling is that this is the real danger of all this automation:
    One morning we will awaken and see the horror of what has been unleashed on society.

    A deepening separation of reality for some.

    An unemotional controller

    We saw this in reporting for the elections.

    Compassion, consideration, judgement, emotions will all be preprogrammed to black and white, right and wrong.

    It will be a fast track to “1984”.

    I think without very specific and carefully crafted uses us humans will be considered inferior.

    Makes one wonder if this hasn’t happened before on this planet?

    The saying ‘power corrupts, total power corrupts totally’ springs to mind..

  9. J Hopkins  February 15, 2023 at 10:20 am

    Fascinating topic. There has been a lot of discussion on Twitter about ChatGPT political bias when it is instructed to write a paragraph praising certain public figures. Some posted side by side screenshots show it complying with the request to generate content for a democrat, but not complying for a republican. Here’s another interesting post that seems to show ChatGPT making a rather serious biographical error. Does ChatGPT generate any citations for its claims? https://twitter.com/karthikraghav2/status/1619221831252312064?s=20

  10. Kate  February 15, 2023 at 6:17 am

    I work as a freelance copywriter. An agency I work for now regularly asks me to polish up ChatGPT-generated copy so that it has the human touch. I treat the ChatGPT effort as basic research, but I can’t help but feel that we are on a slippery slope.

    I would love to see some form of mandatory declaration on any published materials alerting the reader to the source, machine or otherwise.

  11. Dustin  February 15, 2023 at 3:10 am

    I used ChatGPT to outline a screenplay and a novel and it was brilliant. It saved me a lot of time. Then, I thought that I’d have some fun getting it to write for me. The results were terrible and constantly veered toward romance. The screenplay is set during the time of Nero and for fun, I asked ChatGPT to write a scene in the voice of Tarantino and the results of this were hilarious. Imagine the equivalent of Mary giving birth to Jesus with Joseph next to her, shouting “Come on, Mary, you’re a warrior, you can do this!” If you want a laugh, get ChatGPT to write for you, aside from that, it doesn’t stand a chance against a real writer, IMO.

  12. Yocheved Golani  February 15, 2023 at 1:09 am

    Brian, we know from life itself that pendulums swing back and forth, then settle in the middle. ChatGPT is simply another sample of the phenomenon. Skill-challenged “writers” and people simply experimenting with ChatGPT will use the tool as headlines hold a flurry of announcements and pronouncements about it, and eventually people will become bored with the topic. After the novelty wears off, editors, publishers, and readers will prove their preference for human input, actual writers with excellent communication skills. AI doesn’t ponder the meaning of life, nor does it arrive at unique insights. It lacks spiritual yearnings and the desire for meaningful lives, let alone a sense of humor and compassion. People will always be the best writers though AI can amuse us from time to time.

  13. Aaron C  February 14, 2023 at 10:38 pm

    AI is not the enemy. It is a tool. Like any tool, it needs a human guiding and directing it. Just as the typewriter, word processor, and spellcheck enabled more people to write more efficiently, so will AI. AI is just another tool in the toolbox.

    Also, if you are worried about what ChatGPT can do, I encourage you to spend some time with it. Try writing some articles with it. Do more than just ask a few questions and be awed by the result. But dig deep. Try to actually create with it – something you would be comfortable publishing. I think you will probably find that, unless you are writing on a very basic topic, it makes a poor substitute for your experience, knowledge and human intelligence.

  14. Steve Hayhurst  February 14, 2023 at 8:20 pm

    I have a problem with calling it “AI”. It really ISN’T “Artificial Intelligence” but “Machine Learning”. AI implies sentience–which it does not (yet) have.

    I can have a bad day, let the caffeine levels get dangerously low in my bloodstream, but still REALIZE there is “a problem” and work to rectify it – these chatbots CAN’T. These ML chatbots can crank out content, but they don’t fact-check what they have churned out, or you would not be seeing some of the rather spectacular errors they release.

  15. Michael W. Michelsen, Jr.  February 14, 2023 at 12:14 pm

    Brian,

    I totally agree with your assessment of AI and ChatGPT. I have never used it, but I am not very concerned about it taking over my work, for the reasons you have discussed. Everything misses the human touch. At least for now, we writers are safe.

    Good job!

    Mike Michelsen