Chatting with the Famous and Infamous

THE TRUE-LIFE ADVENTURES OF A SENIOR NEWSPAPER PUBLISHER
March 2, 2024 at 2:41 p.m.


...by Michelle Roedell, Editor, Northwest Prime Time

My last few blog posts covered some of the celebrities I've interviewed over the last 24 years. I continue the series here. (Sort of.)

For the past several months, I've grown increasingly aware that I've been chatting with an extremely famous entity. That entity is Artificial Intelligence.

ChatGPT, the chatbot powered by artificial intelligence (AI), gained worldwide attention when it launched in November 2022. ChatGPT is completely free for anyone to use. You type in a short description of what you want, even including the style of language you're looking for, and -- voila -- out it pops. You can ask ChatGPT to create something for a social media post, an email, an article... just about any type of written content.

As someone who is constantly writing for our website and accepting material to post on it, I was curious.

Having researched and written countless articles over the years about topics related to aging and retirement, I asked ChatGPT to write a 1,000-word article advising seniors how to choose a retirement community. A few minutes later, the article appeared. Wow. It was informative and easy to read. In fact, it sounded a lot like the many articles on this topic that I, myself, have written.

The big but...

I soon learned that ChatGPT cannot be trusted. You can't count on accuracy. In fact, ChatGPT can lie! Well, maybe not outright lie, but since it was trained on a wide range of data sources, including unreliable ones, ChatGPT can certainly produce false statements. Plus, "ChatGPT will not answer a question by saying it does not know the answer; instead, if the data it has doesn't provide an answer, it will simply make one up," wrote Marina Hochstein about the pitfalls of journalists using ChatGPT.

My conclusion: ChatGPT is a potentially useful tool for a publisher, but only if you take the time and effort to verify all the information. I wondered whether the fact-checking would take more time than writing the article from scratch. I decided that I prefer to use trusted sources and real people, and I tabled the idea of using AI in creating content for the website.

Another big but... (In case you are counting, that is two big buts.)

I thought I had tabled the idea of using AI in creating content for the website.

As you read on, keep in mind that I'm old-school, not technologically savvy. I can be naive and gullible. I want to believe the best in others. Plus, the website is a hungry maw continuously craving content. Free, well-written articles relevant to our demographic are a welcome addition to my inbox. 

That is, until a few months back, when I became aware that a lot of the new submissions were a bit unusual. I couldn't quite put my finger on it. Although the submissions were on different topics, from different emails and using different names, they were strangely similar.

I kept trying to solve the puzzle. I'd remind the sender that we don't provide payment. "I don't care about payment. I'm just trying to get my work out there in public to increase my profile," they responded. Okay. Sounds reasonable, except that the entity didn't seem to care about having a byline, let alone payment. Complicating matters, the sender often had a website, and the website had a few relevant posts. Sounds good. Sounds like a real person. Except there seemed to be only about four posts on each of the websites. There might be a brief author bio, but no photo. I grew increasingly suspicious and began asking for a photo. "Oh, I'm too shy to provide a photo."

Can AI be shy?

While the emails may not all be generated by AI, just so you know: I'm turning down the continuous stream of submissions from seemingly real but unverifiable sources.


(SIDENOTE: Even if the person is real, we don't knowingly accept articles with links that provide payment to the writer. A writer needs to be upfront and disclose any such link, or be willing to let me publish the article without the links.)

I've heard that ChatGPT and AI bots have trouble understanding humor, sarcasm, or a non sequitur. Apparently, that's what's called a "Crazy Ivan" and -- at least so far -- AI doesn't know how to handle it.

If you are a real person without a hidden agenda, please send stuff my way. But, in the middle of an email conversation with me, don't be surprised if I suddenly ask: "Knock, knock. Why did the senior citizen cross the road?"


If you answer, "Crazy Ivan," welcome to the club. 
