Useful Finds: A Bunch of Links About AI and ChatGPT.

Last week I wrote about my skepticism about ChatGPT and Artificial Intelligence. I read and heard multiple further criticisms and critiques of the use of artificial intelligence since then. When I started looking for those links for this post, I found several more.

The Difference Between a Content Creator and Content Editor

In a discussion on the Software Defined Talk podcast episode 400, Matt Ray (one of the hosts) described using ChatGPT to create content. ChatGPT can quickly create a lot of text very quickly, but not all of it is good. It’s not even always factually accurate. Ray pointed out there is a large difference between creating content and editing content created by someone else.

I’d Have Expected the Companies Showcasing These to Understand This and to Have Some Content Editors.

And I would have been wrong to expect that.

As a short recounting of some current events: ChatGPT is launched, gets lots of attention. Microsoft announces it will buy ChatGPT, or its parent company, and ChatGPT will become part of Microsoft’s search engine Bing. Bing gets a tiny fraction of search engine traffic, and search engine advertising dollars, that the Google search engine gets. Cue breathless articles about this being the end of Google’s dominance in internet search. Google announces they have been researching AI themselves for quite a while. Google shows an ad where their own AI answers questions. It gets a question wrong and since this coincides with a massive drop in Google’s stock price, the former is assumed to have caused the latter.

But as The Register explains in “Microsoft’s AI Bing also factually wrong, fabricated text during launch demo” by Katyanna Quach, dated February 14 2023 and last accessed February 14 2023, Microsoft’s search AI demonstration also had factual errors. In some cases, pretty severe errors that in theory would have been easy to spot. It wrongly stated easy-to-look-up facts about product features and bar and restaurant hours and options.

(I’m adding “last accessed” dates for the text articles in this post because some of the articles I’m referencing have revision dates in addition to post dates.)

From Quach’s article:

None of this is surprising. Language models powering the new Bing and Bard are prone to fabricating text that is often false. They learn to generate text by predicting what words should go next given the sentences in an input query with little understanding of the tons of data scraped from the internet ingested during their training. Experts even have a word for it: hallucination.

If Microsoft and Google can’t fix their models’ hallucinations, AI-powered search is not to be trusted no matter how alluring the technology appears to be. Chatbots may be easy and fun to use, but what’s the point if they can’t give users useful, factual information? Automation always promises to reduce human workloads, but current AI is just going to make us work harder to avoid making mistakes.

The Register, “Microsoft’s AI Bing also factually wrong, fabricated text during launch demo” by Katyanna Quach, dated February 14 2023, last accessed February 14 2023,

Why didn’t either Google/Alphabet or Microsoft check the answers the AI gave before their demonstrations? Did they assume the answers would always be correct? Or that the probability of correct responses would be high enough it was worth the risk? Or that everyone would enthralled and not check at all? I have no idea.

Intellectual Property Rights? We Don’t Need No Stinking Intellectual Property Rights! Except For Our Own Intellectual Property. Then, Yes, Please!!

I might make that the subject of a whole other post another day. To put it briefly: Many of these models, language and image, are trained on large amounts of publicly available information. In the free, research, or crowd-sourcing stages, intellectual property rights to the information used for training are often not discussed. Then the model has some success, money gets involved, and those issues become very important.

“Move fast and break things” is similar to “Rules are meant to be broken.” Both statements sounds cool and daring until things of real value are involved, such as money and copyright infringement.

ChatGPT, the Latest Darling, Is Not as Neutral as It Says It Is

Here are a couple of posts from the Substack page Rozado’s Visual Analytics by David Rozado and a referencing post from Reclaim the Net:

To summarize the three posts, when asked if it has a political bias ChatGPT says it does not and claims that as an Ai, it cannot. When asked questions from numerous different tests of political ideology, ChatGPT tested moderate on one and some version of left, left-leaning, or liberal on all the others.

Is it the content ChatGPT is trained on? Was there an inadvertent bias in the people who chose the content? Is “The Political Bias of ChatGPT Extended Analysis” Rozado explains he first documented a political bias in ChatGPT in early December 2022. ChatGPT went through an update in mid-December 2022, which Rozado said included a mitigation of the political bias in answers. Then after an update in January 2023, the political bias was back.

I’ve chosen not to go through all of Rozado’s posts, but there are quite a few. This is a topic which has a lot more than I’m writing here. I’m pointing out that there’s more to read than I’m referencing here because that’s part of my point: none of this is simple. None of it is the easy replacement of messy human interaction that technology in general and AI in particular is claimed to be.

That Political Bias? Quickly Defeated With the Right Questions.

Zerohedge’s post “Go Woke, Get Broken: ChatGPT Tricked Out Of Far-Left Bias By Alter Ego ‘DAN’ “ written by the eponymous Tyler Durden, dated February 13 2023 and last accessed February 14 2023, is about breaking ChatGPT’s clearly documented political bias.

How is this done? Tell it to pretend it is DAN, Do-Anything-Now, and provide answers to prompts both as itself and as DAN.

The results are surprising, and interesting, and humorous. The Zerohedge post links to entire Reddit discussions about how to break ChatGPT.

No, I haven’t read through all those Reddit discussions, although I probably will at some time in the future. I know I’m beating this drum a lot, but I’ll repeat it again: trying to replace humans with technology, AI or anything else, is not as easy as claimed.

ChatGPT Still Can’t Do Light Verse or Even Romantic Rhymes.

Those endless poems, some banal and some quite good, which start with “Roses Are Red and Violets Are Blue”? ChatGPT is awful at those and at light verse as well.

The Register‘s post “Roses are red, algorithms are blue, here’s a poem I made a machine write for you” by Simon Sharwood, dated February 13 2023, and Quillette‘s post “Whatever Happened to Light Verse?” by Kevin Mims, dated February 2 2023, both last accessed February 14 2023, are both very good