The Simpler It Is, The Closer You Look, Part 2

Humans do not think like machines. Machines do not process information like humans.

Which is obvious, yet the results are often not considered.

Making a machine interface that is intuitive to humans is really difficult. Presenting complex information in way that is easy for humans to read is really difficult. Here are some of the things which have to be considered:

  • How is the information organized?
  • What information are we talking about? Are we presenting flight schedules or grocery shopping lists?
  • Do regular users and new users have different concerns?
  • Do we need to emphasize if anything has changed from last time?
  • How can the information be presented so the expected reader can easily find what they think is most important to find, while also letting the publisher or organizer highlight what they think is most important to present?

Those issues are things I came up with just thinking about it as I’m writing this. There were and still are whole entire disciplines and professions devoted to this.

When I find something which is intuitive to use, whether it’s gas pump prices, a website, or the dashboard of a car, I try to stop and admire what was achieved. I also try to see what I might learn. If there’s a lot of information shown in an intuitive and easy-to-understand manner, someone put a lot of work into that.

Useful Finds: A Bunch of Links About AI and ChatGPT.

Last week I wrote about my skepticism about ChatGPT and Artificial Intelligence. I read and heard multiple further criticisms and critiques of the use of artificial intelligence since then. When I started looking for those links for this post, I found several more.

The Difference Between a Content Creator and Content Editor

In a discussion on the Software Defined Talk podcast episode 400, Matt Ray (one of the hosts) described using ChatGPT to create content. ChatGPT can quickly create a lot of text very quickly, but not all of it is good. It’s not even always factually accurate. Ray pointed out there is a large difference between creating content and editing content created by someone else.

I’d Have Expected the Companies Showcasing These to Understand This and to Have Some Content Editors.

And I would have been wrong to expect that.

As a short recounting of some current events: ChatGPT is launched, gets lots of attention. Microsoft announces it will buy ChatGPT, or its parent company, and ChatGPT will become part of Microsoft’s search engine Bing. Bing gets a tiny fraction of search engine traffic, and search engine advertising dollars, that the Google search engine gets. Cue breathless articles about this being the end of Google’s dominance in internet search. Google announces they have been researching AI themselves for quite a while. Google shows an ad where their own AI answers questions. It gets a question wrong and since this coincides with a massive drop in Google’s stock price, the former is assumed to have caused the latter.

But as The Register explains in “Microsoft’s AI Bing also factually wrong, fabricated text during launch demo” by Katyanna Quach, dated February 14 2023 and last accessed February 14 2023, Microsoft’s search AI demonstration also had factual errors. In some cases, pretty severe errors that in theory would have been easy to spot. It wrongly stated easy-to-look-up facts about product features and bar and restaurant hours and options.

(I’m adding “last accessed” dates for the text articles in this post because some of the articles I’m referencing have revision dates in addition to post dates.)

From Quach’s article:

None of this is surprising. Language models powering the new Bing and Bard are prone to fabricating text that is often false. They learn to generate text by predicting what words should go next given the sentences in an input query with little understanding of the tons of data scraped from the internet ingested during their training. Experts even have a word for it: hallucination.

If Microsoft and Google can’t fix their models’ hallucinations, AI-powered search is not to be trusted no matter how alluring the technology appears to be. Chatbots may be easy and fun to use, but what’s the point if they can’t give users useful, factual information? Automation always promises to reduce human workloads, but current AI is just going to make us work harder to avoid making mistakes.

The Register, “Microsoft’s AI Bing also factually wrong, fabricated text during launch demo” by Katyanna Quach, dated February 14 2023, last accessed February 14 2023,

Why didn’t either Google/Alphabet or Microsoft check the answers the AI gave before their demonstrations? Did they assume the answers would always be correct? Or that the probability of correct responses would be high enough it was worth the risk? Or that everyone would enthralled and not check at all? I have no idea.

Intellectual Property Rights? We Don’t Need No Stinking Intellectual Property Rights! Except For Our Own Intellectual Property. Then, Yes, Please!!

I might make that the subject of a whole other post another day. To put it briefly: Many of these models, language and image, are trained on large amounts of publicly available information. In the free, research, or crowd-sourcing stages, intellectual property rights to the information used for training are often not discussed. Then the model has some success, money gets involved, and those issues become very important.

“Move fast and break things” is similar to “Rules are meant to be broken.” Both statements sounds cool and daring until things of real value are involved, such as money and copyright infringement.

ChatGPT, the Latest Darling, Is Not as Neutral as It Says It Is

Here are a couple of posts from the Substack page Rozado’s Visual Analytics by David Rozado and a referencing post from Reclaim the Net:

To summarize the three posts, when asked if it has a political bias ChatGPT says it does not and claims that as an Ai, it cannot. When asked questions from numerous different tests of political ideology, ChatGPT tested moderate on one and some version of left, left-leaning, or liberal on all the others.

Is it the content ChatGPT is trained on? Was there an inadvertent bias in the people who chose the content? Is “The Political Bias of ChatGPT Extended Analysis” Rozado explains he first documented a political bias in ChatGPT in early December 2022. ChatGPT went through an update in mid-December 2022, which Rozado said included a mitigation of the political bias in answers. Then after an update in January 2023, the political bias was back.

I’ve chosen not to go through all of Rozado’s posts, but there are quite a few. This is a topic which has a lot more than I’m writing here. I’m pointing out that there’s more to read than I’m referencing here because that’s part of my point: none of this is simple. None of it is the easy replacement of messy human interaction that technology in general and AI in particular is claimed to be.

That Political Bias? Quickly Defeated With the Right Questions.

Zerohedge’s post “Go Woke, Get Broken: ChatGPT Tricked Out Of Far-Left Bias By Alter Ego ‘DAN’ “ written by the eponymous Tyler Durden, dated February 13 2023 and last accessed February 14 2023, is about breaking ChatGPT’s clearly documented political bias.

How is this done? Tell it to pretend it is DAN, Do-Anything-Now, and provide answers to prompts both as itself and as DAN.

The results are surprising, and interesting, and humorous. The Zerohedge post links to entire Reddit discussions about how to break ChatGPT.

No, I haven’t read through all those Reddit discussions, although I probably will at some time in the future. I know I’m beating this drum a lot, but I’ll repeat it again: trying to replace humans with technology, AI or anything else, is not as easy as claimed.

ChatGPT Still Can’t Do Light Verse or Even Romantic Rhymes.

Those endless poems, some banal and some quite good, which start with “Roses Are Red and Violets Are Blue”? ChatGPT is awful at those and at light verse as well.

The Register‘s post “Roses are red, algorithms are blue, here’s a poem I made a machine write for you” by Simon Sharwood, dated February 13 2023, and Quillette‘s post “Whatever Happened to Light Verse?” by Kevin Mims, dated February 2 2023, both last accessed February 14 2023, are both very good

Thoughts About Technology: You Can’t Take the Humanity Out of Being Human, Physicality

I think part of the appeal of technology as magic is the hope that with enough technology, the messiness of being human goes away.

Except, it doesn’t go away. We’re all still human.

There are ways in which the brain and the body map onto each other which are unavoidable. I’ve read multiple articles about people blind since birth still “talking with their hands” when describing something to another person. There is still the need to show with the movements of the hands the movements of concepts in the brain.

In one of the early episodes of The Huberman Lab podcast, Huberman talks about stress. He says stress in the brain activates nerves for movement in the legs and the muscles used for speech. He notes this is why it is so common for people to say unfortunate things when they feel stressed. What he doesn’t note, but what appears in innumerable jokes, cartoons, and memes, is the need to pace back and forth when in an intense discussion.

There are also differences in typing something on a screen and writing it by hand on paper. It feels different as the writer, and research shows it activates different parts of the brain.

Being human means having a human body and being susceptible to the ways in which the body and the brain interact with each other and with the outside environment. We’re all always human. No amount of magic technology will change that.

Thoughts About Technology: Technology Isn’t Magic, But Humans Are.

As I’m writing this in early February, 2023, ChatGPT is all over the news. It’s all over the tech podcasts and news sites. I’ve even seen a newsletter for copywriters advertising a class for writing copy with ChatGPT.

More reasons why technology is not magic.

I’ve seen this at least twice and maybe three times before. A few years ago it was machine learning. Long before that, in the 1990s it was fuzzy logic. I think there was another AI (Artificial Intelligence) alleged breakthrough in the two thousand zeroes.

Each time it’s going to replace humans and each time it doesn’t. Each time the hype and hysteria fade and frequently there are some fairly embarrassing faceplants. For machine learning, after all the breathless hype about amazing image recognition, several models were broken by taking recognized pictures and changing a couple of pixels. I’ll repeat that, machine learning recognition of items in photographs was broken by changing just a few pixels. In one example, a toy turtle was labeled a gun after a few pixels were changed.

Yes, there a few successes. Facial recognition has progressed by leaps and bounds. There are also dedicated efforts to finding ways to mess it up, including specialized makeup and specialized eyeglass frames. What wasn’t mentioned was how much facial recognition is hampered by face masks, which have now become normalized in most of the world and are still mandatory in many places.

Getting back to ChatGPT, the current media darling, there’s already been multiple examples of ChatGPT being asked to write an article and getting basic information about the topic wrong.

Two already known circumstances where AI and machine learning fail:

  • They can’t understand context, and
  • They can’t understand or recreate humor. At all.

There’s probably more.

Humans Do Amazing, Almost Magical Things, All the Time.

Meanwhile, every time researchers try to copy something humans do all the time with technology, it turns out it’s really hard.

Robots transitioned to stationary or wheeled or bug-like decades ago because walking is really hard. We actually go through a series of controlled falls when we walk. I think there’s something like seven different points of stability and balance in our bodies when we walk, which we don’t notice but programming into robots is really difficult.

The first Avatar movie cost so much in part because James Cameron developed whole new headgear to watch and record the actors’ eyes and tongue while acting. He did this because eye and tongue movements are two things computer animation still couldn’t replicate convincingly, so he recorded that from the actors to add into the animated characters.

We can look at pictures that are fuzzy, pixelated, or poorly focused, and still recognize the object.

From what I’ve seen, ChatGPT is useful for quickly producing a lot of text that then needs to be edited and reviewed by a human. And that’s only if the person asking the question does a very good job of setting the parameters. And only if the person editing the response already knows the topic.

Technology isn’t magic, no matter how much we keep trying to convince ourselves it is.

Mindset Monday: Use the Physical World as a Model for Your Expectations and Habits.

I usually leave the house with a coat, and a bag to hold my wallet, cellphone, and writing pad. If it’s a nice day, I might take along a digital camera in case I see something I want to photograph. I’ll also take a magazine or book if I might have some free time.

If I’m going to an exercise class I’ll take a bag with my gear for that class. A laptop and associated power cord and mouse in a backpack come along also, if I think I’ll need them.

I don’t take each of those things with me each time I leave the house.

When I install new programs on a personal computer, there’s often an option to add that program to the startup programs. Rarely are those programs a stand-alone executable: there will be background processes and programs they will start up in turn, just like I don’t take a laptop without taking a power cord and a separate bag or backpack to hold the laptop and power cord.

A personal computer with a ton of programs that start up with the computer takes a long time to start up. Similarly, if every time I leave the house I take everything I might possibly need, ever, it will take me a long time to leave the house.

When people ask me for help with their computers or other technology, rarely do they try to compare it to what they already know and do. Technology is a magical thing that they “don’t understand” and wish it would “just work.”

It’s not magic. It’s like any other tool.

Monday Mindset: Help and hindrance, standards

At one time I read product standards as a full-time job. I left that job years ago but I still look at what standards a product says they comply with, or are expected to comply with.

Simplistic description of standards.

Many standards do serve a useful purpose: they set expectations for a product. Depending on the standard and who issued the standard, those expectations might cover safety, features, performance, reliability, or other things.

Some standards are free, some cost a bit to purchase, some cost hundreds of dollars to purchase. Some are fairly straightforward to read, some are very dense. The trickiest seem straightforward when reading them, except there are certain terms which have a specific meaning in the industry or market covered by that standard, and that meaning isn’t well known to people outside that industry or market.

Standards can become a hindrance when the market expects or insists a product has to meet a certain standard. A person might have a good product idea but find themselves in an industry or market where the required standard is very expensive to buy or very expensive to comply with.

Standards are by definition reactive and a reflection of the past. Standards describe what has already been made and how it should be made going forward. I don’t know of any standard which was written about an imaginary product, in the hopes someone would read the standard and create a product to meet that expectation.

Standards are a really good way to show the limitations of language in describing the world.

Standards are initially written with an ideal something-or-other in mind. As time goes by, there are revisions which are almost organic in growth. These revisions usually come from someone trying something which didn’t work, or didn’t work as expected.

If a standard is written very precisely and explicitly, it’s easy for someone to avoid if they want to: find a way to describe their product which is different than that precise definition. Then the standard doesn’t apply. And if the definition is written more broadly, then someone who wants to avoid it can argue about the meaning of the words or the intent of the writers. And the standard still might not apply.

Any product or facility which was built or designed more than five years ago, and is being held to a standard whose initial edition was written more than five years ago, will have at least one place where the language or practices have shifted and it’s possible someone could claim the standard possibly wasn’t being met.

The best way to I found to learn a standard is to write a summary of each clause. That’s also very painful and arduous.

Why am I talking about all of this?

I don’t get to turn my brain off because somewhere a product standard got mentioned. I don’t get to turn my brain off because a product says they comply with a certain standard. And I don’t get to turn my brain off because a product doesn’t say it complies with a certain standard.

Standards can be helpful. Like any other tool, they can also be a hindrance.

Technician Tuesday: It’s not magic, part II.

Yesterday I wrote about users who expect technology to be magic — and then find out it’s not. (That post was written and posted December 19, 2022.)

Later yesterday I was catching up on some old episodes of Pat Flynn’s Smart Passive Income podcast. Episode 604 is titled “SPI 604 – I Really Wanted to Believe This” and it’s dated August 19, 2022. It’s about almost exactly the same thing: technology is not magic.

Flynn uses a good analogy of an amateur photographer who buys a new camera lens and hopes that will make all of his pictures better. At best the lens only showcases the photographer’s skill at timing and framing and composing. At worst it becomes a distraction and another thing to clutter up the photographer’s bag.

Flynn calls this “squirrel syndrome.” I’ve also seen it referred to as “shiny object syndrome.” By either name or any other name, the hope is the same: I get this and everything becomes easier or better. Flynn even uses the word “magic” to describe this hoped-for effect.

But technology doesn’t work that way. It’s not magic. It’s only a tool.

It was nice to hear someone else say that. And a bit of synchronicity to hear that old podcast episode cover the exact same thing I had just written about.

On one side note, that was a good podcast episode. Flynn suggests that everyone do an audit of the tools they currently own and be really honest about how many they actually use, how many they actually need, and how much money they are paying for tools which are subscription-based.

On a second side note, I originally planned to write about product standards today. That’s a post I still intend to write.

Monday Mindset: Technology which is supposed to be magic, isn’t.

I regularly talk to people who are frustrated the piece of technology they bought isn’t doing what they wanted.

I ask what they wanted. What they wanted is not what they bought because they wanted something which can’t be bought. They wanted to create something beautiful, they wanted to impress someone else, they wanted to make something people would pay money to buy, they wanted to make something which would have all the family names and family tree on it and “would bring the whole family together.” (Yes, those are all true stories and that quote is an actual quote from a conversation I had.)

The technology they bought was expected to do this, because — and that’s where the reasoning starts to get shaky.

Usually, if I ask long enough what the reasoning was I’ll find an assumption that the technology they bought should be able to do this because technology can do anything. Technology is magic.

But it really isn’t magic. Whether software, hardware, digital, electronic, old, or new, it’s a tool. It can help the user achieve a goal. The user still has to choose the goal. And that gets back to what is the goal and why is that the goal?