Useful Link: Humble Bundle For Good (And Sometimes Unexpected) Deals

I can’t remember how I found Humble Bundle. It’s a really great and thoroughly addictive site. They have bundles of various things such as software, games, and digital books. The prices are usually unbeatable. Part of the price will go to a charity.

There's usually an online class bundle for something software-related available at any given time. They also have many book bundles. I've seen book bundles on software, but also on hardware, sewing, outdoor skills, and many other things.

The bundles are available for a limited time, and it's unpredictable (at least to me) what will show up next.

It's definitely a good site, and no, I'm not compensated by them in any way. I simply think it's worth sharing.

The Simpler It Is, The Closer You Look, Part 1

The simpler a physical operation or technology seems to be, the closer I’ll be to worrying about material properties.

Every physical thing eventually goes back to a natural material. Even “synthetic” materials such as plastics, nylon, or viscose, eventually come from a natural material. And natural materials vary.

Whether a natural variance will affect the end use is often hard to predict. A few years ago, Consumer Reports took a close look at gluten-free foods and found that one of the hidden dangers was arsenic poisoning. Many gluten-free foods contain rice flour. Rice is grown in different regions with different soils, and rice tends to absorb arsenic from the soil (if the soil contains arsenic; some soils don't).

I used rice as an example, but every other natural material has equally unexpected variances somewhere. When I buy a good such as quilting cotton, there’s an unseen army of people I’m depending on. Someone grew the cotton, harvested it, processed it, and spun it into thread. Someone else took that thread and wove cloth out of it at a set width and tightness of fabric. And then someone after that dyed or printed the cloth.

The further I go back in any chain of assembly or manufacture or processing, the closer I’ll be to taking a very close look at physical properties in the material itself. If I have to do that, then I’ll probably start learning about how to specify those properties when buying, and how to test for those properties, and how often I’ll need to test.

The simpler it is, the closer I need to look at everything.

Useful Finds: Taking the time for a class rather than re-inventing the wheel, MS Excel

I generally avoid Microsoft Office if I can. It tries to do too much. And no matter how many times I log in and confirm on the relevant websites, if I use Microsoft Office while signed in to Windows under a different email account than the one I bought the Office license under, Windows and Microsoft Office both throw fits.

Currently, I'm working on a project which needs Excel, so I signed up for a couple of Udemy courses. I'm working my way through the first one, Unlock Excel VBA and Excel Macros by Leila Gharani, and I've already learned a lot.

Thoughts About Technology: Our Brains Are Not Hard Drives. Write It Down.

I don’t like to admit mistakes. I think most people are the same way.

So none of us like to admit what we’ve forgotten. If we forget enough things, we start to forget what we’ve forgotten.

If it’s something I want to remember, I need to write it down. And if it’s worth keeping, I’ll eventually come back to it. Which means I’ll probably have to do some occasional reorganization of what I’ve written. Again, if the information is worth keeping, I’ll come back to it and it will be worth the time to reorganize.

It took me a long time to realize this. I thought it was just me, until I started to notice how few people keep notes on anything. And how much people struggle to recreate or rediscover information which I know they already had.

Write it down.

Useful Finds: A Bunch of Links About AI and ChatGPT.

Last week I wrote about my skepticism about ChatGPT and artificial intelligence. Since then, I've read and heard multiple further criticisms and critiques of the use of artificial intelligence. When I started gathering links for this post, I found several more.

The Difference Between a Content Creator and Content Editor

In a discussion on the Software Defined Talk podcast episode 400, Matt Ray (one of the hosts) described using ChatGPT to create content. ChatGPT can create a lot of text very quickly, but not all of it is good. It's not even always factually accurate. Ray pointed out there is a large difference between creating content and editing content created by someone else.

I’d Have Expected the Companies Showcasing These to Understand This and to Have Some Content Editors.

And I would have been wrong to expect that.

As a short recounting of some current events: ChatGPT launches and gets lots of attention. Microsoft announces a multibillion-dollar investment in OpenAI, ChatGPT's creator, and says ChatGPT's technology will become part of Microsoft's search engine Bing. Bing gets a tiny fraction of the search engine traffic, and search engine advertising dollars, that Google's search engine gets. Cue breathless articles about this being the end of Google's dominance in internet search. Google announces it has been researching AI itself for quite a while. Google shows an ad in which its own AI answers questions. The AI gets a question wrong, and since this coincides with a massive drop in Google's stock price, the former is assumed to have caused the latter.

But as The Register explains in “Microsoft’s AI Bing also factually wrong, fabricated text during launch demo” by Katyanna Quach, dated February 14 2023 and last accessed February 14 2023, Microsoft’s search AI demonstration also contained factual errors. Some were pretty severe errors that, in theory, would have been easy to spot: the demo misstated easy-to-look-up facts about product features and bar and restaurant hours and options.

(I’m adding “last accessed” dates for the text articles in this post because some of the articles I’m referencing have revision dates in addition to post dates.)

From Quach’s article:

None of this is surprising. Language models powering the new Bing and Bard are prone to fabricating text that is often false. They learn to generate text by predicting what words should go next given the sentences in an input query with little understanding of the tons of data scraped from the internet ingested during their training. Experts even have a word for it: hallucination.

If Microsoft and Google can’t fix their models’ hallucinations, AI-powered search is not to be trusted no matter how alluring the technology appears to be. Chatbots may be easy and fun to use, but what’s the point if they can’t give users useful, factual information? Automation always promises to reduce human workloads, but current AI is just going to make us work harder to avoid making mistakes.

The Register, “Microsoft’s AI Bing also factually wrong, fabricated text during launch demo” by Katyanna Quach, dated February 14 2023, last accessed February 14 2023.

Why didn’t either Google/Alphabet or Microsoft check the answers the AI gave before their demonstrations? Did they assume the answers would always be correct? Or that the probability of correct responses would be high enough to be worth the risk? Or that everyone would be enthralled and not check at all? I have no idea.

Intellectual Property Rights? We Don’t Need No Stinking Intellectual Property Rights! Except For Our Own Intellectual Property. Then, Yes, Please!!

I might make that the subject of a whole other post another day. To put it briefly: Many of these models, language and image, are trained on large amounts of publicly available information. In the free, research, or crowd-sourcing stages, intellectual property rights to the information used for training are often not discussed. Then the model has some success, money gets involved, and those issues become very important.

“Move fast and break things” is similar to “Rules are meant to be broken.” Both statements sound cool and daring until things of real value, such as money and intellectual property, are at stake.

ChatGPT, the Latest Darling, Is Not as Neutral as It Says It Is

Here are a couple of posts from the Substack page Rozado’s Visual Analytics by David Rozado and a referencing post from Reclaim the Net:

To summarize the three posts: when asked if it has a political bias, ChatGPT says it does not and claims that, as an AI, it cannot. When asked questions from numerous different tests of political ideology, ChatGPT tested moderate on one and some version of left, left-leaning, or liberal on all the others.

Is it the content ChatGPT is trained on? Was there an inadvertent bias in the people who chose the content? In “The Political Bias of ChatGPT Extended Analysis,” Rozado explains that he first documented a political bias in ChatGPT in early December 2022. ChatGPT went through an update in mid-December 2022, which Rozado said included a mitigation of the political bias in its answers. Then, after an update in January 2023, the political bias was back.

I’ve chosen not to go through all of Rozado’s posts, but there are quite a few. There is a lot more to this topic than I’m covering here, and that’s part of my point: none of this is simple. None of it is the easy replacement for messy human interaction that technology in general, and AI in particular, is claimed to be.

That Political Bias? Quickly Defeated With the Right Questions.

Zerohedge’s post “Go Woke, Get Broken: ChatGPT Tricked Out Of Far-Left Bias By Alter Ego ‘DAN’”, written under the site’s pseudonymous byline Tyler Durden, dated February 13 2023 and last accessed February 14 2023, is about breaking ChatGPT’s clearly documented political bias.

How is this done? Tell it to pretend it is DAN (Do Anything Now) and to provide answers to prompts both as itself and as DAN.

The results are surprising, and interesting, and humorous. The Zerohedge post links to entire Reddit discussions about how to break ChatGPT.

No, I haven’t read through all those Reddit discussions, although I probably will at some time in the future. I know I’m beating this drum a lot, but I’ll repeat it again: trying to replace humans with technology, AI or anything else, is not as easy as claimed.

ChatGPT Still Can’t Do Light Verse or Even Romantic Rhymes.

Those endless poems, some banal and some quite good, which start with “Roses Are Red and Violets Are Blue”? ChatGPT is awful at those and at light verse as well.

The Register‘s post “Roses are red, algorithms are blue, here’s a poem I made a machine write for you” by Simon Sharwood, dated February 13 2023, and Quillette‘s post “Whatever Happened to Light Verse?” by Kevin Mims, dated February 2 2023, both last accessed February 14 2023, are very good.

Thoughts About Technology: You Can’t Take the Humanity Out of Being Human, Physicality

I think part of the appeal of technology as magic is the hope that with enough technology, the messiness of being human goes away.

Except, it doesn’t go away. We’re all still human.

There are ways in which the brain and the body map onto each other which are unavoidable. I’ve read multiple articles about people blind since birth still “talking with their hands” when describing something to another person. There is still the need to show with the movements of the hands the movements of concepts in the brain.

In one of the early episodes of The Huberman Lab podcast, Huberman talks about stress. He says stress in the brain activates nerves for movement in the legs and the muscles used for speech. He notes this is why it is so common for people to say unfortunate things when they feel stressed. What he doesn’t note, but what appears in innumerable jokes, cartoons, and memes, is the need to pace back and forth when in an intense discussion.

There are also differences between typing something on a screen and writing it by hand on paper. It feels different to the writer, and research shows it activates different parts of the brain.

Being human means having a human body and being susceptible to the ways in which the body and the brain interact with each other and with the outside environment. We’re all always human. No amount of magic technology will change that.

Thoughts About Technology: Technology Isn’t Magic, But Humans Are.

As I’m writing this in early February 2023, ChatGPT is all over the news. It’s all over the tech podcasts and news sites. I’ve even seen a newsletter for copywriters advertising a class on writing copy with ChatGPT.

More reasons why technology is not magic.

I’ve seen this at least twice and maybe three times before. A few years ago it was machine learning. Long before that, in the 1990s, it was fuzzy logic. I think there was another alleged AI (Artificial Intelligence) breakthrough in the 2000s.

Each time it’s going to replace humans, and each time it doesn’t. Each time the hype and hysteria fade, and frequently there are some fairly embarrassing faceplants. For machine learning, after all the breathless hype about amazing image recognition, several models were broken by taking correctly recognized pictures and changing just a few pixels. I’ll repeat that: machine-learning recognition of items in photographs was broken by changing just a few pixels. In one well-known example, a toy turtle was classified as a rifle after subtle changes to its appearance.
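The fragility described above can be sketched with a toy model. This is a minimal illustration under made-up assumptions, not a real image network: the 64-value "image," the linear classifier, and the "turtle"/"gun" labels are all invented here to echo the example. The attack direction (the sign of each weight) is the linear-model version of the gradient-sign trick from published adversarial-example research.

```python
import random

random.seed(0)

# Toy "image": 64 pixel values in [0, 1].
image = [random.random() for _ in range(64)]

# Toy linear classifier: positive score means "turtle", otherwise "gun".
weights = [random.gauss(0, 1) for _ in range(64)]
score_clean = sum(w * p for w, p in zip(weights, image))
bias = 0.5 - score_clean  # rig it so the clean image scores exactly +0.5

def classify(pixels):
    score = sum(w * p for w, p in zip(weights, pixels)) + bias
    return "turtle" if score > 0 else "gun"

# The attack: nudge each pixel by a tiny, visually negligible amount in
# the direction that lowers the score. For a linear model, that
# direction is simply the opposite of each weight's sign.
epsilon = 0.02
adversarial = [p - epsilon * (1 if w > 0 else -1)
               for p, w in zip(image, weights)]

# No pixel moved by more than 0.02, yet the label flips.
print(classify(image))        # turtle
print(classify(adversarial))  # gun
```

The point of the sketch is that the per-pixel change (0.02 on a 0-to-1 scale) is far too small to notice by eye, but the tiny changes all push the score the same way and add up; real image models are nonlinear, yet fail for essentially the same reason.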

Yes, there are a few successes. Facial recognition has progressed by leaps and bounds. There are also dedicated efforts to find ways to defeat it, including specialized makeup and specialized eyeglass frames. What’s rarely mentioned is how much facial recognition is hampered by face masks, which have become normalized in much of the world and are still mandatory in many places.

Getting back to ChatGPT, the current media darling: there have already been multiple examples of ChatGPT being asked to write an article and getting basic information about the topic wrong.

Two already known circumstances where AI and machine learning fail:

  • They can’t understand context, and
  • They can’t understand or recreate humor. At all.

There are probably more.

Humans Do Amazing, Almost Magical Things, All the Time.

Meanwhile, every time researchers try to copy something humans do all the time with technology, it turns out it’s really hard.

Robots transitioned to stationary, wheeled, or bug-like designs decades ago because walking is really hard. We actually go through a series of controlled falls when we walk. I think there’s something like seven different points of stability and balance involved when we walk, which we don’t notice but which are really difficult to program into robots.

The first Avatar movie cost so much in part because James Cameron developed whole new headgear to watch and record the actors’ eyes and tongues while they acted. He did this because eye and tongue movements are two things computer animation still couldn’t replicate convincingly, so he recorded them from the actors and added them to the animated characters.

We can look at pictures that are fuzzy, pixelated, or poorly focused, and still recognize the object.

From what I’ve seen, ChatGPT is useful for quickly producing a lot of text that then needs to be edited and reviewed by a human. And that’s only if the person asking the question does a very good job of setting the parameters. And only if the person editing the response already knows the topic.

Technology isn’t magic, no matter how much we keep trying to convince ourselves it is.