How It Fits Together, How It Moves

It’s just as important to figure out how things move together as it is to figure out how they fit together.

It’s also a lot more difficult. When things aren’t moving together correctly, the symptoms are easy to see: my computer doesn’t boot up, my kitchen appliances don’t work, my sewing machine doesn’t sew. All of those failures happen when parts stop moving together the way they should.

Intended movement isn’t usually shown in user manuals or service manuals either. I suppose in some cases it might be a trade secret; in other cases it might simply be difficult to document. Troubleshooting steps in user and service manuals that seem odd or arbitrary are often really about getting parts aligned so they move the way they’re intended to.

Timing in software is a whole other black art.

Slowing Down to Speed Up, Writing Edition

I listen to some small business and entrepreneur podcasts. One of the phrases I frequently hear is “slow down to speed up.”

I’ll be honest, I typically hear that right before the host explains why they fought that idea when they first heard it, before having to learn it the hard way. And by “hard way,” I mean by repeated painful experience. Anyway, I’ll get back on topic.

Slowing down to speed up also applies to writing. I used to wonder why there were so many different types of notebooks and stationery. For that matter, why were there so many different types of accounting ledger books?

In both cases, writing something down and then rewriting it somewhere else in a different way helps focus the mind.

For writing, I’ve seen guidelines which say there is a creative mode, which runs fast and often a bit too free, and then there is editing mode. These are different parts of the brain, and trying to switch in and out of editing mode while ostensibly being in creative mode doesn’t work that well.

I’ve tried that with writing, and it does work. I’m still not fully in the habit, but each time I get a little bit better at remembering to let it flow first and then go back and correct later.

I’m also finding it helps to do that with money. I don’t write down every cent of every transaction, but I’m starting to create a list of regular expenses, pulling the information from multiple other places it’s recorded. And it is helping me focus on what I want to keep and what I’m fine letting go.

Why am I writing this on a blog about making technology work for you?

Technology has created so many time-saving services that it’s erased the friction which used to exist. So we all, myself included, want to let the apps and programs and whatever do it all for us. When we do that, we convince ourselves we’re going faster and faster, but we’re planning and considering less and less.

Re-reading and rewriting a good idea once is better than hurriedly writing it fifteen times. And it will be fifteen times, because we’re moving so fast we forget what we already wrote.

An inventory and accounting of what classes and guides and books have already been purchased is better than purchasing more variations of the same thing. But it’s faster, and feels faster, to just buy more of what has already been purchased.

Slowing down to go faster is a real thing.

Warning: Zipped Files in Windows Are Not Locked

I’ve recently been helping a friend organize some files from a series of backup drives and thumb drives.

I’m finding several zipped file folders, and I often need to look into their contents.

I’ve found that while I can’t paste anything into a zipped file folder while it’s still compressed, I can delete files and folders out of it. I can do that without ever unzipping the folder.
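If you want to verify for yourself that a zip archive is an editable container rather than a locked vault, here is a minimal sketch in Python (assuming Python 3 is available; the archive name and the entry name are hypothetical). It lists what is inside an archive, then removes one entry by rewriting the archive without it:

```python
# Minimal sketch: zip archives are editable containers, not locked vaults.
# Assumes Python 3; "backup.zip" and "old_notes.txt" are hypothetical names.
import os
import shutil
import tempfile
import zipfile

ARCHIVE = "backup.zip"             # hypothetical zip file
ENTRY_TO_DELETE = "old_notes.txt"  # hypothetical file inside the archive

# 1. List everything currently inside the archive.
with zipfile.ZipFile(ARCHIVE, "r") as zf:
    for name in zf.namelist():
        print(name)

# 2. "Delete" an entry by rewriting the archive without it.
#    (Python's zipfile module has no remove(); rebuilding is the portable way.)
fd, temp_path = tempfile.mkstemp(suffix=".zip")
os.close(fd)
with zipfile.ZipFile(ARCHIVE, "r") as src, \
     zipfile.ZipFile(temp_path, "w", zipfile.ZIP_DEFLATED) as dst:
    for item in src.infolist():
        if item.filename != ENTRY_TO_DELETE:
            dst.writestr(item, src.read(item.filename))

shutil.move(temp_path, ARCHIVE)  # replace the original archive
```

The point isn’t the code itself; it’s that nothing in the zip format prevents a program, or a stray click in File Explorer, from changing what’s inside the archive.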

My friend thinks the “zip files” are a sacrosanct golden standard. I’ll explain to them that they’re not.

The Easy Way Is Usually Mined

Last week I wrote about human-machine interfaces and how difficult it is to make an interface which is intuitive to use.

One of the promises of modern software, smart devices, and development frameworks is how much easier they will make things.

But does it really?

One example I run into a few times a year is a scoring program for a kids’ competition. The competition is archery; there are multiple age brackets, types of bow, and clubs. Each round generates anywhere from 20 to 40 scores per competitor in that round. There are two software programs I’ve heard of which are written to keep track of all this for competitors (and, more importantly, competitors’ parents and coaches).

One program is an Excel spreadsheet with a bit of macro and VBA programming. The other is a tablet-based app.

Hot and New

Fans of the tablet-based program describe it as “simpler” and “easier.” I have not looked at it closely, but by questioning people who have used it, or been present at matches where it was used, I’ve found out a bit about how it works. The tablet-based app won’t work without an internet connection, so some major part of its functioning does not take place on the tablets.

An internet connection for multiple devices requires a router, and every router has a finite number of connections it can handle at one time. How a router behaves when more devices are talking to it than it can handle depends on the router and the devices.

In addition, because the tablet-based app is “simpler” and “easier” (with the unspoken, ever-present belief that technology is magic and always makes things better), paper scorecards are not used. Scores are entered on the tablets. I don’t know the exact interface the competitor uses to confirm that yes, that is their score. But I have heard from multiple parents and coaches that scores can be lost if a judge or competitor presses the wrong button on the screen. I’ve even heard that multiple competitors’ scores can be lost from a single wrong button press.

Assuming all goes well, the score is sent to wherever it is processed. Entered scores can then be accessed over the internet by anyone with an internet connection, so people present at the match can look up scores on their smartphones.

Old and Busted

Now I will discuss the old, difficult, outdated Excel spreadsheet method. Scores are written down by judges on paper scoresheets. The competitors get to see their scores and agree to them before the scores are sent to the scorekeeper.

The scorekeeper must have a Windows PC running Microsoft Excel. The scores are entered by hand. The spreadsheet has an option to compute results from what has been entered; when it does so, it creates a page in the spreadsheet formatted to be printed on 8-1/2″ x 11″ paper. A new copy gets printed and posted whenever new scores are added.

If Microsoft Excel is running locally on the Windows PC, then no internet connection is required. It is not possible to lose all the scores for a competitor’s round by hitting the wrong button on a screen; the paper scorecard still exists, regardless of how many buttons are pressed on which screens.
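To show how little machinery this workflow actually needs, here is a toy sketch in Python. It is not the actual spreadsheet or its VBA, and the competitor names, brackets, and scores are made up; it just tallies hand-entered scores and writes a printable summary, entirely locally, with no internet connection involved:

```python
# Toy illustration only: not the real scoring spreadsheet or its VBA macros.
# It shows that tallying a round's scores needs nothing beyond a local PC.
from collections import defaultdict

# Hypothetical scores copied by hand from the paper scoresheets:
# (competitor, bracket, score for one end/target)
entries = [
    ("A. Archer", "Youth Recurve", 27),
    ("A. Archer", "Youth Recurve", 30),
    ("B. Bowman", "Youth Compound", 29),
    ("B. Bowman", "Youth Compound", 28),
]

# Sum each competitor's scores for the round.
totals = defaultdict(int)
for competitor, bracket, score in entries:
    totals[(competitor, bracket)] += score

# Write a plain-text summary that can be printed and posted on the wall.
with open("round_results.txt", "w") as out:
    out.write(f"{'Competitor':<20}{'Bracket':<20}{'Total':>6}\n")
    for (competitor, bracket), total in sorted(totals.items(),
                                               key=lambda kv: -kv[1]):
        out.write(f"{competitor:<20}{bracket:<20}{total:>6}\n")
```

Excel with a few macros does the same job with nicer formatting; the point is that none of it depends on a connection to anything.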

“We started telling our kids to keep track of their own scores”

A parent in this sport told me their club started telling competitors to keep their own copies of their scores. They said this at matches where the newer, simpler tablet-based app was being used. They said this because there were so many problems with the tablet-based app losing scores. And once a score was lost, it was unrecoverable because there was no paper copy.

Technology is not magic. “There’s an app for that” is not the answer to everything. The easy way is usually mined.

The Simpler It Is, The Closer You Look, Part 2

Humans do not think like machines. Machines do not process information like humans.

Which is obvious, yet the results are often not considered.

Making a machine interface that is intuitive to humans is really difficult. Presenting complex information in a way that is easy for humans to read is really difficult. Here are some of the things which have to be considered:

  • How is the information organized?
  • What information are we talking about? Are we presenting flight schedules or grocery shopping lists?
  • Do regular users and new users have different concerns?
  • Do we need to emphasize if anything has changed from last time?
  • How can the information be presented so the expected reader can easily find what they think is most important to find, while also letting the publisher or organizer highlight what they think is most important to present?

Those issues are things I came up with just thinking about it as I’m writing this. There were, and still are, entire disciplines and professions devoted to this.

When I find something which is intuitive to use, whether it’s gas pump prices, a website, or the dashboard of a car, I try to stop and admire what was achieved. I also try to see what I might learn. If there’s a lot of information shown in an intuitive and easy-to-understand manner, someone put a lot of work into that.

Useful Finds: A Bunch of Links About AI and ChatGPT.

Last week I wrote about my skepticism about ChatGPT and artificial intelligence. Since then, I’ve read and heard multiple further criticisms and critiques of the use of artificial intelligence. When I started gathering the links for this post, I found several more.

The Difference Between a Content Creator and Content Editor

In a discussion on the Software Defined Talk podcast episode 400, Matt Ray (one of the hosts) described using ChatGPT to create content. ChatGPT can create a lot of text very quickly, but not all of it is good. It’s not even always factually accurate. Ray pointed out there is a large difference between creating content and editing content created by someone else.

I’d Have Expected the Companies Showcasing These to Understand This and to Have Some Content Editors.

And I would have been wrong to expect that.

As a short recounting of some current events: ChatGPT launches and gets lots of attention. Microsoft announces a major investment in OpenAI, the company behind ChatGPT, and says the technology will become part of Microsoft’s search engine Bing. Bing gets a tiny fraction of the search engine traffic, and of the search engine advertising dollars, that the Google search engine gets. Cue breathless articles about this being the end of Google’s dominance in internet search. Google announces it has been researching AI itself for quite a while and shows an ad where its own AI answers questions. The AI gets a question wrong, and since this coincides with a massive drop in Google’s stock price, the former is assumed to have caused the latter.

But as The Register explains in “Microsoft’s AI Bing also factually wrong, fabricated text during launch demo” by Katyanna Quach, dated February 14 2023 and last accessed February 14 2023, Microsoft’s search AI demonstration also had factual errors. In some cases, pretty severe errors that in theory would have been easy to spot. It wrongly stated easy-to-look-up facts about product features and bar and restaurant hours and options.

(I’m adding “last accessed” dates for the text articles in this post because some of the articles I’m referencing have revision dates in addition to post dates.)

From Quach’s article:

None of this is surprising. Language models powering the new Bing and Bard are prone to fabricating text that is often false. They learn to generate text by predicting what words should go next given the sentences in an input query with little understanding of the tons of data scraped from the internet ingested during their training. Experts even have a word for it: hallucination.

If Microsoft and Google can’t fix their models’ hallucinations, AI-powered search is not to be trusted no matter how alluring the technology appears to be. Chatbots may be easy and fun to use, but what’s the point if they can’t give users useful, factual information? Automation always promises to reduce human workloads, but current AI is just going to make us work harder to avoid making mistakes.

The Register, “Microsoft’s AI Bing also factually wrong, fabricated text during launch demo” by Katyanna Quach, dated February 14 2023, last accessed February 14 2023.

Why didn’t either Google/Alphabet or Microsoft check the answers the AI gave before their demonstrations? Did they assume the answers would always be correct? Or that the probability of correct responses would be high enough that it was worth the risk? Or that everyone would be enthralled and not check at all? I have no idea.

Intellectual Property Rights? We Don’t Need No Stinking Intellectual Property Rights! Except For Our Own Intellectual Property. Then, Yes, Please!!

I might make that the subject of a whole other post another day. To put it briefly: Many of these models, language and image, are trained on large amounts of publicly available information. In the free, research, or crowd-sourcing stages, intellectual property rights to the information used for training are often not discussed. Then the model has some success, money gets involved, and those issues become very important.

“Move fast and break things” is similar to “Rules are meant to be broken.” Both statements sound cool and daring until things of real value are involved, such as money and copyright infringement.

ChatGPT, the Latest Darling, Is Not as Neutral as It Says It Is

I’ve been reading a couple of posts from the Substack page Rozado’s Visual Analytics by David Rozado, along with a referencing post from Reclaim the Net.

To summarize the three posts: when asked if it has a political bias, ChatGPT says it does not and claims that, as an AI, it cannot. When asked questions from numerous different tests of political ideology, ChatGPT tested moderate on one and some version of left, left-leaning, or liberal on all the others.

Is it the content ChatGPT is trained on? Was there an inadvertent bias in the people who chose the content? In “The Political Bias of ChatGPT Extended Analysis,” Rozado explains that he first documented a political bias in ChatGPT in early December 2022. ChatGPT went through an update in mid-December 2022, which Rozado said included a mitigation of the political bias in its answers. Then, after an update in January 2023, the political bias was back.

I’ve chosen not to go through all of Rozado’s posts, but there are quite a few. This topic has a lot more to it than I’m writing here. I’m pointing out that there’s more to read than I’m referencing because that’s part of my point: none of this is simple. None of it is the easy replacement for messy human interaction that technology in general, and AI in particular, is claimed to be.

That Political Bias? Quickly Defeated With the Right Questions.

Zerohedge’s post “Go Woke, Get Broken: ChatGPT Tricked Out Of Far-Left Bias By Alter Ego ‘DAN’,” written under the site’s pseudonymous byline Tyler Durden, dated February 13 2023 and last accessed February 14 2023, is about breaking ChatGPT’s clearly documented political bias.

How is this done? Tell it to pretend it is DAN, Do-Anything-Now, and to provide answers to prompts both as itself and as DAN.

The results are surprising, and interesting, and humorous. The Zerohedge post links to entire Reddit discussions about how to break ChatGPT.

No, I haven’t read through all those Reddit discussions, although I probably will at some time in the future. I know I’m beating this drum a lot, but I’ll repeat it again: trying to replace humans with technology, AI or anything else, is not as easy as claimed.

ChatGPT Still Can’t Do Light Verse or Even Romantic Rhymes.

Those endless poems, some banal and some quite good, which start with “Roses Are Red and Violets Are Blue”? ChatGPT is awful at those and at light verse as well.

The Register’s post “Roses are red, algorithms are blue, here’s a poem I made a machine write for you” by Simon Sharwood, dated February 13 2023, and Quillette’s post “Whatever Happened to Light Verse?” by Kevin Mims, dated February 2 2023, both last accessed February 14 2023, are very good.

Thoughts About Technology: Technology Isn’t Magic, But Humans Are.

As I’m writing this in early February, 2023, ChatGPT is all over the news. It’s all over the tech podcasts and news sites. I’ve even seen a newsletter for copywriters advertising a class for writing copy with ChatGPT.

More reasons why technology is not magic.

I’ve seen this at least twice, and maybe three times, before. A few years ago it was machine learning. Long before that, in the 1990s, it was fuzzy logic. I think there was another alleged AI (Artificial Intelligence) breakthrough in the 2000s.

Each time it’s going to replace humans, and each time it doesn’t. Each time the hype and hysteria fade, and frequently there are some fairly embarrassing faceplants. For machine learning, after all the breathless hype about amazing image recognition, several models were broken by taking correctly recognized pictures and changing a couple of pixels. I’ll repeat that: machine learning recognition of items in photographs was broken by changing just a few pixels. In one example, a toy turtle was labeled a gun after a few pixels were changed.

Yes, there are a few successes. Facial recognition has progressed by leaps and bounds. There are also dedicated efforts to find ways to mess it up, including specialized makeup and specialized eyeglass frames. What often isn’t mentioned is how much facial recognition is hampered by face masks, which have now become normalized in most of the world and are still mandatory in many places.

Getting back to ChatGPT, the current media darling: there have already been multiple examples of ChatGPT being asked to write an article and getting basic information about the topic wrong.

Two already known circumstances where AI and machine learning fail:

  • They can’t understand context, and
  • They can’t understand or recreate humor. At all.

There are probably more.

Humans Do Amazing, Almost Magical Things, All the Time.

Meanwhile, every time researchers try to copy something humans do all the time with technology, it turns out it’s really hard.

Robots shifted to stationary, wheeled, or bug-like designs decades ago because walking is really hard. We actually go through a series of controlled falls when we walk. I think there are something like seven different points of stability and balance in our bodies when we walk, which we never notice but which are really difficult to program into robots.

The first Avatar movie cost so much in part because James Cameron developed whole new headgear to watch and record the actors’ eyes and tongues while they acted. He did this because eye and tongue movements are two things computer animation still couldn’t replicate convincingly, so he captured them from the actors and added them to the animated characters.

We can look at pictures that are fuzzy, pixelated, or poorly focused, and still recognize the object.

From what I’ve seen, ChatGPT is useful for quickly producing a lot of text that then needs to be edited and reviewed by a human. And that’s only if the person asking the question does a very good job of setting the parameters. And only if the person editing the response already knows the topic.

Technology isn’t magic, no matter how much we keep trying to convince ourselves it is.

Mindset Monday: Do You Actually Believe in What You Are Doing?

A company makes an item, or multiple items, and their finances look great. The finances fall apart. People dig into the books and find the company had stopped focusing on making money from making the items they were supposedly in business to sell.

Instead, the company had started making money from fancy footwork in their finances.

Fancy Financial Footwork in digital currency miners

The first place I heard about this recently was Nathaniel Whittemore’s podcast The Breakdown with NLW. It’s a CoinDesk podcast; the specific episode is “Where Bitcoin Mining goes from here” from January 8 2023. In that episode, Whittemore refers to the January 1 2023 CoinDesk article “What Will It Take for Bitcoin Mining Companies to Survive in 2023?” by George Kaloudis.

Before going on: I know bitcoin and cryptocurrency are controversial topics for many people.

The principle still applies. If a company makes money not from selling the things it claims to be making for a profit, but from playing financial games, something is deeply wrong. Kaloudis attributes bitcoin miners sitting on bitcoin and playing financial games to make money to two conditions: the price of the good supposedly being produced is increasing, and the cost of capital is low.

Fancy Financial Footwork in GE, which used to make physical things

I suppose General Electric’s financial arm had similar excuses in the 2000s, but what excuse did GE’s top management have?

The second podcast I’m going to link is Jim Grant’s Grant’s Current Yield podcast. The episode is “Destruction of Value” from January 19 2023. Grant and his co-hosts interviewed William D. Cohan about his book Power Failure: The Rise and Fall of an American Icon.

General Electric (more precisely the General Electric which existed for most of its history and made many types of machines and physical goods) and Bitcoin are about as far apart as anything technical I can think of. Grant’s Interest Rate Observer and CoinDesk are probably as far apart as any two nonfiction publications I can think of.

Yet the conversations were similar. Cohan had found that General Electric was more focused on GE Capital, its finance arm, than on the parts of GE which made things. It was easier to make money from money than to make money from jet engines and whatever else GE made.

Large amounts of GE’s profits were coming from its finance arm, which funded itself with an astounding amount of commercial paper. At one point, before things started crashing in 2007, GE was one of the largest issuers of commercial paper.

None of this had anything to do with the physical goods GE was once known for making. At the time of the Grant’s Current Yield episode, Cohan said, GE was still in the process of breaking itself up into two or three smaller companies.

Why I am writing about this.

I use this blog to write about people using technology. There’s technology I use, and some of that I write about. I write about people who talk to me about using technology. I’ve written about people who ask me for recommendations on which technology I think they should use.

The theme I keep coming back to is the user of technology being honest with themselves. What do they want to do? Why do they want to do that? How are they planning on doing that? What results have they gotten in the past? What results are they hoping to get in future? And what results are they actually getting in the present?

It’s when people are not honest with themselves that I see the biggest problems with their use of technology. And it’s when people are not honest with themselves or others that I see the biggest problems in their lives in general.

Making money from moving money around is fundamentally different from making things and selling those things. As Cohan mentioned in the Grant’s Current Yield episode, making money from money is regulated in very different ways from making money from making things. A company which does one while saying they do the other is being dishonest at some level. And it will cause problems.

Technician Tuesday: Technicians Are Necessary.

I have been in many conversations where a lot of ideas and concepts were thrown around, but discussion of whether any of it would actually work was limited. If I pointed out times when something very similar to the ideas being discussed had already been tried, and had failed, people got uncomfortable. Sometimes the discomfort was sadness or anger that I was raining on their parade, or being too nitpicky. I preferred that to the times when the answer was “You don’t understand, I’m taking the thirty-thousand-foot view, so I’m not really getting into details right now.”

Because of that, I created the category “Technician Tuesday” when I started this blog. Ideas are great, but how are they being implemented? How am I using the technology around me? How do someone else’s ideas interact with the rest of the world?

Today, I listened to a recording of a discussion between Dr. Temple Grandin and Dr. Jordan Peterson. It was all about the importance of practical hands-on knowledge and experimentation. Applications of ideas are the true test of those ideas. A lot of that knowledge and experimentation is being lost.

The discussion was very interesting and very troubling. I’ll be buying a copy of Dr. Grandin’s most recent book.

Mindset Monday: Practice Makes Perfect, or At Least Better. Part 2 of 2.

This is a follow-up of last week’s post.

Here are some of the places I’ve seen recommendations to intentionally copy other people’s work to better my own practice:

  • A book on the modern atelier movement, in which the author wrote that a significant part of a four-year curriculum was devoted to drawings copied from works of the old masters. This helped the artist learn how previous artists had solved problems in their paintings.
  • A book on handwriting, which mentioned copy books. Those were books in which people would write down famous quotes, favorite quotes, and other passages, and carry them along. Copy books helped with handwriting practice, and they also gave their owners a handy reference of what had been written before.
  • If I look online, I can find several arrangements and analyses of famous classical music pieces, most of them centuries old.

In each case, the recommendation is to get better by copying particularly skillful examples of what came before.

I’ve even read comments that art has to be grounded in what came before, or it runs the risk of having no reference or meaning to the viewer today.

If I’m buying something I want to use, and I want it to make my life easier, ease of use and ease of learning how to use it matter. And for that, the designer probably needs to have spent some time analyzing and copying already existing works.