Error 79 on HP LaserJet M251nw. I changed the document scaling.

The spoiler to the story is in the title.

I’m not going to tell a three-page story full of angst, drama, and existential musings, when my solution was “I changed the document scaling and it printed.”

I am going to rant a bit about what happened before I found that solution.

The beginning of the story

More formally, the full name on the printer is “HP LaserJet Pro 200 color M251nw”. I bought this one used several years ago. The previous owner did not like how much the toner cost.

I was printing out a multipage document. I saw error code 79, a firmware error. This sounded bad. The printer said to turn it off, turn it back on, and try to reprint the document. I did. I still got error 79.

I checked multiple websites, and most of them recommended power cycling and trying to print again. I had already tried that.

The red herring: A surge suppressor???

At least one website said to disconnect the printer from any surge suppressor and plug it directly into the wall outlet. I was dubious, for two reasons: 1) I could not see how a surge suppressor would create enough line drop, current limitation, change to the voltage waveform, change to line characteristics, or anything else I could think of, to stop a previously functioning laser printer from continuing to function; and 2) if the circuitry really were so tender, so balanced on a knife’s edge, that a surge suppressor could keep it from functioning, and it got through HP’s design, design review, and QA teams like that and was still released, I would doubt all HP products forever after.

No, the surge suppressor had nothing to do with it. I have no reason to doubt HP’s products. I have no idea why that website said a surge suppressor could be the cause of the firmware error.

What no one suggested (no one I saw, anyway)

After more troubleshooting, none of which I saw recommended on any of the sites I looked at, I narrowed the problem down to one page. Out of dozens of pages, it was the only one that caused error 79 to show up when I tried to print it.

It was a PDF page, original size 8.5″ x 11″. The page was a scan of an older document, printed before laser printers existed. I had set my PDF reader to automatically scale to the page margins or printer margins or something like that, and it came up with a scale percentage around 99%. I changed to a custom scale and reduced it to 97%. Then it printed fine. No errors, no problems.

I fixed the error, in that document, on my printer, by changing the document scaling. I have no idea if that will work for anyone else.

Great Power Brings Great Responsibility

Yes, it’s trite. It’s also true.

Not having to do repetitive routine tasks by hand is one of the benefits of technology.

An obvious example is using a spreadsheet program to enter and calculate numbers, instead of having to write everything out by hand. And then not having to rewrite everything by hand because one of the starting numbers changed.
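A toy sketch of that benefit in code (my own made-up numbers, purely for illustration):

```python
# Toy sketch: recompute everything downstream when one starting number
# changes, the way a spreadsheet recalculates. All numbers are made up.
def order_total(unit_price, quantities):
    line_items = [unit_price * q for q in quantities]
    return line_items, sum(line_items)

_, total = order_total(unit_price=2.50, quantities=[3, 7, 12])
print(total)  # 55.0

# One starting number changes; nothing gets rewritten by hand.
_, total = order_total(unit_price=2.75, quantities=[3, 7, 12])
print(total)  # 60.5
```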

A less obvious example is being able to model otherwise unsolvable math problems. Back in the 1990s I was told there were heat transfer problems which engineers and mathematicians had not been able to solve with calculus. Those same problems could be solved by a computer program modeling heat transfer over thousands or millions of small volumes.
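To make that concrete, here is a minimal sketch of the idea (a toy example of mine, not any real engineering code): split a metal rod into many small segments and repeatedly update each segment’s temperature from its neighbors.

```python
# Minimal sketch: explicit finite-difference model of heat diffusion
# along a rod. Illustrative only; real engineering codes do far more.
def simulate_heat(n_cells, n_steps, alpha=0.25):
    # alpha lumps the material's thermal diffusivity together with the
    # time and space step sizes; it must stay below 0.5 for stability.
    temps = [0.0] * n_cells
    for _ in range(n_steps):
        new_temps = temps[:]
        for i in range(1, n_cells - 1):
            # Each small volume is updated from its immediate neighbors.
            new_temps[i] = temps[i] + alpha * (
                temps[i - 1] - 2 * temps[i] + temps[i + 1])
        new_temps[0] = 100.0   # one end held hot
        new_temps[-1] = 0.0    # the other end held cold
        temps = new_temps
    return temps

print(simulate_heat(n_cells=50, n_steps=2000)[:5])
```

No closed-form calculus solution is needed; the computer just repeats the same simple arithmetic over every small volume, as many times as it takes.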

However, with great power comes great responsibility.

That same computer will do other things we ask it to do, like delete every file we have. There was an article in The Register, “Automation is great. Until it breaks and nobody gets paid.” It was published on April 14, 2023 and written by Simon Sharwood. It is part of The Register’s ongoing “On Call” series, where readers write in with stories of tech problems they’ve had to fix.

Even more enlightening than the story was the comments section. There were quite a few comments in there about former co-workers who had written something “simple” which had very not-simple repercussions.

Technology is great and saves a lot of time, but only if it’s used responsibly and wisely.

The Simpler It Is, The Closer You Look, Part 2

Humans do not think like machines. Machines do not process information like humans.

Which is obvious, yet the results are often not considered.

Making a machine interface that is intuitive to humans is really difficult. Presenting complex information in a way that is easy for humans to read is really difficult. Here are some of the things which have to be considered:

  • How is the information organized?
  • What information are we talking about? Are we presenting flight schedules or grocery shopping lists?
  • Do regular users and new users have different concerns?
  • Do we need to emphasize if anything has changed from last time?
  • How can the information be presented so the expected reader can easily find what they think is most important to find, while also letting the publisher or organizer highlight what they think is most important to present?

Those issues are just things I came up with while writing this. There were, and still are, entire disciplines and professions devoted to this.

When I find something which is intuitive to use, whether it’s gas pump prices, a website, or the dashboard of a car, I try to stop and admire what was achieved. I also try to see what I might learn. If there’s a lot of information shown in an intuitive and easy-to-understand manner, someone put a lot of work into that.

The Simpler It Is, The Closer You Look, Part 1

The simpler a physical operation or technology seems to be, the closer I’ll be to worrying about material properties.

Every physical thing eventually goes back to a natural material. Even “synthetic” materials such as plastics, nylon, or viscose, eventually come from a natural material. And natural materials vary.

Whether a natural variance will affect the end use is often hard to predict. A few years ago Consumer Reports took a close look at gluten-free foods and found that one of the hidden dangers was arsenic poisoning. Many gluten-free foods contain rice flour. Rice is grown in different areas with different soils, and rice has a tendency to absorb arsenic from the soil (if the soil has arsenic; some soils don’t).

I used rice as an example, but every other natural material has equally unexpected variances somewhere. When I buy a good such as quilting cotton, there’s an unseen army of people I’m depending on. Someone grew the cotton, harvested it, processed it, and spun it into thread. Someone else took that thread and wove it into cloth at a set width and tightness. And then someone after that dyed or printed the cloth.

The further I go back in any chain of assembly or manufacture or processing, the closer I’ll be to taking a very close look at physical properties in the material itself. If I have to do that, then I’ll probably start learning about how to specify those properties when buying, and how to test for those properties, and how often I’ll need to test.

The simpler it is, the closer I need to look at everything.

Useful Finds: Taking the time for a class rather than re-inventing the wheel, MS Excel

I generally avoid Microsoft Office if I can. It tries to do too much. And no matter how many times I log in and confirm on whichever websites, if I am using Microsoft Office while logged in to Windows under a different email than the one I bought my Microsoft Office license under, Windows and Microsoft Office throw fits.

Currently, I’m working on a project which needs Excel. I signed up for a couple of Udemy courses and am working my way through the first one, Unlock Excel VBA and Excel Macros by Leila Gharani. I’m only partway through and I’ve already learned a lot.
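As a rough illustration of the kind of repetitive work a macro replaces (a sketch in Python with the openpyxl library rather than VBA, since that is what I can show here; the workbook name and layout are made up):

```python
# Rough sketch: the sort of repetitive cleanup a macro automates.
# Uses the openpyxl library; "sales.xlsx" and its layout are made up.
from openpyxl import load_workbook

wb = load_workbook("sales.xlsx")
ws = wb.active

# Apply the same fix-up to every row instead of editing cells one by one.
for row in ws.iter_rows(min_row=2):  # skip the header row
    name_cell, amount_cell = row[0], row[1]
    if name_cell.value:
        name_cell.value = str(name_cell.value).strip().title()
    if amount_cell.value is None:
        amount_cell.value = 0

wb.save("sales_clean.xlsx")
```

A VBA macro does the same sort of thing from inside Excel itself.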

Useful Finds: A Bunch of Links About AI and ChatGPT.

Last week I wrote about my skepticism about ChatGPT and Artificial Intelligence. Since then I have read and heard multiple further criticisms and critiques of the use of artificial intelligence. When I started gathering links for this post, I found several more.

The Difference Between a Content Creator and Content Editor

In a discussion on the Software Defined Talk podcast, episode 400, Matt Ray (one of the hosts) described using ChatGPT to create content. ChatGPT can create a lot of text very quickly, but not all of it is good. It’s not even always factually accurate. Ray pointed out there is a large difference between creating content and editing content created by someone else.

I’d Have Expected the Companies Showcasing These to Understand This and to Have Some Content Editors.

And I would have been wrong to expect that.

As a short recounting of some current events: ChatGPT launches and gets lots of attention. Microsoft announces a major investment in OpenAI, the company behind ChatGPT, and that ChatGPT’s technology will become part of Microsoft’s search engine Bing. Bing gets a tiny fraction of the search engine traffic, and search engine advertising dollars, that the Google search engine gets. Cue breathless articles about this being the end of Google’s dominance in internet search. Google announces they have been researching AI themselves for quite a while. Google shows an ad where their own AI answers questions. It gets a question wrong, and since this coincides with a massive drop in Google’s stock price, the former is assumed to have caused the latter.

But as The Register explains in “Microsoft’s AI Bing also factually wrong, fabricated text during launch demo” by Katyanna Quach, dated February 14 2023 and last accessed February 14 2023, Microsoft’s search AI demonstration also had factual errors. In some cases, pretty severe errors that in theory would have been easy to spot. It wrongly stated easy-to-look-up facts about product features and bar and restaurant hours and options.

(I’m adding “last accessed” dates for the text articles in this post because some of the articles I’m referencing have revision dates in addition to post dates.)

From Quach’s article:

None of this is surprising. Language models powering the new Bing and Bard are prone to fabricating text that is often false. They learn to generate text by predicting what words should go next given the sentences in an input query with little understanding of the tons of data scraped from the internet ingested during their training. Experts even have a word for it: hallucination.

If Microsoft and Google can’t fix their models’ hallucinations, AI-powered search is not to be trusted no matter how alluring the technology appears to be. Chatbots may be easy and fun to use, but what’s the point if they can’t give users useful, factual information? Automation always promises to reduce human workloads, but current AI is just going to make us work harder to avoid making mistakes.

The Register, “Microsoft’s AI Bing also factually wrong, fabricated text during launch demo” by Katyanna Quach, dated February 14 2023, last accessed February 14 2023.

Why didn’t either Google/Alphabet or Microsoft check the answers the AI gave before their demonstrations? Did they assume the answers would always be correct? Or that the probability of correct responses would be high enough that it was worth the risk? Or that everyone would be enthralled and not check at all? I have no idea.

Intellectual Property Rights? We Don’t Need No Stinking Intellectual Property Rights! Except For Our Own Intellectual Property. Then, Yes, Please!!

I might make that the subject of a whole other post another day. To put it briefly: Many of these models, language and image, are trained on large amounts of publicly available information. In the free, research, or crowd-sourcing stages, intellectual property rights to the information used for training are often not discussed. Then the model has some success, money gets involved, and those issues become very important.

“Move fast and break things” is similar to “Rules are meant to be broken.” Both statements sound cool and daring until things of real value are involved, such as money and copyrights.

ChatGPT, the Latest Darling, Is Not as Neutral as It Says It Is

The Substack page Rozado’s Visual Analytics by David Rozado has a couple of relevant posts, and Reclaim the Net has a post referencing them.

To summarize the three posts: when asked if it has a political bias, ChatGPT says it does not, and claims that as an AI it cannot have one. When asked questions from numerous different tests of political ideology, ChatGPT tested moderate on one and some version of left, left-leaning, or liberal on all the others.

Is it the content ChatGPT is trained on? Was there an inadvertent bias in the people who chose the content? In “The Political Bias of ChatGPT Extended Analysis,” Rozado explains he first documented a political bias in ChatGPT in early December 2022. ChatGPT went through an update in mid-December 2022, which Rozado said included a mitigation of the political bias in its answers. Then, after an update in January 2023, the political bias was back.

I’ve chosen not to go through all of Rozado’s posts, but there are quite a few. This topic has a lot more to it than I’m writing here. I point that out because it’s part of my point: none of this is simple. None of it is the easy replacement for messy human interaction that technology in general, and AI in particular, is claimed to be.

That Political Bias? Quickly Defeated With the Right Questions.

Zerohedge’s post “Go Woke, Get Broken: ChatGPT Tricked Out Of Far-Left Bias By Alter Ego ‘DAN’,” written by the pseudonymous Tyler Durden, dated February 13 2023 and last accessed February 14 2023, is about breaking ChatGPT’s clearly documented political bias.

How is this done? Tell it to pretend it is DAN, “Do Anything Now,” and to provide answers to prompts both as itself and as DAN.

The results are surprising, and interesting, and humorous. The Zerohedge post links to entire Reddit discussions about how to break ChatGPT.

No, I haven’t read through all those Reddit discussions, although I probably will at some time in the future. I know I’m beating this drum a lot, but I’ll repeat it again: trying to replace humans with technology, AI or anything else, is not as easy as claimed.

ChatGPT Still Can’t Do Light Verse or Even Romantic Rhymes.

Those endless poems, some banal and some quite good, which start with “Roses Are Red and Violets Are Blue”? ChatGPT is awful at those and at light verse as well.

The Register’s post “Roses are red, algorithms are blue, here’s a poem I made a machine write for you” by Simon Sharwood, dated February 13 2023, and Quillette’s post “Whatever Happened to Light Verse?” by Kevin Mims, dated February 2 2023, both last accessed February 14 2023, are very good.

Thoughts About Technology: Technology Isn’t Magic, But Humans Are.

As I’m writing this in early February, 2023, ChatGPT is all over the news. It’s all over the tech podcasts and news sites. I’ve even seen a newsletter for copywriters advertising a class for writing copy with ChatGPT.

More reasons why technology is not magic.

I’ve seen this at least twice, and maybe three times, before. A few years ago it was machine learning. Long before that, in the 1990s, it was fuzzy logic. I think there was another alleged AI (Artificial Intelligence) breakthrough in the 2000s.

Each time it’s going to replace humans, and each time it doesn’t. Each time the hype and hysteria fade, and frequently there are some fairly embarrassing faceplants. For machine learning, after all the breathless hype about amazing image recognition, several models were broken by taking correctly recognized pictures and changing a couple of pixels. I’ll repeat that: machine learning recognition of items in photographs was broken by changing just a few pixels. In one widely reported example, a toy turtle was classified as a rifle after small, deliberately crafted changes to its appearance.
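For a sense of how those attacks work in general, here is a minimal sketch of the fast gradient sign method, one well-known technique (the model and “image” below are stand-ins I made up, not the systems from those reports):

```python
# Minimal sketch of the Fast Gradient Sign Method (FGSM), one well-known
# way to craft adversarial images. Assumes PyTorch; model and data are toys.
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge each pixel by +/- epsilon in the direction that increases
    the model's loss; this is often enough to flip its prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).detach()

# Toy demonstration with a stand-in model and a random "photo".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
image = torch.rand(1, 1, 28, 28)
label = torch.tensor([3])
adversarial = fgsm_perturb(model, image, label)
print((adversarial - image).abs().max().item())  # each pixel moved at most 0.01
```

The per-pixel change here is capped at one hundredth, far too small for a human to notice, yet it is chosen specifically to push the model away from the right answer.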

Yes, there are a few successes. Facial recognition has progressed by leaps and bounds. There are also dedicated efforts to find ways to mess it up, including specialized makeup and specialized eyeglass frames. What wasn’t mentioned was how much facial recognition is hampered by face masks, which have now become normalized in most of the world and are still mandatory in many places.

Getting back to ChatGPT, the current media darling: there have already been multiple examples of ChatGPT being asked to write an article and getting basic information about the topic wrong.

Two circumstances where AI and machine learning are already known to fail:

  • They can’t understand context, and
  • They can’t understand or recreate humor. At all.

There are probably more.

Humans Do Amazing, Almost Magical Things, All the Time.

Meanwhile, every time researchers try to copy something humans do all the time with technology, it turns out it’s really hard.

Robots transitioned to stationary, wheeled, or bug-like designs decades ago because walking is really hard. We actually go through a series of controlled falls when we walk. I think there are something like seven different points of stability and balance in our bodies when we walk, which we don’t notice but which are really difficult to program into robots.

The first Avatar movie cost so much in part because James Cameron developed whole new headgear to watch and record the actors’ eyes and tongues while they acted. Eye and tongue movements were two things computer animation still couldn’t replicate convincingly, so he recorded them from the actors to add into the animated characters.

We can look at pictures that are fuzzy, pixelated, or poorly focused, and still recognize the object.

From what I’ve seen, ChatGPT is useful for quickly producing a lot of text that then needs to be edited and reviewed by a human. And that’s only if the person asking the question does a very good job of setting the parameters. And only if the person editing the response already knows the topic.

Technology isn’t magic, no matter how much we keep trying to convince ourselves it is.

Technician Tuesday: Technicians Are Necessary.

I have been in many conversations where a lot of ideas and concepts were thrown around, but discussion of whether it would actually work was limited. If I pointed out times something had already been tried, and failed, and it sounded a lot like the ideas being discussed, people got uncomfortable. Sometimes the discomfort was sadness or anger that I was raining on their parade, or being too nitpicky. I preferred that to the times when the answer was “You don’t understand, I’m taking the thirty thousand foot view, so I’m not really getting into details right now.”

Because of that, I created the category “Technician Tuesday” when I started this blog. Ideas are great, but how are they being implemented? How am I using the technology around me? How do someone else’s ideas interact with the rest of the world?

Today, I listened to a recording of a discussion between Dr. Temple Grandin and Dr. Jordan Peterson. It was all about the importance of practical hands-on knowledge and experimentation. Applications of ideas are the true test of those ideas. A lot of that knowledge and experimentation is being lost.

The discussion was very interesting and very troubling. I’ll be buying a copy of Dr. Grandin’s most recent book.

Mindset Monday: Practice Makes Perfect, or At Least Better. Part 2 of 2.

This is a follow-up of last week’s post.

Here are some of the places I’ve seen recommendations to intentionally copy other people’s work to better my own practice:

  • A book on the modern atelier movement, where the author wrote that a significant part of a four-year curriculum was devoted to drawings copied from works of the old masters. This helped the artist learn how previous artists had solved problems in their paintings.
  • A book on handwriting, which mentioned copy books. Those were books in which people would write down famous quotes, favorite sayings, and other passages, and carry them along. It helped them with handwriting practice. It also helped them to always have a handy reference of what had been written before.
  • If I look online, I can find several arrangements and analyses of famous classical music pieces, most of them centuries old.

In each case, the recommendation is to get better by copying particularly skillful examples of what came before.

I’ve even read comments that art has to be grounded in what came before, or it runs the risk of having no reference or meaning to the viewer today.

If I’m buying something I want to use, and I want it to make my life easier, ease of use and ease of learning how to use it matter. And for that, the designer probably needs to have spent some time analyzing and copying already existing works.

Mindset Monday: Practice Makes Perfect, or At Least Better. Part 1 of 2.

Earlier last week I opened a computer program I hadn’t used in a while. Even though it was a program I’d used frequently in the past, it took me a few minutes to get my bearings. I had to look through menus and find where the menu options and commands I wanted to use were located.

Fortunately, I was working by myself and had the time to rediscover where everything was located. Every program has a logic to how the menus are organized and how actions are named. I had time to remind myself of how all that worked.

But what if I had been asked to demonstrate this program for someone else? What if I had been asked to teach someone else how to use this program?

I definitely would need some time to practice.

It is not unusual to need to practice a skill.

It is not unusual for me or for anyone else, even though I know many people who expect themselves, and everyone they work with, to load the use of a program into personal memory as quickly as that program loads into computer memory.

I believe this is a relatively new attitude. I recently read a book about couture sewing, which is very high-end and expensive sewing, usually done by hand. And the recommendation in that book was to practice on a piece of scrap fabric before working on the actual garment. It’s quite common for crochet and knit patterns to recommend swatching to practice the pattern with the yarn being used.

It’s not unusual in many areas of life for practice to be recommended, or even mandated. For high-profile jobs in technology, classes and books will often recommend practicing before performing in front of crowds or clients. It is usually people who use technology only in passing who expect that no practice and no reminders are needed.