Useful Links: Rugged Radios, description of uses of GMRS v BB

I was looking for places to buy portable handheld radios the other day, and stumbled across Rugged Radios.

In addition to their product pages, they have a lot of good information on their site. Here are some useful links I’ve been reading through:

Useful Sites: Cyclone and Dust Collection Research, courtesy of Bill Pentz

The site: Cyclone and Dust Collection Research. The home page says it was created in 2000 and was last updated in August 2022. That’s an impressive amount of dedication.

I found this through a link from The Wood Database.

Yes, he is advocating for products that he helped design. I’m fine with that; profit is part of what makes the world go round.

Obviously, it’s about dust collection. I’ve only just started reading through the site, but I already found this bit of interesting information: it’s dangerous to vent a dust collection system inside the shop. Very fine dust causes much of the physical harm, and venting a dust collector inside the shop lets particles too fine for the filter to capture keep circulating in the shop air. It’s much better to vent the dust collection system outside.

Mr. Pentz’s biography is quite interesting. At the end he says that his health has finally required him to retire and slow down. I hope his health gets better.

Error 79 on HP LaserJet M251nw. I changed the document scaling.

The spoiler for this story is in the title.

I’m not going to tell a three-page story full of angst, drama, and existential musings, when my solution was “I changed the document scaling and it printed.”

I am going to rant a bit about what happened before I found that solution.

The beginning of the story

More formally, the full name on the printer is “HP LaserJet Pro 200 color M251nw”. I bought this one used several years ago. The previous owner did not like how much the toner cost.

I was printing out a multipage document. I saw error code 79, firmware error. This sounded bad. The printer said to turn it off, turn it back on, try to reprint the document. I did. I still got error 79.

I checked multiple websites; most of them recommended power cycling and trying to print again. I had already tried that.

The red herring: A surge suppressor???

At least one website said to disconnect the printer from any surge suppressor and plug it directly into the wall outlet. I was dubious, for two reasons: 1) I cannot see how a surge suppressor would create enough line drop, current limiting, change to the voltage waveform, change to line characteristics, or anything else I could think of to prevent a previously functioning laser printer from continuing to function, and 2) if the circuitry really were so tender, so balanced on a knife’s edge, that a surge suppressor could keep it from working, and it got through HP’s design, design review, and QA teams like that and was still released, I would doubt all HP products forever after.

No, the surge suppressor had nothing to do with it. I have no reason to doubt HP’s products. I have no idea why that website said a surge suppressor could be the cause of the firmware error.

What no one suggested (no one I saw, anyway)

After more troubleshooting, none of which I saw recommended on any of the sites I looked at, I narrowed the problem down to one page. It was one page, out of dozens, that caused error 79 to show up when I tried to print it.

It was a PDF page, original size 8.5″ x 11″. The page was a scan of an older document printed before laser printers existed. I had set my PDF reader to automatically scale to the page margins or printer margins or something like that, and it came up with a scale percentage around 99%. I changed to a custom scale and reduced it to 97%. Then it printed fine. No errors, no problems.

I fixed the error, in that document, on my printer, by changing the document scaling. I have no idea if that will work for anyone else.
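For what it’s worth, my fix was just a setting in the PDF reader’s print dialog, not code. But if someone wanted to bake a similar scale reduction into the PDF file itself before printing, a small sketch along these lines with the pypdf library might do it. The file names are placeholders, and 0.97 is just the scale that happened to work for my document:

    from pypdf import PdfReader, PdfWriter

    reader = PdfReader("scanned_document.pdf")   # placeholder input file
    writer = PdfWriter()

    for page in reader.pages:
        page.scale_by(0.97)    # shrink the page contents and page box to 97%
        writer.add_page(page)

    with open("scanned_document_97pct.pdf", "wb") as out_file:
        writer.write(out_file)

Again, I have no idea whether the same trick will help with anyone else’s error 79; it only describes what worked here.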

Useful Sites: The Wood Database. Using Wood Is a Technology Too.

The Wood Database is a great source of information about the mechanical properties of different types of wood. It also has many articles about wood. They’ve broken down the articles into the following categories:

  • General Information
  • Identifying Wood
  • Mechanical Properties
  • Separating Specific Woods
  • Health and Safety
  • Reference / For the Shop
  • Working with Wood

Have You Decided What Your Intent Is?

I was looking at some purchased patterns today. None of them really fit the purpose I want to use them for.

I realized I’d look differently at the patterns depending on what I was trying to achieve:

  • Do I only want a pattern that looks nice, so I can finish it and move on to something else?
  • Does my purpose require specific properties, like right angles on two edges, or looking nice when mirrored?
  • Do I want to use the pattern as a starting point for making my own patterns in the same style, which means I’m looking at its aesthetics?
  • Or do I not really like the design, but I like the way it was constructed and I want to learn from that?

The first two points apply if it’s just a hobby project. But if I ever want to look at that craft as a way to make money, I’ll need to think about the second two points too.

Useful Site (for examples): Banjo Ben Clark

The site is BanjoBenClark.com.

No, he doesn’t teach anything about technology or programming or web design. He teaches guitar, banjo, and mandolin.

The site itself is one of the nicest-looking and best-organized sites, menu-wise, that I’ve seen in a long time. Almost every time I look at it, I find something else to learn from how it’s organized. I also learn a lot from how he leads people to it from his YouTube videos.

How It Fits Together, How It Moves

It’s just as important to figure out how things move together as it is to figure out how they fit together.

It’s also a lot more difficult. When things aren’t moving correctly, it’s easy to see: my computer doesn’t boot up, my kitchen appliances don’t work, my sewing machine doesn’t sew. These are all things that happen when parts don’t move together correctly.

Intended movement isn’t usually shown in user manuals or service manuals either. I suppose in some cases it might be a trade secret. In other cases it might just be difficult to document. Troubleshooting steps in user and service manuals that seem odd or arbitrary are often aimed at getting parts aligned so they move the way they’re intended to.

Timing in software is a whole other black art.
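As one small illustration of what I mean by timing, here’s a minimal Python sketch of a classic timing bug: two threads updating a shared counter without a lock. Whether it loses updates on any given run depends on how the threads happen to be scheduled, which is exactly why these problems are so hard to see and to reproduce:

    import threading

    counter = 0

    def bump(times):
        global counter
        for _ in range(times):
            # Read-modify-write is not atomic: another thread can run between
            # the read and the write, and one of the updates gets lost.
            current = counter
            counter = current + 1

    threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Should be 200000; without a lock it often comes up short,
    # and by a different amount on each run.
    print(counter)

Wrapping the update in a threading.Lock makes the count come out right; the point is that the broken version can still look fine on many runs, which is what makes timing problems so slippery.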

Thoughts About Technology: Our Brains Are Not Hard Drives. Write It Down.

I don’t like to admit mistakes. I think most people are the same way.

So none of us like to admit what we’ve forgotten. If we forget enough things, we start to forget what we’ve forgotten.

If it’s something I want to remember, I need to write it down. And if it’s worth keeping, I’ll eventually come back to it. Which means I’ll probably have to do some occasional reorganization of what I’ve written. Again, if the information is worth keeping, I’ll come back to it and it will be worth the time to reorganize.

It took me a long time to realize this. I thought it was just me, until I started to notice how few people keep notes on anything. And how much people struggle to recreate or rediscover information which I know they already had.

Write it down.

Useful Finds: A Bunch of Links About AI and ChatGPT.

Last week I wrote about my skepticism about ChatGPT and artificial intelligence. Since then, I’ve read and heard multiple further criticisms and critiques of the use of artificial intelligence. When I started looking for those links for this post, I found several more.

The Difference Between a Content Creator and Content Editor

In a discussion on the Software Defined Talk podcast, episode 400, Matt Ray (one of the hosts) described using ChatGPT to create content. ChatGPT can create a lot of text very quickly, but not all of it is good. It’s not even always factually accurate. Ray pointed out there is a large difference between creating content and editing content created by someone else.

I’d Have Expected the Companies Showcasing These to Understand This and to Have Some Content Editors.

And I would have been wrong to expect that.

As a short recounting of some current events: ChatGPT launches and gets lots of attention. Microsoft announces a major investment in OpenAI, ChatGPT’s creator, and says ChatGPT will become part of Microsoft’s search engine Bing. Bing gets a tiny fraction of the search engine traffic, and the search engine advertising dollars, that the Google search engine gets. Cue breathless articles about this being the end of Google’s dominance in internet search. Google announces they have been researching AI themselves for quite a while. Google shows an ad where their own AI answers questions. It gets a question wrong, and since this coincides with a massive drop in Google’s stock price, the former is assumed to have caused the latter.

But as The Register explains in “Microsoft’s AI Bing also factually wrong, fabricated text during launch demo” by Katyanna Quach, dated February 14 2023 and last accessed February 14 2023, Microsoft’s search AI demonstration also had factual errors. In some cases, pretty severe errors that in theory would have been easy to spot. It wrongly stated easy-to-look-up facts about product features and bar and restaurant hours and options.

(I’m adding “last accessed” dates for the text articles in this post because some of the articles I’m referencing have revision dates in addition to post dates.)

From Quach’s article:

None of this is surprising. Language models powering the new Bing and Bard are prone to fabricating text that is often false. They learn to generate text by predicting what words should go next given the sentences in an input query with little understanding of the tons of data scraped from the internet ingested during their training. Experts even have a word for it: hallucination.

If Microsoft and Google can’t fix their models’ hallucinations, AI-powered search is not to be trusted no matter how alluring the technology appears to be. Chatbots may be easy and fun to use, but what’s the point if they can’t give users useful, factual information? Automation always promises to reduce human workloads, but current AI is just going to make us work harder to avoid making mistakes.

The Register, “Microsoft’s AI Bing also factually wrong, fabricated text during launch demo” by Katyanna Quach, dated February 14 2023, last accessed February 14 2023.

Why didn’t either Google/Alphabet or Microsoft check the answers the AI gave before their demonstrations? Did they assume the answers would always be correct? Or that the probability of correct responses was high enough to be worth the risk? Or that everyone would be enthralled and not check at all? I have no idea.

Intellectual Property Rights? We Don’t Need No Stinking Intellectual Property Rights! Except For Our Own Intellectual Property. Then, Yes, Please!!

I might make that the subject of a whole other post another day. To put it briefly: Many of these models, language and image, are trained on large amounts of publicly available information. In the free, research, or crowd-sourcing stages, intellectual property rights to the information used for training are often not discussed. Then the model has some success, money gets involved, and those issues become very important.

“Move fast and break things” is similar to “Rules are meant to be broken.” Both statements sound cool and daring until things of real value are involved, such as money and copyrights.

ChatGPT, the Latest Darling, Is Not as Neutral as It Says It Is

Here are a couple of posts from the Substack page Rozado’s Visual Analytics by David Rozado and a referencing post from Reclaim the Net:

To summarize the three posts: when asked if it has a political bias, ChatGPT says it does not and claims that, as an AI, it cannot. When asked questions from numerous different tests of political ideology, ChatGPT tested moderate on one and some version of left, left-leaning, or liberal on all the others.

Is it the content ChatGPT is trained on? Was there an inadvertent bias in the people who chose the content? In “The Political Bias of ChatGPT Extended Analysis,” Rozado explains that he first documented a political bias in ChatGPT in early December 2022. ChatGPT went through an update in mid-December 2022, which Rozado said included a mitigation of the political bias in its answers. Then, after an update in January 2023, the political bias was back.

I’ve chosen not to go through all of Rozado’s posts, but there are quite a few. This is a topic with a lot more to it than I’m writing here. I’m pointing out that there’s more to read than I’m referencing because that’s part of my point: none of this is simple. None of it is the easy replacement for messy human interaction that technology in general, and AI in particular, is claimed to be.

That Political Bias? Quickly Defeated With the Right Questions.

Zerohedge’s post “Go Woke, Get Broken: ChatGPT Tricked Out Of Far-Left Bias By Alter Ego ‘DAN’ “ written by the eponymous Tyler Durden, dated February 13 2023 and last accessed February 14 2023, is about breaking ChatGPT’s clearly documented political bias.

How is this done? Tell it to pretend it is DAN, Do-Anything-Now, and provide answers to prompts both as itself and as DAN.

The results are surprising, and interesting, and humorous. The Zerohedge post links to entire Reddit discussions about how to break ChatGPT.

No, I haven’t read through all those Reddit discussions, although I probably will at some time in the future. I know I’m beating this drum a lot, but I’ll repeat it again: trying to replace humans with technology, AI or anything else, is not as easy as claimed.

ChatGPT Still Can’t Do Light Verse or Even Romantic Rhymes.

Those endless poems, some banal and some quite good, which start with “Roses Are Red and Violets Are Blue”? ChatGPT is awful at those and at light verse as well.

The Register‘s post “Roses are red, algorithms are blue, here’s a poem I made a machine write for you” by Simon Sharwood, dated February 13 2023, and Quillette‘s post “Whatever Happened to Light Verse?” by Kevin Mims, dated February 2 2023, both last accessed February 14 2023, are very good.