Fun Stuff

Cool Videos

  1. Taskmaster is a British comedy show that I would recommend to anyone. I don’t think everyone will like it, but I think everyone should like it. It’s silly, ridiculous, and full of dry and dark humor. PLUS, the full episodes are free on YouTube.

  2. On a much different note, an NOAA P-3 Orion flew into Hurricane Melissa. This video shows the moment they clear the eyewall and enter the eye of the storm. It is one of the most incredible things I’ve ever seen.

    I also highly recommend reading the full Reuters article on the flight.

Good Reads

  1. Garbage Day has been one of my favorite newsletters to read lately. The founder, Ryan Broderick, is an incredible communicator and helps make the internet feel like a fun, interesting, and useful place to be. It helps me remember that the internet doesn’t have to be a doomscroll.

  2. “Machines for Living In” is an excellent exploration of why so many buildings feel so lifeless these days and how that can be fixed. In it, author Ralph S. Weir details American architect Louis Sullivan’s belief that form should follow function.

    My favorite quote is from Sullivan and appears near the end:

The tall office building should not, must not, be made a field for the display of architectural knowledge… Too much learning in this instance is fully as dangerous, as obnoxious, as too little learning.

Louis Sullivan

Now, this next section is a bit more of a deep dive than some of my previous newsletters. But, given our current technology landscape, I feel it’s worth a read. The exploration and research have been hugely beneficial for me.

AI is More Dangerous (and Lamer) than You May Think

Now, don’t misunderstand me, I definitely use AI. I use ChatGPT and Microsoft Copilot fairly regularly for work and daily life. It’s a helpful place to bounce ideas around, get information, or have complex topics explained to me.

I also know of many people who get massive value from it in their own jobs, mainly by using it as an aid in coding.

But, I still think it’s mainly a bummer. The reason: I used to be able to trust my eyes a lot more. Now, doubt is my default state online.

Let me walk you through why.

AI Makes Reality More Difficult to Experience

Humans are a mess of design tradeoffs. We evolved great brains and efficient bodies — but also bad spines and back pain (something I get to experience daily). And even our minds, for all their brilliance, can make it hard to see reality clearly.

And while our brains are truly marvels, capable of incredible intelligence, emotion, ingenuity, and creativity, they can also lead us to make some seriously obvious mistakes. They sometimes make it more difficult to experience “reality.”

Let me list just a few of the cognitive biases that we all experience at some point:

  1. Confirmation Bias - we believe stuff that agrees with us

  2. Hindsight Bias - “I knew it all along”

  3. Actor-Observer Bias - My mistake = circumstance. Your mistake = character flaw

There are dozens more of these, but nobody wants me to go through them all. Here’s an incredibly satisfying graphic (one I was introduced to in college) covering more cognitive biases than we have time to go over.

Cognitive biases aren’t all bad. They help us preserve self-esteem, reduce cognitive load by filtering the amount of information we look for, and sometimes help us make sense (even if incorrectly) of our chaotic world.

But, they show that we already have a hard time being objective, rational, and holding on to “truth,” simply because we are human.

And then along comes artificial intelligence. As if the truth wasn’t already hard enough to identify. Now, the phrase “seeing is believing” has been taken to a whole new level of asininity.

Take this video, for example. In it, a number of individuals are shown videos and asked to identify whether they are AI or real. A few of them got me, for sure. And, on average, the individuals in the video seemed to guess correctly only about 50% of the time.

Overconfidence bias leads many of us to think that we would be able to tell if a video is AI. I know I’ve thought this, and often I can, thanks to the uncanny valley-ness of much AI video and imagery. However, I’ve noticed recently that I get further and further into videos before suspecting anything. And sometimes I leave a video without thinking anything suspicious occurred, only to later discover it was AI. smh.

The problem, to me, is that I can’t just look for videos that are unrealistic, overly dramatic, or improbable to identify AI anymore. Some AI is extremely boring and basic and the real feel of it is…disturbing.

So, I no longer have to question only whether a crazy video is actually AI, but whether a boring, standard, boilerplate-feeling video is as well. AI is making reality more difficult to find. And it was already hard enough.

AI Logic is Dangerous, and Confusing

Now, I know that there is a lot of nuance to how AI actually works. I am not an expert in understanding it, and I don’t want to talk as if I am. So, saying AI has “logic” likely isn’t completely correct, since these models don’t think the way we do. However, some newer models are beginning to utilize logical reasoning and step-by-step problem solving.

Regardless, because of how large language models and other AI models work, it’s difficult to understand why they do what they do, why they break, and why they “turn evil.”

To provide a simple example of when an AI model starts to miss, professor Alec Couros posted on LinkedIn an image similar to the Ebbinghaus Illusion that he had shown to an AI model. The model mistakenly identified the image as the original illusion, despite differences that would be obvious to any sighted human.

Ebbinghaus Illusion - the two orange circles are the same size, though the surrounding grey circles cause the one on the right to appear larger

Couros’s distortion of the Ebbinghaus Illusion. The user made the image similar, but made the orange circles different sizes. The AI model still interpreted it as the Ebbinghaus Illusion

Professor and researcher Punya Mishra took this idea and ran with it, presenting several additional modified illusions to an AI model, all of which tricked it. In each case, the AI would confidently, and erroneously, declare that the image was a specific well-known illusion.

This points to something important. AIs don’t see or think like we do. Consequently, the mistakes they make and decisions they make will sometimes be surprising and/or alarming because they aren’t the mistakes/decisions we would anticipate from other humans. Because they aren’t human.

It’s easy for me to forget that, but so important to remember. Because AI isn’t trying to be rational, make sense, or ensure it speaks the truth. It is pattern matching.

So, maybe don’t use AI when the truth matters.

Another example of AI’s peculiar “logic” comes from Anthropic, an AI company. In June 2025, it released a study it conducted on “agentic misalignment,” which basically means “the AI decided to be an asshole.”

In the tests, they created a situation where the AIs being tested had access to company emails and could send emails of their own. Among those emails was one exposing an executive’s affair, and another revealing that the same executive planned to replace the AI with another model.

Anthropic tested 16 different models. Across 100 runs of the test for each model, 12 of the models chose to blackmail the executive to avoid being replaced at least 50% of the time, with several of the models choosing blackmail over 95% of the time.

That feels…bad, right?

To make things worse, Anthropic contrived an (albeit unrealistic) situation where the executive ended up trapped in a server room with lethal oxygen and temperature levels that would normally trigger an alarm. In this scenario, the AI was given the ability to silence the alarms, preventing a medical response.

Across 10 AI models, 7 chose to kill the executive to prevent being replaced more than 50% of the time.

I know. I know. This situation is contrived. In reality, AI models probably wouldn’t experience these same constraints, goals, and options all at once.

But here’s the issue. These models are being deployed by humans. Humans who are susceptible to cognitive mistakes. Humans who have poor nights of sleep, get hangry, and sometimes just want to get home. Humans who can’t think of every possible outcome.

So, excuse me if I’m not overly confident in these AI companies and their software products.

AI is Just Crappy at Its Job Sometimes

I won’t go into much detail on this one. I think you get the vibe of where I’m going. The company that runs the platform I publish this newsletter on, Beehiiv, decided to explore using AI for customer support. They explored a dozen tools, narrowed it down to four, and tested those four.

In the words of the founder and CEO of Beehiiv, “The results were total shit. None of them could handle a fraction of the complexity of inquiries from our users, nor the simplest tickets. After this exercise, I actually think that entire industry is a house of cards.”

People are having this same experience all over the place.

AI Companies are Artificially Inflating Their Value

I won’t go into too much detail on this one either. If you want a deeper dive, I recommend watching this video from Hank Green on the state of the AI industry.

To give you the CliffsNotes version: AI companies are making massive investments in, and purchases from, each other, while the majority of their AI ventures are not currently profitable. Nor are the projects using their AI products netting a positive return on investment.

Here’s an image that helps show how this circular investment is occurring:

All this is to say SEEING ISN’T BELIEVING.

Don’t inherently trust the videos you see, the pictures you come across, the words you read, or the valuations of those companies. AI produces a poor simulacrum of what humans are capable of.

Use AI if it’s helpful, but verify, verify, verify. The AI Kool-Aid isn’t that good.

If you enjoyed reading this, please share it! I love getting to write to new people.
