AI Fatigue

I’ve spent the last year co-authoring a book on risk management, and one of the questions that came up during the writing process was whether we should include a chapter on AI. We decided against it, because the technology is developing so rapidly that any guidance we gave would be outdated by the time the book went to print (hopefully, early 2026!).

And to be honest, I’m suffering from a bit of AI fatigue. Every time I go online, I see a slew of AI solutions for problems that never really ranked that high on anyone’s list. Lately, I’ve been seeing a lot of AI notetakers in Zoom calls where the actual person isn’t on the Zoom. Then after the Zoom, the AI notetaker sends me an email summary. It gives a whole new meaning to the phrase “this meeting could have been an email.”

I made a video about this experience, and received a few comments on TikTok from people mentioning Walter Writes AI Humanizer. Apparently, there is an AI tool that rewrites the content written by your other AI tool so that it sounds less like AI and evades detection by the AI tools that detect AI-generated content.

Please excuse me while I throw my laptop out of the window.

Most of the AI I encounter on a day-to-day basis doesn’t feel innovative. Instead, it feels like an endless cycle of solutions in search of problems that were never at the top of anyone’s list.

That being said, I’m not an AI naysayer. I believe we can use AI as a powerful tool. But MOST OF US are not doing that. Instead, we are grasping at the most accessible thing with an AI label, patting ourselves on the back for being so innovative, and simultaneously creating a new set of problems through poorly implemented AI.

You guys, I’m so tired. Can we please just sit down and clearly define problems and their causes before we jump to solutions? It feels like we are putting the cart before the horse. Just because an AI solution exists doesn’t mean it’s the right solution for your organization’s problem. Every org is different. Solutions are not one-size-fits-all.

For the record, I use ChatGPT, but my usage bounces between serious queries like “summarize this piece of legislation” and 2am thoughts like “is it possible for a parole officer to have a parole officer” and “how long would it take for zombies to just turn into bones.”

There is a flood of AI solutions on the market for risk management and insurance, but there is so much “noise” that it’s hard for any of them to differentiate the actual value they provide.

I’m not an AI expert, but if you would like to learn more about what good AI implementation looks like, I recommend following Ema Roloff on LinkedIn and TikTok. She gives amazing guidance on what digital transformation should look like.

P.S. Per ChatGPT, it IS possible for a parole officer to have a parole officer. If this were a comedy on Netflix, I would watch it. Also, zombies would turn into bones in about six months, but possibly sooner in states with extreme weather conditions like Arizona and Alaska. Fun facts!
