AI Hammer Looking for a Nail

Some background

Not long ago, I was helping with discovery work on a project like any other. Our team had come up with some great wires and concepts to help our client. We were addressing content organization and related issues, keeping accessibility in mind, and overall coming up with many improvements to the user experience. When it came time to give feedback, I thought I would only have to point out some possible accessibility problems with a part of the design that used a non-standard type of component.

Then, to my surprise, another component entirely took over the conversation. The component in question: an AI-based search. This search would be prominent on the site, with the idea that it could build a model off of the site’s content and answer any question quickly and effectively. The problem? This search was meant for people who may be experiencing a mental or emotional crisis. Potentially a life-threatening one. And many of them might be disabled.

My response

Everyone on the team was quite excited about this. No one objected. I instantly felt extremely worried about such a feature. Not because of the technical implementation (though the timeline was another factor in my objection), but because of the responsibility we would take on by introducing a feature that, in my opinion, might be outright disastrous. Not since my early career had I been in a situation where I felt physically uncomfortable pointing out why something was not just a bad idea, but a dangerous one. But I did. It was much harder than I thought. After some back and forth, most people seemed convinced that we should at least pause our recommendation of the feature.

Afterwards, several team members actually approached me via DM on Slack to mention how appreciative they were that someone said something. And after conferring with other colleagues in the technology space (and in the client’s space), I found pretty universal agreement: it was a bad idea, and I was right to say so. Some reasons why:

  • It could hallucinate a wrong answer (by combining information in the wrong way).
  • The search could be “convinced” to give the response someone may want (which might have dire consequences).
  • Having a conversation with a program rather than an actual person to get help might be less effective in this case, particularly if the user is impaired in some fashion (likely, according to our client).
  • Further, a person gets special training for this kind of scenario (think 911 operators, crisis hotline intervention specialists). While you could “train” the AI to also have that sensitivity, how effective would it be (see above)? Also, the time (and cost) to train it for something like that would be prohibitive.

So, case closed, right? Well, not quite. This has stuck with me for the past little while, and I think I know why.

Hammer, meet… everything

“AI”, as I write this, is the buzzword of the day. I like to joke with people that, “If I had a nickel for every time ‘AI’ was mentioned, I’d have enough money to {insert really expensive thing}!” So, you know, clichés aside, AI is absolutely everywhere. As someone who doesn’t work too far away from it, I find it permeates just about every aspect of my daily life. I hear about it all the time. Unlike a couple of other recent tech trends (XR and crypto come to mind), AI seems to be stretching its particular bubble about as far as it can possibly go. And it is finding its way into everything, whether those things require it or not.

There have been multiple instances where I’ve seen people use AI just to say they did. Whether it is creating images when they could’ve just taken a picture of the same thing, writing personal letters (how “personal” can a letter be if AI made it?), or asking it some question they could’ve easily just searched for (ironically, probably also using AI at this point… thanks, Google), AI is getting jammed into absolutely everything. I do mean “jammed”. I usually say tech has gone too far once it gets into a toaster for no reason. (That wasn’t a product suggestion. I don’t need AI toast. Never mind.) Companies are frantically trying to up all their AI offerings (the one I work for included), even if their current non-AI-ified offerings are just fine (and making money).

What is more disturbing to me is that the instant reaction to a problem right now is: let’s throw some AI at it! There isn’t any more thinking involved (maybe also due to AI). As such, AI is getting shoved into situations where it just isn’t required and adds no actual value. Worse, it is getting put into situations where it isn’t appropriate either.

I think AI is seen by many to be a sort of miracle black box. People in general don’t quite know how it works, but have some weird faith that if we throw something at it, it’ll somehow come up with the solution to whatever.

ChatGPT, sum this up

Honestly, a lot of people smarter than me have already gone into this topic and why AI has issues in our world (read this article by Miriam Eric Suzanne, “The Problem with AI” by Chris Ferdinandi, and “Now I’m disappointed” by Baldur Bjarnason). It is only recently that I saw a more mainstream outlet post an article arriving at something people may already know: AI isn’t our savior.

Now, I feel like I should clarify something: I don’t think AI is inherently bad. I actually think that in a different scenario, an AI search on some of my clients’ sites might be very beneficial. AI is great at chewing through data and finding potential patterns, so it definitely has use cases in a broader sense as well. In this particular case though, where someone’s actual life might be on the line, I currently wouldn’t trust AI with that (particularly an LLM or “lower” type of AI).

My real hope moving forward is the same one I’ve had with other tools and technologies: knowing when not to use something is just as important as knowing when to use it. Just because something is “in” right now doesn’t mean it is right to use.


Note: No, I didn’t have ChatGPT write the ending for me; that was just the heading. Sorry, reader. You’re stuck with my personal brand of human, however flawed that may be.