We at Plugged In have done a lot of talking about the blossoming age of AI. I mean, hey, artificial intelligence is everywhere these days: It’s using our data as lesson fodder, curating our social media feeds and standing ready to serve as a helpful assistant.
We don’t even have to use an internet search engine anymore. AI has it covered. Even younger generations who defer to TikTok or Instagram as their news sources are generally letting those sites’ built-in AI buddies do all the searching for them.
So, it only makes sense that people are starting to wonder when their jobs will be handed over to good old AI and its never-sleeping, gung-ho work ethic. I mean, you may kinda like the idea of not having to work for a living, but your bank account probably won’t be so happy about it.
Should you be afraid?
Well, it turns out that when you rely too heavily on AI, it doesn’t always work out for the best. A recent article from Business Insider, for example, described a virtual simulation set up by Carnegie Mellon University to gauge how a staff full of AI agents would fare in real-world professional settings. The researchers asked established AI models from Google, OpenAI, Anthropic and Meta to slip into roles that real employees might hold.
In every case, things fell apart.
Often that collapse happened because of something simple, such as the AI bot not being able to navigate past an innocuous pop-up that blocked important information on the screen. In the end, even the most responsive AI finished less than one-quarter of the tasks it was given.
Oh, and this tendency for AI to have issues in the business world isn’t unique to the Carnegie Mellon study, either. Fortune reported on the stumbles of Anysphere, the company behind the AI-powered coding tool Cursor. Anysphere had been the talk of the town, valued in the neighborhood of $10 billion. And then Cursor’s AI support bot went “rogue” and began posting totally made-up explanations (“hallucinations”) for the product’s mysteriously odd interactions with its users.
That AI fumble—especially from an industry leader—gave Anysphere something of a public black eye. And it left business professionals buzzing about the potential consequences of artificial intelligence taking on too much responsibility in the business world.
The fact is, AI is really great at things such as data analysis and handling repetitive, routine tasks. It can figure out your schedule in a jiffy or quickly purchase a plane ticket with your preferences in mind. But when it comes to general intelligence, the kind of common-sense discernment humans exercise every day, it’s not always so great. Figuring out the nuances of a human interaction or taking a situation’s context into account can be a struggle for AI.
To illustrate that from a different perspective, think of a mountain hike. A human will walk those trails and see the spectrum of colors there, ranging from bright golden sunshine to dusky green woods. They’ll hear the birds and buzzing insects, smell the fragrant pines and the dusty trail. A human will feel the heat of the sun and the cool, refreshing breeze. In the same setting, an AI will see a one-dimensional snapshot. The detail inputs are there, but outside its comprehension.
AI can’t quite grasp those nuanced bits and pieces that we meager flesh-and-blood types excel at. So we don’t need to panic about it taking over just yet. AI is a powerful, increasingly integrated tool with transformative potential. But it’s not so fearfully and wonderfully made as we humans are.
And that’s a very good thing.