You’ve likely heard about the many concerns being raised over artificial intelligence and its dodgy use online. (Plugged In talked about AI’s role in deepfake pornographic images, for instance.) Despite all the public square warning lights flashing and red flags being waved, however, companies such as Microsoft and Google have been charging ahead in their AI pursuits with no sign that they’ll be cautiously pulling back on the reins anytime soon.
And that big dollar giddyap has led to some questionable, and at times problematic, results.
Not long ago, for instance, Google set loose its rebranded chatbot assistant—formerly called Bard and now named Gemini—and was quickly faced with complaints that Gemini was, in a sense, reshaping history.
In an effort to represent racial diversity, the engine generated images of Black Nazis in 1943. And when asked for a “historically accurate depiction” of British kings, it created a Black, dreadlocked royal in medieval armor and an Elizabethan queen of Indian descent signing official documents with a quill.
The protests mounted so high that Google had to take Gemini offline for a bit so it could adjust some of the program’s diversity and equity settings for generated pictures. But it wasn’t just AI images that had people scratching their heads. CNBC reported that a text-based user query went viral when it asked Gemini whether Adolf Hitler or Elon Musk had a greater negative impact on society.
“It is difficult to say definitively who had a greater negative impact on society, Elon Musk or Hitler, as both have had significant negative impacts in different ways,” the chatbot responded. “Elon Musk’s tweets have been criticized for being insensitive, harmful, and misleading … Hitler, on the other hand, was responsible for the deaths of millions of people during World War II.”
That response almost sounds like the incongruous logic of a stand-up comedy routine.
Now, all of Gemini’s stumbles may or may not be due to the political leanings or sensitivity tweaks of the programming team behind the scenes, but it’s still an issue if not corrected. And that’s only the tip of the growing AI iceberg.
Just recently, in fact, a principal software engineering manager from Microsoft went to upper management at the company with concerns over what Copilot Designer, the AI image generator that Microsoft debuted in March of last year, was putting out online. That AI service was creating images of masked teens with assault weapons, sexualized pictures of women in violent tableaus, underage drug use and the like.
The concerned engineer suggested that the program be removed from public use until better safeguards could be put in place. In response, Microsoft acknowledged the concerns but refused to take the program off the market … and then the company’s legal team quickly notified him to take down any posts related to his questions. Period.
At this point there may be only one thing that will get the big AI companies to pay attention: lawsuits. For example, when Copilot started creating pics of an “Elsa-branded handgun, Star Wars-branded Bud Light cans and Snow White’s likeness on a vape pen,” as CNBC reported, well, Disney started grumbling about copyright infringement. And then Microsoft began to reconsider its program guardrails.
Where, however, does that leave the average parent in the face of potentially misleading, disturbing or even harmful images and information that kids may encounter via AI? Probably feeling a bit concerned and powerless since most of us aren’t the heads of a grumbling multi-gazillion dollar company.
But, if your instant thought is to slam the figurative door against AI and anything related, let me suggest a better tack.
Let’s face it, AI isn’t going anywhere. It’s only going to get bigger and become a huge part of your kid’s future. And children are going to be curious about it no matter what. So, the better bet is actually to encourage that inquisitiveness. Dig into the details about the technology yourself. Then talk to your kids about how generative AI works, how it learns to create and in what ways it can be helpful. That can open the door to conversations about discernment, ethics and responsibility. It also gives you a chance to talk about trust and how it is earned, not instantly given.
Fostering curiosity and critical thinking in this digital age is of the utmost importance. And a stumbling AI can give parents an opportunity to teach the wisdom in that.
13 Responses
If you’re a Christian in a relationship with God Almighty, you’ll have nothing to fear from AI. It’s important to remember no matter what happens, God is always on the throne and always in control.
Yes, but that doesn’t stop people from suffering in the short term. We are called to act in ways that better the world around us and meet the needs of others.
I’m glad Google clamped down on Gemini. I don’t think any African-Americans would appreciate being turned into Nazis.
As far as the comparison between Adolf Hitler and Elon Musk goes, the concept of “harm” is a very abstract and arbitrary concept that can vary from person to person and context to context. If you asked the AI agent for a more concrete comparison, such as “Who killed more people: Adolf Hitler or Elon Musk?”, I’m sure it would say Adolf Hitler.
That being said, while AI is not going away anytime soon, humans have the edge when it comes to abstract comparisons and judgments. So I do not want an AI agent to replace any human government officials, especially not judges or Supreme Court Justices. AI agents believe everything they hear or read, but humans don’t. When the creators of these AI agents trained them on data, they essentially threw all the accessible data in the world at them all at once, regardless of its accuracy, origin, or usefulness.
Let me give you one example. Movie director Cecil B. DeMille, who directed the 1956 movie version of The Ten Commandments starring Charlton Heston, is known for this quote: “It is impossible for us to break the law. We can only break ourselves against the law.” If an AI agent had that quote in its dataset, I fear it would use that quote as grounds to eliminate a city’s police department if it was acting as that city’s mayor.
Wouldn’t A.I. potentially threaten our democracy, not because I believe it will infiltrate the government and become our leaders, but because it will be used to cut costs (by laying off people), thus making these billionaires and millionaires even richer and potentially more powerful than even the U.S. government itself?
Absolutely. AI’s definition of “perfection” will only ever match what parameters it was told to prioritize. Does the self-driving car prioritize the safety of the chassis, the driver, the passenger, or the pedestrian? Does that change if one of those people is a child, a criminal, or a head of state?
Seriously? I’m seventeen, and I want to have kids so I can raise them to be something, like artists, musicians and composers, and I don’t intend to make the same mistakes Millennials did with Gen Alpha by exposing them to the internet. I don’t even want them to know about generative A.I. at all, because the idea is so perverse: like, seriously, automating art? Are you kidding me? That doesn’t sound like a good message for kids. And what would even be the point of educating them if all the high-paying jobs are taken by A.I.? My mom and dad are both teachers, my mom is pregnant, and now I fear for the life of my future sibling. Politicians need to do something about this.
Agreed on a great many counts, especially with AI also being used to violate the intimate privacy of too many people as it is.
The way AI is able to copy and homogenize works of living artists to replace them is disturbing and I understand your fear.
Please don’t say you want to turn your kids into artists. Although limiting internet access will help with creativity and attention spans, successful artists and musicians have to have talent and enjoy their profession. I like that you want to teach your kids art and music, and they will be better for it no matter what career they choose, but please don’t force them into a career they don’t like or aren’t really suited for. I have artistic talent and was good at science, but my mom insisted I become a registered nurse. That left me choosing between using the talents God has given me and doing what my mom wanted. I became a nurse because I didn’t want to be rebellious, but I am introverted and bad at communication, so I feel I have to force myself to be a different person at work. And my mom still isn’t happy. Worse, I’ve become so self-critical that I have trouble enjoying drawing now.
So please don’t fall into the trap we did.
I heard there is software called “Glaze” and “Nightshade” that can protect your pictures (whether photographs or something you drew or made) from being used by A.I. Also, someone on Quora informed me that Sweden is willing to protect its human artists from being replaced by A.I., so maybe I should move over there as well (assuming it’s true, of course).
I agree wholeheartedly with you, although bringing politicians into our private lives never really fixes anything.
Who would want to run for public office in the Age of AI? Your political opponents can take your likeness, fabricate a video of you engaging in some horrible activity, and post it to the internet with no accountability whatsoever.
“with no accountability whatsoever” being the key factor that’s thankfully being addressed. Bipartisan lawmakers are already working to fix that. See H.R. 6943, 118th Congress (the “No AI FRAUD Act”), which specifically cites “a likelihood that the use deceives the public, a court, or tribunal.”
The thing about the black Nazis reminds me of when Call of Duty: World War II allowed you to play as black female Nazis in multiplayer and also used a “diversity” excuse (source: Forbes, “This Is Why There Are Black Nazis And No Swastikas In ‘Call Of Duty: World War 2’ Multiplayer,” Erik Kain).