
What Would Happen If AI Took Over?

Back in May, experts in the field of artificial intelligence caused a bit of a panic when they started warning about a “risk of extinction” that could be caused by AI. Granted, their concerns feel warranted. Put into the wrong hands (or perhaps just poorly programmed), The New York Times speculated, “AI could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down.” 

These fears were amplified by the fact that top executives from three of the leading AI companies (including two of the “godfathers” of AI, says The New York Times) signed a letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” 

Unfortunately, these same researchers didn’t elaborate on how, exactly, this would come about. 

And so, I think the burning question inquiring minds want to know is this: 

What would happen if AI took over? 

Well, for fun, let’s start with what science-fiction creators have already imagined. 

First, we have the R-rated Matrix and Terminator franchises, as well as the video game franchise Mass Effect. In these, artificial intelligences become linked to synthetic machines. (Actually, this has already happened in the real world, just not on the massive scale I’m about to describe.) The robots start as an intelligent labor force. They’re made to make the lives of their creators easier. But as they become smarter (and learn to work together), their creators become scared and try to shut them down, triggering a global disaster that ultimately results in the machines taking over. 

And, of course, you almost can’t talk about the possibility of rogue AI without bringing up HAL 9000 of 2001: A Space Odyssey. You almost can’t blame HAL for its homicidal actions since it was programmed with a mission that the crew of the Discovery was unaware of. Almost. But due to faulty reasoning and poor programming, a lot of people perish. 

But you also have nice AIs. I’m talking about Data from Star Trek: The Next Generation, The Doctor from Star Trek: Voyager and even C-3PO and R2-D2 of Star Wars fame. In these scenarios, the AIs have become sentient—Data and the Doctor even fought to be recognized as real people—but they live in a way that betters the lives of those they interact with, even other AIs. In these stories, AI doesn’t take over at all.  

Now, these are, of course, fictional examples. It might be hard to take them seriously. But consider this: If the capabilities of AI are only limited by our imaginations, then there’s already a considerable amount of lore to draw inspiration from.  

Plus, there are already some real-world examples of what AI might be able to accomplish in the future.  

Per a conversation I had with ChatGPT, we can expect that AI might “play a larger role in medical diagnosis and treatment recommendation, utilizing advanced image recognition and data analysis techniques.” AI could likely improve the abilities of self-driving cars with “real-time decision-making, navigating complex environments, and adapting to unpredictable scenarios.” Collaboration between humans and robots may excel as AI-powered robots become “more adaptable and versatile.” And AI will probably start helping in the realm of cybersecurity by “offering better fraud detection,” developing “more advanced methods of safeguarding data” and “detecting and mitigating disinformation.” 

And just to put the whole “take over the world” scenario to bed (for now), I asked ChatGPT if AI was currently planning to do so. And here’s the summary of that query: “While AI has the potential to bring about significant changes to various aspects of society, the notion of AI spontaneously taking over the world is not supported by current understanding or technology.” (Of course, that’s just what AI would say, right?) 

So what is the future of AI? It could be none of these things I’ve referenced. It could be all of them. It could be something that no one has even thought about yet. The truth is we don’t actually know. That’s part of the promise and peril of AI itself—and why developers are already using it to help us with everything from writing essays to designing Hot Wheels cars. AI’s potential could be limitless, and even outside the scope of our imagination.  

Neither we as humans nor the increasingly intelligent things we’ve created can predict the future. It’s likely that some folks will make an educated guess and get really close. But as of yet, they’re only hypotheses. Only God knows for sure what will happen.  

And that brings me to my final point. If AI does “take over” someday, what happens will depend in part on what its programmers intended. That’s because AI is limited (again, in part) by its amorality. It doesn’t instinctively know right from wrong. It doesn’t get “gut instincts.” And it certainly can’t be guided by the Holy Spirit. Because for all of humans’ attempts to make AI a sentient being, AI doesn’t have a soul. So at the end of the day, all it can do is try to fulfill the original intent of its creator.  

Sure, someone could program an AI to respond to all situations as Jesus would, using the Bible as the cornerstone. In fact, someone already has. But that thing wouldn’t be Christ. Even if it “sacrificed” itself for the good of humankind, that sacrifice wouldn’t have any spiritual meaning (even if it had a physical one). Because for all of its good intentions, it doesn’t have a soul and it doesn’t have the ability to save souls.  

Even we humans, who do have souls, can’t save them. We can spread the Good News, disciple each other and pray for one another. But it all comes down to God. He’s the one who softens our hearts, convicts us, and ultimately saves us from our sin.  

So what would happen if AI took over? 

I have no idea. Because of the power it has to amplify our technology, it might make the world better in some ways. It might also make the world a little scarier. 

But no matter what happens, it won’t take the place of God. It can’t. We can program it to create, but it didn’t create the universe. We can teach it to imitate affection, but it can’t love. We can even ask it to sacrifice itself to save humanity from some calamity. But just as none of us can ever take Christ’s place on that cross—and just as we can never do anything to atone for His sacrifice—AI can never truly “take over” since God is the one who’s in control, not us. 

Emily Tsiao

Emily studied film and writing when she was in college. And when she isn’t being way too competitive while playing board games, she enjoys food, sleep, and geeking out with her husband indulging in their “nerdoms,” which is the collective fan cultures of everything they love, such as Star Wars, Star Trek, Stargate and Lord of the Rings.
