
PHILADELPHIA (KYW Newsradio) — Artificial intelligence is becoming more and more prevalent every day. From self-driving cars to machines that solve logic puzzles, AI can be a very good thing for society.
But there is always that nagging worry of disaster scenarios we see depicted in books and movies like "2001: A Space Odyssey" and "The Terminator," where computers and machines overtake humanity. Is something like that really possible?

Dr. Edward Kim, an artificial intelligence researcher and associate professor in the Department of Computer Science at Drexel University, defines AI as “any system that's able to sense, react and adapt to its environment.”
He said so much of AI comes down to how human beings design the systems, their purposes and their effects. That process has been part of AI-related machines for decades.
“Think about just the standard thermostat, and I'm not talking about these smart thermostats that you see today, but just thermostats from the 1980s, ones that have the little dial on them. I actually think of that as a basic form of AI,” said Kim.
“It has a sense of the environment. So it's sensing the temperature, and then it has a set of rules. If the temperature goes below 70 degrees, then it will kick on the heat.”
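In code, the 1980s-style thermostat Kim describes boils down to a sense-and-react loop built around a single fixed rule. The sketch below is a minimal illustration of that idea, not a real device's software: the read_temperature_f and set_heater functions are hypothetical placeholders that simulate a sensor and a heater switch.

```python
# Minimal sketch of a thermostat as "sense the environment, apply a fixed rule."
# The sensor and actuator functions are hypothetical stand-ins, not a real API.
import random
import time

SETPOINT_F = 70.0  # the dial setting: heat kicks on below 70 degrees


def read_temperature_f() -> float:
    """Hypothetical sensor read; here, just a simulated room temperature."""
    return random.uniform(65.0, 75.0)


def set_heater(on: bool) -> None:
    """Hypothetical actuator; a real thermostat would switch a relay."""
    print(f"Heater {'ON' if on else 'OFF'}")


def run_thermostat(cycles: int = 5) -> None:
    for _ in range(cycles):
        temp = read_temperature_f()        # sense the environment
        set_heater(temp < SETPOINT_F)      # apply the preprogrammed rule
        time.sleep(0.1)                    # wait before sensing again


if __name__ == "__main__":
    run_thermostat()
```

There is no learning or goal-setting here, only a rule a human wrote in advance, which is the distinction Kim draws next.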
Kim believes that much of AI still comes down to the ability to sense an environment and act on a preprogrammed set of rules based on what it detects. But can AI move beyond that and make further adjustments through its own goal-setting?
“Sort of create its own objective based upon the environment? I think those types of AIs are a little bit different than the pattern recognition systems that I've been working with,” said Kim.
“But I do think having the AI able to change its actions, its plans based upon the environment, I think that's where things get a little bit murky.”
He believes AI isn’t that far along. He cites how self-driving cars may fail to react to particular situations that come up on the road, and how Google Maps’ attempts to re-route cars around a traffic jam can sometimes create a new jam of their own.
As for the ultimate AI doomsday scenario, Kim doesn’t expect it to come from AI developing human-level consciousness. Instead, he believes doomsday-like situations are more likely to arise from the dangers humans build into the systems themselves.
“Say you code up a system with nuclear warheads, a system that says ‘If I sense that someone has launched a nuke at me, I immediately put up my defenses and then launch a nuke back,’” said Kim.
“Something like that, if it’s not thought out correctly, it’s a very simple one-line reaction statement that a human might put in that destroys a whole world.”
As Kim put it, “bad things can happen,” and it all depends on how we create and refine AI, and on what happens when it doesn’t work right.
“You can’t have bugs shake out,” Kim warned, “in a nuclear missile defense system.”
