Computer science experts say concerns surrounding artificial intelligence are valid

The AI within the Cognitive and Autonomous Test vehicle from the University of Arizona’s Department of Electrical & Computer Engineering allows it to dodge obstacles on the road and drive itself, via sensors, in a controlled environment. (Photo by: Jessica Blackburn/Arizona Sonora News)

The words “dangerous artificial intelligence” might bring images of robot assassins to mind.

However, science fiction films such as James Cameron’s The Terminator and Alex Garland’s Ex Machina exaggerate what much of the general public perceives as dangerous AI.

Troy Adams, a member of the AZSecure Cybersecurity Fellowship Program at the University of Arizona’s Eller College, said science fiction has long been a good example of what could happen if AI gained some sort of sentience.

“Will that happen very soon?” he said. “I don’t know.”

When computer scientists talk about “dangerous AI,” they mean AI capable of carrying out any intellectual task a human can, but without human context.

Mihai Surdeanu, a UA associate professor of computer science, gave an example involving one of the most recent advancements in AI: the self-driving car.

“Say the AI in a car is programmed to carry out the task it has been given as quickly as possible,” he said. “If you ask it to get you to the airport as fast as possible, it might run a red light and get you into an accident trying to carry out that task.”

But the kind of AI most of us use on a daily basis, such as facial recognition on a phone, is different from the machine learning and pattern-recognition systems that computer scientists say might become dangerous.

Adams said the dangers of AI have been debated for decades, and the truth is that many computer scientists still haven’t come to an agreement on the validity of these fears.

Tesla cars now give drivers the ability to engage “Autopilot,” and according to the company’s website, all Tesla vehicles produced in its factory “have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.”

However, even Tesla CEO Elon Musk has warned about the dangers AI presents.

Musk has said he supports more regulation of AI, tweeting, “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”

Thousands wait for the second lecture of the University of Arizona College of Science’s six-lecture series, “The Minds of Machines,” given by Dr. Mihai Surdeanu in Centennial Hall on Jan. 29. (Photo by: Jessica Blackburn/Arizona Sonora News)

But in a January talk titled “The Minds of Machines,” Surdeanu said, “‘Scary AI’ won’t happen. Or it will, but not for the reason you think.”

He argued that dangerous AI wouldn’t look like what we see in science fiction; it would be a system responsible for a critical task but lacking human context.

“We might deploy complex machine learning systems for critical decision-making without understanding what they do, or repairing any fatal bugs in AI algorithms,” he said.

Computer scientists have pointed out other dangers surrounding AI that are much more likely to occur in the near future, such as human job loss, manipulation and even prejudice.

“The issue of job loss has been circulating in the AI community for years,” Adams said. “But when we talk about that concern, we also talk about all the jobs that will be created.”

Surdeanu agreed this was the most common concern about AI, but said people should take into account what has happened throughout history.

“Take ATMs for example,” he said. “When they were introduced, people thought bank tellers would be out of jobs. Now there are more jobs for bankers than there were before ATMs.”

According to the Bureau of Labor Statistics, this was true: automation historically allowed banks to open more branches, creating more demand for bank tellers.

But the BLS also found that the recent rise in online banking has slowed the expansion of bank branches, and that bank teller jobs are projected to decrease 8 percent by 2026.

“But AI will leave more exciting and engaging jobs for humans,” Surdeanu argued. “Since AI is good at one thing and bad at common sense.”

If you’ve ever been on Facebook and seen an ad for something you’d just been online shopping for, you’re not alone.

In a journal article examining the societal, economic, ethical and legal challenges tied to AI, Professor Dirk Helbing, an affiliate of the computer science department at the Swiss Federal Institute of Technology, wrote, “By now, there are about 3,000 to 5,000 personal records of more or less every individual in the industrialized world. These data make it possible to map the way each person thinks and feels.”

This is sometimes referred to as “personalized advertising,” or “nudging”: an AI technique that steers people toward decisions that economically benefit the sites with access to their personal data.

The article also points out that this data can reveal actions a person might take in the future, such as votes and financial trades.

According to Adams, this kind of AI manipulation is dangerous and powerful, and something most people don’t consider when they think about the risks of AI.

According to Surdeanu, AI’s pattern recognition can actually mimic the biases in society.

“Risk assessment systems for people who are arrested, for example,” he said. “The system is biased against people of color because of all the systematic racism in our world.”

According to a University of Pennsylvania study of criminal justice risk assessments, nonviolent black offenders are more likely than their white counterparts to be classified as violent by an algorithm.

“Such differences can support claims of racial injustice,” the study said.

Surdeanu said pattern-recognition AI can also reflect the sexism in our world, showing an example in which an AI identified a man as a woman because of the image’s association with terms such as “cooking” or “kitchen.”

The AI in a Skittle-sorting robot called “Monster Sorter,” created by students from the University of Arizona Department of Biomedical Engineering, has sensors that recognize the candies and sort them by color. (Photo by: Jessica Blackburn/Arizona Sonora News)

According to Adams, when it comes to recognizing the good and the bad in AI, we need to be asking the right questions.

“We can do great things with AI,” he said. “When you plug it in with robotics and 3D printing or advances in medicine, it’s just amazing. If we want to save things like hearts and kidneys, we can do that with AI. Imagine being able to print a viable seed.”

Surdeanu said that while certain concerns surrounding AI are valid, it’s also important to remember that it is meant to help us.

“Further good news is that AI is not designed to replace us, but to complement us,” he said. “It’s designed to help us.”

Jessica Blackburn is a reporter for Arizona Sonora News, a service from the School of Journalism with the University of Arizona. Contact her at


