Artificial intelligence can now emulate human behaviors – soon it will be dangerously good

By Ana Santos Rutschman, The Conversation

When artificial intelligence systems start getting creative, they can create great things – and scary ones. Take, for instance, an AI program that lets web users compose music along with a virtual Johann Sebastian Bach by entering notes that the system matches with Bach-like harmonies.

Run by Google, the app drew great praise for being groundbreaking and fun to play with. It also attracted criticism, and raised concerns about AI’s dangers.

My study of how emerging technologies affect people’s lives has taught me that the problems go beyond the admittedly large concern about whether algorithms can really create music or art in general. Some complaints seemed small, but really weren’t, like observations that Google’s AI was breaking basic rules of music composition.

In fact, efforts to have computers mimic the behavior of actual people can be confusing and potentially harmful.

Impersonation technologies

Google’s program analyzed the notes in 306 of Bach’s musical works, finding relationships between the melody and the notes that provided the harmony. Because Bach followed strict rules of composition, the program was effectively learning those rules, so it could apply them when users provided their own notes.
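The actual model behind Google's app is far more sophisticated, but the core idea – learning note relationships from examples and reapplying them to new input – can be sketched with a toy model. Everything below, including the tiny "corpus" of melody-harmony pairs, is a hypothetical illustration, not Google's method:

```python
from collections import Counter, defaultdict

# Toy illustration (not Google's actual model): count which harmony
# note most often accompanies each melody note in a tiny corpus of
# (melody, harmony) pairs, then reuse those learned pairings.
corpus = [
    ("C", "E"), ("C", "G"), ("C", "E"),
    ("D", "F"), ("D", "B"), ("D", "F"),
    ("E", "G"), ("E", "C"), ("E", "G"),
]

counts = defaultdict(Counter)
for melody, harmony in corpus:
    counts[melody][harmony] += 1

def harmonize(melody_line):
    """Pick the most frequently observed harmony for each melody note."""
    return [counts[note].most_common(1)[0][0] for note in melody_line]

print(harmonize(["C", "D", "E"]))  # ['E', 'F', 'G']
```

A real system learns from far richer context than single-note pairings, but the principle is the same: regularities in the training examples become the rules the program applies.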

The Bach app itself is new, but the underlying technology is not. Algorithms trained to recognize patterns and make probabilistic decisions have existed for a long time. Some of these algorithms are so complex that people don’t always understand how they make decisions or produce a particular outcome.

AI systems are not perfect – many of them rely on data that aren’t representative of the whole population, or that are influenced by human biases. It’s not entirely clear who might be legally responsible when an AI system makes an error or causes a problem.


Now, though, artificial intelligence technologies are getting advanced enough to be able to approximate individuals’ writing or speaking style, and even facial expressions. This isn’t always bad: A fairly simple AI gave Stephen Hawking the ability to communicate more efficiently with others by predicting the words he would use the most.
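Word prediction of this kind can be surprisingly simple. As a hedged sketch – not the actual system Hawking used – a predictor can just count which word most often followed each word in previously typed text:

```python
from collections import Counter, defaultdict

# Toy sketch of frequency-based word prediction (hypothetical data,
# not the real assistive system): count word-to-next-word pairs.
history = "the black hole the black board the black hole".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    bigrams[prev][nxt] += 1

def predict(prev_word):
    """Suggest the word that most often followed prev_word so far."""
    return bigrams[prev_word].most_common(1)[0][0]

print(predict("black"))  # hole
```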

But these same mimicry technologies can be misused. In early 2019, a fake nude photo of U.S. Rep. Alexandria Ocasio-Cortez circulated online. Fabricated videos, often called “deepfakes,” are expected to be increasingly used in election campaigns.

Members of Congress have started to look into this issue ahead of the 2020 election. The U.S. Defense Department is teaching the public how to spot doctored videos and audio. News organizations like Reuters are beginning to train journalists to spot deepfakes.

Published at www.alternet.org