Artificial intelligence presentation prompts debates about empathy

Seniors Josie Rozzelle, Emily Kanellos, and Hannah Kelliher participate in an activity in which they were asked to rank cards in order of whose life is most valued: a pregnant woman, a child, a dog, a cat, or an athlete. The activity was meant to demonstrate the similarities and differences between human and AI morals.

Audrey Pinard, Reporter

WEB EXCLUSIVE John Villasenor, senior fellow at the Center for Technology Innovation at Brookings, spoke to students and faculty today about artificial intelligence, concluding with a debate about AI’s ability to experience human emotion.

“I think AI may not feel empathy in the same way that humans do, but it might weigh some of the same factors that we consider when we act,” Villasenor said. “Perhaps it can be taught to act in a manner reflecting human empathy — that would be incredible.”

Artificial empathy refers to the idea of creating a machine capable of recognizing and understanding emotion, and eventually of experiencing emotion of its own. Students and faculty were asked during the presentation to reflect on their own opinions about AI.

“I don’t think AI can learn empathy, and I’m not comfortable knowing it has the possibility to learn something that should be reserved for living things,” senior Isabella Bermejo said. “I think there needs to be a separation between what is biology and nature, and what is mechanic and machine.”

Villasenor also discussed how AI can interpret certain behavioral cues, such as crying or laughter, but is unable to understand the meaning behind these emotions or the messages they convey.

“I think that AI will be reflective of what it learns and knows now,” librarian Alyson Barrett said, “but I do not think AI will be able to develop empathy or any other aspects that make humans human.”

Villasenor said that while AI will most likely lead to advances in medicine, agriculture, and weather prediction, the major difference between humans and machines is that machines will always be dependent on another source for information.

“Like with any new technology, something that we have to be on the lookout for is its potential downsides as well,” Villasenor said. “But by teaching these machines to love and express empathy we might learn more about humans and compassion along the way.”