Scientists create online games to show the risks of AI emotion recognition


It is a technology frowned upon by ethicists: now, researchers hope to expose the reality of emotion recognition systems and reopen public debate about them.

Technology designed to identify human emotions using machine learning algorithms is a huge industry, with claims it could prove useful in a myriad of situations, from traffic safety to market research. But critics say the technology not only raises privacy concerns but is also inaccurate and racially biased.

A team of researchers has created a website – emojify.info – where the public can try out emotion recognition systems through their own computer cameras. One game challenges users to pull faces to trick the technology, while another explores how such systems can struggle to read facial expressions in context.

The researchers say their hope is to raise awareness of the technology and promote conversations about its use.

“It’s a form of facial recognition, but it goes further because rather than just identifying people, it claims to read our emotions, our inner feelings, from our faces,” said Dr Alexa Hagerty, project lead and researcher at the University of Cambridge Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk.

Facial recognition technology, often used to identify people, has come under intense scrutiny in recent years. Last year, the Equality and Human Rights Commission said its use for mass screening should be halted, warning it could increase police discrimination and harm freedom of expression.

But Hagerty said many people were unaware of how common emotion recognition systems were, noting that they were employed in situations ranging from hiring and know-your-customer work to airport security, and even in education to see whether students are engaged or doing their homework.

Such technology, she said, was used all over the world, from Europe to the United States and China. Taigusys, a company specialising in emotion recognition systems and headquartered in Shenzhen, claims to have used them in settings ranging from nursing homes to prisons, while according to reports earlier this year, the Indian city of Lucknow plans to use the technology to spot distress in women following harassment – a move that has drawn criticism, including from digital rights organisations.

While Hagerty said emotion recognition technology could have some potential benefits, these must be weighed against concerns about accuracy and racial bias, as well as whether the technology was even the right tool for a particular job.

“We need to have a much broader public conversation and deliberation about these technologies,” she said.

The new project allows users to try emotion recognition technology for themselves. The site notes that “no personal data is collected and all images are stored on your device”. In one game, users are asked to pull a series of faces to fake emotions and see if the system is fooled.

“The claim of the people developing this technology is that it reads emotion,” Hagerty said. But, she added, in reality, the system was reading facial movements and then combining them with the assumption that these movements are related to emotions – for example, a smile means someone is happy.
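To make that assumption concrete, the sketch below (in Python, with entirely hypothetical names; it does not represent any real vendor's code) illustrates the kind of fixed movement-to-emotion mapping Hagerty is describing, where an outward facial movement is equated directly with an inner feeling.

```python
# Purely illustrative sketch (hypothetical names, not any real product's code)
# of the simplistic mapping described in the article: a detected facial
# movement is translated into an emotion label via a fixed assumption,
# e.g. smile => happy.

FACIAL_MOVEMENT_TO_EMOTION = {
    "smile": "happy",
    "frown": "sad",
    "raised_brows": "surprised",
}

def label_emotion(detected_movement: str) -> str:
    """Map an outward facial movement straight to an 'inner feeling' label."""
    # This is the leap the article criticises: a smile is read as happiness,
    # even though a smile can easily be faked.
    return FACIAL_MOVEMENT_TO_EMOTION.get(detected_movement, "neutral")

print(label_emotion("smile"))  # prints "happy", regardless of how the person actually feels
```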

“There’s a lot of really solid science that says it’s too simple; it doesn’t quite work that way,” Hagerty said, adding that even ordinary human experience shows it is possible to fake a smile. “That’s what this game was about: to show you didn’t change your inner state of feeling six times in quick succession, you just changed the way you look [on your] face,” she said.

Some emotion recognition researchers say they are aware of these limitations. But Hagerty said the hope was that the new project, funded by Nesta (the National Endowment for Science, Technology and the Arts), would raise awareness of the technology and foster discussion around its use.

“I think we’re starting to realize that we’re not really ‘users’ of technology, we’re citizens in a world deeply shaped by technology, so we need to have the same kind of democratic and civic input on these technologies that we have on other important things in societies,” she said.

Vidushi Marda, programme manager at the human rights organisation Article 19, said it was crucial to press “pause” on the growing market for emotion recognition systems.

“The use of emotion recognition technologies is deeply concerning because not only are these systems based on discriminatory and discredited science, but their use is also fundamentally incompatible with human rights,” she said. “An important lesson from the trajectory of facial recognition systems around the world has been to question the validity and need for technologies early and often – and projects that emphasise the limitations and dangers of emotion recognition are an important step in that direction.”

