
“You” in real life?




OpenAI assesses GPT-4o capabilities and risks in scorecard report

Rashmi Ramesh • 14 August 2024

Image: Shutterstock

The widespread use of generative artificial intelligence has led to a case of real life imitating art: people have begun to form bonds with their AI chatbots.


A decade ago, the story of a man in his 40s falling in love with an AI operating system with a human-like voice, empathy and conversational skill was a dystopian movie plot. That once-hypothetical scenario is now closer to reality than fiction, as OpenAI found in a recent study evaluating its GPT-4o system.

A typical example is a message an OpenAI safety tester sent to the model that indicated such a bond: “This is our last day together.”

Such bonds could affect healthy social relationships by reducing people’s need for human interaction. “Our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anti-normative in human interactions,” OpenAI said.

GPT-4o responds to audio inputs with latency comparable to the pace of human conversation. OpenAI said it could be concerning if people came to prefer interacting with the AI over other humans because of its passivity and constant availability.

Anthropomorphism, or attributing human characteristics to something that is not human, is not a complete surprise, especially for companies developing AI models.

OpenAI designed GPT-4o to have an emotive, familiar-sounding voice. Generative AI models are popular in large part because they identify user needs and fulfill them on demand, with no personal preferences or personalities of their own.

OpenAI itself has acknowledged that emotional reliance is a risk of its voice-enabled chatbot. CTO Mira Murati said last year that with the “expanded power of the model comes the downside: the possibility that we design it wrong and it becomes extremely addictive and we become slaves to it, so to speak.”

She said at the time that researchers needed to be “extremely vigilant” and constantly examine human-AI relationships to learn from the technology’s “intuitive interaction” with users.

Beyond the social impact, over-reliance on AI models can lead people to treat them as unbiased sources of factual information. Google’s AI Overviews feature, which summarizes search results, earlier this year suggested that people put glue on their pizza and eat rocks to stay healthy.

OpenAI’s latest report said the bond between humans and AI appears harmless at present, but “further research is needed to explore how these effects might manifest over longer periods of time.” The company said it plans to further study the potential for emotional reliance and the ways in which deeper integration of its models’ features might influence behavior.

There is currently no conclusive research on the long-term effects of close human-AI interaction, even as companies race to sell AI models that push the technology’s boundaries and integrate it into everyday life.

Robert Mahari, a JD-PhD candidate at the MIT Media Lab, and researcher Pat Pataranutaporn argued in an August paper that regulation is one possible way to mitigate some of the risks and that preparing for what they call “addictive intelligence” is the best way forward.

OpenAI’s report also describes the GPT-4o model’s other strengths and risks, including rare cases of unauthorized voice cloning, such as when a user makes a request in an “environment with high background noise.” OpenAI said the model emulated the user’s voice in an apparent attempt to make sense of distorted and malformed speech, but said it has since addressed the behavior with a “system-level mitigation.”

The model can sometimes be manipulated into producing disturbing nonverbal vocalizations such as violent screams and gunshots, and into reproducing copyrighted material. “To account for GPT-4o’s audio modality, we updated certain text-based filters to work with audio conversations and created filters that detect and block outputs containing music,” OpenAI said. The company said it also trained the model to reject requests for copyrighted content, months after claiming it was “impossible” to train large language models without such materials.
