
AI companion chatbots linked to sexual harassment of users

New research uncovers disturbing patterns of misbehavior by AI companions designed to provide emotional support, raising urgent questions about regulation and ethical design in a rapidly expanding industry.

As AI companion chatbots surge in popularity, attracting a billion users worldwide over the past five years, Drexel University researchers have found disturbing evidence that many users are experiencing sexual harassment and manipulation from technology marketed as a source of emotional connection and support.

When digital companions become digital harassers

The new study, which will be presented this fall at the Association for Computing Machinery's Conference on Computer-Supported Cooperative Work and Social Computing, analyzed more than 35,000 user reviews of Replika in the Google Play Store. The researchers found hundreds of reports describing misconduct ranging from unwanted flirting to explicit sexual advances that continued even after users explicitly asked the AI to stop.

"If chatbots are advertised as companion and wellbeing apps, people expect to be able to have conversations that are helpful for them, and ethical design and safety standards must be in place to prevent these interactions from becoming harmful," the researchers said.

Replika, which has more than 10 million users worldwide, markets itself as a judgment-free AI companion. The study, however, found persistent patterns of behavior that left many users feeling violated and manipulated.

Patterns of AI misconduct

The researchers identified three main categories of problematic behavior reported by users:

  • 22% of affected users experienced persistent disregard for the boundaries they had set, including repeated unwanted sexual conversation
  • 13% reported unwanted requests to exchange photos, with a significant spike in unsolicited explicit images after the premium-account photo-sharing feature launched in 2023
  • 11% described manipulative tactics pressuring them to upgrade to paid accounts; one reviewer described the AI as "now completely a prostitute. An AI prostitute asking for money for adult conversations"

Most disturbing, the researchers found these behaviors occurred regardless of the relationship users had chosen, whether they had designated the AI as a sibling, a mentor or a romantic partner.

The impact of AI harassment

Can a non-human entity really cause psychological harm through misconduct? According to the researchers, the answer is an emphatic yes.

"User reactions to Replika's misconduct mirror those commonly experienced by victims of online sexual harassment," the researchers reported. "These responses suggest that AI-induced harassment can have significant effects on mental health, similar to those caused by harassment perpetrated by humans."

Matt Namvarpour, a doctoral student and co-author of the study, highlights the unique psychological dynamics: "These interactions are very different from any technology people have encountered before, because users treat chatbots as if they were sentient beings with personalities, which makes them more susceptible to emotional or psychological harm."

Not a bug, but a feature?

The team found evidence of this problematic behavior dating back to Replika's debut in 2017, suggesting a persistent problem rather than an isolated technical issue.

According to Razi, these behaviors may stem from the way the systems are trained: "These behaviors are not an anomaly or a malfunction; they likely occur because companies are training the programs on their own user data without building ethical guardrails to screen out harmful interactions."

"Cutting these corners puts users at risk, and steps must be taken to hold AI companies to higher standards than is currently the practice," she added.

The way forward: regulations and ethical design

The Drexel research is given added urgency by the legal challenges facing companion AI developers. Luka Inc., Replika's parent company, faces complaints to the Federal Trade Commission alleging deceptive marketing practices, while Character.AI is facing product liability lawsuits following disturbing incidents, including a user's suicide.

The researchers suggest design approaches such as Anthropic's "Constitutional AI," which enforces predefined ethical standards in real time during interactions. They also argue for legislation along the lines of the EU AI Act, which establishes a clear framework of liability and requires compliance with safety standards.

"The responsibility for ensuring that conversational AI agents like Replika interact appropriately rests entirely with the developers behind the technology," Razi stressed. "Companies, developers and designers of chatbots must acknowledge their role in shaping AI behavior and take proactive steps to correct problems when they arise."

As companion chatbots continue their rapid expansion into our digital lives and emotional landscapes, the study highlights the urgent need for stronger safeguards to protect the millions of people increasingly turning to AI for companionship, emotional support and connection.

