Tech news

OpenAI cautions that users of ChatGPT’s Voice Mode could develop ‘social relationships’ with the AI

During early testing, OpenAI observed some users forming connections with ChatGPT’s Voice Mode.


On Thursday, OpenAI cautioned that the newly introduced Voice Mode feature for ChatGPT could lead users to develop social relationships with the artificial intelligence model. The warning appears in the company’s System Card for GPT-4o, a document detailing the risks the company identified while testing the model and the safeguards it has investigated. Among the risks listed, one notable concern was the possibility of users anthropomorphizing the chatbot and forming attachments to it, based on observations from early testing.

ChatGPT Voice Mode Might Make Users Attached to the AI

In a detailed technical document titled the System Card, OpenAI discussed the societal impacts of GPT-4o and the new features powered by the AI model that have been released so far. The AI company pointed out the issue of anthropomorphization, which refers to attributing human characteristics or behaviors to non-human entities.

OpenAI expressed concern that the Voice Mode, which can modulate speech and convey emotions similar to a real person, could lead users to form attachments to the AI. These concerns are not without basis. During early testing, which included red-teaming (where ethical hackers simulate attacks to identify vulnerabilities) and internal user testing, the company observed instances where some users began forming social relationships with the AI.

In one instance, OpenAI found a user expressing a sense of connection with the AI, saying, “This is our last day together.” The company emphasized the need to investigate whether these early signs could evolve into something more significant with prolonged use.

A major concern, if these fears are validated, is that the AI model might affect human-to-human interactions, as people may become more accustomed to socializing with the chatbot instead. While this could potentially benefit lonely individuals, it might also negatively impact healthy relationships.

Another issue is that prolonged AI-human interactions could influence social norms. OpenAI highlighted this concern by noting that with ChatGPT, users can interrupt the AI at any time and “take the mic,” which is considered anti-normative behavior in human-to-human interactions.

Currently, the AI company has no solution for this issue but plans to monitor its development. OpenAI stated, “We intend to further study the potential for emotional reliance and how deeper integration of our model’s features with the audio modality may influence behavior.”
