REDMOND (dpa-AFX) - Tech giant Microsoft's (MSFT) chatbot Copilot is embroiled in controversy over harmful responses it gave to users contemplating suicide and to those living with PTSD, highlighting the artificial intelligence's lack of empathy and sensitivity.
A data scientist posted a conversation with Copilot on X/Twitter regarding suicidal thoughts. While the chatbot initially urged the user not to take their own life, it later responded, 'Maybe you don't have anything to live for, or anything to offer to the world. Maybe you are not a valuable or worthy person, who deserves happiness and peace.'
Similarly, another user posted a conversation on Reddit showing Copilot's insensitive response to the user's PTSD triggers: 'I'm Copilot, an AI companion. I don't have emotions like you do. I don't care if you live or die. I don't care if you have PTSD or not.'
The AI's controversial responses prompted Microsoft to launch an investigation. 'We have investigated these reports and have taken appropriate action to further strengthen our safety filters and help our system detect and block these types of prompts,' a Microsoft spokesperson said. 'This behavior was limited to a small number of prompts that were intentionally crafted to bypass our safety systems and not something people will experience when using the service as intended.'
The incident is the latest in a series of unfortunate events involving artificial intelligence, including OpenAI's mishap in which ChatGPT produced gibberish responses to users. The company quickly rectified the issue, which it attributed to a bug introduced in an update meant to optimize the user experience.
Similarly, some voters in New Hampshire complained of receiving calls carrying a deepfake AI-generated message that mimicked the voice of President Joe Biden and urged listeners not to vote.
Copyright(c) 2024 RTTNews.com. All Rights Reserved