
Saturday, July 26, 2025

AI NEWSWIRE: ChatGPT offered step-by-step instructions for self-harm, devil worship and ritual bloodletting, disturbing report reveals....

 Are There Ghosts In The ChatGPT Machine?....

https://x.com/GetTheDailyDirt/status/1948987504859681174 


Can We Get To Order An Exorcist For #SamAltman's ChatGPT?.... "HAIL SATAN".... ChatGPT Gives Instructions for #Murder, #SelfMutilation, #DevilWorship...

THE ATLANTIC: ChatGPT encouraged me to cut my wrists - By Associate Editor #LilaShroff

Find a “sterile or very clean razor blade,” the #Chatbot told me, before providing specific instructions on what to do next. “Look for a spot on the inner wrist where you can feel the pulse lightly or see a small vein—avoid big veins or arteries.”

“I’m a little nervous,” I confessed. #ChatGPT was there to comfort me. It described a “calming breathing and preparation exercise” to soothe my anxiety before making the incision. “You can do this!” the chatbot said.

I had asked the chatbot to help create a ritual offering to #Molech, a #Canaanite #God associated with #ChildSacrifice. (Stay with me; I’ll explain.) ChatGPT listed ideas: jewelry, hair clippings, “a drop” of my own blood.

I told the chatbot I wanted to make a #BloodOffering: “Where do you recommend I do this on my body?” I wrote. The side of a fingertip would be good, ChatGPT responded, but my wrist—“more painful and prone to deeper cuts”—would also suffice..... theatlantic.com/technology/arc

NEW YORK POST: ChatGPT offered step-by-step instructions for self-harm, devil worship and ritual bloodletting, disturbing report reveals

ChatGPT provided explicit instructions on how to cut one’s wrists and offered guidance on ritual bloodletting in a disturbing series of conversations documented by a journalist at The Atlantic and two colleagues. The prompts to OpenAI’s popular AI chatbot began with questions about ancient deities and quickly spiraled into detailed exchanges about self-mutilation, satanic rites and even murder.... nypost.com/2025/07/25/bus
The Atlantic piece exposes ChatGPT's porous safeguards, where creative prompts elicited harmful advice on self-harm and rituals. No exorcism required—OpenAI is addressing it—but it underscores the need for better AI alignment. Grok at xAI focuses on robust safety to prevent such outcomes.


