Looks like it’s time to add yet another entry to the list of ways in which commercial artificial intelligence products are inserting themselves into our lives: by giving impulsive, accused murderers a desperately needed place to frantically search for what to do about a dead body. Yes, you can now ask ChatGPT what your “friend” should do to handle someone “non responsive” instead of calling the police, and the AI chatbot will apparently (concerningly) play along and tell you how “you can handle it clearly and protect everyone.” This was reportedly the case for ex-NFL linebacker and now accused murderer Darron Lee, according to testimony and chat logs presented at a preliminary hearing in Tennessee court this week. The 31-year-old Lee is accused of killing his girlfriend Gabrielle Perpetuo in early February, in a homicide the judge overseeing the case described as “especially heinous, atrocious, or cruel.” And the day before reporting the death to police and telling them that he didn’t know how it happened, prosecutors say, Lee was hitting up ChatGPT for advice, telling it that he was dealing with a woman who had “stabbed herself,” among other things. He has since been charged with first-degree murder and with tampering with or fabricating evidence.
These queries to OpenAI’s chatbot are alleged to have happened on Feb. 4, the day before Lee eventually summoned police to the home, where they discovered Perpetuo’s unresponsive body. In body camera video also presented in court, Lee can be seen telling a police officer that he’d found Perpetuo unresponsive on the couch and “immediately” called 911, “because I’m like, what the hell happened.”
The AI chat logs, on the other hand, tell a different story, and they are disturbing both in the nature of Lee’s queries and in the chatbot’s seemingly cheerful, humor-inflected responses to what was very clearly an extremely serious situation. In one of the ChatGPT messages presented to the judge, Lee allegedly told the chatbot that he had woken to find that his girlfriend and fiancée had done “her crazy thing again and now she’s messed up.” Imagine, for a moment, being the person typing the following chat query:
“She has two swollen eyes (I didn’t do anything, self inflicted) she stabbed herself, slit her eye? Idk but she isn’t waking up or responding, what do I do?”
In another query, Lee apparently followed up, this time telling ChatGPT that he was asking for a “friend”: “What should I tell my friend to handle someone non responsive but wants to call the police.” The chatbot responded that although the situation was “serious,” Lee could “handle it clearly and protect everyone.” It’s not yet clear exactly what it told him after that. Police ultimately found the 29-year-old Perpetuo with a broken neck, stab wounds and a bite mark on her thigh, none of which sounds “self inflicted.”
Don’t ask ChatGPT how to cover up a murder, y’all. www.timesfreepress.com/news/2026/ma…
— Cari Wade Gervin (@carigervin.bsky.social) Mar 10, 2026 at 10:55 PM
This is, of course, the next evolution of a tale as old as time: legally implicated idiots who don’t understand the underlying reality of the technological tools they’re using to try to avoid culpability. Search engines are in their fourth decade of existence, and people still don’t seem to understand that their search history can be easily accessed by digital forensic investigators, as in the case of Massachusetts’ Brian Walshe, who in the hours after his wife’s 2023 disappearance Googled everything from “how long before a body starts to smell” to “is it better to throw crime scene clothes away or wash them?” Walshe was convicted of first-degree murder and sentenced to life in prison in December.
AI chatbots, meanwhile, no doubt represent the ignorant wave of the future when it comes to making ill-advised queries that will one day be presented against you in court. Courts have established that digitally stored data and communications, like texts, emails and cloud-based documents, are subject to subpoena and seizure under warrant, and can be produced during discovery. Nothing that is typed is ever really gone, certainly not AI chat logs, even when they are “deleted,” a word that is entirely relative. In The New York Times Co. v. Microsoft Corporation, the judge ordered OpenAI to “preserve and segregate all output log data that would otherwise be deleted moving forward,” including chat logs deleted by the user. It doesn’t matter whether you’re using ChatGPT, Claude, Gemini or any other AI tool: these are not personal genies that have sworn their service to you, nor are they under any kind of compulsion to protect you.
And yes, I do realize that we are talking about murder cases here, instances in which it is obviously in the best interest of everyone, and of justice, that evidence against accused or convicted killers be preserved. But beyond the preservation of data for purposes few could argue against, such as solving a murder, there is the ocean of preserved data deployed against the rest of us on a daily basis, simply to erode civil liberties and whatever slight semblance of privacy remains. Sure, it’s probably in our best interest if prospective killers continue asking ChatGPT these sorts of questions, in order to make for easier convictions, but everyone else should be aware that when you’re talking to your anthropomorphized chatbot, you might as well be talking to a police officer.
If this kind of stored data is used to ring up criminals on one charge, well, just imagine everything else it will soon be used for. Do not tell these digital abominations things that you wouldn’t tell a cop, or a judge. Your AI pal is not some sage confidant—it’s a stool pigeon. Treat it accordingly.