Could ChatGPT or some other AI agent ever be called to the stand to testify in a court of law as a "witness"?
from cheese_greater@lemmy.world to nostupidquestions@lemmy.ca on 02 Apr 03:13
https://lemmy.world/post/45055205
I'm fascinated by the idea of animals or other unusual witnesses being called to testify lol
I feel like the prosecution or whoever subpoenas it would just get the transcript, or the company would release their logs or something, but I wonder if that could ever end up happening, particularly whether it could be cross-examined
#nostupidquestions
threaded - newest
In the form of some sort of record of user inputs, like a chat log between two humans, sure. Other than that, I sincerely hope not. Remember, LLMs and everything else we term “AI” are just predicting the statistically most probable answer to a question based on their training data. They have no concept of truth nor any way to evaluate it.
I know all that, but man, I dunno. I'm naturally critical, but even when they lie or don't know and try to bullshit, they seem to get somewhere closer to resembling reality/truth as you grill them and work through the bullshit Socratically or whatever. It's fascinating.
Like, objectively I think you're right, but I've had too many experiences with them where I pinned them down on a mistake/error, or even a straight-up “hallucination,” without giving them a direction to weasel into, and they seem to be pretty good at being led to sweep all that away and get to something actually sort of useful
In other words, any lawyer “cross-examining” an AI could “grill it” and “work through the bullshit Socratically” or whatever and get the “witness” to say whatever it is they want it to say.
Aha, are you patronizing me?? I can't tell
An exercise in futility.
The more you “grill it,” or more specifically pigeonhole it into the narrative you want to see, the more likely it is to spit that narrative back at you sooner or later.
The same thing happens when you push these text generators to convince you that the earth is flat, or that taking a hit of heroin to take the edge off is something you should be doing.
The only thing it's useful for is logs, like originally stated; any sort of interaction will result in statistically plausible bullshit, not actual evidence.
They don’t lie; that’s not how LLMs work. LLMs don’t think. They don’t learn. They respond based on your input, and they aren’t capable of doing anything more than that. They aren’t alive.
I’d argue not even a hypothetical AGI (artificial general intelligence), with capabilities similar to a reasonable and able-minded human being, would be able to testify in court.
An AGI would be able to lie, like a human being; but unlike a human being, it faces no consequences for its actions. For example, it can’t be jailed for perjury.
Also, for people downvoting the OP: less knee-jerk reaction, more “no stupid questions”. Pleeeeeeeeeeeease.
I wish I could upvote this more than once for the reminder that the entire purpose here is that there are no stupid questions.
Any AI testimony would be printed out and submitted to the court as evidence similar to data logs.
Or as information (which is a lower standard than evidence)
No. Since they can’t think for themselves and can be controlled, they would not make for a good witness outside of standard evidence-collection practices. Maybe in the future, if an AI can prove its sentience and nations recognise that.