First time I’ve seen it. Maybe your Lemmy account is haunted?
First time I’ve seen it. Maybe your Lemmy account is haunted?
The AI used doesn’t necessarily have to be an LLM. A simple model for determining the “safety” of a comment wouldn’t be vulnerable to prompt injection.
My instance admin is also extremely oppressive.
I think you linked the wrong video, this should be the right one