- cross-posted to:
- hackernews@lemmy.bestiver.se
Now, with the help of AI, it’s even easier to waste open source developers’ time by filing fake security vulnerability reports.
What’s the state of LLM detection algorithms? Is there one with a decent success rate and an acceptable false-positive rate? Is there even a FOSS solution for detecting ChatGPT output? It would make for a great tool to have, I’m getting tired of this.
Unfortunately, detecting AI-generated text and training AI text generators are essentially the same problem. Any reliable detection method can therefore be used adversarially to improve the generator until it evades detection.
You can, at least, catch low-effort attempts. The default output has distinctive patterns, and those can be detected. The problem is twofold. First, some people naturally write in the same style (the LLM is mimicking an amalgam of writers, and they happen to write close to it). Second, it’s fairly trivial to ask the LLM to change its writing style.
Whatever method you use, you have to accept high rates of both false positives and false negatives.
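To illustrate what "detecting the default output" amounts to in practice, here is a minimal sketch of a phrase-based heuristic. The marker list and threshold are made up for illustration; a real detector would use statistical features, and (as noted above) anything this simple is trivially evaded by asking the model to change style:

```python
# Toy heuristic: flag text containing several stock LLM phrasings.
# The phrase list and threshold are illustrative, not a vetted detector.
MARKERS = [
    "as an ai language model",
    "it's important to note",
    "in conclusion",
    "delve into",
    "i hope this helps",
]

def looks_generated(text: str, threshold: int = 2) -> bool:
    """Return True if at least `threshold` marker phrases appear."""
    lowered = text.lower()
    hits = sum(1 for marker in MARKERS if marker in lowered)
    return hits >= threshold

print(looks_generated(
    "It's important to note that, as an AI language model, I cannot..."
))  # -> True
print(looks_generated("heap overflow in parse_header(), PoC attached"))  # -> False
```

This kind of filter only buys you triage of the laziest submissions, which matches the point above: it produces both false positives (humans who write in that register) and false negatives (anyone who prompts for a different style).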