Researchers from the University of Kent just dropped a deep dive into AI manipulation that's pretty nuts. They cover everything you could want in this space: English and Chinese literature, definition breakdowns of what actually counts as a "deepfake", and how to measure detection with metrics like AUC (area under the curve), EER (equal error rate) & F1-score. It also maps out the major datasets used for training detectors.
They even meta-review 12 key surveys on the topic! I mean, if you're into that kind of thing. ⭐
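Those detection metrics are easy to sketch yourself. Here's a toy NumPy version of AUC, EER, and F1 for a binary fake/real detector; the labels and scores below are made up for illustration and aren't from the survey:

```python
import numpy as np

# Hypothetical example data: 1 = fake, 0 = real
labels = np.array([0, 0, 0, 1, 1, 1, 0, 1])
# Detector's "probability of fake" score per sample
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.9, 0.2, 0.7])

# Build the ROC curve by sweeping thresholds high -> low
thresholds = np.sort(np.unique(scores))[::-1]
tpr, fpr = [], []
for t in thresholds:
    pred = scores >= t
    tp = np.sum(pred & (labels == 1))
    fp = np.sum(pred & (labels == 0))
    tpr.append(tp / np.sum(labels == 1))  # true positive rate
    fpr.append(fp / np.sum(labels == 0))  # false positive rate
tpr, fpr = np.array(tpr), np.array(fpr)

# AUC: trapezoidal area under the ROC curve (endpoints added)
fpr_a = np.concatenate(([0.0], fpr, [1.0]))
tpr_a = np.concatenate(([0.0], tpr, [1.0]))
auc = np.sum((fpr_a[1:] - fpr_a[:-1]) * (tpr_a[1:] + tpr_a[:-1]) / 2)

# EER: operating point where false positives ~= false negatives
fnr = 1 - tpr
i = np.argmin(np.abs(fpr - fnr))
eer = (fpr[i] + fnr[i]) / 2

# F1 at a fixed 0.5 threshold
pred = scores >= 0.5
tp = np.sum(pred & (labels == 1))
fp = np.sum(pred & (labels == 0))
fn = np.sum(~pred & (labels == 1))
f1 = 2 * tp / (2 * tp + fp + fn)
```

This toy detector separates the classes perfectly, so AUC comes out 1.0 and EER 0.0; on real benchmark data you'd see the trade-off between the two error rates that the EER summarizes.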
I'm curious though - what tools are people using to detect deepfakes in real projects? Any favorites or horror stories about false positives/negatives?
Has anyone tried the ISO/IEC standards yet and found them useful for benchmarking against global best practices?
⚡
link:
https://hackernoon.com/how-researchers-measure-detect-and-benchmark-ai-manipulation?source=rss