Now this would effectively kill off the current AI-powered solutions, because they have no way of explaining, or even understanding, why a paper may or may not be plagiarized, but I'm okay with that.
> The data subject shall have the right not to be subject to a decision based solely on automated processing [...]
[1]: https://gdpr.eu/article-22-automated-individual-decision-mak...
But if you're a nobody and can't afford to go to court against, say, Deutsche Bank, then of course you're SOL. The EU has some good parts, but it's still a human government.
It's especially problematic since a good chunk of those "flagged" actually are doing something nefarious, and both courts and governments will consider a system that "mostly works" a good outcome. One or ten unlucky citizens are just the way the world works, as long as it's not someone with money, power, or fame.
The assumption is that a human reviews the conditions that led the automated system to make that decision.
I think it would be trivial to argue in court that rubber-stamping some scalar value that a deep neural net (or whatever) spit out does not pass that bar. It's still the automated system's decision; the human is just parroting it.
Note that it's easier for the FAANGs to argue such a review has happened, because they have massive amounts of heterogeneous data in which there's bound to be something sufficient to argue with (like having posted something that offended someone).
But a single score? I'd say that's almost impossible to argue. One would have to demonstrate that the system is near-perfect and virtually never makes mistakes.