The more data the better. Negative examples are also useful for probing a model, examining its edge cases, and detecting drift from the original model later. I don't believe in restricting how we can use models, only in honestly testing their use.
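As a rough illustration of that last point, here is a minimal sketch (my own, not from the article below) of using a fixed set of known deepfakes as negative examples to watch for drift in a detector over time. The names detector, held_out_deepfakes, and the 0.05 tolerance are hypothetical placeholders, not anything Facebook or the researchers describe.

import numpy as np

def mean_fake_score(detector, deepfake_batch):
    # Average probability the detector assigns to known deepfakes
    # (hypothetical detector returning a score per video/frame).
    scores = [detector(sample) for sample in deepfake_batch]
    return float(np.mean(scores))

def drift_detected(baseline_score, current_score, tolerance=0.05):
    # Flag drift if performance on the fixed negative set moves
    # more than `tolerance` from the score recorded at deployment.
    return abs(current_score - baseline_score) > tolerance

# Usage (hypothetical):
# baseline = mean_fake_score(detector, held_out_deepfakes)  # at deployment
# ... later ...
# current = mean_fake_score(detector, held_out_deepfakes)
# if drift_detected(baseline, current):
#     print("Detector behaviour has drifted; re-evaluate before trusting it.")

The point is only that a held-out negative set gives you a stable yardstick: if the model's scores on the same deepfakes change, something about the model or its inputs has moved.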
Facebook released a database of 100,000 deepfakes to teach AI how to spot them
The videos are designed to help improve AI’s performance—as even the best methods are still not accurate enough.
by Will Douglas Heaven
Deepfakes have struck a nerve with the public and researchers alike. There is something uniquely disturbing about these AI-generated images of people appearing to say or do something they didn’t.
With tools for making deepfakes now widely available and relatively easy to use, many also worry that they will be used to spread dangerous misinformation. Politicians can have other people’s words put into their mouths or made to participate in situations they did not take part in, for example.
That’s the fear, at least. The truth is that deepfakes are still relatively easy to spot by eye. And according to a report from cybersecurity firm DeepTrace Labs in October 2019, still the most comprehensive to date, they have not been used in any disinformation campaign. Yet the same report also found that the number of deepfakes posted online was growing quickly, with around 15,000 appearing in the previous seven months. That number will be far larger now. ...