Does the threat from deepfakes still exist?

Hello everyone!

I was wondering whether threats from algorithmic techniques such as deepfakes still exist and are considered a danger to individuals today, or whether society has moved past the harm this technology can cause and is now only interested in creating memes with fakes, or primarily sees its potential in the film industry.

I’m sometimes frightened that a person’s widely accessible, open-source data could be used to create fakes of them. I’m not sure how well our data-protection laws are designed in this regard, or whether they can deliver justice to a target (victim).


This is a very pertinent question. I think deepfakes do pose a significant threat. While we probably aren’t yet seeing open, widespread adoption by socially antagonistic actors, I personally believe it is a big problem. Memes and films are the benign side of its uses.

There are two aspects to this problem that I see: 1) the technological and 2) the social and legal aspects.

Regarding 1): This is an arms race at the moment. When deepfakes came out, the first widely reported misuse was creating pornographic videos of female celebrities. Many porn sites soon woke up to this issue and purged such videos. However, it would be very naive to believe that this has totally eliminated the problem. Speaking as someone who works in a totally different and distant field, but who still uses some of the underlying mathematics and tools, the advances being made point back to my very first statement.

Read these, for example, to understand the larger destabilizing influences:

  1. SAGE Journals: Your gateway to world-class journal research
  2. The growing threat of political 'deepfakes'

These are just examples that I picked at random. The second one covers how easy it is to create false divides in politics using deepfakes. In fact, with social media creating echo chambers, I think the effect of deepfakes combined with micro-targeting will accelerate our confirmation bias.

In fact, there are developments, mainly at the University of Pennsylvania, on detecting deepfakes. But, as I mentioned earlier, this is an evolutionary arms race. One can only hope the good side wins it. And winning, or staying in the lead, is not just a technological issue. This leads me to the second aspect.

  2. Socio-legal aspects: The legality of such technology is probably way down the priority list in India. We still don’t have a GDPR-like law in India, or data protection in any form. The current bill under consideration is very different from what was proposed in its original form. In fact, just two days back the CDSL repository was hacked in its entirety, and the data is being sold by hackers. Most of our response to such incidents is a combination of ostrich-like behaviour, apathy and arrogance. So, to expect any laws around this, I believe, is wishful thinking. Now, what can one do to avoid becoming a victim of such incidents? First, be careful about what information one puts up on social media, or shares with anyone online. Second, be skeptical: question claims and seek reputable sources to verify them. A bigger way to combat this would be to conduct workshops that are accessible to the public in various languages. This is where media houses and policy makers should come together to expose the public to the menace of deepfakes. From anecdotal experience, the threat of deepfakes is currently the concern of only certain sections of society; zoom in and you will easily see that this is largely a tech crowd, and hence the need for public educational workshops. The best way to do this would be to start from schools. An example is how schools in Finland now teach students to verify news and to be skeptical of outlandish claims made in the media.

I would like to add a note here that the technology behind deepfakes was not made with the intention of creating deepfakes. In the field of deep learning, which is now commonly, but incorrectly, referred to as artificial intelligence, the availability of good data determines the performance of an algorithm. The idea when the tech came out was to create synthetic data that could ease this data-availability problem. This is especially important in medicine and healthcare.
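To make the synthetic-data idea concrete, here is a deliberately tiny sketch: it fits a simple Gaussian to a handful of “real” measurements and samples extra synthetic points to enlarge the dataset. This is a toy stand-in only; real systems (including those behind deepfakes) use deep generative models such as GANs, and all the names and numbers below are illustrative assumptions, not anyone’s actual pipeline.

```python
import random
import statistics

def augment_with_synthetic(real_data, n_synthetic, seed=0):
    """Fit a Gaussian to real_data and sample synthetic points from it.

    A toy stand-in for generative models that create synthetic
    training data when real data is scarce.
    """
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    mu = statistics.mean(real_data)
    sigma = statistics.stdev(real_data)
    # Draw n_synthetic new points from the fitted distribution.
    synthetic = [rng.gauss(mu, sigma) for _ in range(n_synthetic)]
    return real_data + synthetic

# Example: 5 "real" measurements, augmented with 20 synthetic ones.
real = [4.8, 5.1, 5.0, 4.9, 5.2]
augmented = augment_with_synthetic(real, 20)
```

The point of the sketch is only that the synthetic samples follow the statistics of the real ones, which is exactly the property that makes such data useful for training, and, misused, for fakery.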

Happy to discuss more on this!