
AI Synthetic Video Blurs Fake and Real; Facebook, Microsoft, and Amazon Fund a Detection Challenge

(Original Title: AI Synthetic Video Is Falsified, Facebook, Microsoft, Amazon Crowdfunding)

Deepfake technology uses AI algorithms to "extract" a specific person from video, images, or audio and swap them in for someone else. According to the latest statistics from Amsterdam-based cybersecurity startup Deeptrace, the number of videos produced with this technology is growing rapidly on the Internet. The company counted 14,698 deepfake videos in its June-July survey, up from 7,964 last December — an increase of roughly 84% in just seven months. This trend has stoked public anxiety, not only because deepfake videos can be used to manipulate or sway public opinion, even affecting elections, or to frame someone for crimes they did not commit, but also because of the large volume of pornographic videos and extortion cases they produce.
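The growth figure cited above is easy to verify from the two counts; a quick arithmetic check (not from the source article):

```python
# Sanity-check of Deeptrace's growth figures cited above:
# 7,964 deepfake videos last December -> 14,698 in the June-July count.
old_count = 7_964
new_count = 14_698
growth_pct = (new_count - old_count) / old_count * 100
print(f"{growth_pct:.1f}% increase")  # ~84.6%, consistent with the ~84% reported
```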


In response to the rapid spread of deepfakes, Facebook joined with Amazon Web Services (AWS), Microsoft, and the Partnership on AI to host a Deepfake Detection Challenge. Experts and scholars from Cornell Tech, MIT, Oxford University, UC Berkeley, and other institutions joined the effort to lead research on deepfake detection. The challenge will launch globally at the NeurIPS 2019 conference in Vancouver, with the aim of promoting the development of open-source detection tools.


So far, Facebook has contributed more than $10 million to encourage people to enter the competition; AWS has contributed more than $1 million worth of service credits and provided candidate models for contestants; and Kaggle, Google's data science and machine learning platform, will host the challenge.


Facebook Chief Technology Officer Mike Schroepfer pointed out that "live" video and audio produced with deepfake technology have seriously eroded people's trust in online information, yet the industry has no tools to detect them: "We hope to develop technology that leaves these forged videos nowhere to hide."


The dataset provided for the detection challenge pairs each original video (left) with its tampered counterpart (right).


The Deepfake Detection Challenge offers a large prize pool and a training dataset. A variety of detection tools have already been evaluated against video forgeries; among them, tools developed by researchers at the University of California, Berkeley and the University of Southern California performed best, with recognition accuracy exceeding 90%. But deepfake techniques are constantly changing and evolving, so detection still has a long way to go. In a recent interview, Hao Li, CEO of Pinscreen, noted that AI synthesis technology keeps improving; to some extent, synthetic forgeries are already almost impossible to distinguish from reality.


For the Deepfake Detection Challenge, many actors were hired to shoot a large body of video training material, with varied backgrounds, movements, and scene designs. Fake videos were then produced from these real recordings by swapping faces or modifying the audio.


"Cutting-edge research on deepfake detection requires large-scale, realistic, useful, and freely available datasets. Since no such resource existed, we had to create it from scratch," said Cristian Canton Ferrer, an AI research manager at Facebook. To avoid legal and policy problems, the dataset includes only actors who signed a usage agreement and contains no data from other users. Ferrer added that access to the dataset is gated: only teams that sign a license agreement can access and use it.


Starting today, contestants can download the dataset to train their deepfake-recognition models. Once a design is complete, they submit their code to a black-box environment for evaluation and receive a score. Participants are not required to share their models during the competition, but must agree to open-source them to qualify for the prize money.
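The article does not say which metric the black-box environment uses to produce a score. A plausible sketch, assuming a standard binary log-loss metric of the kind commonly used for fake/real classification in Kaggle-hosted competitions (the function and sample data below are purely illustrative):

```python
# Hedged sketch: scoring submitted predictions with binary log loss
# (mean cross-entropy). Lower scores are better. The actual challenge
# scoring pipeline is not described in the article; this is an assumption.
import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Mean binary cross-entropy between labels and predicted probabilities."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

labels = [1, 0, 1, 0]          # hypothetical ground truth: 1 = fake, 0 = real
preds  = [0.9, 0.2, 0.7, 0.1]  # a model's predicted probability of "fake"
print(round(log_loss(labels, preds), 3))  # 0.198
```

A perfectly confident, perfectly correct submission would score near 0, while confident wrong answers are penalized heavily — which is why the clamping step matters in practice.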


The Deepfake Detection Challenge is driven and overseen by a steering committee that includes Facebook, Microsoft, and members of the media and academic communities. The competition is scheduled to run until the end of March 2020.


"Cross-industry groups came together to create this challenge in a matter of months; such solidarity is undoubtedly inspiring," said Irina Kofman, the Facebook AI business director responsible for managing the challenge. "Everyone brings insights from their own field, which gives us access to a wide range of perspectives."


Facebook's vice president of AI, Jerome Pesenti, said, "We know that solving these problems will not be easy. However, I believe that open-source and open-research methods will eventually yield effective tools to help people identify deepfake scams."



Source: NetEase Smart, translated by Google Translate

Statement: this article is reprinted from a news media source for the purpose of sharing information and academic exchange. It is not used for commercial purposes, and reprinting does not imply agreement with its views or confirmation of its claims. The content is for reference only. If it infringes on the rights of a third party, please contact us and we will address it promptly.
