The Deepfake of the Queen’s Christmas Message highlights the age of fake news

Elizabeth II is one of the longest-reigning constitutional monarchs in history (1953–). Britain's edgy Channel 4 tested the waters with a deepfake Christmas address:

In Commonwealth countries like Canada, it has been a long-standing custom to listen to Elizabeth's Christmas address. So how did the fake go over?

If you have poor eyesight and limited hearing, a fake queen could fool you on a busy Christmas day. But by the time she starts talking about Netflix and launches into a dance routine, you're sure to know something's up. Channel 4 makes little effort to hide its deception, but that hasn't stopped some critics from expressing unease over the stunt.

Rhett Jones, “The Queen’s First Deepfake Address Debuts on British Television” at Gizmodo

Well, for one thing, the behavior was extremely unlike anything Elizabeth has done in the past seven decades.

Okay, but deepfakes are getting better. They will be among us for a while – and so will the effort to detect them. For example, there is actually an industry-wide challenge:

The culmination of these efforts is this year’s Deepfake Detection Challenge. All in all, the winning solutions are a tour of advanced deep neural networks (an average accuracy of 82.56 percent by the best performer). They provide us with effective tools to expose deepfakes that are automated and mass-produced by AI algorithms. However, we must be careful in reading these results. Although the organizers made an effort to simulate situations in which deepfake videos are deployed in real life, there is still a significant gap between performance on the evaluation dataset and performance on a more realistic dataset; when tested on unseen videos, the accuracy of the best performer dropped to 65.18 percent.

Siwei Lyu, “Deepfakes and the New Arms Race to Create and Detect Fake Media Generated by AI” at Scientific American (July 20, 2020)

Could deepfakes affect the election? That's the question. Well, maybe, but there are simpler ways to influence elections. For one thing, a deepfake might be exposed as fake, but ordinary propaganda can't be "detected" if people simply believe it.

In the meantime, here are five things to know about the world of deepfakes:

➤ This is not a new idea: “Psychological warfare and deception are not a new threat. The eminent sixth-century BC Chinese strategist General Sun Tzu had a poet’s fondness for the epigram – the ability to compress the complex into a concise phrase. ‘All warfare is based on deception,’ he wrote. The Italian Renaissance philosopher of hard-edged pragmatism, Niccolò Machiavelli, stated: ‘Although deceit is detestable in other activities, in the conduct of war it is laudable and glorious, and he who overcomes the enemy by deception should be praised as much as he who does so by force.’” – Austin Bay, StrategyPage (July 30, 2019)

➤ The software is quite easy to buy and use.

➤ Yes, deepfakes are definitely used for scams. In one case, a deepfaked voice tricked a manager into transferring thousands of dollars to a scammer. It could be worse: “While using deepfake audio to impersonate a company’s CEO so that the accounting department hands over a million dollars out of ‘necessity’ is one thing, the technology could also be used for sabotage. What if a rival – or even a nation-state – wanted to drive down Apple’s share price? A well-timed deepfake audio clip that supposedly captures Tim Cook in a private conversation about negotiating the sale of the iPhone business could do just that – wiping billions off the stock market in seconds.” (Fast Company, July 19, 2019)

But that scenario assumes, of course, that Apple is incapable of retaliating – which, we suspect, is not the case.

➤ Yes, there are ways to authenticate content in advance: “Researchers at the University of Surrey have developed a solution that could solve the problem: instead of detecting falsehood, it proves authenticity. Slated to be unveiled at the upcoming Conference on Computer Vision and Pattern Recognition (CVPR), a technology called Archangel uses AI and blockchain to create and register tamper-resistant digital fingerprints for authentic videos. The fingerprint can then be used as a reference point to validate media distributed over the Internet or broadcast on television.” (PC Mag, June 19, 2019)

For example, tamper-proof authentication could be built into cameras themselves.
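The core idea – registering a tamper-evident fingerprint of a video at the source and checking it later – can be sketched in a few lines. This is only a minimal illustration, not Archangel's actual implementation: Archangel uses neural-network fingerprints robust to benign re-encoding plus a blockchain ledger, whereas here a plain SHA-256 hash and an in-memory set stand in for both.

```python
import hashlib

# An in-memory set standing in for a tamper-resistant ledger (e.g., a blockchain).
registry = set()

def fingerprint(data: bytes) -> str:
    """Compute a cryptographic fingerprint (SHA-256 digest) of a media file's bytes."""
    return hashlib.sha256(data).hexdigest()

def register(data: bytes) -> None:
    """Record the fingerprint of an authentic video at capture time."""
    registry.add(fingerprint(data))

def verify(data: bytes) -> bool:
    """Later, check whether a circulating copy matches a registered original."""
    return fingerprint(data) in registry
```

Any alteration to the bytes – even a single frame – changes the hash, so `verify` fails for a doctored copy. The trade-off is that legitimate re-encoding also breaks a plain hash, which is precisely why a production system like Archangel needs fingerprints that survive benign transformations.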

➤ Even more practical, here are some tips from the field:

Approach every image you see with skepticism. Does it come from an outlet you recognize? Is there a credited photographer? Is there a caption explaining in detail what is happening? All of these things can be faked, of course, but not without effort, and the goal here is to avoid being taken in by bargain-basement propaganda. “I don’t like being fooled by people,” says James O’Brien, a computer graphics and forensics expert at UC Berkeley. “I think people should take that position. When you see a video of a candidate you hate shooting puppies, stop and ask yourself: where does this video come from? How do I know it’s real?” If it confirms all your bitterest feelings about a topic, that is a red flag, not a mark of truth.

Emma Gray Ellis, “How to Spot Fake Images and Online Propaganda” at Wired

Obviously, every technologically savvy person today needs to be aware of these risks and plan for them. The golden rule, of course, is: when in doubt, doubt. And if it sounds unbelievable, don’t believe it.

In the meantime, deepfakes will, of course, make for some great thrillers.


You may also enjoy: Here is a deepfake of former US President Obama. See what you think.
