Mission impossible? How to tackle "deepfakes" in international arbitration

In recent weeks, millions have watched videos that appear to show actor Tom Cruise playing golf or performing magic tricks. To the naked eye, it seems clear that Tom Cruise has simply decided to show off his hobbies on social media: the movements are smooth, whether he is putting on sunglasses, slicking back his hair, or performing sleight of hand.

What are "deepfakes"?

But, to the surprise of millions on social media, the videos aren't real. The person in them isn't Tom Cruise. Rather, they were reportedly created by a visual effects specialist from Belgium using "deepfake" technology. In the hands of a specialist, "deepfake" technology can make it nearly impossible to tell with the naked eye whether an image or a video is fake. And while "deepfake" detection technology exists, the "Tom Cruise" videos reportedly evaded detection when run through several of the best publicly available "deepfake" detection tools.

Here's how it works: images of a person are fed into a deep-learning algorithm, which learns to generate fake images or video of that person. With that, the creator can make the person appear to do or say anything. An artificial intelligence technique known as a "generative adversarial network" (GAN) can be used to swap one person's face onto the body of someone else, matching the target's facial movements to the new face. While the clips of "Tom Cruise" seem fairly innocuous, it is easy to see how a "deepfake" could be put to more sinister purposes.
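For the technically curious, the sketch below illustrates the adversarial loop at the heart of a generative adversarial network: a generator learns to produce fakes while a discriminator learns to spot them, and each improves by competing with the other. This is a deliberately simplified toy in Python/PyTorch; the network shapes, the 100-dimensional noise input, the flattened 64x64 "images," and the training_step helper are all illustrative assumptions, not the pipeline behind the "Tom Cruise" videos.

```python
import torch
import torch.nn as nn

# Toy generator: maps random noise vectors to flattened 64x64 grayscale "images".
generator = nn.Sequential(
    nn.Linear(100, 256),
    nn.ReLU(),
    nn.Linear(256, 64 * 64),
    nn.Tanh(),  # pixel values in [-1, 1]
)

# Toy discriminator: scores how "real" a flattened image looks.
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw logit; BCEWithLogitsLoss applies the sigmoid
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    """One adversarial round: the discriminator learns to separate real from
    fake, then the generator learns to fool the updated discriminator."""
    batch = real_images.size(0)
    noise = torch.randn(batch, 100)

    # --- Discriminator update: push real images toward 1, fakes toward 0 ---
    d_opt.zero_grad()
    fake_images = generator(noise).detach()  # don't backprop into the generator here
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_images), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # --- Generator update: try to make the discriminator score fakes as real ---
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()

# Usage: one adversarial round on a batch of random stand-in "real" images.
training_step(torch.randn(16, 64 * 64))
```

The arms-race structure of this loop is precisely why mature "deepfakes" are so hard to detect: by the end of training, the generator's output is, by construction, the kind of image that fools a well-trained detector.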

How can "deepfakes" affect international arbitration?

Experts have long been concerned that "deepfakes" could be used to disrupt elections or violate privacy through the spread of misinformation, and those concerns have grown more pronounced as the technology has developed. But what risks does the technology pose for users of international arbitration?

Video evidence is increasingly prevalent and highly persuasive. It offers a theatrical effect that tribunals can find particularly compelling. And because the algorithm used to create "deepfakes" is trained on images or video of a person, high-profile individuals such as politicians are especially at risk. This is a particular concern for investment arbitration, which often involves high-profile political figures.

Further, in high-stakes, high-value international disputes, certain governments or deep-pocketed parties may be tempted to commit significant time and resources to creating fake video evidence that is practically undetectable.

This leads to two competing concerns. On the one hand, certain tribunals have applied a heightened standard of proof to a party alleging that a piece of evidence is fake. For example, in Dadras International v. Iran, the tribunal commented that allegations of forgery, because of their implications of fraudulent conduct and intent to deceive, are particularly grave. This justified, in the tribunal's view, a heightened standard of proof of "clear and convincing evidence." Recalcitrant parties may be buoyed by this standard and encouraged to use "deepfakes" because of the difficulty their counterparty may face in establishing that the evidence is, in fact, fake.

On the other hand, any developing rhetoric that "seeing is no longer believing" may embolden parties to deny the veracity of legitimate evidence by claiming that it is an elaborate fake.

How should parties and tribunals react?

Given how hard it is becoming to prove that a "deepfake" is fake, tribunals may struggle to deal conclusively with contested video evidence. Rules on how tribunals treat video evidence and allegations of technological tampering may need to be updated before they fall too far behind this rapidly developing technology.

Striking a balance between these competing threats can be a "Risky Business." While there does not yet appear to be a publicly available example of a "deepfake" being used (or alleged to have been used) in international arbitration proceedings, given the increasing prevalence of the technology, it is an issue that tribunals may have to address with growing frequency in the near future.

Authored by Markus Burgstaller and Scott Macpherson.
