There’s no putting this genie back in the bottle. As people become more aware of the latest technological capabilities, just being in public may prove as dangerous as the worst of being online.

Fake videos can now be created using a machine learning technique called a “generative adversarial network.” Free apps let creators make a video of a person appearing to say or do something that person never did. Predictably enough, the practice first drew wide attention when people began faking pornographic videos; Reddit banned them.

Deep fakes are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. Artificial neural networks do the work, combining and superimposing existing footage onto source material until the result looks authentic.
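At the core of these systems, two networks are trained against each other: a generator that produces forgeries and a discriminator that tries to flag them, each improving until the fakes become hard to detect. What follows is a minimal sketch of that adversarial training loop in PyTorch; the tiny fully connected networks and the random stand-in for “real” data are illustrative assumptions, not an actual face-swapping system.

# Minimal sketch of a generative adversarial network (GAN) training loop.
# The small fully connected models and random "real" data are stand-ins;
# actual deep-fake tools use large convolutional networks and video frames.
import torch
import torch.nn as nn

LATENT, DATA = 16, 64  # size of the random noise input and of each fake sample

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, DATA), nn.Tanh())
# Discriminator: scores a sample as real (1) or fake (0).
D = nn.Sequential(nn.Linear(DATA, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, DATA)        # stand-in for a batch of real images
    fake = G(torch.randn(32, LATENT))   # the generator's forgeries

    # Train the discriminator to tell real from fake.
    opt_D.zero_grad()
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_D.step()

    # Train the generator to fool the discriminator.
    opt_G.zero_grad()
    g_loss = bce(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_G.step()

Real deep-fake tools apply this same tug-of-war to millions of video frames, which is why the forgeries keep getting harder to spot.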

When discussing videos now, and particularly so given the inevitably fractious election campaigns ahead, we have to make the effort to find the source and be sure it is one we trust.

Does anyone using social media double- or triple-check when they see something with which they vehemently agree or disagree? Often not. Often, it’s circulated further with a repost or retweet.

A report by law professors at the University of Texas and the University of Maryland, published in the California Law Review, expresses concern about the weaponizing of deep fakes, adding, “The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases.”

The number of experts able to testify to whether a video has been technologically altered drops daily as the software is perfected, sometimes, likely, by those who want it perfected for reasons of their own.

Some observers complain that creators of the technology, who viewed it as an opportunity to improve special effects and solve other entertainment industry challenges, were irresponsible in not considering the potential negatives of their creation. Many scientists are appalled when their discoveries are first weaponized; others recognize that a weapon is often the first application considered for any scientific discovery. It’s fascinating to see software creators regarded in the same fashion as scientists.

Last year, a doctored video of House Speaker Nancy Pelosi made her sound drunk or ill. The altered video was viewed more than 2 million times and tweeted by President Trump. Would you want to bet that everyone who saw it is aware of its fakery?

To whom do we appeal about this issue? The government? Regulation would certainly be challenged on First Amendment grounds, inviting lawsuits and further clogging our legal system. But left unchecked, this capability can lead us to question everything we see. We have already trained ourselves to be suspicious of photographs that appear to show something hard to believe. Are there ways we can educate ourselves to develop the same discerning eye for video?

Facebook has banned deep fakes, but wouldn’t take down the Pelosi video, which was merely slowed and edited rather than synthesized and so fell outside the ban. That shows how strange the terrain of regulation already is.

We’re already fighting, and failing, to regulate the use and storage of facial recognition software. Meanwhile, facial recognition is already sanctioned by law in China, Japan, Singapore and the United Arab Emirates. Amid an uproar of alarm, London’s police force has announced plans for widespread use of the technology.

Central Illinois is not free of concerns about the issue. At least 117 million Americans have images of their faces in one or more police databases. According to a May 2018 report, the FBI has had access to 412 million facial images for searches.

Look at the cameras around you every day. Consider the ones that are present but unseen. We’re not even aware of how big the problem could be and, as with so many other large issues, we’re reluctant even to discuss the possibilities.

The Pantagraph, Bloomington, Friday
