Artificial Intelligence
Security Threat: Falsified Satellite Images in Deepfake Geography

Scientists who have identified a potential national security threat from falsified geography, such as doctored satellite imagery, are investigating ways to detect it and to develop countermeasures. (Image credit: Getty Images)

Deepfake is a portmanteau of “deep learning” and “fake” and refers to synthetic media in which a person in an existing image or video is typically replaced by the likeness of another person. Deepfakes use machine learning and AI techniques to manipulate visual and audio content with a high potential for deception.

Geographic deepfakes can corrupt satellite imagery, which could pose a national security threat. Scientists at the University of Washington (UW) are investigating this in hopes of finding ways to spot fake satellite images and warn of their dangers.


Bo Zhao, Assistant Professor of Geography, University of Washington

“This isn’t just Photoshopping things. It’s making data look incredibly realistic,” said Bo Zhao, assistant professor of geography at the UW and lead author of the study, in a press release from the University of Washington. The study was published April 21 in the journal Cartography and Geographic Information Science. “The techniques are already there. We’re just trying to show the possibility of using those same techniques, and the need to develop a coping strategy for them,” Zhao explained.

Fake locations and other inaccuracies have been part of mapmaking since ancient times, as real places are translated into map form. Some inaccuracies, however, are introduced deliberately by mapmakers to catch copyright infringement.

The director of the National Geospatial-Intelligence Agency sounds the alarm

With the proliferation of geographic information systems, Google Earth, and other satellite imaging systems, spoofing has become highly sophisticated and risky. The director of the federal agency responsible for geospatial intelligence, the National Geospatial-Intelligence Agency (NGA), sounded the alarm at an industry conference in 2019.

“We are currently facing a security environment that is more complex, interconnected, and volatile than it has been in the recent past. In this environment, we have to do things differently if we are to navigate it successfully,” said NGA Director Robert Sharp, according to a report by SpaceNews.

To investigate how satellite images can be forged, Zhao and his team at UW used an AI framework that has been used to manipulate other types of digital files. Applied to mapping, the algorithm essentially learns the characteristics of satellite imagery of an urban area, then creates a deepfake image by transferring those learned characteristics onto a different base map. The researchers used a generative adversarial network (GAN), a machine learning framework, to achieve this.
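The core idea of transferring learned image characteristics onto a base map can be illustrated in a much-simplified form with plain histogram matching, which remaps one image’s pixel distribution to resemble another’s. This is a toy stand-in, not the GAN the researchers used; the function name and the synthetic arrays are illustrative assumptions.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap 'source' pixel values so their distribution matches 'reference'.

    A toy analogue of transferring learned imagery characteristics onto a
    base map; both inputs are 2-D uint8 grayscale arrays.
    """
    src_values, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Cumulative distribution functions of both images.
    src_cdf = np.cumsum(src_counts).astype(float) / source.size
    ref_cdf = np.cumsum(ref_counts).astype(float) / reference.size

    # For each source quantile, look up the reference value at that quantile.
    interp_values = np.interp(src_cdf, ref_cdf, ref_values)
    lookup = dict(zip(src_values, interp_values))
    return np.vectorize(lookup.get)(source).astype(np.uint8)

rng = np.random.default_rng(0)
base_map = rng.integers(0, 100, size=(64, 64), dtype=np.uint8)     # dark "map"
satellite = rng.integers(100, 256, size=(64, 64), dtype=np.uint8)  # bright "imagery"

fake = match_histogram(base_map, satellite)
print(base_map.mean(), fake.mean())  # the fake now has satellite-like brightness
```

A real GAN learns far richer structure (texture, edges, spatial layout) than a global intensity distribution, which is what makes its output so much harder to spot.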

The researchers combined maps and satellite imagery from three cities – Tacoma, Seattle, and Beijing – to compare features and create new images of a city based on the features of the other two cities. The untrained eye could have difficulty telling the differences between real and fake, the researchers found. The researchers examined color histograms, as well as frequency, texture, contrast, and spatial domains to identify the forgeries.
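Two of the cues mentioned above, intensity histograms and the frequency domain, can be computed with a few lines of NumPy. This is a minimal sketch of those features, not the study’s actual detection pipeline; the function name, bin count, and radius threshold are illustrative assumptions.

```python
import numpy as np

def forgery_features(image):
    """Compute two forgery cues: an intensity histogram and the share of
    spectral energy at high spatial frequencies (via a 2-D FFT).

    'image' is a 2-D float array with values in [0, 1].
    """
    hist, _ = np.histogram(image, bins=32, range=(0.0, 1.0), density=True)

    # Power spectrum, with the zero frequency shifted to the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)

    # Fraction of total energy beyond a quarter of the image's extent.
    high = spectrum[radius > min(h, w) / 4].sum()
    return hist, high / spectrum.sum()

rng = np.random.default_rng(1)
noisy = rng.random((64, 64))                      # lots of fine detail
smooth = np.tile(np.linspace(0, 1, 64), (64, 1))  # a smooth gradient

_, hf_noisy = forgery_features(noisy)
_, hf_smooth = forgery_features(smooth)
print(hf_noisy > hf_smooth)  # True: noise carries far more high-frequency energy
```

GAN outputs often leave statistical fingerprints in exactly these domains, which is why comparing such features between a suspect image and known-genuine imagery can expose a forgery.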

Simulated satellite imagery can serve a legitimate purpose, such as showing how an area has been affected by climate change over time. If no images exist for a given period, filling in the gaps can provide perspective; such simulations, however, must be labeled as simulations.

Researchers hope to learn how to spot fake images and help geographers develop data literacy tools, much like fact-checking. As technology advances, this study aims to promote a more holistic understanding of geographic data and information so that we can demystify the question of the absolute reliability of satellite imagery or other geospatial data, Zhao explained. “We also want to develop a more forward-looking thinking to take countermeasures such as fact-checking if necessary,” he said.

In an interview with The Verge, Zhao said the aim of his study was “to demystify the function of absolute reliability of satellite imagery and to raise public awareness of the potential influence of deepfake geography.” He noted that while deepfakes are widespread in other fields, his paper is likely the first to touch on the subject in geography.

“While many GIS [geographic information system] practitioners have been celebrating the technical merits of deep learning and other types of AI for geographical problem-solving, few have publicly recognized or criticized the potential threats of deepfakes to the field of geography or beyond,” the authors write.

US Army researchers are also working on deepfake detection


C.-C. Jay Kuo, professor of electrical and computer engineering at the University of Southern California

US Army researchers are also working on a deepfake detection method. Researchers at the US Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory are working with Professor C.-C. Jay Kuo’s research group at the University of Southern California to investigate the threat deepfakes pose to our society and national security, according to a publication from the US Army Research Laboratory (ARL).

Their work is featured in the paper titled “DefakeHop: A lightweight high-performance deepfake detector,” which will be presented at the IEEE International Conference on Multimedia and Expo 2021 in July.

ARL researchers Dr. Suya You and Dr. Shuowen (Sean) Hu found that most state-of-the-art deepfake video detection and media forensics methods are based on deep learning, which has weaknesses in robustness, scalability, and portability.

“Due to the progression of generative neural networks, AI-driven deepfakes have advanced so rapidly that there is a scarcity of reliable techniques to detect and defend against them,” they explained. “We urgently need an alternative paradigm that can understand the mechanism behind the astonishing performance of deepfakes and develop effective defense solutions with solid theoretical support.”

Drawing on their experience with machine learning, signal analysis, and computer vision, the researchers developed a new theory and mathematical framework, which they call successive subspace learning (SSL), as an innovative neural network architecture. SSL is the key innovation of DefakeHop, according to the researchers.

“SSL is an entirely new mathematical framework for neural network architecture developed from signal transform theory,” Kuo said. “It is radically different from the traditional approach. It is very suitable for high-dimensional data with short-, mid- and long-range covariance structures. SSL is a complete, data-driven, unsupervised framework that offers a brand-new tool for image processing and understanding tasks such as face biometrics.”
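The “successive subspace” idea, reducing data stage by stage into smaller learned subspaces rather than training a network by backpropagation, can be sketched with repeated PCA projections. This is a rough, unofficial illustration of the data-driven, unsupervised flavor Kuo describes, not the actual Saab-transform pipeline inside DefakeHop; the function names, patch size, and component counts are assumptions for the example.

```python
import numpy as np

def pca_stage(vectors, n_components):
    """One subspace stage: project zero-mean vectors onto their top
    principal components, computed with an SVD (purely data-driven,
    no labels and no backpropagation)."""
    centered = vectors - vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

def successive_subspaces(image, patch=4, comps=(6, 3)):
    """A rough sketch of successive subspace learning: cut the image into
    non-overlapping patches, reduce them with PCA, then reduce the
    resulting features again with a second, coarser PCA stage."""
    h, w = image.shape
    patches = np.array([
        image[i:i + patch, j:j + patch].ravel()
        for i in range(0, h - patch + 1, patch)
        for j in range(0, w - patch + 1, patch)
    ])
    stage1 = pca_stage(patches, comps[0])  # local subspace features
    stage2 = pca_stage(stage1, comps[1])   # a second, coarser subspace
    return stage2

rng = np.random.default_rng(2)
features = successive_subspaces(rng.random((32, 32)))
print(features.shape)  # (64, 3): 64 patches, 3 final components
```

Because each stage is solved in closed form from the data’s covariance structure, such a pipeline stays lightweight, which is consistent with DefakeHop’s billing as a “lightweight” detector.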

Read the source articles and information in a press release from the University of Washington, in the journal Cartography and Geographic Information Science, an account from SpaceNews, a publication from the US Army Research Laboratory, and the paper titled “DefakeHop: A lightweight high-performance deepfake detector.”

