
When Seeing Is No Longer Believing: Five Ways to Tackle Deepfakes

By Alexis Hill and Marisa Agius*

Prime Minister Anthony Albanese has recently called the Federal election. As Australian citizens turn their focus to which party will win, they must remain aware of the rising threat of deepfakes.

What are Deepfakes, Exactly? 

Deepfakes are videos, images, or audio created by artificial intelligence ('AI') to mimic real people doing or saying things that never actually happened. These AI-powered fabrications extend far beyond face-swapping apps or Photoshop – they generate entirely new moments, making it nearly impossible to differentiate what is real from what is not.

 

Deepfakes are not only becoming more sophisticated and realistic, they are also readily accessible to anyone with an internet connection. Consequently, they threaten the integrity of elections, national security, journalism, and public discourse. As deepfakes become more difficult to detect, the question shifts from whether they will cause damage to when, and to what extent.

Case Study: Deepfakes in Australian Politics 

In September 2024, Senator David Pocock released two AI-generated deepfake videos of Prime Minister Anthony Albanese and Opposition Leader Peter Dutton. The clips illustrate both leaders vowing to ban gambling ads – an issue that has been the subject of extensive debate in Australian politics. Although clearly labelled as artificial, the videos' unsettling realism exposed a broader concern: how easily this technology can mislead the public.

 

Senator Pocock aimed to raise awareness, not mislead. His clear point is that if these 'fairly low-budget' deepfakes could momentarily fool viewers, the potential damage from more sophisticated versions during an election could be far more significant.

 

His warning proved timely. In the lead-up to the 2024 U.S. presidential election, deepfakes of Kamala Harris making offensive remarks and engaging in criminal behaviour surfaced and went viral. Microsoft later confirmed that Russian actors were behind many of these clips. 

A group of Australian academics warned in The Conversation that similar tactics could be used in Australia. Studies show that people are generally more inclined to believe and share false content when it confirms their existing beliefs. Even highly educated individuals often struggle to detect deepfakes. The political threat is no longer theoretical: it is already here.

What are the Implications of Deepfakes? 

Electoral Integrity

Deepfakes are often strategically timed to sway public opinion or discredit opponents during elections, something worth considering in the upcoming federal election. Even after being debunked, such content spreads rapidly, feeding outrage and confusion. As seen in the U.S., and as Australian academics have warned, the mere existence of deepfakes can erode public confidence in political communication and democratic institutions; after all, the concern is not merely that people will believe something that is false, it is also that people will distrust materials that are truthful.

National Security 

One convincing deepfake of a defence official ordering military action could trigger mass panic, market volatility, or even an international crisis. Australian defence leaders have already warned of the national security risks posed by deepfakes. In such high-stakes environments, delays in verification can have devastating consequences.

Public Safety and Criminal Misuse

Fabricated videos showing individuals committing crimes or making threats could result in wrongful arrests or spark unrest, while fake emergency broadcasts may endanger lives. Deepfakes have also been used to impersonate public figures to commit fraud, such as authorising fake financial transactions. These incidents not only cause serious harm but erode trust in digital communications. Ultimately, as the line between truth and fiction blurs, our capacity to respond effectively to real crises is seriously weakened.

Five Strategies to Protect Yourself Against Deepfakes

Although institutional measures such as fact-checking, regulation, and AI-detection tools are important, the serious implications of deepfakes mean individuals should also work proactively to build their own digital resilience. Accordingly, here are five actionable ways to safeguard against deepfake-driven misinformation and disinformation.

1. Digital Skepticism 

Cultivate a mindset of digital skepticism. Deepfakes are designed to exploit emotional triggers – such as outrage or fear – so if a video provokes an immediate gut reaction, pause to think. Consider: Who benefits from this message? What are the underlying motivations? Does the context feel off? Critically questioning the intent and narrative behind viral content before resharing can stop false claims from spreading further.

2. Media Literacy 

Self-educate. Developing media literacy and an understanding of content manipulation can build an innate defence against false narratives. For example, regularly consuming diverse news sources helps create a baseline for what information is genuinely plausible. Alternatively, learn how AI generates deepfakes and the telltale signs it leaves behind, such as unnatural blinking or inconsistencies in vocal intonation. Even these basic media-literacy habits can expose disinformation.

 

3. Practical and Responsible Experimentation 

Experiment with deepfake tools. A practical understanding of how deepfakes are made is one of the most effective safeguards against misinformation and disinformation. Ethical experimentation with AI tools – such as DeepFaceLab or FaceSwap – can provide insight into the underlying mechanisms, including facial recognition, audio manipulation, and neural networks. Hands-on experience builds technological awareness and an appreciation of the ethical implications and risks, reinforcing the need for mindful digital behaviour.

4. Reverse Image and Video Search Tools

Fact-check – it does not have to be elaborate or tedious. Accessible tools such as Google Reverse Image Search, InVID, and Reality Defender allow users to trace the origins of images and videos to confirm whether they have been tampered with. Uploading suspicious content to these platforms can quickly flag manipulated material, even before external verification becomes available.

 

5. Metadata and Digital Footprint Analysis

Inspect the details. Visuals may lie, but metadata often leaves clues. Embedded file data – such as timestamps, editing history, or the software used – can reveal discrepancies or signs of tampering. Tools such as FotoForensics allow users to inspect these technical details to assess a file's authenticity. Also consider tracking the content's digital footprint: Is it cited by credible news sources? Is there a lack of context? If the content lacks a clear origin or supporting coverage, it is worth questioning its authenticity.
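For readers comfortable with a little code, the idea of "inspecting the details" can be illustrated with a minimal sketch. This is not how forensic tools like FotoForensics work internally; it simply shows, using only Python's standard library, how to pull basic file-level clues – size, modification timestamp, and a content hash that changes whenever a file is altered:

```python
import datetime
import hashlib
import os

def describe_file(path):
    """Return basic provenance clues for a media file:
    its size, last-modified timestamp, and a SHA-256 content hash."""
    stat = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "size_bytes": stat.st_size,
        "modified": datetime.datetime.fromtimestamp(stat.st_mtime).isoformat(),
        # Identical files always share the same hash; any edit,
        # however small, produces a completely different one.
        "sha256": digest,
    }
```

Comparing the hash of a suspicious clip against a known-good copy is a quick way to confirm the two files are not byte-for-byte identical. Richer embedded metadata (camera model, editing software, GPS tags) requires format-aware tools, but the principle is the same: the file itself often records more than the visuals show.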

Final Thoughts 

In an age where seeing is no longer believing, trust has become increasingly fragile. Deepfakes are no longer a distant concern – they are a present danger with real-world consequences. In the lead-up to the Australian Federal election, deepfakes will undoubtedly put the truth under threat. However, by combining critical thinking with the right tools and education, individuals are far from defenceless and can push back against the growing tide of misinformation and disinformation.

*Students in the Internet Law Research Clinic (ILRC). ILRC operates within the Centre for Space, Cyberspace and Data Law at Bond University.  
