It’s a bot-eat-bot world as cyber criminals go hi-tech

So-called deepfakes are being used to manipulate voters, launch business scams or even generate fake pornography to harass and extort.

The highly convincing hoaxes using images, audio or video are made with a type of artificial intelligence known as generative AI.

That technology is spreading rapidly, in part thanks to popular apps such as ChatGPT that don’t require users to be computer experts to produce sophisticated content.

One high-profile example of a deepfake was Boris Johnson appearing to endorse opponent Jeremy Corbyn during the 2019 UK election.

The video was produced by research institute Future Advocacy and UK artist Bill Posters to show how deepfakes could undermine democracy.

AI-generated images of Donald Trump being arrested that went viral this week were more clumsy, featuring three legs and too many thumbs.

But the weaponisation of manipulated videos for malicious ends is more than an academic talking point or gimmick.

Deepfake porn, made for titillation or other insidious purposes, can be generated using AI without the consent of a person whose face is grafted onto sexually explicit imagery.

Earlier this year, several female gamers from video live-streaming service Twitch became victims of this form of abuse.

Cyber experts also warn fabricated materials are already being used for political manipulation, depicting people making false statements in an attempt to sway election results.

Disinformation expert Jake Wallis says he is concerned that state actors – malicious groups working on behalf of a government – are exploiting the kinds of techniques that ChatGPT makes much easier to deliver at scale.

“The industry, in general, already uses these techniques in the defender community,” he told AAP during a cyber conference.

But Dr Wallis said governments must start to think hard about how to use the technology because malign actors will certainly deploy it to deceive.

“The challenge that this kind of technology poses for our democratic processes I think is particularly acute,” he said.

His research at the Australian Strategic Policy Institute focuses on the threat to open society and democracy, and he says this openness is increasingly being exploited by state actors as a vulnerability.

“We already see actors like China, Russia, Venezuela even, playing with generative AI in terms of developing content that is designed to manipulate,” Dr Wallis said.

Mimicking the tone and style of bosses, hackers can also use AI to generate highly convincing messages with fraudulent links that prompt employees to share sensitive information or disclose passwords that let cyber criminals in.

The Australian Trade and Investment Commission reports a cybercrime occurs every seven minutes in Australia, with attacks growing in both number and sophistication.

Australia is among the five most-attacked countries, with attacks on mobile devices increasing exponentially, BlackBerry executive Jonathan Jackson says.

“My organisation is blocking an attack every two minutes in Australia,” he told AAP.

Mr Jackson said healthcare, education and financial services providers, along with governments and critical infrastructure, were top targets.

“I often get asked, ‘well, when is the next big one coming?’ Well, it just happened.”

The company is also detecting a big change in the way cyber criminals operate as systems become more interconnected, creating a wider attack surface.

That meant the whole cyber security ecosystem – governments, security vendors and researchers – needed to come together, Mr Jackson said.

“That’s failing at the moment because we’re not stopping enough attacks getting through.”

The rise of AI that is capable of generating text, images or audio in response to prompts means deep knowledge of coding languages is no longer required to produce fake content.

BlackBerry’s latest Global Threat Intelligence Report forecasts that cyberattacks on critical infrastructure will continue, with AI increasingly used not only for automating attacks but also to develop advanced deepfakes.

Home Affairs Minister Clare O’Neil told the conference Australia could be the most cyber-secure country in the world by 2030, backed by a new national strategy.

But Australian organisations are lagging rivals in other developed economies in cybersecurity readiness, according to a report from Cisco.

Some 10 million Medibank customers, along with hundreds of thousands more people whose private information was accessed in major hacks on Latitude Financial and Optus, are coming to grips with that vulnerability.

Cisco found roughly one in 10 Australian organisations are in the “mature” stage of cybersecurity readiness, compared to the global average of 15 per cent.

In contrast, more than nine out of 10 respondents said they expect a cybersecurity incident to disrupt their business in the next 12 to 24 months.

Seven in 10 (70 per cent) said they had a cybersecurity incident in the last 12 months, compared to 57 per cent globally, costing the majority of affected organisations at least $750,000.

The conference audience of spooks, lawmakers, tech vendors and academics was told generative AI was useful for governments as well as being a tool of cyber criminals and state actors.

Mr Jackson said being educated on what attackers were using the technology for was an important part of a defence strategy.

“Be ‘eyes wide open’ to the reality of the world. We now live in an AI-versus-AI world,” he said.

Mr Jackson said incredibly powerful technology was now available to people who previously hadn’t had access to the capability to automate attacks, create a deepfake social media profile or impersonate a voice.

“Wherever there is value, cyber criminals are very quick to pervert any attack opportunity, so Australia, as a country, needs to be prepared,” he said.

Content had become harder to trust, and it would be difficult for lawmakers to set boundaries, Mr Jackson added.

“We’re really just starting to explore some of those conundrums now and policy is a long way behind,” he said.

The Australian Information Security Association said businesses needed to find a collective $10 billion a year for cyber security.

Chair Damien Manuel said under-investment in cyber security by Australian firms had been a problem for years.

“With significant data breaches in major organisations like Optus and Medibank last year, the Australian business sector is finally waking up to the very real and very present danger,” he said.

“The business sector will need to ask themselves, what is the cost for not getting up to speed on this major security issue, what is the cost to reputation and ultimately customers and sales?”

By Marion Rae in Canberra
