
By Madeleine Clarke | 14 December 2022 | 4 min read

New research by CSIRO's Collaborative Intelligence Future Science Platform seeks to supercharge Australia's cyber security efforts. Image generated using DALL-E

Almost 20 million Australians have been impacted by recent, high-profile cyber attacks. The personal cost of data breaches can include customers’ private information being leaked on the dark web, leading to identity fraud, theft, and scams. The financial cost is also staggering: cyber crime is projected to cost the world 16 trillion Australian dollars by 2025. 
 
Given the growing burden of cyber attacks worldwide, it might surprise you to learn that there is a significant shortage of cyber security analysts.  

Australia needs to grow its cyber security workforce to combat rising threats to privacy and security. But the pressures of the job make it hard to attract and retain talent.  

An innovative solution is required to reduce the strain on our cyber security workforce and enhance the security of systems.  

CSIRO is tackling this challenge by combining the skills of humans and artificial intelligence (AI) in an integrated team, enabling both human and machine to perform to the best of their abilities.  

The AI who cried alert  

Security Operations Centres (SOCs) are the ground zero of cyber defence in companies. In SOCs, analysts monitor potential security alerts around the clock, investigating and escalating them as needed. AI generates these alerts, but with one big problem.  

To ensure no real threats slip through the cracks, AI is overly cautious. In other words, it raises a lot of false alarms.  

While capturing all possible threats is important, Dr Shahroz Tariq, an expert in AI for security, says this barrage of alerts can wear analysts down.  

“The manual process required to validate an alert is lengthy. An analyst has to look through logs and see where it came from to find out if it is really suspicious. This causes fatigue and leads to alarm desensitisation,” Dr Tariq said.  

One survey by Trend Micro reports analysts spend more than a quarter of their time investigating alerts that turn out to be false positives. Like the boy who cried wolf, the AI who cried alert may also lose the trust of the humans who interact with it. This can have devastating consequences when a real threat is missed.  

A SOC team powered by collaborative intelligence

Dr Shahroz Tariq

Dr Tariq is working to shift the way humans and AI interact in SOCs as part of CSIRO’s Collaborative Intelligence Future Science Platform (CINTEL).  

He and his colleagues are working to reduce the number of false alarms generated by AI.  

Currently, AI is programmed to perform fixed tasks repetitively. The CINTEL for Cybersecurity project is working to mature AI technologies so that they continually adapt and improve their performance over time.  

“For alerts that the AI is unsure about, it can defer to a human analyst, but the interaction doesn’t stop here. The AI can record the human’s investigation and capture the outcome, so the next time a similar alert is generated, it can manage it alone,” he said.  

“Over time, we can reduce the workload so humans can focus on more novel, legitimate threats.”  
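To make this idea concrete, the sketch below shows one simple way such a deferral loop could work: a model scores each alert, confidently scored alerts are handled automatically, uncertain ones are deferred to an analyst, and the analyst’s verdict is recorded so similar alerts can be handled automatically later. This is an illustrative sketch only, not the CINTEL team’s implementation; the `AlertTriage` class, threshold values and `ask_analyst` placeholder are all hypothetical.

```python
# Illustrative sketch of confidence-based alert deferral (not CSIRO's actual system).
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str
    score: float  # model's estimated probability that the alert is a real threat

@dataclass
class AlertTriage:
    auto_dismiss_below: float = 0.10   # hypothetical confidence thresholds
    auto_escalate_above: float = 0.90
    feedback: list = field(default_factory=list)  # (alert, analyst verdict) pairs

    def handle(self, alert: Alert) -> str:
        # Confident decisions are handled by the AI alone.
        if alert.score <= self.auto_dismiss_below:
            return "dismissed automatically"
        if alert.score >= self.auto_escalate_above:
            return "escalated automatically"
        # Uncertain alerts are deferred to a human analyst.
        verdict = self.ask_analyst(alert)
        # The outcome is captured so the model can be retrained on self.feedback
        # and manage similar alerts alone next time.
        self.feedback.append((alert, verdict))
        return f"deferred to analyst, verdict: {verdict}"

    def ask_analyst(self, alert: Alert) -> str:
        # Placeholder for the analyst's manual investigation of the relevant logs.
        return "true positive" if alert.score > 0.5 else "false positive"

triage = AlertTriage()
print(triage.handle(Alert(source="auth-server", score=0.05)))   # handled by AI
print(triage.handle(Alert(source="mail-gateway", score=0.55)))  # deferred to human
```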

Cyber Centaurs to the rescue  

The team are also exploring how to optimise the way human and AI team members collaborate. On the AI side, this involves the AI choosing which member of the team (human or machine) to defer an alert to, so that the threat is investigated as efficiently as possible.  

On the human side, the project seeks to enable humans to defer to AI more regularly by increasing their trust in the tools at their disposal.  

“Humans are more likely to delegate tasks to AI after gaining confidence in its ability to accomplish them. This brings us closer to creating Cyber Centaurs, a new breed of security experts whose abilities are augmented by cutting-edge software tools, process automation, and, most importantly, collaboration with AI,” Dr Tariq said. 

Collaboration in these new, digitally empowered teams could transform current workflows, he added.
 
“There may be a new digital teammate sitting there that might present the most relevant information to analysts’ dashboards, or train junior staff in a virtual simulation environment to teach them how to react, understand threats and respond.”   

Cyber Centaurs will have their unique human skills augmented by cutting-edge software tools, process automation and collaboration with AI

The team are currently looking to engage with real-world SOCs to understand current workflows and pain points. If this sounds like you, reach out to Project Lead Dr Mohan Baruwal Chhetri.

Collaborative intelligence seeks to transform the workforce by making employees safer, more productive, and engaged in interesting work through the targeted use of technologies. 

Find out more about the different workplaces we are currently researching on the CINTEL website.


