
By Darcy Millett and Naomi Stekelenburg · 3 April 2024 · 4 min read

Key points

  • Large language models (LLMs) have a role in healthcare, but they’re not a magic solution to all problems.
  • LLMs rely on high-quality data and lack reasoning capabilities, so they can’t be used for every task in healthcare.
  • Our scientists are developing AI tools that can be safely implemented into the healthcare system.

Imagine you’re shopping online and talking to a helpful bot about buying some new shoes. That’s the basic idea behind large language models (LLMs). LLMs are a type of artificial intelligence (AI) and they are gaining traction in healthcare.

At our Australian e-Health Research Centre (AEHRC), we’re working to safely use LLMs and other AI tools to optimise healthcare for all Australians. Despite their increasing popularity, there are some misconceptions about how LLMs work and what they are suitable for.

New (digital) generation

One of the most widely used and well-known types of AI is generative AI. This is an umbrella term for AI tools that create content, usually in response to a user’s prompt.

Different types of generative AI create different types of content. This could be text (like OpenAI’s ChatGPT or Google’s Gemini), images (like DALL-E) and more.

LLMs are a type of generative AI that can recognise, translate, summarise, predict and generate text-based content. They were designed to imitate the way humans analyse and generate language.

Brain training

To perform these tasks, a neural network (the brain of the AI) is trained. These complex mathematical systems are inspired by the networks of neurons in the human brain, and they are very good at identifying patterns in data.

There are many kinds of neural network, but most LLMs are based on one called the ‘transformer’. Transformer architectures are built from layers of the neural network called ‘encoders’ and ‘decoders’. These layers work together to analyse the text you put in, identify patterns, and predict which word is most likely to come next based on the input.
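To make this concrete, here is a minimal sketch of single-step next-word prediction. It uses the small, openly available GPT-2 model via Hugging Face’s transformers library – our choice for illustration, not a model or tool mentioned in this post:

```python
# A minimal sketch of single-step "next word" prediction, using the small,
# openly available GPT-2 model via Hugging Face's transformers library.
# (Model and library are our choice for illustration, not tools named in
# this post.) Requires: pip install torch transformers
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The patient was admitted with chest"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every vocabulary token, at every position

# The scores at the last position are the model's guesses for what comes next.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for token_id, p in zip(top.indices, top.values):
    print(f"{tokenizer.decode(int(token_id))!r}: {p.item():.3f}")
```

Running this prints the five tokens the model rates as most likely to follow the prompt, each with its probability – pattern recognition over language, not reasoning.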

AI models are trained using LOTS of data. LLMs identify patterns in text-based data and learn how to generate language.

Dr Bevan Koopman, a senior research scientist at AEHRC, says it’s important to remember what tasks LLMs are performing.

"A lot of misconceptions surround the fact that LLMs can reason. But LLMs are just very good at recognising patterns in language and then generating language," Bevan says.

Once trained, the model can analyse and generate language in response to a prompt. It does this by predicting which word is most likely to come next in a sentence.
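Generating a whole passage is just that prediction repeated in a loop: append the chosen word, then predict again. Here is a toy greedy-decoding sketch (again assuming GPT-2; real chatbots use larger models and smarter sampling than always taking the single most likely token):

```python
# A toy greedy-decoding loop: generate text by repeatedly appending the
# single most likely next token and predicting again. GPT-2 and the prompt
# are illustrative assumptions; real chatbots use larger models and smarter
# sampling than always taking the top token.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The doctor recommended", return_tensors="pt").input_ids
for _ in range(10):  # extend the prompt by ten tokens
    with torch.no_grad():
        next_id = model(input_ids=ids).logits[0, -1].argmax()
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0].tolist()))
```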

CSIRO Senior Research Scientist Dr Bevan Koopman

Large language models (LLMs) in healthcare

LLMs are often seen as a ‘silver bullet’ solution to healthcare problems. In a world of endless data and infinite computing power this might be true – but not in reality. High-quality and useful LLMs rely on high-quality data… and lots of it.

We find healthcare data in two forms – structured and unstructured. Structured data has a specific format and is highly organised. This includes data like patient demographics, lab results, and vital signs. Unstructured data is typically text-based: for example, written clinician notes or discharge summaries.
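As a hypothetical illustration (every field name and value below is invented for this post), here is the same patient information in both forms:

```python
# A hypothetical illustration of the two forms healthcare data takes.
# All field names and values here are invented for this example.

# Structured: fixed fields, easy to query, validate and aggregate.
structured_record = {
    "patient_id": "P-0042",
    "age": 67,
    "heart_rate_bpm": 88,
    "haemoglobin_g_dl": 13.2,
}

# Unstructured: free text. The same facts are buried in prose, which is
# exactly what makes it attractive territory for LLMs.
discharge_note = (
    "67yo male admitted overnight. Obs stable, HR 88. "
    "Hb 13.2 on admission bloods. For GP follow-up in one week."
)

# Structured data answers questions directly...
print(structured_record["heart_rate_bpm"])  # 88
# ...whereas pulling the same value out of the note needs language
# understanding (or at least careful pattern matching).
```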

Most healthcare data is unstructured (written notes). This leads people to think we don’t need structured data to solve healthcare problems – because LLMs could do it for us.

But according to Derek Ireland, a senior software engineer at AEHRC, this isn’t entirely true.

"Maybe with infinite computing power we could, but we don’t have that,” David says.

Fit for health

While LLMs aren’t a cure-all solution for healthcare, they can be helpful.

We’ve developed four LLM-based chatbots for a range of healthcare settings. These are continuously being improved to best support patients and work alongside clinicians to ease high workloads. For instance, Dolores the pain chatbot provides patient education and takes clinical notes to help prepare clinicians for in-depth consultations with patients.

We’re also studying how people use publicly available LLMs for health information. We want to understand what happens when people use them to ask health questions, much like when we Google our symptoms.

It's important to remember LLMs are only one type of AI. Sometimes their application is appropriate and sometimes a different technology might do a better job.

We’re also developing other types of AI tools, like VariantSpark and BitEpi to understand genetic diseases, and programs to analyse and even generate synthetic medical images.

Safety first

Using LLMs and AI safely and ethically in healthcare is crucial. As with any tool in healthcare, there are regulations in place to make sure AI tools are safe and used ethically.

Our healthcare system is very complex and the same tools won’t work everywhere. We work closely with researchers, clinicians, carers, technicians, health services and patients to ensure technologies are fit for purpose.

We all have a role, including AI

LLMs might not be a miracle cure for all our healthcare problems. But they can help support patients and clinicians, make processes more efficient and ease the load on our healthcare system.

We’re working towards a future where AI not only improves healthcare but is also widely understood and trusted by everyone.
