Generative artificial intelligence (AI) can produce text, images, or other content in response to prompts. It has the potential to streamline workflows in virtually every sector, from education and manufacturing to banking and life sciences.
In fact, Australia’s Generative AI Opportunity report predicts this technology could contribute $115 billion annually to Australia’s economy by 2030.
Its ability to generate a multitude of possibilities instantaneously can empower humans to explore novel concepts that might otherwise remain undiscovered. For small businesses and sole traders, generative AI can scale operations at speed, transforming time-intensive tasks into quick and efficient processes.
But all of this is only possible if it’s implemented responsibly.
Australia’s Responsible AI Network (RAIN) is a world-first program bringing together experts, regulatory bodies, practitioners and training organisations to empower Australian businesses and industries to responsibly adopt AI technology.
RAIN's free, interactive workshops are run by AI experts who explain and demonstrate the methods, tools and practices needed to use AI responsibly.
Here, we recap the highlights from our recent workshop on generative AI, starting with the fundamentals of the technology – Large Language Models (LLMs).
What are Large Language Models (LLMs)?
LLMs are a form of AI that recognise, translate, summarise, predict, and generate text.
They form the algorithmic core of text-based generative AI, like ChatGPT. They’re designed to use and understand language in a human-like way.
You’ve experienced earlier versions of this technology in the predictive text suggestions while messaging on your smartphone, or the ‘quick response’ suggestions when emailing.
This kind of response prediction and answer generation technology now underpins a variety of sophisticated tools. These include virtual assistants, market research analysis, fraud detection, cybersecurity programs, and medical diagnosis technology.
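To make that concrete, here is a minimal sketch of next-word prediction, the mechanic that underpins both phone autocomplete and chatbots. It assumes the open-source Hugging Face transformers library and the small GPT-2 model; these are illustrative choices only and are not part of the workshop material.

```python
# A minimal sketch of next-word prediction, the core mechanic behind LLMs.
# Assumes the open-source Hugging Face `transformers` library and the small
# GPT-2 model; both are illustrative choices, not workshop recommendations.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Thanks for your email. I wanted to follow up on"
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)

# The model continues the prompt one predicted token at a time -
# the same idea as autocomplete on a smartphone, at a much larger scale.
print(result[0]["generated_text"])
```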
How do they work?
LLMs are trained using vast amounts of data, which is partly where the ‘large’ in large language model comes from.
It would take a human 2,000 years to read GPT-3’s training data. Generally, the more data an LLM is trained on, the more capable it is at using and understanding language.
The current generation of LLMs is pre-trained on billions of words of text from sources such as books, websites, academic papers and programming code.
From a 'base model', LLMs can be customised with a much smaller amount of industry-specific information. This approach is both more practical and more affordable than attempting to build one from scratch.
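As a rough sketch of what that customisation step can look like in practice, the snippet below fine-tunes a small open base model on a modest, domain-specific text file. The Hugging Face transformers and datasets libraries, the gpt2 base model and the industry_docs.txt file are all assumptions made for illustration; the workshop does not prescribe any particular tooling.

```python
# An illustrative sketch of customising a pre-trained 'base model' with a small
# amount of industry-specific text. Libraries, model name and file path are
# assumptions for illustration only, not workshop recommendations.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"                                   # a small, openly available base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token       # GPT-2 has no padding token by default
model = AutoModelForCausalLM.from_pretrained(base)

# A few thousand lines of your own documents - far less than the billions of
# words used for pre-training.
dataset = load_dataset("text", data_files={"train": "industry_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="custom-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # For causal language modelling the labels are simply the input tokens.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the base model adapts to the new domain's vocabulary and style
```

In practice, many businesses skip even this step and rely on hosted fine-tuning or retrieval services rather than training anything themselves, but the principle is the same: a broad base model plus a comparatively tiny amount of specific data.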
The cons: what are the risks?
The quality of training data is a critical challenge for LLMs.
If a model is trained on poor-quality data that contains inaccuracies or bias, or that only represents a specific group of people, the LLM unknowingly learns and reproduces prejudiced or incorrect results. The same can happen if a model is trained towards an objective that is too broad or loosely specified.
Microsoft’s 2016 chatbot ‘Tay’ began parroting Trumpian quotes after being trained on live Twitter data for a little over 24 hours.
Gradient Institute is an Australian non-profit research institute that builds safety, ethics, accountability and transparency into AI systems. Its Chief Technology Officer and workshop presenter Dr Lachlan McCalman says the risks of LLMs are compounded by their extreme ease of use and application.
“Anyone can log on to OpenAI and start using ChatGPT, and they might not understand the limitations of these systems,” Lachlan said.
People using these systems should understand how they work and be accountable for their objectives as well as their unintended consequences, says Lachlan. But a lack of transparency from the third-party providers that build these systems makes it difficult to properly exercise that responsibility.
OpenAI and a handful of others have models at the cutting edge of LLM capabilities, and we're only seeing a gradual increase in the diversity of models available, explains Lachlan. What's more, with many systems sharing the same core model, there is a risk that errors are replicated across the board.
The pros: business benefits of LLM-powered generative AI
Sole traders, small businesses and not-for-profits can all make the most of publicly available generative AI tools, says Responsible AI Think Tank leader and former Australian Red Cross CEO Judy Slatyer.
The most common use cases involve utilising tools like ChatGPT and DALL-E 2 to contribute to or perform time-intensive tasks such as correspondence, content creation and market research.
“My friend runs her own real estate business and used to spend 30 to 40 per cent of her time on analysing industry trends, creating a newsletter, and client correspondence. Now she’s performing these tasks with the help of generative AI and saving four to five hours a day, which is a huge amount of time for a sole contractor,” Judy said.
For organisations that are thinking about building their own generative AI applications but are hesitant, Google Partnerships Manager (Australia and New Zealand) Mr Scott Riddle suggests starting with internal use cases rather than customer-facing ones.
This allows more time to develop internal skills and build a level of familiarity with the technology, giving organisations the confidence to navigate any ethical concerns.
“The sooner your teams start engaging with AI the sooner they can begin to explore ways of using it and innovating with it,” Scott said.
What can generative AI help with now?
- Writing emails: Writing a tough email that needs to make a strong point while preserving the relationship can be time consuming. Generative AI can help by assessing the tone and making it more diplomatic or assertive (see the sketch after this list).
- Market research and analysing trends: Using generative AI to constantly scan the horizon and identify industry trends is a significant time saver. That half a day of researching or conducting market analysis can be replaced by some well-worded prompts on ChatGPT.
- Creating content: Talk Agency is using generative AI to produce blogs, email marketing and social media copy for clients. According to the director, their writers have turned a five-hour job into a three-hour one. But beware of hallucinating AI – where the system confidently provides incorrect information. This is a risk that needs to be managed by a human fact checker.
- Building websites: Creating a website can now be as simple as entering a description of what you want and answering some follow up questions. There’s no need for technical skills or hours spent perfecting the design and functionality.
- Chatbots: Damien McEvoy Plumbing is using generative AI chatbots to improve the quality of its customer service. “These chatbots can comprehend customer inquiries, provide accurate information and offer personalised recommendations,” he told the Prospa blog. Bear in mind that customer expectations are rising in step with chatbot popularity, so systems need increasingly sophisticated conversational capabilities.
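To make the email example above concrete, here is a minimal sketch of asking an LLM to rework a blunt draft so it stays firm but diplomatic. It assumes the official OpenAI Python client, the gpt-3.5-turbo model and an OPENAI_API_KEY environment variable; none of these details come from the workshop, and the same prompt works just as well pasted straight into ChatGPT.

```python
# A minimal sketch of using an LLM to adjust the tone of a difficult email.
# Assumes the official OpenAI Python client (openai >= 1.0) and an
# OPENAI_API_KEY environment variable; the model choice is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = (
    "Your invoice is six weeks overdue. Pay it immediately or we will "
    "stop all work on your project."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Rewrite emails so they make their point firmly but "
                    "diplomatically, preserving the business relationship."},
        {"role": "user", "content": draft},
    ],
)

# The rewritten email still makes the point, with the tone dialled down.
print(response.choices[0].message.content)
```

As with the content-creation example, the output still needs a human check before it is sent.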
According to Google’s Scott Riddle, the company has been overwhelmed by the interest in generative AI from Australian businesses big and small.
Canva is using Google’s generative AI translation services to better support its non-English speakers and is exploring ways that Google’s LLM technology can turn short video clips into longer, more compelling stories.
“We’re starting to see a real groundswell of generative AI activity in the local startup community too. Rowy, an exciting Sydney-based low/no code platform startup, has been a great early adopter of our generative AI technologies,” Scott said.
Generative AI capabilities are already making their way into core business applications, says Scott, making it a question of when, not if, your employees and customers will ask for them.
“For that reason, understanding how to responsibly use generative AI will quickly become a necessity for all organisations,” he said.
There are new risk mitigation strategies you can put in place to implement LLMs safely and ethically, which Dr McCalman explains in the video below.
Judy Slatyer also calls for transparency around using generative AI tools to enhance your business's offering.
“It’s a simple responsible AI practice that ultimately increases customer trust,” she said.
Remaining abreast of the latest technology, requirements, and processes is another essential responsible AI practice.
Judy suggests referring to the National AI Centre’s 2023 report on Implementing Australia’s AI Ethics Principles and the 2022 Responsible AI Index report for established approaches to deploying AI both effectively and ethically.
She also advises joining peer groups to share information, experiences, and best practice.
“That’s where organisations like the National AI Centre can help with their Responsible AI Network workshops and events.”
What does the future hold?
Businesses and sectors will need time to reorient themselves and experiment with and implement generative AI, with full benefits likely to emerge in the next year or two.
But generative AI deployment will move significantly faster in larger organisations. Most are early AI adopters who have already experienced the productivity and cost efficiencies the technology can provide.
“Small and Medium Enterprises will be pushed to stay up to speed so they can be a good supplier to the bigger corporations,” explains Judy.
“Over the next six to twelve months, the emphasis will be on fostering curiosity and exploring untapped possibilities and opportunities.”
“The generative AI revolution – like the introduction of the internet, smartphones, and apps – calls for experimentation, learning, adaptation, and knowledge-sharing to fully harness its potential.”