

Understanding the risks of new AI applications – how the NSW Government developed its revolutionary framework

14 Feb 2023, by Amy Sarcevic

As Australia advances its adoption of artificial intelligence (AI) and data analytics, the potential harms and long-term implications of these technologies have come under increasing scrutiny.

Broadly, we know that – if fed with poor data – the technologies can fall prey to algorithmic bias and make inaccurate forecasts or suboptimal decisions. We also know they raise privacy and social equity concerns.

Until recently, however, we haven’t had a firm grasp on the risk profile of each new AI or big data application – a particular concern in healthcare, where people’s health and safety are at stake.

As a result, the interpretation and handling of AI and data risk have not been standardised, raising questions about the ethics of their use.

A revolutionary approach

Thankfully, the NSW Government is addressing this with a series of initiatives, led by Chief Data Scientist Ian Oppermann – a speaker at this year’s Healthcare Cyber Security Conference.

Dr Oppermann and his team have developed an AI assurance framework and a series of “data sharing and use” whitepapers to help people in different industries get a better handle on AI risk.

Working with the International Standards Committee, they have also devised a set of global standards for data usage, with further work in the pipeline.

“The data sharing and use frameworks help people identify repeatable risk patterns in the infinite number of potential AI uses; and the standards are a template for anyone who wishes to use data more safely,” said Dr Oppermann.

“The whitepapers give people insight into how to manage this risk within their own work context. They are a bit like a cookbook, showing you how to blend the different ingredients (i.e. the various features of the technology and the context in which it is being used) into a set of recipes.”

Challenges in developing the materials

Dr Oppermann says developing the tools has been complex, with AI now spanning practically every industry. In the early stages, he and his team found that rules which applied to some categories of AI did not apply to others.

“The idea of the assurance framework is to test yourself against the principles of the government’s Ethics Policy and Strategy. At first we developed a version which contained general, leading questions and found that it kept missing the mark.

“One reason is that the risk profiles of operational versus non-operational AI are vastly different. Another is that the level of harm from a ‘false negative’ versus ‘false positive’ AI reading varies significantly between contexts.

“When assessing someone’s need for urgent medical treatment, for example, a false negative could lead to death, while a false positive could lead to inconvenience. So we had to classify thresholds for different types of AI risk.”
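Dr Oppermann’s point about asymmetric harms can be sketched in code. The snippet below is purely illustrative and is not part of the NSW framework: using made-up triage scores and costs, it shows how weighting a false negative (a missed urgent case) far more heavily than a false positive (an unnecessary escalation) lowers the decision threshold a classifier should use.

```python
# Illustrative sketch only – a toy example of why false-negative vs
# false-positive costs change the threshold an AI classifier should
# use in each context. All scores, labels and costs are hypothetical.

def expected_harm(threshold, scores, labels, cost_fn, cost_fp):
    """Total harm if every case scoring at or above `threshold` is flagged."""
    harm = 0.0
    for score, is_urgent in zip(scores, labels):
        flagged = score >= threshold
        if is_urgent and not flagged:
            harm += cost_fn   # missed urgent case (false negative)
        elif not is_urgent and flagged:
            harm += cost_fp   # unnecessary escalation (false positive)
    return harm

def best_threshold(scores, labels, cost_fn, cost_fp):
    """Pick the cutoff with the lowest total harm."""
    candidates = sorted(set(scores)) + [1.1]  # 1.1 means "flag nothing"
    return min(candidates,
               key=lambda t: expected_harm(t, scores, labels, cost_fn, cost_fp))

# Hypothetical model scores: confidence that a case is urgent.
scores = [0.2, 0.3, 0.35, 0.4, 0.7, 0.9]
labels = [False, True, False, False, True, True]

# Medical triage: a missed urgent case vastly outweighs an
# unnecessary escalation, so the optimal cutoff drops.
triage_cutoff = best_threshold(scores, labels, cost_fn=100, cost_fp=1)

# A low-stakes context with symmetric costs tolerates a higher cutoff.
neutral_cutoff = best_threshold(scores, labels, cost_fn=1, cost_fp=1)
```

With these toy numbers the heavily FN-weighted context settles on a lower threshold than the symmetric one – the same classifier, deployed in two contexts, warrants two different operating points.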

By exploring a wide range of AI use cases, however, Dr Oppermann and his team were able to crack the formula.

“We tested the framework against real projects. Initially, every project came back with flashing red lights. But when we stepped through the context and got an understanding of how things mattered in each, we were able to refine it.”

Calibrating what needs to be done against potential harms added a further layer of complexity.

“Once you have identified the risk, you have to consider it against the way things are done without the technology. Yes, an AI that incorrectly diagnoses someone carries a risk of harm, but is that risk greater than if a health professional were to deal with the same information manually? This answer will determine how you handle the AI risk and work out a mitigation strategy.”

Only starting out

Dr Oppermann says he and his team are only beginning the journey towards more ethical AI, but are so far pleased with progress.

“It is exciting to see the guidelines come together and make sense across different industries and applications. It is the first time we have had a single objective framework like this.

“We are still very much working out how to cater for emerging AI and big data use cases – this process is unlikely ever to end. But we have proudly reached the turning point where the initial complexities of mapping the framework across industries and contexts have been ironed out.”

Dr Ian Oppermann is the NSW Government’s Chief Data Scientist, working within the Department of Customer Service, and an Industry Professor at the University of Technology Sydney (UTS).

He has thirty years’ experience in the ICT sector and has led organisations of more than 300 people, delivering products and outcomes that have impacted hundreds of millions of people globally.

Hear more from Dr Oppermann at the 2nd Annual Healthcare Cyber Security Conference – one of three conferences to take place at Connect Virtual Care.

One pass for Connect Virtual Care gives delegates access to the Healthcare Cyber Security Conference, the National Telehealth Conference and the Medication Safety & Efficiency Conference.

This year’s event will be held 27-28 April at the Hilton Sydney.

Learn more and register your place here.