Join us at our webinar on Healthcare AI – Hype, Hope or Humdrum?
Artificial intelligence (AI) tools play an increasing role in healthcare, including in medical screening, diagnosis and treatment. This webinar showcases current projects in the Safety, Quality and Ethics Program of the Australian Alliance for Artificial Intelligence in Healthcare (AAAiH). Projects include a recently launched NHMRC-funded project led from the Australian Centre for Health Evidence, Engagement and Values (ACHEEV) at the University of Wollongong. This project, “The Algorithm Will See You Now”, focuses on the ethical, legal and social implications (ELSI) of diagnostic and screening AI. Other featured AAAiH projects include a survey of Australian AI safety and ethics initiatives and a review of COVID-19 triage algorithms.
Date: 22 October 2020
Time: 11:30am – 1:00pm (AEDT, Sydney time)
- Welcome, Enrico Coiera
- AAAiH Safety, Quality and Ethics Program update
- A survey of Australian AI Safety and Ethics Initiatives
- The Algorithm Will See You Now (TAWSYN) – An NHMRC-funded investigation of the ethical, legal and social implications of AI for diagnosis and screening
- Automating difficult decisions? A review of ICU triage protocols during the COVID-19 pandemic
- Panel discussion and Q&A
AAAiH Safety, Quality and Ethics Program update
Wendy Rogers and Farah Magrabi
The Safety, Quality and Ethics (SQE) Program is one of four programs of the Australian Alliance for Artificial Intelligence in Healthcare. In this presentation, Wendy Rogers will introduce the work of the SQE program and provide a brief update about our current activities.
A survey of Australian AI Safety and Ethics Initiatives
Farah Magrabi, Wendy Rogers, Yves Saint James Aquino
Like other countries, Australia has seen a rapid proliferation of guidelines and other documents focusing on ethical, safety and regulatory aspects of AI. Many of these have been developed in isolation. Most address multiple domains of AI rather than being healthcare specific. In this project, we identify and analyse 18 published initiatives for the safety and ethics of AI in Australia. The analysis identifies the types of AI and the range of applications covered by these initiatives, and the commonly invoked safety and ethical principles and approaches that are used. In this presentation, we describe the project and some of the major findings.
The Algorithm Will See You Now (TAWSYN) – An NHMRC-funded investigation of the ethical, legal and social implications of AI for diagnosis and screening
Stacy Carter, Yves Saint James Aquino and Wendy Rogers
As machine learning systems develop greater capability in diagnostic and screening tasks, questions arise about how to prepare health systems for the ethical, legal and social implications (ELSI) of these technologies. TAWSYN, launched in July 2020, is an NHMRC-funded collaboration between the University of Wollongong, University of Sydney, Macquarie University, University of Adelaide and Monash University. Our team includes ethicists, social scientists, lawyers, clinicians, public health academics, health economists and data scientists. Our aim is to conduct a new kind of ELSI research, focused not on abstract concepts but on concrete cases, and grounded in deep engagement with expert stakeholders, health professionals, patients and the general public. This presentation will introduce our three-year plan to better understand how machine learning for diagnosis and screening is, and should be, developing for use in screening for breast cancer, predicting cardiovascular disease risk and diagnosing cardiovascular conditions. This work is designed to generate independent, multidisciplinary and useful frameworks for diagnostic and screening machine learning.
Automating difficult decisions? A review of ICU triage protocols during the COVID-19 pandemic
Yves Saint James Aquino, Wendy Rogers, Stacy Carter, Jackie Leach Scully, Farah Magrabi
The current coronavirus (COVID-19) pandemic has resulted in significant demand for acute and critical care services worldwide, leading to a shortage of beds in intensive or critical care units (ICUs) and consequent rationing. These surges in hospital admissions raise challenging questions about ethically justifiable criteria and processes for rationing patient care. In response, guidelines, decision aids and other tools have been developed to assist clinicians in making rationing decisions. As these decisions involve collecting and analysing data, there is a potential role for artificial intelligence (AI) to support clinicians in their decisions. To investigate the ethical criteria and processes for rationing, and the nature of any decision aids, we reviewed 21 international guidelines, developed in high-income countries, for allocating scarce ICU resources during the COVID-19 pandemic. We found that all were paper based, with no automated elements. The predominant ethical framing in the guidelines and tools is utilitarian, although there are some differences in medical, ethical and social criteria across the various guidelines. This presentation offers a brief overview of the project, including a description of the most common ethical approaches and principles used in these guidelines for allocating scarce ICU resources. In addition, we identify features of these decisions that may pose challenges for developing AI tools to aid decision making under pandemic-related conditions of scarcity.
This webinar is organised by Macquarie University, the University of Wollongong, and the Australian Alliance for Artificial Intelligence in Healthcare.
We look forward to seeing you at the webinar.
Need more information?