Gaining efficiencies through technology is key for providers but there are also legal and ethical issues to consider, writes Alison Choy Flannigan.
The aged care sector is under considerable stress, with the recent Royal Commission into Aged Care Quality and Safety and the new Aged Care Quality Standards adding to the pressure.
One of the greatest opportunities is for technology to provide efficiencies with respect to administrative or mundane tasks in order to enable staff to spend more quality face-to-face time with residents and clients.
Progress in artificial intelligence is moving rapidly. We have been working with international and Australian clients at the cutting edge of innovation who are using AI for a range of purposes including to detect falls in hospitals and residential aged care facilities.
We know others who are using it to determine pain using face recognition software or in applications of predictive medicine.
Duty of care and negligence
A number of legal, regulatory, ethical and social issues have arisen with the use of AI in the health and aged care sector.
Proving causation may be difficult when machine learning occurs in a multi-layered, fluid environment in which the machine itself influences the output. With so many issues at play, the answers are likely to be complex and difficult to find.
If a resident or client suffers an injury, and that injury arose from using AI, who will be liable:
- the treating clinician, such as the GP, who relied upon the technology
- the developer of the algorithm
- the programmer of the software
- the aged care provider?
We are watching closely how the law of negligence and duty of care adapts to this new technology.
Several working groups have been established to discuss ethical issues concerning AI’s use in health care.
In 2017, the World Health Organisation and its Collaborating Centre at the University of Miami organised an international consultation on the subject. A WHO Bulletin devoted to big data, machine learning and AI will be published in 2020.
The European Group on Ethics in Science and New Technologies, an advisory body to the European Union, published its Statement on Artificial Intelligence, Robotics and Autonomous Systems in March 2018.
The statement proposed some basic principles and democratic prerequisites, based on the fundamental values laid down in the EU Treaties and the EU Charter of Fundamental Rights. These principles, with our commentary, follow.
Human dignity: This principle recognises that the inherent human state of being worthy of respect must not be violated by autonomous technologies. It implies legal limits on the extent to which people can be led to believe they are dealing with human beings when in fact they are dealing with algorithms and smart machines.
Should we be transparent in telling people they are interfacing with AI?
Autonomy: This principle implies human freedom, including the freedom of human beings to set their own standards. Autonomous systems must respect the human choice of whether, when and how to delegate decisions and actions to them.
What should we delegate to machines? Surely the best care involves human touch, and people should come first?
Responsibility: Autonomous systems should only be developed and used in ways that serve the global social and environmental good. AI and robotics applications should not pose unacceptable risks of harm to human beings.
This is consistent with the principle that we should do no harm.
Justice, equity and solidarity: AI should contribute to global justice and equal access.
It is important to ensure equity of access, so that AI's benefits are not reserved for those countries or people who can pay for the technology.
Democracy: Key decisions should be subject to democratic debate and public engagement.
AI should be used in accordance with community expectations and standards.
Rule of law and accountability: The rule of law, access to justice, and the rights to redress and a fair trial should provide the framework for observing human rights standards and any AI-specific regulation.
There should be adequate compensation for negligence.
Security, safety, bodily and mental integrity: Safety and security of autonomous systems includes external safety for the environment and users, reliability and internal robustness, for example against hacking, and emotional safety regarding human-machine interaction.
AI should be appropriately regulated in health care to ensure it is safe.
Data protection and privacy: Autonomous systems must not interfere with the right to privacy of personal information and other human rights, including the right to live free from surveillance.
Protecting privacy and data is important.
Interesting times ahead
The royal commission and government are looking for ways to improve quality in aged care on a limited budget. Baby boomers and the generations that follow them into residential aged care will demand improved services, choice and the latest that technology has to offer.
Therefore, efficiencies gained through the use of technology will be key. Those approved providers who are early adopters of the right technology will become market leaders.
The question will be how providers identify the right technology, the Apple or Uber of the future. We are in for interesting times.
Alison Choy Flannigan is a partner at Hall & Wilcox.