Perverse effects of quality indicators raised

Publishing quality report cards in aged care can lead to greater cherry picking of clients and encourage providers to divert resources to areas under review, an expert in aged care has warned.

Dr Lee-Fay Low

Dr Lee-Fay Low, an Associate Professor in Ageing and Health from the University of Sydney, said that international evidence showed that public reporting of quality indicators could have perverse outcomes, and this needed to be considered as Australia adopts its own quality indicators program.

The Australian Government is currently developing a national voluntary quality indicators program for aged care, initially with a pilot of three indicators in residential care – unplanned weight loss, pressure injuries and the use of physical restraint – with the results to be published on My Aged Care. A quality of life measure to capture a consumer’s experience and home care indicators will also be developed.

The national program is slated to commence in January 2016.

Dr Low said that overseas studies showed that if report cards were published, "cream skimming" of admissions could occur.

“We also see a little bit of ‘teaching to the test’, or basically diverting resources to monitored areas. For instance, by putting lots of money into pressure ulcers if that is an indicator being measured and less into other areas,” she told an Australian Association of Gerontology (AAG) webinar on changes to community aged care on Thursday.

Dr Low said that in some cases aged care facilities and hospitals selected healthier residents to improve their performance on published measures of quality.

While there may be some unintended consequences, publishing indicators to compare facilities has generally led to better performance and resident outcomes on some measures, Dr Low said.

“Report cards tend to stimulate quality improvement by providers because there is public benchmarking happening and the US has shown providers who do well have increased demand for their services and make more money.”

Poorer performing facilities close, amalgamate or are taken over, she said.

“We also see in the publication of report cards that the initial response can be to improve reporting, which means you see a rise in adverse outcomes before they improve.”

Dr Low said local evidence was required to show how the transparent reporting of quality measures could best work in Australia.

A study in Victoria, where quality indicators have been used in public sector facilities, showed that nursing homes did use quality indicator data to improve care. However, the distribution of the performance reports in the study was very limited.

Regulation

In her discussion of evidence-based policy, Dr Low said the cost-benefit of regulation was also unknown.

“We know providers spend a lot of time and energy in meeting accreditation but we don’t know if those costs are worth it.”

In terms of community care reform, she said the evidence shows consumer directed care (CDC) improves satisfaction with care and the use of services, but it has not been shown to improve clinical outcomes or quality of life, or to reduce admission into aged care.

The impact of CDC on family burden was another important area of study, she said.

Dr Low said she would like to see more evidence-based policy in aged care and a closer relationship between researchers and policymakers in the development of policy.

Tags: evidence-based-policy, lee-fay-low, quality-indicators

2 thoughts on “Perverse effects of quality indicators raised”

  1. I think Dr Low makes some really good points, particularly about the importance of the use of evidence and the observation that the cost-benefit of regulation is unknown. She goes on to make the observation that “We know providers spend a lot of time and energy in meeting accreditation, but we don’t know if the costs are worth it.” My observation is that providers spend a lot of time and money on providing quality services and, looking at aged care systems that do not have an accreditation scheme, most would do that absent the accreditation system. The actual (additional) cost created by accreditation vis-a-vis the costs of providing a quality service has vexed me for years. I would be interested to know why some link the performance expectations set out in the accreditation arrangements to cost, as if the absence of accreditation would create some significant cost saving. A restart of the debate could be to ascertain which (costly) activities a good provider would cease if accreditation were abolished tomorrow.

  2. Err… what, exactly, is your point?

    Aged care providers already ‘cherry pick’ admissions (witness the increasing numbers of challenging residents refused by other providers but admitted to less salubrious facilities under pressure to increase their occupancy), so nothing new there.

    The selection of national quality indicators has been tortuously slow, with the panel unable to reach agreement on what to include and when to include it. Their considerations range from the sensible (Victoria’s current model of 5 practical indicators) to the ridiculous (O’Reilly/QLD Blue Care’s farrago of 23). One suspects the wobbly dance between doing the right thing and not upsetting providers as the major reason for the delay.

    Why is putting money into a monitored indicator a perverse outcome? Surely Ms Low isn’t suggesting that care would be limited only to indicator-related areas; that model wouldn’t last very long for anyone that adopted it.

    The real concern with benchmarking (apart from selecting the criteria) is in the data analysis. For example: a large provider, benchmarking across many facilities, may well view a ‘stable rate’ of pressure injuries as a favourable result, when the reality is that even one pressure injury should be viewed as a disaster. What’s more important, the indicator or the measurement?

    ‘A closer relationship between researchers and policy makers’…there it is, the ubiquitous plug for more research money. Money that should be spent on fundamentals like staffing and training at the coalface. Actually, I’d like to see a closer relationship between researchers and the real world of residential care. Dementia-trained robotic seals, anyone?
