Quality Agency rejects ownership factor on accreditation

Not-for-profit providers are no better at meeting accreditation standards than for-profit providers and ownership type has “no discernible” impact on passing accreditation, the Australian Aged Care Quality Agency has said.

The agency’s Queensland state manager, Tracey Rees, told the LASA Q conference last week that ownership was not a performance factor.

“There’s been some suggestion in the industry and in the community that the not-for-profit providers are better at meeting expected outcomes than the for-profit providers – that is not what our data is showing us at the moment,” she told the event.

“It shows that there is no discernible difference at an industry level in their performance at audit at all.”

However, Ms Rees said there were differences based on the size and location of facilities.

“We know remote services have difficulty meeting the accreditation standards more than, for example, providers in the cities and that has a lot to do with workforce. But certainly we are seeing no discernible difference between the for-profit and not-for-profit sector.”

The agency’s comments follow the release of study findings late last year showing that for-profits were more than twice as likely as not-for-profits to have government sanctions imposed on them.

As Australian Ageing Agenda reported, the research conducted by Richard Baldwin from the University of Technology Sydney found ownership type, jurisdiction and service location were significant predictors for sanctions.

A higher likelihood of sanctions in for-profit services was also identified by Australian gerontologist Anna Howe and Dr Julie Ellis in 2010.

The research has sparked debate about the association between ownership and quality failure, especially in the context of an expanding industry and growing for-profit sector.

While the Quality Agency monitors a facility’s progress in meeting accreditation standards, it is the role of the Department of Social Services to take regulatory action including imposing sanctions in response to immediate and severe risk to health and safety or where there is continued non-compliance.

Sector improvement

In an analysis of non-compliance data, Ms Rees said there had been significant and sustained improvement by the industry in meeting accreditation outcomes.

In 2012, 95 per cent of homes met the 44 expected outcomes under the accreditation process, up from 64 per cent in 2000.

Ms Rees said it was expected that more than 95 per cent of services would meet all outcomes this year.

At 31 January this year, only 24 facilities out of 2,688 were on a timetable for improvement, which she said was “a historical low.”

“If you look at the size of the industry, the number of homes we have and those that are on timetables for improvement, it is a very small part of the sector.”

Compounding risks

Discussing non-compliance, Ms Rees said failure in one area of performance usually impacted on other areas of care and service delivery, and was often coupled with a poor or weak systemic approach to continuous improvement.

“We have more than 10 years of accreditation data and it shows that failure in standard one (two not met outcomes in standard one) is often the causal factor for failure in standards two, three and four,” she said.

Ms Rees said key risk factors for failure included changes in key staff, systems and processes, resident mix and ownership. Risk management, she said, was everyone’s responsibility – not just the responsibility of boards or senior managers.

“Risk can’t be avoided, it needs to be identified and managed, and your systems and your people create or reduce the risk in your service.”

Ms Rees detailed the top five areas where expected outcomes were not met by facilities in the three years to 31 December 2014. These were:

  1. Having effective information management systems in place, including collecting information, updating care planning and sharing information (standard one) – 170 not met outcomes nationally.
  2. Care recipients receive appropriate clinical care (standard two) – 115 not met outcomes.
  3. Care recipients’ medication is managed safely and correctly (standard two) – 115 not met outcomes.
  4. Human resource management which ensures appropriately skilled and quality staff to deliver services (standard one) – 106 not met outcomes.
  5. The needs of care recipients with challenging behaviours are managed effectively as part of behavioural management (standard two) – 97 not met outcomes.

5 thoughts on “Quality Agency rejects ownership factor on accreditation”

  1. I spoke to Ms Rees and asked her on what data the statement “rejecting ownership issue” was based. She indicated that she had not prepared the material herself. I have asked her to let me have the figures and to indicate how they were evaluated. I suspect that this is based on the raw data the agency usually publishes and that the variability introduced by the almost total absence of for-profit facilities from the rural sector has not been considered. Baldwin’s analysis considered this variable.

  2. I write these comments based on Linda Belardi’s report (Quality Agency rejects ownership factor on accreditation, AAA, 25 March 2015) as I was not at the LASA conference in Queensland to hear Ms Rees’s presentation. As the industry moves to a more competitive environment, greater transparency in performance reporting is needed at both the industry and provider level; so it is good news to read that the Agency has completed some analysis on the volumes of data it has collected over the years since accreditation commenced and is reporting de-identified industry-wide performance data.

    However, what is not clear in the article (but may have been in the presentation, although my reading of Michael’s comment suggests it was not) is what data was analysed and how it was analysed. The reported associations between weaknesses in performance and failure to achieve accreditation standards are very significant findings; however, stakeholders need to be confident about the level of analysis that was undertaken to reach these conclusions and whether the methods used meet appropriate standards of statistical rigour.

    It is time this industry moved beyond reliance on selective reporting of data by agency staff at a time of their choosing. Ideally the data held by the Agency (in de-identified form) should be analysed by independent researchers and subject to peer review in a manner that is independent of the Agency (and Department). The Agency has used its interpretation of its Act in the past to justify its decisions not to make the substantial data it holds available for independent analysis. Ms Rees’s analysis of the Agency’s data needs to be subject to peer review to build confidence in the reported results.
    Richard Baldwin, PhD
    University of Technology Sydney

  3. Dissect the data any way you want…but there are better ways to spend your time. The pachyderm in the filing cabinet is the veracity of agency-proffered stats.

    A health care accreditation process conducted by non-clinicians (no, the night shift you did twenty years ago doesn’t count), auditing paper not people, after giving you two months’ notice they were coming, is one of the few things in aged care that keeps me laughing.

    Surprisingly, the article had almost finished before it appeared, but there it was…continuous improvement. This panacea for every failed standard should be sold in a bottle (or on a clipboard?).

    Medieval clinical care? Swiss cheese documentation? Cuckoo’s Nest behaviour management? No problem…just bang out a CI Plan and you’re good to go. I’d attribute the 64 per cent compliance increase to clever hoodwinking rather than agency diligence.

    Even the HRC has concerns about the focus on process over outcome and the majority of aged care complaints continue to be care related.

    The slavish acceptance of a system in which Pavlovian promises pass for actual change condemns the sector to mediocrity. I’m off to buy some paint and a few new chairs…the agency will be here next week.

  4. On 27th March Ms Rees supplied me with the following figures. She indicated that these were “an extract from our Better Business accreditation operating system”.

    Homes by ownership type – result of last audit decision prior to 31/12/12

    Australia          No of homes   No of homes 44/44   % of homes 44/44
    Not for Profit     1,601         1,526               95.3%
    Government         284           274                 96.5%
    Private            815           776                 95.2%
    Total              2,700         2,576               95.4%

    It seems clear that these are simply the crude figures, and Ms Rees did not correct me on that when I responded. She did not give me the figures for rural and metropolitan centres, and I could not find the “Better Business accreditation operating system” on the internet or on the agency’s website. In the past these figures have been given each year in “The Standard”, the agency’s publication, but I was unable to find a recent report of them. In past reports metropolitan facilities have performed considerably better than rural ones, yet on the crude figures not-for-profit and for-profit performed equally well. Given the near absence of for-profit providers in rural areas, it was readily apparent that the comparable figures given in the reports could only have been because the not-for-profits were performing much better than the for-profits in the metropolitan areas.

    The same day I emailed her and challenged her use of crude figures, without considering this obvious and readily apparent variable. I invited her to correct the claim she made by responding to this article. She has not done so.

    I made the following comments to her in my emails.

    “Most evaluations of performance generally show roughly a two- to four-fold difference in the rate of measured failures in care between for-profits and not-for-profits. International figures generally reveal a clear trend between failures in care and staffing, with not-for-profits at the one end, extending through private for-profit and market-listed for-profit to private equity, which has the most staff problems and the worst care, with both continuing to deteriorate the longer the private equity owns the facility. Staffing levels vary by 30 to 40 per cent between the two ends of the spectrum. This sort of information is critically important for public policy but is not collected in Australia. This is a national disgrace.”

    “Unlike other countries (the USA and UK), Australia does not collect objective data on performance, records of failures in care, staffing and resident acuity. It may be that these factors do play an important part and impact on the result, but if that information is collected it is not made available. That information should be available to researchers and the public and regrettably it is not.

    “I am aware that the accreditation Agency has over the years continued to report the two (types of provider) as equal, but equally clearly did not do any sort of analysis of the variables. This in my view is unacceptable.”
    I quote from our 2008 analysis, which can be found here (http://www.agedcarecrisis.com/aged-care-report-card):

    When rural facilities were excluded

    The private-for-profit aged-care homes now had twice the number of homes failing one or more standards than the government operators, three times that of any of the not-for-profit operators and almost four times that of the religious-based subgroup – the largest of these.

    We express our concern that the higher proportion of private-for-profit operators within the failing facilities may have been because both financial and human resources are being diverted from the care of the frail, older people to the business of generating profits and capital expenditure compared to the ongoing costs of providing quality care and ensuring there is adequate and experienced staff to look after residents.
    Our findings, which used only the one year’s reports available to us, are in keeping with international studies and now with the study of Baldwin et al.

    There were many distinguished scientists on the Accreditation Agency’s board and now also on the new Agency Council. I questioned how this sort of analysis of data got past them.

    I also questioned the utility of reporting data as “result of last audit” rather than as the number of failures per year. Results in 2008 indicated that there were over 400 per cent more failures to attain 44/44 in that year than the agency reported using the last-audit method. If there is a similar pattern in the figures presented by Ms Rees, then only 78 per cent attained 44/44 in the accreditation assessments in 2014, not 95 per cent. If correct, this is a more useful figure, and one which does not lead to complacency.

    Hopefully Ms Rees will now comment on the above and defend the way the department does its analysis.
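    The aggregation effect described in this comment – near-identical crude pass rates concealing a metropolitan gap because for-profit homes are almost absent from rural areas – is a Simpson’s-paradox pattern, and a short sketch makes it concrete. The metro/rural splits below are hypothetical illustrative numbers chosen only so that the ownership totals match the crude figures Ms Rees supplied (1,601 not-for-profit homes at 95.3 per cent and 815 private homes at 95.2 per cent); the agency has not published the real splits.

```python
# Illustrative only: the metro/rural splits are hypothetical; just the
# ownership totals (1,601 NFP homes, 95.3% passing 44/44; 815 private
# homes, 95.2%) come from the figures quoted in the comment above.
homes = {
    "not-for-profit": {"metro": (1200, 1164), "rural": (401, 362)},
    "for-profit":     {"metro": (800, 764),   "rural": (15, 12)},
}

def rate(total, passed):
    """Pass rate (all 44 expected outcomes met) as a percentage."""
    return 100.0 * passed / total

for owner, regions in homes.items():
    total = sum(n for n, _ in regions.values())
    passed = sum(p for _, p in regions.values())
    print(f"{owner:>15}: crude {rate(total, passed):.1f}% | "
          f"metro {rate(*regions['metro']):.1f}% | "
          f"rural {rate(*regions['rural']):.1f}%")
# With these assumed splits, the crude rates are near-identical
# (95.3% vs 95.2%) even though the metropolitan stratum shows a
# 1.5 percentage-point gap in favour of the not-for-profits.
```

    Under these assumed splits, a crude comparison shows no ownership difference while a like-for-like metropolitan comparison favours the not-for-profits – exactly the confounding by location that Baldwin’s analysis controlled for.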

  5. Michael, many thanks for your dogged pursuit of the truth about this issue. I really want to applaud your analysis and advocacy. I convene a number of campaigns and projects here in WA and globally – Serco Watch and the Nemesis Project – to challenge the corporate takeover of social care and social and community services: in child care, health care, employment, mental health, corrections, aged care etc. Very rarely are claims of the supposed superiority of corporate and business provision of services and care ever challenged, so what you are doing here is really critical. Just wondering if your work is accessible online anywhere. Great work.
