Technical Articles

International Journal of Government Auditing – January 2012


Making Performance Audits More Responsive

According to ISSAI 3000, a performance audit is “the independent examination of the efficiency and effectiveness of government organisations, operations, or policies, with due regard to economy.”[1] The aim of performance audits is to lead to improvements.

Performance auditors work on the basis of professional audit norms and criteria that underpin their performance judgments. When assessing an organization’s output, annual production targets or objectives to improve quality or reduce costs are logical starting points. When assessing the effectiveness of policy programs, norms and criteria can be found in policy objectives and indicators. Taken together, these norms and criteria, and especially the policy objectives and indicators, constitute a relatively stable frame of reference.

For politicians, policymakers, and performance auditors alike, this has important advantages: it provides focus, transparency, and predictability. Objectives and indicators form easy-to-understand frames of reference in a policy-oriented debate. Clear norms and criteria, together with sound quality controls in the audit process, help to produce credible reports on government performance.

New INTOSAI Guidelines

Despite these advantages, performance audits that focus firmly on objectives and indicators run an important risk: the auditor may miss the true explanation of disappointing agency or program performance. Such performance audits may even become less relevant. An approach driven by objectives and indicators also carries the risk of (1) misunderstanding stakeholders’ needs, (2) resistance to change, and (3) dysfunctional strategic behavior.

As Dutch politician and professor of public administration Marie-Louise Bemelmans-Videc and her colleagues noted: “[M]any explanations for disappointing policy results point to a ‘gap’ between simple policy intentions and schemes on the one hand, and the complex and ever-changing nature of societal problems on the other.”[2] They envision a new type of accountability, in which public sector auditing should play its role by (1) taking a dynamic rather than a static approach, (2) focusing primarily on results rather than on process, and (3) stressing the need for continuous and responsive learning. As part of this, performance auditors must consider whether policy intentions and schemes still relate to society’s real problems and stakeholders’ concerns.

INTOSAI recognizes the need for performance audits that allow for more complexity. The new performance audit guidelines in International Standards of Supreme Audit Institutions (ISSAI) 3000 encourage an orientation towards citizens’ needs: the actual impact of activities, compared with the intended impact, should be audited. Unlike compliance auditors, performance auditors should not just look at whether policy programs are being carried out according to plan. Neither should they focus too narrowly on whether objectives are reached, criteria met, and indicators attained. According to ISSAI 3000, “progress and practices must be built on learning from experience.”

Initiatives to Promote Responsiveness in Performance Auditing

The Netherlands Court of Audit (NCA) also stresses the need for learning. Auditors must be more responsive to what auditees and other stakeholders have to say about the relevance of audit questions, the meaning of intermediate audit findings, and the usefulness of potential recommendations.

To achieve this, the NCA has actively opted for a more open and responsive approach, including more participatory methods. Following the example of the United Kingdom’s National Audit Office, the NCA has developed various participatory audit methods and other stakeholder involvement activities that audit teams can use during the different stages of the audit process. NCA auditors set out to analyze the main policy actors and other stakeholders and their respective interests. Actions such as meetings at the start of the audit and brainstorming sessions to discuss intermediate outcomes and potential remedies are considered and plotted along the timeline of the audit process. Expert and stakeholder panels are an important part of this process.

In addition, the NCA wants to know what happens with audit recommendations after audit reports have been published, so it actively monitors the ministers’ follow-up. The follow-up audits address two dimensions: (1) Did the ministries do what they promised to do? (2) What changes—positive and negative—actually took place concerning the issue at stake? Here, much harder questions are asked: How did the follow-up of audit recommendations contribute to the quality of implementation processes? To what degree was the social problem solved?

Reality Checks: Testing Policy Theories, Objectives, and Indicators

“Reality checks” are the NCA’s latest initiative to develop a more responsive approach to performance audit. From 2009 to 2011, the NCA carried out 20 of them. In the language of innovators and product developers, a reality check is a test to ensure an idea is consistent with the real world. The NCA deliberately uses this rather provocative term.

The purpose of the NCA’s initiative was to provide government and Parliament with information on the effects of public policy programs and on whether those programs are appreciated. The NCA started out by looking at a number of problems that affect citizens and businesses and at the ways in which government translated these problems into policy objectives and measures. The pivotal question was: what do policy measures actually contribute to solving these problems, as judged by (1) the policy target group (the persons, businesses, or institutions experiencing a particular set of problems) and (2) the available evidence of program effectiveness?

The NCA used the following questions in its audits:

  • How do these stakeholders relate to the objectives, criteria, and intervention logic (or policy theory) of central government?
  • Do they recognize the relevance and value of those elements, and do they use them?
  • How do they appraise the actual interventions or policy tools and the way the policy program has been implemented?
  • To what degree did the targeted spending actually reach its target?
  • What concrete effect has the policy had on those who are directly concerned?
  • What information is provided by central government on the effectiveness of the policy measures taken?

NCA auditors interviewed representatives from target groups and other stakeholders to find out what they thought of the official objectives and criteria of policy interventions or measures. In addition, the actual use of policy schemes by target groups was analyzed, using available data from agencies, statistics, and monitoring and evaluation reports. Where suitable and possible, auditors observed real-life negotiations, meetings, transactions, and so on. Finally, the results were put forward to the responsible policymakers for a reaction.

Two examples of these reality checks involve improving the security of small businesses and improving the energy efficiency of homes. In the first case, a subsidy scheme encourages small businesses to take preventive measures against crime. A key element of the policy program was compensating small businesses for the costs of tailored advice on improving security. In practice, however, many of the security consultants and specialists hired worked with standard formats. The NCA found that although 80 percent of the target group found the safety scan useful, only 8 percent thought the scan was worth 350 euros.

In the second example, which involved three subsidy schemes to improve the energy efficiency of existing homes, the NCA found a clear tension between the objective of relieving participating homeowners of the administrative burden of applying for the subsidies and declaring costs and the reluctance of those homeowners to leave these tasks to contractors.

The NCA reported its findings to Parliament in May 2010 and May 2011. The reality checks revealed that the link between policy and practice leaves a great deal to be desired. Although expected outputs (grants, subsidies, websites, and advice) were mostly delivered according to plan, recipients were either not very familiar with them, could not use them, saw no need for them, or expressed a greater need for other measures. The choice of instruments can be characterized as supply-side driven. The NCA often observed a “shotgun” approach in government policy. Instead of evidence-based interventions—where there would be a clear, well-founded relationship between intervention and effect—policymakers often rather rashly opted for subsidies or other financial incentives. “Money will work wonders” seemed to be the dominant perspective, even when other factors (cultural, technical, or judicial) lay behind societal problems.

The reality checks showed the importance of policymakers actively testing whether the assumptions behind their policies make sense, both prior to the introduction of policy measures and during their implementation. Good ex ante evaluations, however, are often lacking. The NCA found a lack of relevant data and information with which to track the implementation and results of policy interventions and make the necessary corrections.

In many cases, the NCA found that the way policy interventions were designed led to rather complex implementation processes. The many links between government, intermediate organizations, stakeholders, and target groups create long delivery chains and the risk of too many hands being involved. As a result, bureau politics and institutional logic may come to crowd out societal relevance, posing a risk to both efficiency and effectiveness.

Conclusions and Challenges

While policy objectives and indicators are important aids, performance auditors should also be aware of the risks they bring. Becoming more responsive, however, is not an easy process. Recent NCA initiatives to bring more responsiveness into its performance audits illustrate the challenges involved.

First, there is the problem of timing. Early warnings in the form of reality checks may prevent inefficient or ineffective programs from going from bad to worse. Still, for ministries and agencies, early audits and evaluations may create the feeling that a program has not been given the opportunity to demonstrate its worth.

Second, responsiveness should not be mistaken for indiscriminately using stakeholders’ concerns and complaints against auditees. It is relatively easy to find complaints about and disparities between public policy program intentions and target group needs and wishes. Even when government interventions appear to lack relevance in the eyes of respondents, there may be very good reasons for those interventions.

Third, there should be fairness in criticizing the use of policy objectives and performance indicators. The reality checks showed that one size does not fit all when it comes to policy initiatives. Still, objectives and criteria are and always will be deliberately simplified versions of desired realities, which may provide the focus needed for debate, implementation, and evaluation.

Responsive performance audit does not mean that auditors should abandon their traditional criteria for judgment or become merely passive receivers of other people’s interpretations and criteria. Neither should performance auditors join the lamenting choir that denies all benefits from policy objectives, performance targets, and indicators.

In a democratic society, government has a moral duty to organize its own responsiveness and reflexivity. With the rise of the Internet and new social media, there are now more possibilities than ever to engage the broader public in policymaking and evaluation. Learning and the organization of policy-oriented learning are crucial in this process, and performance audit has an important role to play.

For additional information, contact the author, currently director for performance audit with the NCA, at Peter.vanderknaap@rekenkamer.nl.

[1] INTOSAI, Implementation guidelines for performance audit: Standards and guidelines for performance audit based on INTOSAI’s Auditing Standards and practical experience, International Standards of Supreme Audit Institutions 3000 (2010), p. 11.

[2] Marie-Louise Bemelmans-Videc, Jeremy Lonsdale, and Burt Perrin, eds., Making Accountability Work: Dilemmas for Evaluation and for Audit (New Brunswick, N.J.: Transaction Publishers, 2007), p. 133.

References:

Bemelmans-Videc, Marie-Louise, Jeremy Lonsdale, and Burt Perrin, eds. Making Accountability Work: Dilemmas for Evaluation and for Audit. New Brunswick, N.J.: Transaction Publishers, 2007.

Institute of Internal Auditors (IIA). The Role of Auditing in Public Sector Governance. 2006.

INTOSAI. Implementation guidelines for performance audit: Standards and guidelines for performance audit based on INTOSAI’s Auditing Standards and practical experience, International Standards of Supreme Audit Institutions 3000. 2010.

Lonsdale, Jeremy, Tom Ling, and Peter Wilkins. Performance Auditing: Contributing to Accountability in Democratic Government. Cheltenham, U.K.: Edward Elgar Publishing, 2011.

Netherlands Court of Audit. Staat van de beleidsinformatie 2010. The Hague, 2010.

Netherlands Court of Audit. Staat van de Rijksverantwoording 2011. The Hague, 2011.

Pollitt, Christopher, Xavier Girre, Jeremy Lonsdale, Robert Mul, Hilkka Summa, and Marit Waerness. Performance or Compliance? Performance Audit and Public Management in Five Countries. Oxford, U.K.: Oxford University Press, 1999.

Van der Knaap, Peter. “Policy Evaluation and Learning: Feedback, Enlightenment or Argumentation?” Evaluation 1, no. 2 (1995): 189–216.

Van der Knaap, Peter. “Responsive Evaluation and Performance Management: Overcoming the Downsides of Policy Objectives and Performance Indicators.” Evaluation 12, no. 3 (2006): 278–293.