International Journal of Government Auditing – October 2012
Editor's Note: The authors work for a federal government agency in Canada that employs more than 40,000 individuals in over 50 offices located across 10 provinces and 2 territories. In this article, they share a new methodology for selecting audit sites that they have applied in their recent internal audit work.
Several years ago, after finishing the planning phase for the first-time audit of a program area, we faced one key unanswered question: Where should we go to gather the evidence we would rely on in order to address our audit objectives?
In past audit engagements, we had selected sites using our judgment, informed by qualitative and quantitative information from concurrent audits, reviews of comparative data (volumetrics), analyses of program monitoring information, and other sources. While far from comprehensive, these elements had generally been sufficient guides for selecting a sample of sites. However, given the increasingly fast-paced change in the operations we audit, we realized we had to develop new ways of conducting audits. We therefore adopted a risk-based methodology to support our site selection and add value to our process.
As with most audits, we began with preliminary interviews with the key client: the senior manager responsible for the program area being audited. We discussed the role of internal audit and described the process we would follow: why we were carrying out this particular audit and what the client could expect over its course (i.e., good communication, regular updates, etc.). The client talked about the unit’s operations, pressures, service approach, mandate, deliverables, and a recent reorganization the organization had undergone. However, no major risks were mentioned, and the client felt that, in general, the program was well managed.
It is important to emphasize that the program area for this client had never been audited. While we were auditing a specific operational area, the division is also responsible for other programs. Supporting its recent operational changes involves producing a large volume of analyses (presentations addressed to senior management, documents, flowcharts, innovation benchmarking, research, etc.).
We felt we could gain a better understanding of our client’s needs by providing them with more than the usual amount of information about our audit process and objectives, and that this would benefit both parties in the long run. As auditors, we knew that maintaining our independence meant we need not share everything with our clients over the course of the audit; however, for this particular first-time auditee, it was advantageous to adapt our auditing style to obtain their support and buy-in to the audit process. To accomplish this, we discussed our criteria and the results of our site selection analysis with the client early in our work.
After gathering and reviewing information about the program under audit, we realized that using our traditional comparative site selection approach would be challenging: the program area had operational sites in 5 geographical regions, comprising 12 main offices and 40 smaller points of service. We therefore applied a risk-weighted site selection model in parallel with our traditional comparative method.
The elements of the risk-based audit site selection model are outlined in figure 1. This section describes the steps we followed to implement the model in our audit.
Figure 1: The Risk-Based Site Selection Model
We began our risk-based selection process by using the same data (tabulated by region) that the client had made available to us. The metrics were transformed into indicators judged relevant to site selection in light of the audit objectives. Each indicator was then ranked from 1 to 5 across regions, with a rank of 1 assigned to the region whose indicator value was the most critical to site selection. Ranks for all five regions across all 17 indicators were determined, and the ranks for each region were summed into a total score and tabulated in an unweighted site selection matrix. As shown in the second column of table 1, regions A, B, C, D, and E were then ranked 4, 3, 1, 5, and 2, respectively.
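As a rough illustration, the unweighted ranking step might be sketched as follows. The region names, indicator names, and values below are invented for demonstration (the actual audit used 17 indicators, not 3), and the ranking rule follows the convention described above: rank 1 goes to the most critical value.

```python
# Hypothetical sketch of the unweighted site selection matrix.
# All region names and indicator values are invented.

def rank_regions(values, critical="high"):
    """Rank regions 1..n for one indicator; rank 1 goes to the most
    critical value (highest by default, lowest if critical='low')."""
    order = sorted(values, key=values.get, reverse=(critical == "high"))
    return {region: rank for rank, region in enumerate(order, start=1)}

# Three illustrative indicators (value table, criticality direction).
indicators = {
    "unit_cost":      ({"A": 4.1, "B": 3.2, "C": 5.0, "D": 2.8, "E": 3.9}, "high"),
    "backlog":        ({"A": 120, "B": 300, "C": 450, "D": 90,  "E": 210}, "high"),
    "staff_turnover": ({"A": 0.05, "B": 0.12, "C": 0.20, "D": 0.03, "E": 0.08}, "high"),
}

# Sum each region's ranks across all indicators into a total score.
totals = {region: 0 for region in "ABCDE"}
for values, direction in indicators.values():
    for region, rank in rank_regions(values, direction).items():
        totals[region] += rank

# The lowest total score indicates the highest priority for selection.
final = sorted(totals, key=totals.get)
print(totals, final)
```

With these invented figures, region C receives rank 1 on every indicator and therefore the lowest (most critical) total score.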
We then developed a risk-adjusted matrix (RAM) to factor in the risk events (REs) identified in our preliminary risk assessment, which drew on preliminary interviews, environmental scanning, and similar sources. Each RE was phrased as a specific change in a set condition that would affect the program being audited (for example, the risk that information the program managed about one client would be sent by mistake to another client). Each RE was then assessed against the likelihood of its occurring, the impact it would have, and the current controls in place to mitigate it. We also made assumptions about the risk drivers, or catalysts, behind these REs (i.e., how likely is the catalyst to materialize?).

The REs were then cross-referenced to the indicators they best described. In many cases, more than one relevant RE was tied to a single indicator, so the scores (ranging from 3 to 27) of all relevant REs were averaged and then normalized by dividing by the maximum allowable risk score (27) to yield the relative risk of the indicator (RRI), a coefficient between zero and one (0 < RRI ≤ 1). The RRI scores for each indicator populated the risk-adjusted matrix.

The RRIs were then applied to the unweighted site selection matrix to generate the risk-weighted site selection matrix. Rankings for each region across all 17 indicators were recalculated and totaled to determine the risk-weighted site selection rankings. Under this model, regions A, B, C, D, and E were now ranked 3, 2, 1, 5, and 4, respectively. Table 1 below summarizes the regional rankings under the different site selection methods.
Table 1 - Regional Rankings Using Different Site Selection Methods
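One plausible reading of the RRI and risk-weighting steps described above can be sketched as follows. The RE scores, indicator names, and unweighted ranks are invented (the article does not publish its data), and weighting each rank by its indicator's RRI before re-totaling is an interpretation of the recalculation step, not the authors' published formula.

```python
# Hypothetical sketch of the risk-adjusted matrix (RAM) step.
# RE scores and the RE-to-indicator mapping are invented; per the
# article, RE scores range from 3 to a maximum allowable score of 27.

MAX_RISK = 27  # maximum allowable risk event score

# Risk events cross-referenced to each indicator (invented scores).
re_scores_by_indicator = {
    "unit_cost":      [18, 12],      # multiple REs are averaged first
    "backlog":        [27],
    "staff_turnover": [9, 15, 6],
}

# Relative risk of the indicator: average RE score normalized by the
# maximum allowable score, giving a coefficient in (0, 1].
rri = {ind: (sum(scores) / len(scores)) / MAX_RISK
       for ind, scores in re_scores_by_indicator.items()}

# Unweighted ranks per indicator (invented; 1 = most critical).
unweighted_ranks = {
    "unit_cost":      {"A": 2, "B": 4, "C": 1, "D": 5, "E": 3},
    "backlog":        {"A": 4, "B": 2, "C": 1, "D": 5, "E": 3},
    "staff_turnover": {"A": 4, "B": 2, "C": 1, "D": 5, "E": 3},
}

# Weight each rank by its indicator's RRI and re-total per region.
weighted_totals = {region: 0.0 for region in "ABCDE"}
for ind, ranks in unweighted_ranks.items():
    for region, rank in ranks.items():
        weighted_totals[region] += rri[ind] * rank

# Lowest weighted total = highest priority under the risk-weighted model.
risk_weighted_ranking = sorted(weighted_totals, key=weighted_totals.get)
print(rri, risk_weighted_ranking)
```

The effect of the weighting is that indicators tied to higher-scoring risk events pull more strongly on the final regional ordering, which is how the filter of the risk assessment can reorder regions relative to the unweighted matrix.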
The site selection exercise demonstrated that both the traditional comparative method and the risk-weighted model identified region C as the highest risk. However, for the remaining four regions, viewing our site selection indicators through the filter of our risk assessment yielded slightly different results. These results were shared with the client, who confirmed they were a truer reflection of the current environment.
In our opinion, the risk-weighted site selection model added value by getting the required buy-in from the client while also providing a thorough quantitative tool to support our audit methodology. This model provided the extra step needed for a more open discussion with the client about the real issues. The model is still experimental, but once it is fully developed, it could easily be applied to a continuous monitoring program, a cyclical planning exercise, or an annual risk planning exercise. It integrates the initial risk assessment process with the site selection audit step. If populated on a regular basis (monthly, quarterly, or annually), the model can be adapted to operational needs and can truly reflect internal or external changes. It could be adapted for any segregated environment (such as commodity, sector, geography, or threshold) against the client’s monitoring or performance information. At the same time, it does require extra analysis, and audits under tight time frames or resource constraints may not be able to take this extra step to quantify their risks.
The risk-weighted site selection model relies mostly on the client’s own monitoring and performance data to create the indicators, which can reduce its accuracy depending on the reliability and validity of the information provided. This is where we believe the risk adjusted matrix values can compensate. A risk adjusted matrix value gives an order of importance to the indicator and, by extension, to the monitoring information provided, which can vary through time. What was deemed good information to monitor in the past may no longer be as relevant today. This is where this risk-weighted approach brings discipline to a monitoring function that may be automated. Ultimately, the risk-weighted site selection model provides a balance between performance monitoring information and assessment of risk for better informed decision-making.
Translating the information from your initial risk assessment to the other steps of your audit allows for a detailed, documented approach that an external reviewer can verify and reproduce. Such a process provides an excellent foundation for discussion with your client and can be used to build a stronger audit program. Combining the audit risk assessment with what data the client may already have provides an excellent opportunity to tap into existing information for a more accurate decision-making process, which can result in savings in planning time, travel costs, and resource allocations.
As with any model, the usefulness of the risk-weighted site selection model will depend on the availability and quality of the raw data used to develop the appropriate indicators it requires. The reliability of the model’s final site selection rankings will also depend on the rigor of the audit risk assessment. It may be beneficial to add a second filter to the risk assessment matrix to further refine the process. Some will argue, however, that additional refinement comes at a cost of increased bias. Although this model was applied to an operational audit environment, it could also be applied to a performance assessment context or perhaps even an enterprise-level risk management process by providing a more robust and objective (third party) quantification of a risk mapping exercise.
For example, the quotient of Total Budget (Salary and Operating and Maintenance) and Total Volume (processed) was calculated for each region to determine a relative per unit cost indicator, which was thought to be more relevant to site selection (for example, for targeting regions whose unit costs were relatively higher or lower).
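The per unit cost calculation described above amounts to a simple quotient per region. The budget and volume figures below are invented for illustration only.

```python
# Hypothetical per unit cost indicator: total budget (salary plus
# operating and maintenance) divided by total volume processed,
# computed per region. All figures are invented.
budget = {"A": 4_000_000, "B": 3_600_000}   # dollars
volume = {"A": 1_000_000, "B": 800_000}     # units processed
unit_cost = {region: budget[region] / volume[region] for region in budget}
print(unit_cost)  # dollars per unit processed
```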
For example, the region with the highest per unit cost was given the rank of 1, and the region with the lowest per unit cost was given a rank of 5. Had our indicator been Total Volume per Budget Dollar, the region with the lowest volume processed per dollar would have received a rank of 1, and vice versa.
Our actual model calculated the risk-adjusted matrix value as the product of the relative significance of the indicator (RSI) and the relative risk of the indicator (RRI).
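This product can be expressed in one line. The example assumes, as an illustration, that the RSI is also expressed as a coefficient between zero and one, which the article does not state explicitly.

```python
# Hypothetical sketch: risk-adjusted matrix (RAM) value as the product
# of the indicator's relative significance (RSI) and relative risk (RRI).
# Assumes both are coefficients in (0, 1]; the RSI range is our assumption.
def ram_value(rsi, rri):
    assert 0 < rsi <= 1 and 0 < rri <= 1
    return rsi * rri

# e.g., RSI of 0.8 and an average RE score of 15 out of 27:
print(ram_value(0.8, 15 / 27))
```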
Some monitoring tools can be fully automated by relying on complex software algorithms. Others are dated, even outdated, and yet are routinely performed despite their being unable to provide current and meaningful information to senior management.