February 2020

Our purpose: We solve the greatest challenges through innovative science and technology.
Our vision: We are Australia's innovation catalyst, collaborating to boost Australia's innovation performance.

Contents
Introduction
  The centrality of impact for CSIRO
  Why evaluate impact?
  Why produce an Impact Evaluation Guide?
  Why has CSIRO publicly released the Guide?
  Source materials
Overview
  CSIRO's Impact Evaluation Principles
STEP 1: Establishing the purpose and audience
STEP 2: Identifying the impacts
  2.1 Background
  2.2 Identifying impacts
  2.3 Establishing the impact pathways
  2.4 Finalise method selection
STEP 3: Clarifying the impacts
  3.1 Counterfactual
  3.2 Attribution
  3.3 Adoption
STEP 4: Evaluating the impacts
  4.1 Selecting the appropriate mix of methods
  4.2 Estimating costs
  4.3 Externalities, spill-overs and economic flow-on effects on non-users
  4.4 Distributional effects on users
  4.5 Inflation adjustment and discounting
  4.6 Documenting assumptions and decisions
STEP 5: Aggregation and comparability of impacts across programs of work
STEP 6: Sensitivity analysis and reporting
  6.1 Sensitivity analysis
  6.2 Reporting impact evaluation findings
APPENDIX A: CSIRO's Impact Framework
APPENDIX B: Evaluation types
APPENDIX C: CSIRO's Impact Categories
APPENDIX D: Real options analysis
APPENDIX E: Valuing non-market impacts
APPENDIX F: Sensitivity analysis
References

Figures
Figure 1: CSIRO's Impact Evaluation Process
Figure 2: CSIRO's Impact Framework
Figure 3: Indicative Impact Adoption Profile
Figure A1: CSIRO's Impact Framework
Figure A2: Economic Impact of a Hypothetical CSIRO Hydrogen Project
Figure E1: Selecting a Non-Market Valuation Method – Initial Questions

Tables
Table 1: Purposes and audiences of CSIRO impact evaluations
Table 2: CSIRO's impact categories
Table B1: Comparison of evaluation types
Table C1: Economic impact categories
Table C2: Environmental impact categories
Table C3: Social impact categories
Table E1: Advantages and limitations of common monetisation methods

Boxes
Box 1: What is impact for CSIRO?
Box 2: Data collection for the counterfactual
Box F1: Conducting a Sensitivity Analysis

Tasks
Task 1: Identify purpose and audience
Task 2.1: Background
Task 2.2: Identify impacts and their categories
Task 2.3: Impact pathways
Task 2.4: Finalise selection of evaluation methods
Task 3.1: Counterfactual
Task 3.2: Attribution
Task 3.3: Adoption
Task 4.1: Evaluation approaches
Task 4.2: Estimating costs
Task 4.3: Externalities, spill-over and economic flow-on effects on non-users
Task 4.4: Distributional effects on users
Task 4.5: Discounting
Task 4.7: Documenting assumptions and decisions
Task 5: Aggregation
Task 6.1: Conducting a sensitivity analysis
Task 6.2: Reporting

Introduction

The centrality of impact for CSIRO

We solve the greatest challenges through innovative science and technology. We are Australia's innovation catalyst, collaborating to boost Australia's innovation performance. Source: strategy.csiro.au

CSIRO, the Commonwealth Scientific and Industrial Research Organisation, was established to produce positive impact for the people of Australia. CSIRO's origins date back to 1916, with the formation of the Commonwealth Advisory Council for Science and Industry1, and since that time CSIRO has grown to become one of the largest industrial research and innovation organisations in the world. CSIRO now produces public benefits across a broad range of research areas including agriculture, health, biosecurity, information technology, energy, environmental sciences, manufacturing, and mineral resources. In addition, the organisation manages research facilities for the nation and provides services such as Education and Outreach, connection to the SME sector, and futures. Working from sites across the nation and around the world, the aim of every CSIRO staff member is to create value for our stakeholders through innovation that delivers positive impact for Australia.

Box 1: What is impact for CSIRO?
CSIRO defines impact as: "An effect on, change or benefit to the economy, society and environment, beyond contributions to academic knowledge". For the purposes of CSIRO's impact evaluations, impact is the effect of CSIRO work that is generated after this work has been adopted.

1 This body was replaced in 1920 by the Commonwealth Institute of Science and Industry, in turn replaced in 1926 by the Commonwealth Council for Scientific and Industrial Research, in turn replaced in 1949 by CSIRO.

Why evaluate impact?

Simply stating the goal of producing positive impact is not enough. For CSIRO to fulfil its purpose, each year it must provide its stakeholders (and itself) with robust evidence that this goal is actually being accomplished.
This, then, is the purpose of CSIRO's impact evaluation activities: to provide a firm evidence base of the effects of CSIRO's research and innovation activities on the economy, environment and society.

Industrialised economies are increasingly relying on technology development and deployment to raise productivity, and thereby increase economic competitiveness, as well as to address other significant challenges beyond economics. Equally important, the complexity of new technologies, combined with the pressure to develop and deploy them quickly, mandates more efficient technology-based growth and innovation models. Managing technology development and deployment programs requires not only a solid rationale, but also real-time management and ex post evaluation of the nature and impacts of these programs.

The main drivers behind CSIRO's increasing interest in evaluating its research impact are represented in the 4 A's of impact evaluation.2 Evidence generated through impact evaluation is provided to key stakeholder groups including:
• Government, for the purposes of accountability as required under legislation3 and by principles of better practice performance management;
• CSIRO's leadership, to inform future funding allocation in areas that show the greatest promise;
• CSIRO researchers and business development managers, to support analysis on how to improve CSIRO's research and innovation activities; and
• the Australian public, to communicate and advocate for the vitally important role that science, research and innovation play in ensuring Australia's security and prosperity.

Ultimately, the value of an impact evaluation is measured by the strength of the evidence produced and the credibility of the evaluation to its intended audience(s). Most particularly, though, it is demonstrated by the use of the evaluation information to inform and improve future decisions and actions. For these reasons, CSIRO actively seeks to ensure that its research evaluation reports are well utilised by their intended audiences.

The 4 A's of impact evaluation:
• Allocation – informing strategy; informing investment decisions for greater returns; aligning capabilities to customer needs.
• Advocacy – evidence-based articulation and communication of the value of our work; greater confidence for stakeholders.
• Analysis – greater awareness of collective action and impact; informing future program design and delivery; improvement in performance management.
• Accountability – to Parliament, our clients and the public; required by funding bodies and legislation (PGPA Act 2013).

2 Adapted from Adam, et al., 2018.
3 Most particularly, CSIRO's establishing legislation and the Public Governance, Performance and Accountability Act 2013.

Why produce an Impact Evaluation Guide?

CSIRO's research activities and their impacts are diverse in nature and occur across many sectors of the economy. Some impacts can be evaluated quantitatively using economic analysis or statistical methodologies, and the results may be able to be expressed in monetary terms. Other types of impacts – especially those relating to environmental or social effects – may have to be evaluated qualitatively. Ultimately though, each impact must be assessed within the context of a common framework if a comprehensive understanding of CSIRO's impact and return on investment is to be developed. This Impact Evaluation Guide articulates such a common framework; its consistent and rigorous use across CSIRO supports comparability of results from each evaluation – across business units, and across time.
The Guide describes the minimum requirements for all CSIRO impact evaluations, regardless of the purpose of the evaluation or the 'unit of evaluation' (which could be an individual project, subject area, business unit or the whole enterprise). It guides researchers, CSIRO staff and engaged external support to address key relevant questions in a logically consistent manner, to select the appropriate resources and methods in the evaluation of CSIRO research, and to ensure consistency in analysing results.

Why has CSIRO publicly released the Guide?

The Guide has been publicly released because the challenge of demonstrating impact is one that faces all publicly funded research in Australia. This challenge was heightened by the introduction of the Public Governance, Performance and Accountability Act 2013, which strengthened the planning, performance and reporting requirements for all Australian Government departments and agencies. CSIRO believes that it is beneficial for the broader innovation system for Australia's publicly funded research organisations to use a common approach to the assessment of the outcomes and impacts of their research. Doing so will allow the outputs of all such evaluations to be used collectively to demonstrate the significant public benefits that are constantly being generated by public funding for science, research and innovation. The collective results of such evaluations can also be used by funding agencies, government departments, and academic analysts in support of improving Australia's innovation system performance.

A secondary motivation for the public release of this Guide is to foster dialogue with CSIRO's peers relating to research impact evaluation. CSIRO seeks to continually improve its own practices and to strengthen its own internal evaluation culture. Such changes will occur most rapidly and effectively to the degree that CSIRO staff are able to compare their evaluation efforts with those undertaken within other research organisations.

It is important to note that the Guide is not being publicly offered because it advocates a new and different methodology to those applied elsewhere. On the contrary, the overall approach proposed within the Guide has been chosen so as to conform with both Australian Government standards and international 'best practice'.

View the guide at: www.csiro.au/en/About/Our-impact/Evaluating-our-impact

Source materials

As noted, the methodological approaches set out in the Guide have been developed to harmonise and/or accord with the advice provided by relevant Australian Government departments and agencies. This advice includes:
• Resource Management Guide No. 131: Developing good performance information – Department of Finance, 2015
• Guidance note: Cost-benefit analysis – Department of the Prime Minister and Cabinet, Office of Best Practice Regulation, 2014
• Environmental Policy Analysis: A Guide to Non-Market Valuation – Productivity Commission Staff Working Paper, 2014
• Valuing the Future: the social discount rate in cost-benefit analysis – Productivity Commission Visiting Researcher Paper, 2010
• Guidelines for assessing the impacts of ACIAR's research activities – Australian Centre for International Agricultural Research (ACIAR), 2008
• Handbook of Cost-Benefit Analysis – Department of Finance and Administration, 2006
• Introduction to Cost-Benefit Analysis and Alternative Evaluation Methodologies – Department of Finance and Administration, 2006

Because it seeks to align with Australian Government practice, this Guide is subject to update as developments occur across the Australian Government, and particularly as part of the Public Management Reform Agenda. Evaluators should consult the most current versions of these resources, apply any updated guidance provided therein, and discuss appreciable differences between them and this Guide with CSIRO.

Other sources of reference material relevant to this Guide include:
• The Handbook of Practical Program Evaluation, 3rd Edition (Wholey, 2010) – available to CSIRO staff through the CSIRO intranet
• Measuring research: A guide to research evaluation frameworks and tools (RAND Europe, 2013)
• HM Treasury Green and Magenta Books
• Research Excellence Framework (REF) 2014 impact case studies
• Research Councils UK Pathways to Impact
• U.S. National Institute of Standards and Technology: The Theory and Practice of Public-Sector R&D Economic Impact Analysis
• U.S. Department of Energy: Evaluating the Realized Impacts of DOE/EERE R&D Programs
• Handbook on the Theory and Practice of Program Evaluation.

Overview

CSIRO's Impact Evaluation Principles

To ensure consistency in the application of CSIRO's Impact Framework (refer Appendix A) and to maximise the opportunity to compare evaluation results, CSIRO has adopted a series of core evaluation principles that are to guide all research impact evaluations conducted by, or on behalf of, CSIRO. The principles are:

1. Impact evaluation4 should be designed to document effective outcomes – the purpose and intended audience must drive the design of the impact evaluation. If appropriation funding was used to conduct the research, then the Australian Government must be considered part of the intended audience.

2. CSIRO is interested in identifying all of the significant impacts (positive and negative, intended and unintended) of its research interventions using a triple-bottom-line lens – i.e. considering economic, environmental and social impacts. Difficulties in evaluating a specific research impact should not discourage its evaluation.

3. As with planning for impact, and monitoring progress towards it, it is important to engage with clients and other stakeholders during the impact evaluation to ensure a more complete investigation, and a more thorough understanding, of the outcomes and impacts and any associated usage or adoption costs. The value of CSIRO work lies with those who adopt the outputs and therefore these users must be consulted regarding the extent of their values. Further, value creation is often driven by the collaborations CSIRO enters into with its key research and industry partners.
A discussion of the nature and value of the relationships relevant for the research project or program under evaluation should be included in the case study report.

4. CSIRO uses cost benefit analysis (CBA) as its primary methodology for research impact evaluation and augments this approach with other evaluation methodologies as appropriate, depending on the nature of the projects, outcomes or impacts being evaluated. Other evaluation methodologies may include statistical approaches (e.g., regression discontinuity, difference in differences), scientometric approaches, or qualitative analyses. Impacts should be measured relative to a baseline and/or relative to a counterfactual under which the project was not supported and prevailing trends continued.

5. Where possible, all impacts evaluated should reference the relevant associated CSIRO Impact Categories, which are fully described in Appendix C, to ensure later comparability and possible aggregation.

6. Where it is appropriate, and it is possible to do so, every effort should be made to quantify and monetise all identified outcomes and impacts – positive and negative. A narrative must be provided to articulate the nature of the outcome or impact, and any assumptions made about it, especially if it is being evaluated through the use of non-market evaluation techniques.

7. All assumptions and key decisions made throughout the evaluation need to be documented in the final Impact Evaluation Report to ensure that the process is transparent, and to enable users of the evaluation findings to know the limits of any future comparison and aggregation across impact evaluations.

8. CSIRO attributes research effort based on a cost share of the total research, development and extension or marketing investment that is necessary to achieve the outputs and outcomes.

9. CSIRO uses a standard real discount rate of 7%. All costs need to be expressed in real dollars (i.e., adjusted for inflation) and then discounted using this real social discount rate. The dollar year for inflation adjustment and the base year for discounting should be the same. Costs are assumed to be incurred at the beginning of a period, and benefits accrue at the end of a period.

10. Where it is at all possible, and in the interests of audit, stakeholders must be asked to validate the quantified and qualitative descriptions of the outcomes and impacts they have benefited from before finalising the impact evaluation.

4 Within the context of CSIRO's Impact Evaluation Guide, the term 'Impact evaluation' is used to refer specifically to ex post impact evaluation. Refer Appendix B for further details.
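To make Principle 9 concrete, the short sketch below shows one possible way of applying the stated conventions (7% real rate, costs at the beginning and benefits at the end of a period). The dollar amounts, the year indices and the exact timing interpretation are illustrative assumptions, not prescribed CSIRO practice.

```python
# Illustrative only: one reading of Principle 9's timing conventions, applied
# to a cost of $1.0m (real) incurred in year 2 and a benefit of $0.5m (real)
# accruing in year 2, with year 0 as the base year for discounting.
RATE = 0.07  # standard real discount rate

def pv_cost(amount, year, rate=RATE):
    """Costs are assumed to be incurred at the beginning of the period."""
    return amount / (1 + rate) ** year

def pv_benefit(amount, year, rate=RATE):
    """Benefits are assumed to accrue at the end of the period."""
    return amount / (1 + rate) ** (year + 1)

print(round(pv_cost(1.0, 2), 3))     # ~0.873 ($m, present value)
print(round(pv_benefit(0.5, 2), 3))  # ~0.408 ($m, present value)
```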
Figure 1: CSIRO's Impact Evaluation Process
STEP 1 – Establishing the purpose and audience: determine what is being sought from the evaluation, for whom, and how the evaluation outcomes will be used.
STEP 2 – Identifying the impacts: determine context, impacts to be evaluated and the pathways connecting them back to CSIRO.
STEP 3 – Clarifying the impacts: create a credible counterfactual, estimate CSIRO's proportional effort, and establish how much impact has been realised.
STEP 4 – Evaluating the impacts: measure each impact, distributional effects and externalities.
STEP 5 – Aggregation and comparability of impacts across programs of work.
STEP 6 – Sensitivity analysis and reporting: perform a reality check and write up the findings.

STEP 1: Establishing the purpose and audience

Impact evaluations may be conducted for a range of purposes and to provide information to a range of audiences. The way a particular evaluation is conducted, its unit of evaluation (e.g. project/program/business unit, etc.), the data that is collected for it, and the methodology used to interrogate that data, are all functions of the evaluation's purpose and audience. Step 1 of CSIRO's Impact Evaluation Process is therefore concerned with establishing these aspects of an evaluation.

Generally speaking, CSIRO impact evaluations are undertaken for one (or more) of the four purposes of impact evaluation (i.e. accountability, allocation, analysis, and advocacy – see the Introduction). Table 1 summarises these purposes in the CSIRO context, as well as the main types of audiences that they relate to.

Table 1: Purposes and audiences of CSIRO impact evaluations
• Accountability – Purpose: to provide evidence that research funding has been used effectively and in line with its initial intent. Audience: external regulatory or funding bodies (e.g. Treasury, Australian National Audit Office).
• Allocation – Purpose: to assess progress and inform future allocation of research funding to ensure that resources are used in the best and/or most efficient way. Audience: internal – Science, Strategy, Impact and Investment Committee and business unit reviews.
• Analysis – Purpose: to understand the reasons for success/failure of research outcomes and identify lessons learnt and areas for improvement. Audience: internal – business unit/program/group reviews.
• Advocacy – Purpose: to demonstrate benefits and 'make the case' for a specific research area under the program of work, including CSIRO's social licence to operate in particular fields. Audience: community, industry, and other external organisations and the broader public.

In light of CSIRO's commitment to evaluation, all relevant parts of CSIRO should gather and store information that could support future impact evaluations. This includes (but is not limited to) any market or technical publications describing the rationale for action and investment; any analysis results describing or detailing the advantages, disadvantages, costs, or benefits of the solution; and basic data about uptake and usage by different stakeholders (e.g., who, when, where, why, how many, etc.).

TASK 1: Identify purpose and audience
Identify the purpose and audience of the impact evaluation that is being undertaken. If there are multiple purposes and audiences, determine their relative priority.

STEP 2: Identifying the impacts

Logically, before an impact can be evaluated it must first be identified; and for an impact to be claimed as a 'CSIRO impact' there must be a clear pathway leading from the impact back to CSIRO. Hence, Step 2 of the evaluation process involves identifying the impacts to be evaluated, the pathways connecting them back to the research and innovation activities undertaken within the unit of evaluation, and their broader context.

2.1 Background

All CSIRO research projects/programs are undertaken with respect to a certain challenge.
This challenge might have been (among other things):
• an innovation, or improvement to existing technology or processes, required by a business;
• a technology, material, compound, process, organism, or phenomenon, which CSIRO researchers were investigating;
• a new task or capability, that CSIRO or its partners were seeking to perform or to develop;
• a local, regional, national or global need that CSIRO was addressing through its research or innovation activities.

Because the impacts being evaluated arose as a consequence of CSIRO addressing the challenge, they will only be properly understandable when described in that context. For this reason, the evaluation report must provide sufficient details of the challenge to make it clear why CSIRO was involved. Key details of the challenge will also need to be provided to any consultants who are brought in to undertake a cost-benefit analysis for the evaluation. The rationale for CSIRO investment, action, or participation must be clearly articulated in the final evaluation report.

As factual information on the history of the business, technology, task or issue that provides the context of the impact evaluation will play an essential role in the final evaluation report, the sources of all these facts – obtained through background research – must be credible and verifiable.

TASK 2.1: Background
For each unit of evaluation, consider what elements of context must be known for the full significance of the impacts to be properly understood. Undertake thorough background research and carefully reference all factual material derived from this process.

2.2 Identifying impacts

Impacts may be economic, societal, or environmental. Outcomes and impacts from research can be nuanced and multi-faceted. For example, a new fuel efficiency technology may have economic impacts (reduced fuel expenditure), environmental impacts (avoided emissions from fuel combustion), and societal impacts (avoided adverse health events because of improved air quality, or environmental justice for communities near particularly intensive use of the predecessor technology). Further, if the technology is embodied in a new product there may be GDP, employment, and labour income impacts. Although they may not all be quantifiable, all dimensions of impact are important to CSIRO.

• Economic impacts: impacts on an economic system at a local, national or global level, such as changes in revenue, operating costs, profitability, gross domestic product, employment or investment returns.
• Societal impacts: impacts on the well-being of the surrounding and wider community, including effects on health, equality, living standards, cohesion, resilience, security and safety practices.
• Environmental impacts: impacts on living and non-living natural systems, including ecosystems, land, air and water.

Table 2 provides the current list of subcategories within each of these master categories. A more detailed description of each of the sub-categories is provided within Appendix C.
Table 2: CSIRO's Impact Categories

Economic impacts: national economic performance; trade and competitiveness; productivity and efficiency; management of risk and uncertainty; policies and programs; new services, products, experiences and market niches; securing and protecting existing markets; innovation and human capital (creativity and invention).

Environmental impacts: air quality; ecosystem health and integrity (natural capital); climate; natural hazards mitigation; energy generation and consumption; land quality; animal health; aquatic environments; built environments.

Social impacts: health and wellbeing; access to resources, services and opportunities; quality of life (material security and livelihoods); safety; security (e.g. cyber, biological, civil and military); resilience and prosperity; Indigenous culture and heritage; social cohesion (social inclusion, social capital and social mobility).

This list provides a good starting point for identifying impacts, but it is not exhaustive of all possible impacts. If a project being evaluated has generated impacts that are not on this list then they should be included regardless.

Although impacts may be multifaceted, care must be taken not to double count impacts. This is especially true when impacts are being monetised using either market or non-market valuation methods. For example, continuing our example from above, combining fuel cost savings and avoided expenditures to treat adverse health events is acceptable because the values being combined reflect distinct aspects of the impact. However, combining community members' willingness to pay (WTP) to live in a cleaner environment with avoided health expenditures would be problematic because the WTP estimate likely accounts for the value someone would ascribe to avoiding any health problems they would otherwise face. Doing so would be double counting.

Pay attention to the beneficiaries, either direct or indirect, of the project output. Private benefits are measured in the form of profits or cost-savings that accrue specifically to one party, often the innovator. The innovator may be CSIRO or a firm partnering with CSIRO. Public benefits are benefits that accrue to end users of the innovator's product or service, or those who may benefit (or be affected) because of externalities and spill-overs. The combination of public and private benefits equals social benefits.

Care should also be taken to consider where there are transfers of value between two or more Australian parties. Common transfer payments are sales revenue and royalty payments. If the analysis is conducted from the private perspective, then these are important benefit streams. If the analysis is conducted from the social perspective, revenues and royalties are transfers of value between parties, and therefore may be signals of value creation but not necessarily economic benefits. The exception is when there are royalties and revenues earned from non-Australian sources. This consideration is important because if the analysis is on the returns to CSIRO, then the royalties CSIRO receives from any party can be considered a benefit. If the analysis is on returns to Australian society, then royalties are only a benefit if they come from abroad. This also relates to the above remarks on double counting. Counting royalty payments from an Australian end user of a CSIRO technology would be double counting because the royalty payment likely reflects at least some portion of the benefit the user gains from the technology.
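The perspective-dependent treatment of royalties and other transfers described above can be illustrated with a small sketch. The benefit streams, classifications and dollar values below are hypothetical, and serve only to show how transfers between Australian parties might be screened out of a social-perspective calculation while still counting as returns to CSIRO.

```python
# Illustrative sketch (hypothetical figures): screening benefit streams by
# evaluation perspective to avoid counting transfers or double counting.
streams = [
    # (description, value $m, type, source)
    ("Fuel cost savings to Australian end users", 12.0, "user_benefit", "domestic"),
    ("Royalties paid to CSIRO by Australian licensee", 2.0, "royalty", "domestic"),
    ("Royalties paid to CSIRO by overseas licensee", 1.5, "royalty", "foreign"),
]

def social_benefits(streams):
    # Australian-society perspective: domestic royalties are transfers between
    # Australian parties (and already reflected in user benefits), so they are
    # excluded; royalties from abroad are genuine inflows and are counted.
    return sum(v for _, v, kind, src in streams
               if kind != "royalty" or src == "foreign")

def csiro_benefits(streams):
    # Returns-to-CSIRO perspective: royalties from any party are a benefit,
    # but end users' benefits do not accrue to CSIRO.
    return sum(v for _, v, kind, _ in streams if kind == "royalty")

print(social_benefits(streams))  # 13.5
print(csiro_benefits(streams))   # 3.5
```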
End users' adoption costs reduce benefits; they are not part of the denominator of a benefit-cost ratio. The focus is on the returns to R&D; therefore, the R&D investment costs are in the denominator.

Monetisation of human health impacts should be undertaken with care and following consultation with CSIRO's Performance & Impact Team about the impacts identified and quantified (e.g., avoided number of deaths, avoided number of injuries). Because CBA is CSIRO's primary impact methodology, monetisation may be appropriate in order to aggregate results across multiple case studies. However, the evaluator should provide supporting cost-effectiveness detail that relates the health impact to the project investment. An example would be the number of avoided injuries for every $1 million of project cost. This supporting detail acknowledges the sensitivity around human health impacts and provides supplemental information for stakeholders to assess the meaningfulness of CSIRO's research relevant to mortality and morbidity.

Monetisation related to impacts on Indigenous culture and heritage would need to be discussed with relevant stakeholders to assess the appropriateness of this approach. Assessors need to be mindful of any and all cultural sensitivities.

TASK 2.2: Identify impacts and their categories
Starting with the provided CSIRO Impact Categories, but moving beyond them where necessary, identify the economic, environmental, and societal impacts arising from the project under evaluation. Consider both intended and unintended impacts, and both benefits and adverse consequences. Think broadly about the range of ways in which the project may have led to changes.

2.3 Establishing the impact pathways

An identified impact is only suitable for evaluation if a traceable causal relationship can be shown running from the original research project, through the creation of the research outputs, uptake and adoption outcomes, to the ultimate impacts. This relationship is known as an impact pathway and is encapsulated within CSIRO's Impact Framework, depicted in Figure 2. It consists of inputs (such as staff, infrastructure and IP), activities (such as R&D, collaboration, extension), outputs (such as materials, technologies, processes and skills), outcomes (such as the adoption of the outputs by research partners) and impacts. Further details on the Impact Framework are provided in Appendix A.

Figure 2: CSIRO's Impact Framework – inputs, activities, outputs, outcomes and impact, linked by engagement and feedback.

An impact pathway can be a complex chain of events with a range of variables affecting each link in the chain, especially if the project included early-stage, strategic research. Ideally, outcomes and impacts would have been planned and anticipated using program management tools (such as CSIRO Impact Statements) at the commencement of the research project. Use of established impact plans can greatly reduce the effort required during subsequent evaluation and is recommended as best practice.

While it is preferable for CSIRO research projects to develop impact pathways during the planning phase of project initiation, this will not prevent those projects without previously developed impact pathways from developing them for the first time during an impact evaluation.5 However, research projects with previously developed impact pathways will be advantaged by already having collected monitoring information that can inform impact evaluations.
Projects guided by an impact plan will also have better opportunities to maximise their outcomes and impacts.

TASK 2.3: Impact pathways
Determine if impact pathways have already been set out for the unit of evaluation. If so, then review and update them as required with input from researchers, Business Development staff, and other relevant stakeholders. Otherwise, undertake background research and develop the relevant pathway.

5 Indeed, CSIRO researchers have developed specific methods for this purpose (e.g. Lazarow et al., 2015).

2.4 Finalise method selection

Given the technical nature of CSIRO's project outputs, most impact case studies are performed using CBA pursuant to a non-experimental evaluation design. This is because statistical data are not often collected about the specific application of a CSIRO-developed technology or other research output. If there is sufficient administrative data available about a technology or its adoption, econometric methods may be used to increase the rigour of the evaluation findings. These methods could include regression discontinuity or difference in differences. These methods exploit variation in available data to assess pre/post effects of a technology. Reliance on information collected through interviews alone adds recall as yet another source of measurement error; use of econometric approaches mitigates this potential source of error.

In addition, some CSIRO activities and impacts occur outside of direct technology adoption. For example, user facilities, education, and workforce development programs may have impacts on the local ecosystem that cannot be studied as effectively using CBA, because the focus may not be on the efficiency or productivity of a new technology or process alone. In these instances, it could be useful to employ econometric approaches. For example, in evaluating to what extent a CSIRO facility or program stimulated changes in the regional composition of the workforce, start-up companies, or an industry cluster, understanding the timeline and nature of the CSIRO activity, coupled with available economic data on industry, workforce, and firm composition, affords an opportunity to study how CSIRO supports or nurtures the innovation ecosystem.

Social network analysis is another approach to measuring the impact and influence of entities like CSIRO. Studying the patterns of relationships between different groups of innovators and their collaboration on different topics can provide valuable information about influence and the diffusion/uptake of a new idea or technology. Relationships can be measured using surveys, publications, patent data, and other sources. The utility of network analysis is that it can produce evidence about the linkages between CSIRO and ideas. This information is useful when new technologies are emerging and uptake is not yet strong enough to monetise impacts accurately, but anecdotal evidence suggests that ties to CSIRO are strong. Social network analysis can be paired with bibliometric analysis, as well as other traditional statistical approaches.

In general, given the budget, timeline, scope, and topical significance of the CSIRO case study, assess whether there are novel approaches which could be applied to assess impact.
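Where suitable administrative or survey data exist, the difference-in-differences logic mentioned above can be sketched very simply. The group means below are invented for illustration; a real application would use a properly specified regression with controls and standard errors, and the estimate would feed into the CBA rather than replace it.

```python
# Illustrative difference-in-differences sketch with hypothetical data:
# mean outcome (e.g. yield, t/ha) for adopters of a CSIRO output and a
# comparison group of non-adopters, before and after adoption.
adopters     = {"before": 4.0, "after": 5.1}
non_adopters = {"before": 3.9, "after": 4.3}

change_adopters     = adopters["after"] - adopters["before"]          # 1.1
change_non_adopters = non_adopters["after"] - non_adopters["before"]  # 0.4

# Netting out the trend common to both groups is the same idea as measuring
# against a non-static counterfactual.
did_estimate = change_adopters - change_non_adopters
print(round(did_estimate, 2))  # 0.7 t/ha attributable to adoption (illustrative)
```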
TASK 2.4: Finalise selection of evaluation methods
Select a method that is most relevant for the type of impact the CSIRO activity has delivered. While CBA is the default method, it is not always the most appropriate approach. The extent of uptake and adoption of the CSIRO output should be considered as part of the evaluation method selection process.

STEP 3: Clarifying the impacts

Having established the impacts to be evaluated and that these impacts can be attributed back to inputs and activities undertaken by CSIRO, Step 3 involves clarifying the impact narrative in the light of: (1) what would have happened even with no involvement by CSIRO; (2) the contributions made by other organisations; and (3) how much of the anticipated impact is still to occur.

3.1 Counterfactual

Impact evaluation focuses on those incremental impacts that result from CSIRO's work, which we refer to as net impacts in this section. The reference point for assessing these net impacts is known as the 'counterfactual'. The net impact is estimated by comparing the observed or expected benefits with the counterfactual. The counterfactual is the hypothetical situation that would have occurred in the absence of CSIRO's intervention. When evaluating impact, it is important to be able to rule out alternative explanations for the impact's cause; that is, to convincingly establish the degree to which a particular research intervention is responsible for an observed outcome or impact.

It must be recognised that the counterfactual may not be static. In the absence of action by CSIRO, prevailing technological trends could mean that progress would have occurred, albeit on a longer time scale, at greater cost, or resulting in outputs producing lower efficiency or productivity. Net impacts should be measured relative to a realistic counterfactual.

Counterfactual analysis enables evaluators to attribute cause and effect between the research intervention and the observed or expected outcomes and impacts. By establishing the counterfactual it is possible to isolate the influence of any alternative explanations to reveal the net impact of CSIRO's research. The key challenge for an impact evaluation arises when the counterfactual cannot be directly observed and must be approximated with reference to a comparison group or other intelligence. As discussed in Box 2, a range of accepted approaches exists for determining an appropriate comparison group for counterfactual analysis.

Box 2: Data collection for the counterfactual
Ideally, consideration of the counterfactual will commence during the planning phase of a research program. If a baseline is established before research commences, and evidence of the state of the counterfactual is collected alongside ongoing monitoring activities of a treatment group, then the scientific method provides a ready solution to generating a robust counterfactual (through the use of control and treatment groups). If not, then the task of establishing a counterfactual retrospectively is still achievable, but is more complicated. Retrospective evaluations are usually conducted after the implementation phase and may exploit existing survey data, although the best evaluations will collect data as close to baseline as possible, to ensure comparability of treatment and control groups. One solution might be to look at the situation at the start of the research. Looking at analogous sectors or situations where adoption has not taken place: what are non-adopters doing?
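As a minimal numerical illustration of the net-impact logic discussed above (all figures hypothetical): suppose CSIRO's work brings a benefit stream forward by three years relative to a non-static counterfactual in which a comparable technology would have emerged anyway from another source.

```python
# Illustrative sketch (hypothetical figures): net impact against a non-static
# counterfactual in which an equivalent technology arrives three years later
# from another source. Values are annual benefits in $m over a 10-year window.
with_csiro     = [0, 0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0]
counterfactual = [0, 0, 0.0, 0.0, 0.0, 2.0, 2.0, 2.0, 2.0, 2.0]

net_impact = [w - c for w, c in zip(with_csiro, counterfactual)]
# Only the benefits brought forward by CSIRO's work count as net impact;
# discounting and attribution are applied in later steps.
print(sum(net_impact))  # 6.0 ($m, undiscounted)
```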
Developing a realistic counterfactual

Counterfactual analysis includes:
• an explanation of how changes in the outcomes would have occurred in the absence of a particular program of work;
• a test of the effect of changes in those key variables that define the counterfactual through a sensitivity analysis; and
• the use of control treatments in field experimentation, or a replaced technology identified in adoption surveys, as a starting point in technology-oriented impact evaluations.

When developing a counterfactual it is essential to identify:
1. substitutes that could have led to similar outcomes/impacts;
2. factors outside of CSIRO that may/did influence changes in the outcomes/impacts of interest.

Developing a counterfactual when there are substitutes

In a hypothetical world without the specific research intervention under evaluation having been undertaken, the counterfactual may differ from the status quo because new technologies, product varieties, etc. may have become available from other sources (e.g. as a result of other research work being undertaken nationally or internationally, or through learning by doing). If equivalent substitutes (e.g. technologies or processes) have been developed, then these need to be identified and incorporated into the counterfactual. In this case, CSIRO's research impact would be calculated as the difference between its own research impact and the research impact of the next closest substitute.

Developing a counterfactual when key outside factors influenced change

In the same hypothetical world noted above, the counterfactual may also differ from the status quo as key factors outside of CSIRO's influence change over time. Social behavioural change, change in consumer preferences, environmental changes, macroeconomic trends, and/or regulatory changes may affect the outcomes and impacts of interest. If, for instance, a particular program of work increases agricultural production, then this increase is to be considered net of changes driven by other factors that had an influence on production, such as weather, pest outbreaks, and/or changes to work practices.

TASK 3.1: Counterfactual
For each impact under investigation, consider:
• what would have happened without CSIRO's work?
• are there any substitutes that could have led to similar outcomes/impacts?
• have external factors influenced changes in the outcomes/impacts of interest?

3.2 Attribution

The next step is to consider how much of the observed impact is in fact attributable to CSIRO. This includes consideration of the work of collaborating organisations, and of new inputs beyond the research intervention under investigation.

Attribution of work when collaborating organisations were involved

CSIRO often undertakes work in collaboration with other organisations, including sharing of capability, funds, intellectual property, etc. Therefore, calculating CSIRO's, or another party's, direct proportional effort towards the realised impact (intended or unintended) requires careful consideration of the key roles of participating organisations in a program of work. Specifically, this involves apportioning benefits when more than one organisation has participated in generating and adapting a technology or new idea, or where other inputs into additional work were also required before the impact occurred.
CSIRO uses the practice of attributing effort based on a cost share of the total research, development and extension (RD&E) investment that was necessary to achieve the outputs and outcomes. Attributing impacts with cost shares is particularly useful if the level of effort (or research or other work) involved across organisations is similar. However, if most of the novel work is undertaken by a single organisation then the cost share approach may not be reflective of the actual contribution to research, and attribution shares may have to be adjusted. In order to avoid any discrepancies from impacts generated from multiple programs of work, criticism from partners, and/or self-reporting bias by staff members on their own contribution, it is recommended that, where possible, participating organisations agree on the shares (or apportioning method) to be used for the purpose of the impact evaluation.

Attribution of work when new inputs were involved

In order to apply research, or other works, and translate outputs into outcomes or impacts, a number of new inputs may be required. Those new inputs may in themselves have an effect on outcomes and impacts. In such cases, the outcomes and impacts of CSIRO's work should be net of the outcomes and impacts associated with those new inputs. For instance, new equipment may need to be purchased to apply a new technology developed by CSIRO and thus translate the technology into increased production. In that case, estimated research impacts (i.e. the value of increased production as a result of CSIRO's research) should be considered net of those additional input costs.

TASK 3.2: Attribution
Were any collaborating organisations critical to achieving the outcomes/impacts?
• If yes, determine their proportional effort, establish a defensible share of impacts attributable to CSIRO, and then use that share to calculate 'net' impacts in Step 4.
Were any new inputs, such as new equipment or new skills, critical to achieving the outcomes/impacts?
• If yes, calculate their proportional contribution and then use that proportion to calculate 'net' impacts in Step 4.
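A minimal worked example of the cost-share attribution approach described above; the organisation names, investment figures and the net impact estimate are all hypothetical.

```python
# Illustrative sketch (hypothetical figures): attributing a net impact by cost
# share of the total RD&E investment needed to achieve the outputs and outcomes.
rd_and_e_investment = {          # $m contributed by each participant
    "CSIRO": 3.0,
    "Industry partner": 1.5,
    "University collaborator": 0.5,
}
total_net_impact = 40.0          # $m, already adjusted for the counterfactual

csiro_share = rd_and_e_investment["CSIRO"] / sum(rd_and_e_investment.values())
csiro_attributed_impact = csiro_share * total_net_impact

print(round(csiro_share, 2))              # 0.6
print(round(csiro_attributed_impact, 1))  # 24.0 ($m attributed to CSIRO)
```

As noted above, if most of the novel work was undertaken by a single organisation, a simple cost share like this may need to be adjusted by agreement among the participating organisations.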
3.3 Adoption

It is during the uptake and adoption phase (described in CSIRO's Impact Framework as 'outcomes') that CSIRO work starts to be translated into measurable outcomes and impacts. Uptake and adoption may begin in the form of trials undertaken by CSIRO (i.e. internal use only) or by selected 'next users' (such as industry partners or government bodies, i.e. external use). It is only when outputs are being used externally that work has a practical application to which a realised value can be attached; otherwise such valuations are predictions of potential value. Valuation of impacts based purely on internal trial information or an uncertain uptake profile is outside the scope of a purely ex post6 impact evaluation. However, that type of information may be useful in real options analysis (discussed below), in an ex ante7 impact evaluation, or in monitoring progress towards impact.

Adoption profile

Focusing on those research outputs that are being used externally, impact evaluation is frequently undertaken at a point in time when the adoption level has not yet matured. Following uptake by innovators (refer Figure 3), the adoption level for a technology may increase rapidly as early adopters are engaged, and then level off as the 'late majority' and 'laggards' adopt the new technology/practice.

Figure 3: Indicative impact adoption profile (Source: Rogers, 1995, p. 247)

It is necessary to understand where on the 'adoption profile' the research being evaluated sits, in order to assess the likelihood of further adoption happening over time. It may be possible to develop an uptake and/or adoption profile based on experience with similar research undertaken in the past (within or external to CSIRO). In that case, the impact evaluation should provide justification of the adoption profile being used. Alternatively, mapping of adoption pathways and indicators of progress towards adoption presented in relevant ex ante evaluations could also be appropriate.

The impact should be assessed only on the basis of the additional value derived from the program of work over the evaluation timeframe. If a program of work builds on a previous body of work undertaken by CSIRO, it is likely that the adoption or uptake rates will be influenced positively or negatively by that preceding work. Previous outputs and experience should be considered as part of CSIRO's existing capabilities and stock of knowledge. Any capabilities developed previously, as well as other factors influencing adoption of outputs (e.g. seminars or workshops), should be used to refine the assumptions around adoption rates in the impact evaluation. For example, CSIRO and partners have been operating for over 30 years in cotton research and, as a result, research outcomes involving technological improvements in this field are likely to experience faster adoption rates than would occur in fields where CSIRO has not undertaken previous research. Impacts realised to date and going forward may be from the collective work that CSIRO and partners have achieved in this research space over the last decade rather than from one particular program of work. For the purpose of the impact evaluation, it is important to focus on those specific or recent achievements that are most closely linked to a particular program of work.

6 An evaluation of the impact attributable to a program of work after the research has begun producing one or more outcomes external to CSIRO, regardless of whether the research activity has been concluded.
7 An evaluation of a body of work which either hasn't yet started or has started but has yet to deliver any research outputs (and logically, therefore, no resulting outcomes or impacts have occurred).

Valuing research that has not matured – when to use 'real options' methods

Despite recommendations to wait for a period of time between the delivery of project outputs and impact evaluation, it is inevitable that, on many occasions, impact evaluations confront the realities of impacts where work has not fully matured and adoption has only just begun to occur. In such cases, evaluation can still be undertaken and may include both anticipated and actual outcomes and impacts based on evidence obtained to date. By contrast, in cases in which there is no adoption or evidence of an outcome, impact evaluation cannot be undertaken.
In these circumstances, a real options analysis may be appropriate to assess the option value of research, as opposed to evaluating the realised impact of research. The option value is the present value of research, not from a current impact, but from retaining (or opening up) future options to use it at a later stage, and it can be considered as a risk management approach. For example, research into how to address a potential biosecurity hazard creates the option to act in a timely way in the future should that biosecurity threat occur. This option would not be available without the research occurring now. Importantly, a real option can have value now even if the future option is never exercised, in much the same way that an insurance policy can be valuable even though a claim is never made. An example of applied real options analysis is provided within Appendix D.

TASK 3.3: Adoption

For each program of work under consideration, determine:
• whether outputs are being used externally to CSIRO and to what extent
• the likely uptake profile for the outputs of the program
• the influence that previous work undertaken by CSIRO may have had on that uptake profile.

Consider application of real options analysis in situations where:
• impacts have not matured and there are no current impacts, but there is a probability that there will be impacts from the research in the future;
• research has taken place in an area where there is uncertainty over future impacts, but where that uncertainty will be reduced over time;
• there is a need for risk management in an area, and where an option to do certain things in the future has a real value now (e.g. in areas where nothing bad happens if research succeeds).

STEP 4: Evaluating the impacts

Having determined the main purpose of the impact evaluation (Step 1), identified all the impacts that will be measured (Step 2), and clarified their true extent (Step 3), the next step involves measuring the impacts. To complete this task, the following sub-steps must be taken: (1) selecting the appropriate mix of methods; (2) estimating costs; (3) determining externalities and flow-on effects on non-users; (4) determining distributional effects on users; (5) discounting; and (6) ensuring proper documentation.

4.1 Selecting the appropriate mix of methods

The purpose of valuing research benefits is to consider whether a program of work's benefits are worth its costs, and to allow rigorous and consistent aggregation or comparison further down the track. In the context of CSIRO's triple-bottom-line impact evaluation, it is important to provide robust measures across all benefits and costs. CSIRO's standard approach to impact evaluation is a mixed-methods approach, using:
• CBA for those impacts that can be assessed in monetary terms (usually economic impacts);
• non-market valuation methods for key types of benefits or costs, where these may be indirectly monetised;
• cost-effectiveness analysis (CEA) for those impacts that are not monetised but compared to costs, such as human health impacts. Note that aspects of CEA can be easily included in CBA by eliminating the monetisation step for benefits;
• non-monetary quantification for impacts where suitable data is available (especially economic and environmental impacts).
For example, the number of units sold, emissions reduced, or other such data are useful measures that complement CBA results;
• econometric methods for assessing impacts (e.g. regression discontinuity, difference-in-differences) using administrative and statistical data. Again, these can be an input into a CBA;
• social network analysis for assessing the interrelationships between CSIRO, ideas, and other innovators; and
• qualitative methods (QM) (e.g. surveys, interviews, focus groups) for any remaining impacts.

Use of CBA enables comparison of impacts arising from CSIRO activities against the associated costs. The method provides a monetary measure of the current value of the program of work conducted (net present value) as well as calculating the effects of possible future benefits and costs (benefit-cost ratio or rate of return). More detail on this approach is available from: Boardman et al (2010); Department of Finance and Administration (2006a, b); and Office of Best Practice Regulation (2014).

In a CBA, it is the change in benefits and costs that results from CSIRO's work that is important, not total benefits and costs. The reference point for the change is the baseline that existed at the start of the research PLUS the evaluation of what would have happened without the CSIRO works (i.e. the counterfactual). The benefits and costs are measured relative to the counterfactual that has been established for this category of impact.

The importance of also capturing and reporting non-market and non-monetary benefits is well recognised yet poorly understood and applied, compared to the more traditional CBA approach. In brief:
• Non-market valuation methods aim to elicit an additional value or willingness to pay for otherwise intangible benefits (or to accept compensation for a reduction in those benefits).
• Non-monetary quantification can involve metrics unique to the benefit being quantified, for example, tonnes of production, employment (full time equivalent – FTE), hectares of habitat, or affected population.

Relevant techniques include revealed and stated preference methods, the benefit transfer approach and social return on investment. Further information on these techniques is provided in Appendix E. Conducting an impact evaluation using a mixed-methods approach (i.e. identifying market and non-market benefits, using both quantitative and qualitative data) provides the most complete possible assessment.

TASK 4.1: Evaluation approaches

The choice of evaluation approach is driven largely by whether or not the benefits associated with the impact are quantifiable. In general, a mixed-methods approach is desirable as it enables the most complete assessment of impact possible.

4.2 Estimating costs

Establishing the quantum of costs is just as important to a cost-benefit analysis as establishing the quantum of any identified benefits. Developing a comprehensive understanding of the cost basis can be challenging. This is especially true for longer-term investments and activities that emerged from multiple programs. The number of years a project was supported, changes in project accounting codes, and the aggregation and disaggregation of teams, projects, and work structures can complicate the development of an accurate cost basis. Early engagement with appropriate teams within CSIRO is essential for the development of an accurate understanding of the investment relevant to the unit of analysis being evaluated.
Care should also be taken to understand and document the CSIRO investment rationale to ensure that the accompanying case study report includes a description of where, when, and how CSIRO chose to invest.

4.2.1 Research and Development (R&D) Costs

The R&D cost basis includes the costs incurred by CSIRO and its research partners. These are the input costs incurred to produce the research outputs and include those associated with such things as staff member FTE, non-staff FTE, in-kind contributions, equipment/facilities and background IP. Input costs can (in some cases) be estimated using internal funding, external funding and grants – the financial resources used to pay for the labour and physical resources noted above. Internal costs within CSIRO that are specific to the body of work being evaluated can be established through finance system project reporting. Other input costs can be established through discussions with the research partners involved. Costs by year are necessary given the discounting procedures required in later steps.

4.2.2 Subtracting Usage and Adoption Costs from Benefits

Usage and adoption costs are the costs borne by the end users in adopting the research outputs (not the costs associated with developing the outputs). They include the costs of any trials, further development, market tests or factory retooling required before a new technology can be made available to the market, as well as any marketing, training, extension and other usage costs incurred once it is available or which make it more available. Usage and adoption costs are subtracted from benefits (the numerator of a benefit-cost ratio) and are not included in the cost basis (the denominator). When calculating usage costs, it is preferable that the end-user or relevant parties involved in the uptake of research outputs provide this figure, or at least confirm the figures arrived at. It should be noted that the costs of usage may in fact be significantly higher than the input costs incurred by CSIRO and its research partners in producing the original output. If practicable, benefits estimates can be collected net of adoption or usage costs.

It is difficult to predict what may or may not happen in the future, and therefore determining usage costs may be difficult. However, there is usually a body of evidence on which evaluators may draw to support their assumptions. Knowledge of the industry in which the research output is being deployed, as well as a strong relationship with end-users, should assist in determining usage costs. Access to relevant experience can also be gained through discussions with sector specialists, business development staff, clients, intermediaries, etc.

At times it can be extremely difficult to identify precise estimates of all costs and benefits involved. What is required is as good an estimate as time and resources allow, including (if necessary) a range of values for either or both of costs and benefits, with caveats about the uncertainties in measurement. This task is greatly assisted if it is approached in a systematic way following the inputs-to-impacts chain developed in Step 2. A minimal sketch of how usage and adoption costs are netted from benefits is provided after the task box below.

TASK 4.2: Estimating costs

If there is uncertainty in users' actual or potential usage and adoption costs, develop a range of estimates around any point estimates. The maximum and minimum values for this range can be used in sensitivity analysis scenarios.
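The netting convention described above can be illustrated with a short, purely hypothetical sketch in Python. The yearly figures and variable names below (gross_benefits, adoption_costs, rd_costs) are invented for illustration only; in practice the figures would come from finance system reporting and from end-users.

# Minimal sketch (hypothetical figures): usage/adoption costs are netted from
# benefits (the numerator of the BCR), while R&D costs remain the cost basis
# (the denominator). All values are illustrative real dollars by year.
years = [2021, 2022, 2023, 2024]
gross_benefits = {2021: 0.0, 2022: 1.2e6, 2023: 2.5e6, 2024: 3.0e6}   # benefits to adopters
adoption_costs = {2021: 0.0, 2022: 0.4e6, 2023: 0.3e6, 2024: 0.2e6}   # borne by end users
rd_costs       = {2021: 1.5e6, 2022: 0.8e6, 2023: 0.0, 2024: 0.0}     # CSIRO + partner R&D

# Benefits reported net of usage/adoption costs; R&D cost basis kept separate.
net_benefits = {y: gross_benefits[y] - adoption_costs[y] for y in years}
for y in years:
    print(f"{y}: net benefit = {net_benefits[y]:,.0f}, R&D cost = {rd_costs[y]:,.0f}")

In a full evaluation, these undiscounted series would then be inflation-adjusted and discounted as described in Section 4.5.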
4.3 Externalities, spill-overs and economic flow-on effects on non-users

In principle, evaluation should take account of all benefits and costs arising from CSIRO's work. This means that as well as taking into account the direct effects of research, the wider effects on other areas of the economy, environment, and society should also be considered.

An externality is a direct impact from research on a 'third party', that is, someone other than the direct user or adopters of the research outputs and applications. Externalities may be economic, environmental, and/or social in nature. Sometimes they are intended as an objective of the research; but often they are unintended.

A 'spill-over' occurs when a user from a different sector adopts a technology for use in a sector or application for which the technology was not originally intended. For example, new production technologies were developed for the production of solar panels; those technologies were later adopted in the semiconductor industry. Spill-over impacts are relevant and should be quantified.

A related, but subtly different, concept is an economic flow-on (second round, or multiplier) effect. These refer to linkages within an economic system which ensure that initial impacts will lead to subsequent impacts within the system. Flow-on effects are indirect effects, experienced through adjustments in the economy that occur because of the direct impact of the research. The indirect flow-on effect requires that a direct effect has first occurred.

An understanding of externalities, spill-overs, and flow-on effects can be gained by considering the impacts of research into disease resistance in a new crop variety:
• The landholder adopting the new variety may experience higher yields or a reduction in expenditure on pesticide. These would be direct effects of the research experienced by the landholder. A change in producer surplus resulting from greater sales at lower cost would be another impact.
• In the market, prices for the crop are likely to fall and sales are likely to increase. For consumers, there may be an increase in consumer surplus with lower prices and greater consumption. This increase in consumer surplus would be a direct effect in the market but an indirect effect upon consumers, in the sense that it was the producer (not the consumer) who was directly affected by the original research.
• Another direct impact may be the lower pesticide requirements for landholders neighbouring the adopting landholder. This would be a direct impact and economic in nature, but would be classed as an economic externality as it would be experienced by someone other than the adopter.
• Other direct impacts might be improved biodiversity outcomes or reduced pesticide pollution in local surface or groundwater. Once again, such impacts would be externalities.
• Changes to prices for pesticides to other landholders (because of lower pesticide demand) may also occur, and these would be classed as economic flow-on effects.

There are no concrete guidelines on when to include externalities, spill-overs, or flow-on effects; their value for inclusion within the analysis should be assessed on a case-by-case basis. Generally, an impact evaluation would include externalities, especially if the research is of a nature that makes them important, but not necessarily spill-overs or flow-on effects.
Note: if the impact evaluation is going to be used as part of a broader aggregation, or for comparative purposes, then all impact evaluations being aggregated need to have a consistent treatment of externalities, spill-overs, and flow-on effects. This means that, if some externalities are known and included, then effort needs to be made to include all the other main ones. The most fundamental recommendation is that both the benefits and the costs need to be afforded equal treatment. Otherwise, the aggregation of evaluation outcomes could lead to misleading results.

TASK 4.3: Externalities, spill-over and economic flow-on effects on non-users

Are externalities, spill-over and economic flow-on effects on non-users relevant to this evaluation? If yes, include relevant analysis supported by robust evidence.

4.4 Distributional effects on users

Final users are not always the only beneficiaries of research; as noted in the previous section, there may be externalities, and spill-over and economic flow-on impacts. An aggregated benefit-cost figure, or a single impact evaluation estimate, may mask a reality of winners and losers across groups of final users, industries or regions. For some impact evaluations, consideration of these winners and losers may be important. For instance, and continuing with the example of disease resistance in crops, a group who may lose from the research may be the manufacturers and retailers of pesticides, or those offering crop dusting services.

Furthermore, it is important to understand who the individual groups of winners and losers are, and to make a distinction between users and beneficiaries of the research along the supply chain, to avoid double counting of the same impact. Where the benefit ultimately resides is an important consideration, but it is a distributional one. For instance, beneficiaries of disease-resistant crops can include seed breeders, farmers, food processors, food retailers and, eventually, the consumers. Some benefits may reside with some or all of these users. Benefits may also reside downstream of the consumers (such as with government, through lower health expenditure). However, those benefits must not be double counted: e.g. the price premium a product achieves at the farm gate is not an additional benefit to the price premium consumers are willing to pay in store.

Overall, impacts may vary by industry, region or another level of disaggregation, with the level of granularity typically limited by data availability. When information is patchy, qualitative estimates can be used in case studies to assess impacts on stakeholders of interest.

TASK 4.4: Distributional effects on users

Does the impact differ across groups of final users, industries or regions? If yes, assess impacts across winning and losing groups.

4.5 Inflation adjustment and discounting

It is recommended that inflation adjustment be conducted using either the real GDP index or the CPI, both of which are available from the Australian Bureau of Statistics. Discounting is a technique used to compare costs and benefits that occur in different time periods. It is a separate concept from inflation, and is based on the principle that, generally, people prefer to receive goods and services now rather than later. This is known as 'time preference'. It is important to correctly assign benefits and costs to each time period.
Unless there is a robust rationale to the contrary, costs are placed at the beginning of a period and benefits are placed at the end of a period. This is both intuitive (an investment catalyses a return) and lends a degree of conservatism to the results, because benefits are, in effect, discounted one additional period.

NB: It is not recommended that the net present value (NPV) formula in Excel be used on a time series of inflation-adjusted net benefits, because the formula does not account for the difference in timing of benefits and costs. The Excel NPV formula is also not recommended because it assumes the base year is the first year of data provided, which is typically not appropriate for ex post analyses.

The discount rate is used to convert all costs and benefits to 'present value', so that they can be compared. The difference between the present value of benefits and the present value of costs is the NPV. If the NPV is positive, one can reasonably assume that the investment was advantageous. It is important for CSIRO to be able to compare the results of multiple impact evaluations for a range of purposes. To facilitate this, procedural considerations must be standardised, including the use of a standard real discount rate of 7% in all CSIRO impact evaluations. The base year used to discount values should be the same as the base year selected for inflation adjustments and should ideally be the year of publication.8

8 This approach harmonises with that adopted by the Office of Best Practice Regulation (2014), but note that differing approaches are used by some other publicly funded research agencies, notably ACIAR (ref. Davis et al., 2008 and Council of Rural Research & Development Corporations, 2007). Refer also Harrison, 2010.

The benefit-cost ratio (BCR) is the ratio of the present value of all measured benefits to the present value of all measured costs. The BCR accounts for differences in the timing of cash flows, which has implications for the real value of $1 in one time period versus another. A BCR of 1 indicates a project breaks even from a financial perspective. Any project with a BCR greater than 1 is a successful project, as defined in terms of monetised benefits exceeding costs. One useful interpretation of the BCR is that its value represents the dollar benefit accruing for every $1 in cost incurred over the time frame of analysis. For example, a BCR of 3.0 (alternately, 3:1) would mean that over the entire time frame $3 of benefit accrued for every $1 in cost. The IRR on an investment is interpreted as the percentage yield on an R&D investment. In mathematical terms, the IRR is the discount rate that sets the NPV equal to zero or results in a BCR of 1. The IRR's value can be compared with conventional rates of return for comparable or alternative investments. A worked sketch of these calculations is provided after the task box below.

TASK 4.5: Discounting
• Use a 7% real discount rate to convert all costs and benefits to 'present value'.
• Calculate the present value of the differences between the streams of costs and benefits (NPV).
• Conduct a sensitivity analysis by recalculating the NPV with a range of plausible alternative rates (e.g. 3%, 5% and 10%).
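The timing and discounting conventions above can be made concrete with a small, hypothetical worked sketch in Python. The cash flows, the base year and the helper function present_values are invented for illustration and are not CSIRO figures; the sketch simply follows the guidance in this section, discounting costs from the start of each period and benefits from the end.

# Hypothetical real (inflation-adjusted) cash flows by year.
costs    = {2021: 1.5e6, 2022: 0.8e6, 2023: 0.0,   2024: 0.0}
benefits = {2021: 0.0,   2022: 0.8e6, 2023: 2.2e6, 2024: 2.8e6}
BASE_YEAR = 2024   # illustrative: the year of publication in this example

def present_values(rate):
    # Costs at the start of each period; benefits at the end (one extra period).
    pv_costs = sum(c / (1 + rate) ** (y - BASE_YEAR) for y, c in costs.items())
    pv_benefits = sum(b / (1 + rate) ** (y - BASE_YEAR + 1) for y, b in benefits.items())
    return pv_benefits, pv_costs

for rate in (0.07, 0.03, 0.05, 0.10):   # standard 7% plus the sensitivity rates
    pv_b, pv_c = present_values(rate)
    print(f"rate {rate:.0%}: NPV = {pv_b - pv_c:,.0f}, BCR = {pv_b / pv_c:.2f}")

The IRR, if required, can be found with the same function by searching for the rate at which the NPV crosses zero.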
4.6 Documenting assumptions and decisions

CSIRO Impact Evaluation Principle 7 states that "all assumptions and key decisions made throughout the evaluation need to be documented in the final Impact Evaluation Report to ensure that the process is transparent and to enable users of the evaluation findings to know the limits of any future comparison and aggregation across impact evaluations".

Ensuring that the assumptions and decisions underlying an impact evaluation are properly documented is of the greatest importance, as the longer-term utility of the evaluation entirely depends upon it. This is because (as noted both earlier, and in the next section) future aggregation of the outcomes of past evaluations can only occur to the degree that the assumptions underlying these evaluations are known to be consistent. Therefore, once the costs and benefits have been calculated, externalities and distributional effects factored in, and discounting has taken place, a retrospective glance should be made across the entire process, and all aspects of the analyses and calculations should be carefully recorded.

Evaluators should provide in the final CBA:
• Documentation of the build-up of all benefit streams, although end-user-specific information may need to be blinded or only presented in aggregate
• Basis of selection of base year for inflation and discounting
• Basis of selection of real GDP or CPI for inflation
• Time series of real benefits, real costs, and net benefits used in final CBA calculations
• Time series of discounted (present value) benefits
• Time series of discounted (present value) costs
• Time series of discounted (present value) net benefits
• NPV
• BCR
• IRR.

Evaluators should also document the software tools and assumptions employed in the cash flow analysis, and ensure that the spreadsheets for all calculations are submitted along with the evaluation report to CSIRO.

TASK 4.6: Documenting assumptions and decisions

Ensure that all assumptions and key decisions made throughout the evaluation process are documented in the final evaluation report.

STEP 5: Aggregation and comparability of impacts across programs of work

Programs of work will often yield multiple diverse impacts, and there are often challenges in combining them into a single impact figure. There are similar challenges involved in using that single impact figure for comparative purposes against other impact figures. It is realistic to accept that, even for programs with many clear monetary costs and benefits, there will be other non-monetary costs and benefits that can be included in an impact evaluation. Because this higher level of aggregation is the product of many discrete steps and assumptions, it is even more important that the assumptions made throughout are consistent and that, as much as possible, the same valuation techniques are used for the same types of impacts. Consistency and transparency also help to avoid double counting, especially when research occurs across organisational boundaries (programs, business units, etc.), and especially for subject area reviews. Depending on the objectives of the aggregation or comparative exercise, different aggregate measures may need to be used.
For example, a ratio figure such as a benefit-cost ratio is most useful for assessing whether a net positive social outcome has been achieved. Net present value is best when comparing projects or programs, although the benefit-cost ratio is often used for comparison as well. There is controversy, however, about whether comparing ratios is legitimate, given different scales and abilities to measure all costs and benefits adequately across all projects being compared. Impacts on gross domestic product (GDP) can also be used as one measure of how beneficial the CSIRO work, the program, or the business unit has been.

TASK 5: Aggregation
• Where possible, present a benefit-cost ratio indicator derived from the cost-benefit analysis.
• Where it is not possible to adhere to a rigorous cost-benefit analysis, clearly present the full range of relevant and measurable – monetary and non-monetary – costs and benefits of the work program.

STEP 6: Sensitivity analysis and reporting

This final step provides guidance on sensitivity testing the evaluation results to improve the credibility of the findings, as well as on how best to present the evaluation findings in a consistent CSIRO format.

6.1 Sensitivity analysis

At a minimum, the impact evaluation techniques used, as well as the accompanying assumptions and resulting findings, should be discussed with internal and external stakeholders and end users to gauge the credibility of the process and the results generated. If a higher degree of scrutiny is warranted, then a sensitivity analysis should be conducted. Sensitivity analysis, broadly defined, is the investigation of:
• the parameter values and assumptions underlying a model
• the degree to which they are subject to potential changes, and
• their impacts on the conclusions to be drawn from the model.9

9 Pannell, 1997

A thorough sensitivity analysis informs the audience of the uncertainty around the estimates of costs and benefits, especially the limits of the estimation techniques used to value non-market costs and benefits. Also, given the importance of the counterfactual in establishing the extent of change attributable to the research intervention, it should also undergo some degree of sensitivity analysis. At the very least, it is good practice to gauge the sensitivity of a cost-benefit analysis to the discount rate used, by re-calculating the NPV with a range of plausible alternative rates of, say, 3%, 5% and 10%. Although CSIRO uses a standard real discount rate of 7%, it is still important for users of the evaluation to understand whether the use of other discount rates would have significantly changed the evaluation's findings.

There is a very large literature on procedures and techniques for sensitivity analysis. Suggested approaches to sensitivity analysis are discussed in Appendix F. In addition to sensitivity analysis, where there is genuine doubt about the range of costs and benefits claimed, the impact evaluation reporting should include that range (and, where available, confidence intervals). A minimal sketch of a one-at-a-time (partial) sensitivity analysis follows the task box below.

TASK 6.1: Conducting a sensitivity analysis
• What were the key assumptions underlying the cost-benefit analysis?
• How do the outcomes of the analysis vary with variations to these key assumptions?
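As a complement to the discount-rate test described above, a one-at-a-time (partial) sensitivity analysis and an extreme-case scenario can be sketched in Python as follows. The parameters, ranges and the simple annual-benefit model (base, ranges, npv, COSTS_PV) are hypothetical and purely illustrative; in a real evaluation the model would be the full CBA and the ranges would come from the documented assumptions.

# Partial (one-at-a-time) sensitivity sketch with hypothetical parameters.
base = {"adopters": 400, "benefit_per_adopter": 5_000.0, "rate": 0.07}
ranges = {
    "adopters": (250, 600),                    # low / high plausible adoption
    "benefit_per_adopter": (3_000.0, 8_000.0), # low / high benefit per adopter
    "rate": (0.03, 0.10),                      # alternative real discount rates
}
COSTS_PV = 6.0e6   # present value of costs, held fixed for this sketch

def npv(params, years=5):
    annual = params["adopters"] * params["benefit_per_adopter"]
    pv_benefits = sum(annual / (1 + params["rate"]) ** t for t in range(1, years + 1))
    return pv_benefits - COSTS_PV

print(f"base case NPV: {npv(base):,.0f}")
for name, (low, high) in ranges.items():       # vary one parameter at a time
    for value in (low, high):
        print(f"{name} = {value}: NPV = {npv(dict(base, **{name: value})):,.0f}")

# Extreme-case scenarios: all parameters at their least / most favourable values.
worst = {"adopters": 250, "benefit_per_adopter": 3_000.0, "rate": 0.10}
best  = {"adopters": 600, "benefit_per_adopter": 8_000.0, "rate": 0.03}
print(f"worst case NPV: {npv(worst):,.0f}, best case NPV: {npv(best):,.0f}")

Reporting the resulting spread of NPVs alongside the base case makes clear which assumptions the findings are most sensitive to.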
6.2 Reporting impact evaluation findings

The primary purpose of undertaking an evaluation is to inform internal and external audiences of the impacts (both expected and delivered) from the investment in the unit of evaluation, as well as any lessons that may have been learned. Consequently, it is essential that evaluation reports be readable and 'user-friendly'. To ensure readability, the report should be drafted using language that can be understood by a non-technical audience. The report should acknowledge the evaluation's limitations, including that:
• the evaluation does not constitute the entire assessment of the project
• in some cases, quantitative assessment is difficult
• while the extent and value of non-market costs and benefits may have been crucial within a particular evaluation, there are limits to the accuracy of non-market estimation techniques.

When drafting a CSIRO impact evaluation report, it is desirable to use a structure that mimics, as much as possible, the structure of the evaluation process itself. Doing so confers a range of benefits, including that it:
• makes it easier to ensure that no steps of the process have been omitted
• ensures that all CSIRO evaluation reports follow a common structure (and so their outcomes are easier to aggregate)
• makes it easier for regular readers of CSIRO's impact evaluations to find the information they are seeking.

Of course, every evaluation has unique features and so each evaluation report will need to be drafted accordingly. That said, ideally the report will be structured using the following elements:
1. Executive summary
2. Purpose and audience
3. Background
4. Research impacts and pathways
5. Clarification of the impacts
a. counterfactual / attribution / adoption
6. Evaluation of the impacts
a. benefits / costs / externalities, spill-overs, flow-ons / distributional effects
7. Aggregation of research impacts
8. Measures of economic return (BCR, IRR, NPV)
9. Sensitivity analysis
10. Conclusion

While drafting the report it can be extremely helpful to refer to evaluation reports that have been previously (and recently) prepared by, or for, CSIRO. Doing so can, for example, provide an improved understanding of how to report on many separate impacts for a given unit of evaluation, or of cases where a range of evaluation methodologies have been employed. This Guide is accompanied online by a link to past CSIRO evaluations (https://www.csiro.au/en/About/Our-impact/Our-impact-in-action) and relevant impact evaluations are also available from:
• ACIAR Impact Analyses, which are undertaken annually for a small collection of projects.
• IFPRI Impact Analyses10.

10 NB: An ex post impact assessment of IFPRI's GRP22 program, water resource allocation: Productivity and environmental impacts (Bennett, 2013) provides a useful example of the application of mixed methods.

As noted in the introduction, the value of an impact evaluation is ultimately measured by how much its outputs are used by the intended audiences. For this reason, finalisation of the evaluation report should be followed by communication activities to ensure that the report and its contents become available to the full range of stakeholders who are likely to have an interest in it.
To assist with this process, a communications plan may be useful; and, in the longer term, it is useful to assess how successful these communication and dissemination efforts have been.

TASK 6.2: Reporting
• Ensure that the language used within the report is appropriate for a non-expert audience.
• Set out the limitations of the evaluation.
• Refer to pre-existing case studies before and during the drafting of the report.
• Consider developing and implementing a communications strategy to ensure that the information generated by the evaluation process is well utilised by stakeholders and CSIRO staff.
• Evaluate the success of dissemination efforts.

APPENDIX A: CSIRO's Impact Framework

CSIRO's approach to planning, monitoring and evaluating impact is built on the concept that, in order to assess the value of research, it must be possible to track the process from inputs to impacts. CSIRO's logic model, the CSIRO Impact Framework shown in Figure A1, is used to articulate 'pathways to impact'. It identifies the inputs and activities required to deliver research outputs, and the uptake and adoption outcomes which will need to occur to eventually lead to the desired impacts. Each of these components may be understood as follows:
• Inputs: Resources applied to deliver activities, such as people, equipment, funding, etc.
• Activities: Actions taken or work performed through which inputs, technical assistance and other types of resources are mobilised with the intention of achieving specific outputs (e.g. technology development, education, engagement).
• Outputs: The research solutions, services, and/or capacities that result from the completion of activities within a research portfolio or project (e.g. publications, patents, prototypes, training packages, students trained, reports).

Figure A1: CSIRO's Impact Framework – a logic diagram linking inputs (e.g. staff and non-staff FTE, dollar-value estimates using appropriation funding, external funding and grants, in-kind contributions, and equipment/facilities) and activities (e.g. research/technology developments, education, industry engagement including with small and medium enterprises, and international engagement) to outputs, outcomes and impact, with feedback and engagement loops throughout.
• Outcomes: The intended or desired medium-term effects/change expected to be realised from the successful delivery of research outputs (e.g. adoption of new techniques, process and behavioural changes, new products, licences/IP sold – this component is also called 'uptake' in some CSIRO examples).
• Impact: An effect on, change or benefit to the economy, environment or society beyond those contributions to academic knowledge. Impacts include wider economic, environmental and social impacts such as increased economic activity, productivity improvement, water savings, reduced emissions, improved health and wellbeing, etc.

Although the Framework is depicted as a linear process for the sake of simplicity, it should be understood that, just as science is often serendipitous and agile in execution with multiple feedback loops and engagement at all stages, so too the framework should be operationalised as you would execute the research. Figure A2 (below) provides a worked example of an impact pathway for a research project using the Framework.

Figure A2: Economic impact of a hypothetical CSIRO hydrogen project
• Inputs: a 7-person CSIRO project team from the Energy Business Unit; 2 in-kind researchers from the University of Technology, Sydney; 3 external industry funding partners; background IP; infrastructure and equipment.
• Activities: a pilot project to explore and develop novel value chain pathways for hydrogen; research into social licence to operate in the hydrogen industry context; industry engagement; communication activities.
• Outputs: next generation, sustainable hydrogen production technologies; hydrogen distribution and utilisation technology prototypes; tools for establishing a social licence to operate for the hydrogen industry; journal articles for review.
• Outcomes: CSIRO technologies for the production and distribution of hydrogen employed to increase the volume of sustainably sourced hydrogen moving through the value chain, with a focus on exporting 'green' hydrogen to major international markets.
• Impact: establishment of a sustainable and viable hydrogen export industry in Australia.

APPENDIX B: Evaluation types

Within the context of CSIRO's Impact Evaluation Guide, the term 'impact evaluation' is used to refer specifically to ex post impact evaluation. Ex post ("after the fact", retrospective or summative) impact evaluation is defined by CSIRO as: an evaluation of the impact attributable to a program of work after the research has begun producing one or more outcomes external to CSIRO, regardless of whether the research activity has been concluded. In a practical sense, ex post evaluations are still forward-looking in that it is necessary to combine an evaluation of delivered outcomes and impacts with an estimate of the future impact of unrealised outcomes over potentially many years into the future.

Ex post impact evaluation may be compared with ex ante ("before the event", prospective or formative) impact evaluation, which CSIRO defines as: an evaluation of a body of work which either hasn't yet started or has started but has yet to deliver any research outputs (and logically, therefore, no resulting outcomes or impacts have occurred). This type of analysis is useful in considering whether a project should be undertaken or in comparing alternative prospective projects aimed at common objectives.
Ex ante evaluation can also aid the planning and development phase of a project to place some rigor around the identification and, where possible, quantification of the expected benefits to be derived. Assessments carried out during the implementation of a program are termed monitoring, progress or life-of-project/program evaluations and are used as a measure of accountability for funding, to gauge performance to-date, and to provide some guidance for the future allocation of funds and information for project selection. While evaluations are point-in-time objective assessments of observed results for attribution to specific research activities, monitoring is an ongoing assessment of factual results within the context of a predefined framework of intended results (an impact plan). Monitoring provides important evidence for evaluations. If monitoring is not done throughout the life of the project then articulating impact retrospectively and finding corroborating evidence to back up claims can be difficult. These various types of impact evaluations are summarised within Table B1. 44 Impact Evaluation Guide Table B1: Comparison of evaluation types TYPE OF EVALUATION FEATURES Ex ante • Conducted during the decision-making process prior to investment (formative, prospective) • Based on projected values to inform investment choices • Forward-looking assessment of the likely future outcomes and impacts of a new project • Aids design of strategy and project plan, including informing uptake and adoption strategy • Takes place prior to the commencement of a project • Includes ‘baseline study’ which will aid later evaluations – a baseline study identifies all relevant conditions that exist before the CSIRO works take place. Progress / monitoring • Conducted during the lifetime of the project • Useful in deciding whether a project should be extended or investment re-directed • Generally has a formative nature as it is undertaken around the middle period of implementation of the project • Formative evaluation intends to improve performance, most often conducted during the implementation phase of projects Ex post (summative, retrospective) • Conducted for the purpose of evaluation to inform future investment decisions • Largely based on observed values • Counterfactual provides an estimate of what would have transpired without the CSIRO work and builds on the baseline established prior to commencement • Research program has produced outputs and those outputs have produced outcomes • Normally serves the purpose of a summative evaluation since they are undertaken towards the end of the implementation phase of projects or program • Summative evaluation is conducted at the end of a project (or a phase of that project) to determine the extent to which anticipated outcomes were produced • It is intended to provide information about the value of the project APPENDIX C CSIRO’s Impact Categories Table C1: Economic impact categories ECONOMIC IMPACT DEFINITION National economic performance The capability to influence or change at the macroeconomic level, i.e. economy-wide impacts, such as changes in unemployment, national income, rate of growth, gross domestic product, inflation and price levels. Trade and competitiveness The capability of trade-exposed firms to succeed in international competition against leading international competitors. 
Productivity and efficiency The capability to influence or change the production of products and services such as risk, profitability and productivity aspects, and sustainability of the production and consumption system. This also includes the capability to influence or change the performance measures related to the supply chain members. Management of risk and uncertainty The capacity for rapid innovation at scale to reduce risk of damage or lost opportunity (in the form of early warnings or early identification of opportunities). Policies and programs The capability to influence or change the coordination and governance of social, economic and environmental policies and programs, for example, better return on investment and reduction in green and red tape. New services, products, experiences and market niches The capability to develop new products and services, through technological and organisational innovations, including in the following areas: Food, Soil and Water, Transport, Cybersecurity, Energy and Resources Manufacturing, Environmental Change and Health. Animal health and prosperity The capacity to reduce the likelihood of invasive animal diseases that have the potential to cause significant harm to the economy from entering, emerging, establishing or spreading within Australia. Securing and protecting existing markets The capacity to maintain and/or increase returns from existing market access. 46 Impact Evaluation Guide Table C2: Environmental impact categories ENVIRONMENTAL IMPACT DEFINITION Air quality The degree to which the air in a particular place has changed. Ecosystem health and integrity (natural capital) The variety and connections between plant and animal life in the world or in a particular habitat. Focus on plants and animals within an area and how they interact with each other as well as with other elements such as climate, water and soil. Also the ecosystem services provided to protect ecosystems and biodiversity. Look to add the concepts around natural capital. Climate Focus on atmospheric, land and ocean patterns and the changes in these over time. Natural hazards mitigation Steps taken to contain or reduce the effects of an anticipated or already occurred disastrous events (such as drought, flood, fire, lightning, various levels and types of storms, tornado, storm surge, tsunami, volcanic eruption, earthquake, landslides). Energy generation and consumption The creation of energy using various technologies and processes and its effect on the environment. The effect of the use of created energy and the benefits of efficiency measures. Land quality Land use and management with effects on soil and the surrounding environment. Actions taken to rehabilitate the land after production processes. Aquatic environments Changes in quality and abundance of marine and freshwater resources. Water systems, availability, quality, access and management. Built environments The human-made surroundings in which people live, work, and recreate on a day-to-day basis ranging from buildings and parks to supporting infrastructure, such as water supply or energy networks. Table C3: Social impact categories SOCIAL IMPACT DEFINITION Health and wellbeing The capability to be alive and healthy. Access to resources, services and Access to new or improved knowledge and improved knowledge management opportunities and participation in social and economic life. Quality of life The degree of wealth and material comfort available. 
(material security and livelihoods) Safety Protection from dangerous materials, products or processes. Security (e.g. cyber, biological, civil and military) Physical and psychological protection against an external threat. Protection from an actual or perceived threat from an internal or external combatant that will affect the greater society. Information security as applied to computers and networks. Resilience The capacity to withstand or recover from loss or adversity including societal, national, regional and individual levels. Indigenous culture and heritage Indigenous tradition, the history of an Indigenous party in an area and/or evidence, of archaeological or historic significance, of Indigenous occupation. Innovation and human capital (creativity and The capacity to contribute to a society in terms of the production of inventions, invention) design and cultural programs as well as embodying knowledge, inspirations, aesthetics and symbolic. Human capital is productive wealth embodied in labour, skills and knowledge. Social cohesion (social inclusion, social OECD defines a cohesive society as one which “works towards the well-being of all capital and social mobility) its members, fights exclusion and marginalisation, creates a sense of belonging, promotes trust, and offers its members the opportunity of upward social mobility”. APPENDIX D Real options analysis An example of the application of Real Options Analysis Narrabeen Lagoon is one of about 70 intermittently closed and open lakes and lagoons in New South Wales. When heavy or sustained rainfall occurs the lagoon fills up like a bathtub, and may flood surrounding areas. Because climate change is expected to increase the frequency and intensity of storms and rainfall in the Narrabeen catchment over the coming century, as well as rising sea levels, the Australian Government Department of Climate Change and Energy Efficiency (2010) commissioned a study of the social costs and benefits to the community of adaptation measures such as levee banks to protect major access roads, widening the lagoon entrance, flood awareness programs, and planning controls. Two observations on historical data (a one in 20 years rainfall extreme event and one in a hundred years) obtained from local authorities were used to estimate the two parameters of a Gumbel extreme value distribution for the year 2009. Eleven runs of climate model simulations supplied by CSIRO were used to generate sets of distributions of rainfall probabilities for the years 2055 and 2090, with intervening years estimated by interpolation. Probability distributions were transformed into cost functions using damage estimates for different flood heights. Using readily available @Risk software, Monte Carlo analysis was applied by sampling from the 11 cost functions for each year from 2010 to 2100 to generate a single probability distribution for costs in each year. An optimisation model was also applied to assess the effect of interdependencies between different adaptation measures. The study found that a flood awareness program, increasing the minimum height of new buildings and a levee at one site next to the lagoon would generate benefits greater than costs if implemented immediately. However, the benefits of widening the lagoon entrance would not exceed the costs until 2035. 
Source: Abridged from Dobes (2010) and AECOM Australia Pty Ltd (2010)

APPENDIX E: Valuing non-market impacts

A range of possible methods are available to enable monetisation of research impacts, even when those impacts relate to non-market goods and services. Monetising environmental and social impacts involves presenting the magnitude of these impacts in real dollar figures, but does not automatically turn them into economic impacts. In practice, working with most of the methods outlined below requires experience and good knowledge of the specific impacts. Expert input is required, and this expertise usually resides with experts external to CSIRO.

Benefits associated with non-market goods or services can be monetised in three broad ways:
1. Monetisation based on choices observed/revealed through other transactions, also known as revealed preference methods;
2. Monetisation based on choices elicited from individuals in hypothetical scenarios, also known as stated preference methods; and
3. Monetisation based on previous valuation studies, i.e. the benefit transfer approach.

These monetisation methods aim to elicit the additional value or willingness to pay for additional and otherwise intangible benefits (e.g. improvement in levels of comfort or environmental quality), or the willingness to accept compensation for a reduction in those benefits due to new technologies/services provided. Table E1 provides the definition and typical applications of these methods. It also outlines some advantages, limitations and recommendations on their use. A more detailed discussion of the general issue of non-market valuation is provided by Baker & Ruting (2014). This paper also provides a CSIRO example of the use of non-market valuation methods (refer p. 84).

Table E1: Advantages and limitations of common monetisation methods

Revealed preference methods use data from actual events or observed market transactions to construct monetary values. They can be used for direct use or indirect use values.

Hedonic pricing
• Description: Used to value impacts that relate to externalities, through their impact on another market, such as property prices. For example, the impact of research improving environmental amenity can be measured through differences in residential property prices between sites with the improved amenity and equivalent sites without.
• Advantages: Defensible and objective approach as data is based on real market transactions. Data on other markets, such as property prices, can often be readily available.
• Disadvantages: Requires a rich dataset to isolate the impacts of externalities; it can be difficult to find equivalent control sites. Affected assets may not be directly associated with the research outcome.
• Common uses: Typically used when impacts of research relate to the quality of a place, and changes in real estate prices.

Travel cost method
• Description: Uses how much people pay to travel, and the time allocated to experience a place, as the value of the place and its attributes.
• Advantages: Yields objective data on how much people are willing to pay, based on real market transactions.
• Disadvantages: Costly and time consuming as it requires collection of visitors' expenditure data through survey techniques. Provides an estimate of the minimum willingness to pay, but is limited to use only for attributes that stimulate travel. Value estimates relate to past decision making, not affected by current or future changes of any sort, either regarding the people who visit or the visited area.
• Common uses: Typically used when impacts of research relate to environmental amenities or cultural activities that attract visitors. As such, often applied to value attributes influencing tourism or recreational values. Used to assess one aspect of change in social values associated with changes in environmental condition.

Estimating direct use values using market values

Productivity-based approach
• Description: Used to value impacts that change one or more of the inputs into the production process.
• Advantages: As above for revealed preferences (i.e. based on real market transactions). Easy to apply if all inputs into production are known and the value chain is understood.
• Disadvantages: Requires quality data from existing markets that disaggregates the various inputs into production. Producers may limit access to confidential production information.
• Common uses: Use if the impact of research changes one or more of the inputs into production. Ideally estimate the change in producer surplus.

Replacement cost approach
• Description: Damage cost avoided, replacement cost or substitute cost approaches – all of these are variations of the same theme, in which an impact is valued as the costs that the impact has avoided.
• Advantages: The alternatives are often well understood and quantified, and may have been the traditional way of doing something before the new research arrived.
• Disadvantages: Danger of overstating costs avoided when the cost avoided relates to an unrealistic alternative.
• Common uses: Best used when the cost avoided is realistically something society would pay to avoid, especially where research is changing a traditional activity.

Ecosystem services valuation
• Description: Values the services provided by ecosystems. This approach is similar to the replacement cost approach in that the non-market values of the goods and services provided by the environment to the market are estimated by evaluating the services the environment provides or could provide, in replacement for man-made, market-based capital or efforts. Applies mainly to use values.
• Advantages: Can provide reasonable estimates for cases where improved ecosystem function can replace current investments in capital or inputs.
• Disadvantages: Cannot provide a full estimate of the value of the environment because it is limited to the market value of ecosystem functions and not the full set of values people enjoy from environmental integrity – see the other revealed and stated preference tools for this.
• Common uses: Used to estimate values of improved wetlands as replacement for some water treatment plants, and the value of biodiversity with mixed cropping and shelter belts, with its integrated pest management values, to replace higher pesticide use with mono-cropping.

Stated preference methods use data elicited through surveys by asking respondents to place an economic value on the benefits or losses associated with a research output, for which there may not be a market. Surveys need to be carefully designed as they usually involve presenting hypothetical scenarios, which need to remain plausible and relevant to affected respondents. They can be used for non-market direct use or non-use values.

Contingent valuation
• Description: Elicits respondents' willingness to pay for goods/services from research outcomes in a specific context. It can also be tailored to quantify the value that people are willing to accept as compensation if goods or services are not provided.
• Advantages: Powerful tool to value intangible benefits where no markets exist, e.g. benefits for health and environmental services. Values obtained are relevant to societal preferences in Australia. Can also be used to estimate non-use or existence values, e.g. preserving biodiversity.
• Disadvantages: Resource intensive as it needs well-designed surveys and a rigorous data collection process. Sampling should be carefully planned to ensure representativeness of the target population. Responses to contingent valuation studies are particularly sensitive to the framing of questions. Because it is a hypothetical question, results can be subject to several biases or inaccurate claims.
• Common uses: Can be used across a wide range of impacts, even where no revealed preferences are available. Most commonly applied in cases where a major program of work is anticipated to have substantial health or environmental benefits and where specific valuation is required.

Discrete choice modelling ('choice modelling')
• Description: Focuses on estimating willingness to pay for specific attributes of research outcomes that directly influence the respondent's level of enjoyment. Examples of attributes include safety, water quality, biodiversity, information provided and price. This method also provides trade-off estimates, which can be used to quantify the compensation that respondents should be provided for decreasing a specific attribute.
• Advantages: As above (contingent valuation). Unlike contingent valuation, choice modelling forces people to consider trade-offs, which may elicit more realistic hypothetical responses.
• Disadvantages: As above (contingent valuation). Also requires collection of large samples to be statistically reliable. Results are sensitive to the choices posed to subjects.
• Common uses: Same as contingent valuation, but use when valuation of specific attributes is required. Public agencies in the health sector have increasingly commissioned projects involving choice modelling techniques for the valuation and monetisation of service delivery features which rely on key values, such as the value of statistical life. The OBPR provides guidance on estimating the value of statistical life and the value of a statistical life year.

Other stated preference approaches – including experiments, contingent behaviour, direct preference mapping, etc. – are all characterised by similar pros and cons to those above and require similar expert ability to undertake.

A decision tree for the use of these methods when dealing with environmental impacts is provided at Figure E1.

Figure E1: Selecting a non-market valuation method – initial questions. The decision tree works through a series of questions: What types of values do people hold for the non-market environmental outcome (use values or non-use values)? Are reliable data available for related market behaviour (such as travel or house purchases), pointing to revealed preference methods, or not, pointing to stated preference methods? Is the non-market outcome associated with visits to a recreational site (consider the travel cost method)? Is the outcome likely to be reflected in the price of a market good, such as house prices or wages (consider hedonic pricing), or should other methods, such as stated preference or averting behaviour, be considered? Is the policy change a package of several non-market attributes that could take on different combinations, are estimates needed for the value of each attribute, can the attributes be varied independently, and do people value each attribute separately (consider choice modelling; otherwise consider contingent valuation)?

In addition to the methods described above, the following methods have been included because they allow the valuation of social (as opposed to environmental) impacts.
In addition to the methods described above, the following methods have been included because they allow the valuation of social (as opposed to environmental) impacts. The first method involves monetisation, while the latter two involve qualitative valuation (cf. Kelemen et al. 2014).

Social Return on Investment (SRoI)
SRoI combines stakeholder-driven evaluation with cost-benefit analysis (CBA) to quantify and place a monetary value on social impacts. The method enables evaluators to measure social impact against three primary performance indicators: appropriateness, effectiveness and efficiency. SRoI adopts the rigour of a CBA but also includes a process that embraces stakeholder-informed data, which increases the depth of analysis and engages more broadly with those experiencing any change. SRoI is well suited to the CSIRO Impact Framework as it is based on program logic: its focus is on assessing the relationship between inputs and impact (Nicholls et al. 2009; Social Ventures Australia Consulting 2012). An illustrative calculation is sketched at the end of this appendix.

Social Impact Assessment (SIA)
SIA is a framework that can be used to assess the impacts of a wide range of changes, from a proposal to build a new freeway to a proposal to change access to a natural resource such as water, a forest or the ocean (Becker & Vanclay 2003; BRS 2005; Coakes 1999; Coakes & Fenton 1999; Franks 2012). The framework draws on both qualitative and quantitative data, depending on the methods being applied. The main data needs relate to assessing the direct and indirect effects of proposed changes. This can be done using a variety of data sources; the most common types are:
• secondary data – existing data sources can be used to identify the broad level and nature of potential impacts;
• primary data – can be collected through surveys, interviews, focus groups etc. if secondary data is not available, not relevant or not appropriate (e.g. not at the right scale).

Most Significant Change (MSC)
MSC is a qualitative, participatory methodology focused on capturing project participants' stories of significant change or impact (Clear Horizon 2014, 2015a, b; Davies & Dart 2005). MSC involves collecting and documenting stories from a range of participants. Each story represents the storyteller's interpretation of impact. These stories are then collated, reviewed and discussed by participants in a participatory, systematic and transparent manner. This process leads to a collective agreement on what have been the most significant changes, or impacts, of a project or program.
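The SRoI approach described above ultimately expresses the present value of monetised social outcomes as a ratio to the investment that produced them (Nicholls et al. 2009). The following minimal sketch is illustrative only: the financial proxies, deadweight and attribution adjustments, time horizon and discount rate are all hypothetical assumptions, not values prescribed by this Guide.

```python
# Illustrative sketch of an SRoI-style calculation; all values are hypothetical.
# Outcomes are valued with financial proxies, adjusted for deadweight (what would
# have happened anyway) and attribution (the share owed to others), then discounted.

def present_value(values, rate):
    return sum(v / (1 + rate) ** t for t, v in enumerate(values, start=1))

discount_rate = 0.07
investment = 500_000  # total value of inputs ($)
years = 5

# Each outcome: (annual proxy value $, units of change per year, deadweight, attribution to others)
outcomes = [
    (2_500, 100, 0.20, 0.10),   # e.g. improved wellbeing for 100 participants
    (15_000, 10, 0.30, 0.25),   # e.g. avoided service costs for 10 agencies
]

annual_impact = sum(
    proxy * units * (1 - deadweight) * (1 - attribution)
    for proxy, units, deadweight, attribution in outcomes
)
pv_benefits = present_value([annual_impact] * years, discount_rate)

print(f"SRoI ratio: {pv_benefits / investment:.2f} : 1")
```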
APPENDIX F: Sensitivity analysis
Cost-benefit analysis relies on assumptions. A sensitivity analysis is an explicit analysis of the sensitivity of the impact evaluation findings to these assumptions. The amount of effort devoted to this task should reflect:
• the purpose of the evaluation (i.e. advocacy, accountability, allocation or analysis)
• the requirements of the audience (e.g. a client might require some degree of sensitivity testing)
• the specific nature of the project (e.g. evaluating the impact of research commissioned to inform public policy development might require a higher degree of scrutiny to assist in the uptake and adoption of the research).
Box F1 provides guidance on the two main approaches to sensitivity analysis; an illustrative worked sketch follows the box.

Box F1: Conducting a Sensitivity Analysis
Partial sensitivity analysis
This approach varies one assumption (or one parameter or number) at a time, holding all else constant. For example, if the value of life plays an important role in the analysis, an average value of $3.5 million for the value of statistical life (VSL) might be used in the base case. Partial sensitivity analysis would then involve testing a range of values for the VSL, from $3 million to $15 million, without changing any other assumptions, and reporting the results. The same process would be applied to test the effect of other uncertain parameters, such as the sensitivity of the cost-benefit analysis to the discount rate used, returning each time to the base-case figures for everything except the number in question.

Extreme case sensitivity analysis
This approach varies all of the uncertain parameters simultaneously, picking the values for each parameter that yield either the best- or worst-case scenario. If a project's impacts look good even under the worst-case assumptions, this strengthens the perceived value of the impact. Conversely, if the calculated impacts are modest even under the most favourable assumptions, the project is unlikely to demonstrate strong impact.

Which approach?
Both approaches are useful. Partial sensitivity analysis is most useful when there are only a handful of critical assumptions, while extreme case sensitivity analysis is more useful in cases of greater uncertainty. The choice of approach will depend upon the number and type of assumptions made as well as the expectations of the evaluation's audience.
Source: Wholey et al. (2010)
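To illustrate the two approaches in Box F1, the following minimal Python sketch applies partial and extreme case sensitivity analysis to a hypothetical cost-benefit model. The model itself is an assumption for illustration; the $3 million to $15 million VSL range reflects Box F1, while the discount-rate range is an additional illustrative assumption.

```python
# Minimal sketch of the two approaches in Box F1, applied to a hypothetical
# cost-benefit model. The model, parameter values and ranges are illustrative only.

def npv(vsl, discount_rate, lives_saved_per_year=2, annual_cost=4_000_000, years=10):
    """Hypothetical net present value: benefits from statistical lives saved minus costs."""
    return sum(
        (lives_saved_per_year * vsl - annual_cost) / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )

base = {"vsl": 3_500_000, "discount_rate": 0.07}
ranges = {"vsl": (3_000_000, 15_000_000), "discount_rate": (0.03, 0.10)}

# Partial sensitivity analysis: vary one parameter at a time, all else at base-case values.
for name, (low, high) in ranges.items():
    for value in (low, high):
        params = dict(base, **{name: value})
        print(f"{name}={value:,}: NPV = ${npv(**params):,.0f}")

# Extreme case sensitivity analysis: all uncertain parameters at their least and most
# favourable values simultaneously (low VSL with a high discount rate is the worst case here).
worst = npv(vsl=ranges["vsl"][0], discount_rate=ranges["discount_rate"][1])
best = npv(vsl=ranges["vsl"][1], discount_rate=ranges["discount_rate"][0])
print(f"Worst case NPV = ${worst:,.0f}; best case NPV = ${best:,.0f}")
```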
References

Adam P, Ovseiko P, Grant J, Graham K, Boukhris O, Dowd A, Balling G, Christensen R, Pollitt A, Taylor M, Sued O, Hinrichs-Krapels S, Solans-Domènech M & Chorzempa H (2018) ISRIA statement: Ten-point guidelines for an effective process of research impact assessment. Health Research Policy and Systems 16. doi:10.1186/s12961-018-0281-5
AECOM Australia Pty Ltd (2010) Coastal Inundation at Narrabeen Lagoon: Optimising adaptation investment. Department of Climate Change and Energy Efficiency, Canberra
Baker R & Ruting B (2014) Environmental Policy Analysis: A Guide to Non-Market Valuation. Staff Working Paper. Productivity Commission, Canberra
Becker HA & Vanclay F (editors) (2003) The international handbook of social impact assessment: conceptual and methodological advances. Edward Elgar Publishing, Cheltenham
Bennett JW (2013) An ex-post impact assessment of IFPRI's GRP22 Program, Water Resource Allocation: Productivity and Environmental Impacts. International Food Policy Research Institute, Washington
Boardman AE, Greenberg DH, Vining AR & Weimer DL (2010) Cost–benefit analysis: concepts and practice, 4th edition. Pearson Prentice Hall, New Jersey
Bureau of Rural Sciences (BRS) (2005) Socioeconomic impact assessment toolkit: a guide to assessing the socio-economic impacts of marine protected areas in Australia. BRS, Canberra
Butler JRA, Suadnya IW, Yanuartati Y, Meharg S, Wise RM, Sutaryono Y & Duggan K (in review) Designing and evaluating the priming of adaptation pathways in developing countries. Climate Risk Management
Clear Horizon (2014) 'Oh, the places you'll go': Three advances in the Most Significant Change Technique. Clear Horizon blog, 22 May 2014. Online: http://www.clearhorizon.com.au/discussion/advancesinmsc/
Clear Horizon (2015a) Why should I use the Most Significant Change technique? Clear Horizon blog, 3 June 2015. Online: http://www.clearhorizon.com.au/discussion/why-use-msc/#ixzz3dllQtGrQ
Clear Horizon (2015b) What's changing in how we use the Most Significant Change technique? Clear Horizon blog, 19 January 2015. Online: http://www.clearhorizon.com.au/training-mentoring/courses/most-significant-change/#ixzz3dlkhgO5x
Coakes S (1999) Social impact assessment: a policy maker's guide to developing social impact assessment programs. Bureau of Rural Sciences, Canberra
Coakes S & Fenton M (1999) The application of social assessment in the Australian Regional Forest Agreement process. International Forestry Review 1(1): 11–16
Council of Rural Research & Development Corporations (2007) Guidelines for Evaluation. CRR&DCs, Canberra
Davies R & Dart J (2005) The 'Most Significant Change' (MSC) Technique: A Guide to its Use. Online: http://www.mande.co.uk/docs/MSCGuide.pdf
Davis J, Gordon J, Pearce D & Templeton D (2008) Guidelines for assessing the impacts of ACIAR's research activities. ACIAR Impact Assessment Series Report No. 58. Australian Centre for International Agricultural Research, Canberra
Department of Finance (DoF) (2015) Resource Management Guide No. 131: Developing good performance information. DoF, Canberra
Department of Finance and Administration (DoFA) (2006a) Handbook of Cost Benefit Analysis. Financial Management Reference Material No. 6. DoFA, Canberra
Department of Finance and Administration (2006b) Introduction to Cost-Benefit Analysis and Alternative Evaluation Methodologies. DoFA, Canberra
Dobes L (2010) Notes on Applying Real Options to Climate Change Adaptation Measures, with examples from Vietnam. Research Report No. 75. Environmental Economics Research Hub Research Reports
Edwards RW (2004) Measuring Social Capital: An Australian Framework and Indicators. Australian Bureau of Statistics, Australia
Florida R (2015) The Global Creativity Index 2015. Martin Prosperity Institute, Toronto
Franks D (2012) Social impact assessment of resource projects. Mining for Development: Guide to Australian Practice. International Mining for Development Centre, Queensland
Guthrie S, Wamae W, Diepeveen S, Wooding S & Grant J (2013) Measuring research: A guide to research evaluation frameworks and tools. Report MG-1217-AAMC. RAND Europe
Harrison M (2010) Valuing the Future: the social discount rate in cost-benefit analysis. Visiting Researcher Paper. Productivity Commission, Canberra
Kelemen E, García-Llorente M, Pataki G, Martín-López B & Gómez-Baggethun E (2014) Non-monetary techniques for the valuation of ecosystem services. In: Potschin M & Jax K (eds) OpenNESS Reference Book. EC FP7 Grant Agreement no. 308428. Available via: www.openness-project.eu/library/reference-book
Lazarow N, Meharg S, Butler JRA, Connor J, Liu S, Duggan K & Roth C (2015) Impact evaluation methods. DFAT-CSIRO Research for Development Alliance. CSIRO Land and Water Business Unit, Canberra
Link AN & Vonortas NS (editors) (2013) Handbook on the Theory and Practice of Program Evaluation. Edward Elgar, London
Nicholls J, Lawlor E, Neitzert E & Goodspeed T (2009) A guide to Social Return on Investment. Cabinet Office, Office of the Third Sector, London
O'Connell D, Walker B, Abel N & Grigg N (2015) The Resilience, Adaptation and Transformation Assessment Framework: from theory to application. CSIRO, Australia
OECD (2011) Perspectives on Global Development 2012: Social Cohesion in a Shifting World. OECD Publishing
Office of Best Practice Regulation (2008) Best Practice Regulation Guidance Note: Value of statistical life. Department of the Prime Minister and Cabinet, Canberra
Office of Best Practice Regulation (2014) Guidance note: Cost-benefit analysis. Department of the Prime Minister and Cabinet, Canberra
Pannell DJ (1997) Sensitivity analysis of normative economic models: Theoretical framework and practical strategies. Agricultural Economics 16: 139–152
Rogers EM (1995) Diffusion of innovations (3rd edition). Free Press, New York
Social Ventures Australia Consulting (2012) Social Return on Investment: Lessons learned in Australia. Investing in Impact Partnership. Online: http://www.socialventures.com.au/
W.K. Kellogg Foundation (2004) Logic Model Development Guide. W.K. Kellogg Foundation, Michigan
Walker T, Maredia M, Kelley T, La Rovere R, Templeton D, Thiele G & Douthwaite B (2008) Strategic Guidance for Ex Post Impact Assessment of Agricultural Research. Report prepared for the Standing Panel on Impact Assessment, CGIAR Science Council. Science Council Secretariat, Rome
Wholey et al. (editors) (2010) The Handbook of Practical Program Evaluation (3rd edition). Jossey-Bass Publishers, San Francisco

For further information
Performance & Evaluation
Dr Anne-Maree Dowd
Executive Manager
+61 7 3327 4468
anne-maree.dowd@csiro.au
csiro.au/impact