Impact Evaluation Guide
October 2024

Our purpose is to solve the greatest challenges through innovative science and technology. Our vision is to create a better future for Australia.

CSIRO acknowledges the Traditional Owners of the lands, seas and waters of the area that we live and work on across Australia. We acknowledge all Aboriginal and Torres Strait Islander peoples and their continuing connection to their culture and pay our respects to Elders past and present. CSIRO is committed to reconciliation and recognises that Aboriginal and Torres Strait Islander peoples have made and will continue to make extraordinary contributions to all aspects of Australian life including culture, economy and science.

Contents

Introduction
  The centrality of impact for CSIRO
  Why evaluate impact?
  Why produce an Impact Evaluation Guide?
  Why has CSIRO publicly released the Guide?
  Source materials
Overview
  CSIRO's impact evaluation principles
  CSIRO's impact evaluation process
STEP 1: Establishing the purpose and audience
STEP 2: Clarifying the background information
STEP 3: Identifying the impacts
  3.1 Establishing the impact pathways
  3.2 Identifying impacts across categories
  3.3 Identifying relevant parties
STEP 4: Clarifying the impacts
  4.1 Counterfactual
  4.2 Attribution
  4.3 Adoption
STEP 5: Selecting the appropriate mix of methods
  5.1 Economic approaches
  5.2 Non-economic approaches
STEP 6: Evaluating the impacts
  6.1 Estimating costs
  6.2 Estimating benefits
  6.3 Externalities, spillovers and flow-on effects
  6.4 Distributional effects
  6.5 Inflation and currency adjustments
STEP 7: Calculating measures of economic return
  7.1 Aggregation of impacts
  7.2 Accounting for the passage of time
  7.3 Calculating return on investment
  7.4 Conducting a sensitivity analysis
STEP 8: Documenting recommendations for optimising impact
  8.1 Lessons learnt
  8.2 Best practices
  8.3 Barriers to adoption
STEP 9: Reporting impact evaluation findings
  9.1 Documenting assumptions and decisions
  9.2 Maintain a consistent report structure
  9.3 Refer to exemplary impact assessments
  9.4 Form a communications plan
References
APPENDIX A: CSIRO's Impact Framework
APPENDIX B: CSIRO's impact categories
APPENDIX C: Evaluation types
APPENDIX D: Benefit-cost-analysis checklist
APPENDIX E: Qualitative analysis approaches
APPENDIX F: Quasi-experimental econometric methods
APPENDIX G: Non-market valuation
APPENDIX H: Library of common statistics
APPENDIX I: Discounting and inflation examples
APPENDIX J: Sensitivity analysis

Figures
Figure 1: CSIRO's impact evaluation process
Figure 2: CSIRO's Impact Framework
Figure 3: Indicative impact adoption profile
Figure A1: CSIRO's Impact Framework
Figure A2: Economic impact of a hypothetical CSIRO hydrogen project
Figure E1: Selecting a non-market valuation method – initial questions

Tables
Table 1: Purposes and audiences of CSIRO impact evaluations (7 A's of impact evaluation)
Table 2: CSIRO's impact categories
Table 3: Discount rate guidance from various offices throughout the world
Table B1: Economic impact categories
Table B2: Environmental impact categories
Table B3: Social impact categories
Table C1: Comparison of evaluation types
Table J1: When to adjust for inflation
Table J2: Financial Year and CPI
Table J3: Example of discounting streams of benefits and costs
Table J4: ROI measures
Table E1: Advantages and limitations of common revealed preference methods
Table E2: Advantages and limitations of common stated preference methods

Boxes
Box 1: What is impact for CSIRO?
Box 2: Data collection for the counterfactual
Box 3: Research Infrastructure Assessments
Box F1: Conducting a sensitivity analysis

Tasks
Task 1: Establishing purpose and audience
Task 2: Clarifying the background information
Task 3: Identifying the impacts
  Task 3.1: Establishing the impact pathways
  Task 3.2: Identifying impacts across categories
  Task 3.3: Identifying relevant parties
Task 4: Clarifying the impacts
  Task 4.1: Counterfactual
  Task 4.2: Attribution
  Task 4.3: Adoption
Task 5: Selecting the appropriate mix of methods
  Task 5.1: Economic approaches
  Task 5.2: Non-economic approaches
Task 6: Evaluating the impacts
  Task 6.1: Estimating costs
  Task 6.2: Estimating benefits
  Task 6.3: Externalities, and spillover and economic flow-on effects on non-users
  Task 6.4: Distributional effects on users
  Task 6.5: Inflation and currency adjustments
Task 7: Calculating measures of economic return
  Task 7.1: Aggregation of impacts
  Task 7.2: Accounting for the passage of time
  Task 7.3: Calculating Return on Investment
  Task 7.4: Conducting a sensitivity analysis
Task 8: Documenting recommendations for optimising impact
  Task 8.1: Lessons learnt
  Task 8.2: Best practices
  Task 8.3: Barriers to adoption
Task 9: Reporting impact evaluation findings
  Task 9.1: Documenting assumptions and decisions
  Task 9.2: Maintain a consistent report structure
  Task 9.3: Refer to exemplary impact assessments
  Task 9.4: Form a communications plan

List of acronyms
ABS – Australian Bureau of Statistics
AUD – Australian dollar
BCA (also CBA) – Benefit-Cost Analysis (or Cost-Benefit Analysis)
BCR – Benefit-Cost Ratio
CSIRO – Commonwealth Scientific and Industrial Research Organisation
CPI – Consumer Price Index
CEA – Cost-Effectiveness Analysis
DID – Difference-in-difference
FTE – Full-time equivalents
FY – Fiscal year
GDP – Gross Domestic Product
IRR – Internal Rate of Return
IP – Intellectual Property
MSC – Most Significant Change
NPV – Net Present Value
EERE – Office of Energy Efficiency and Renewable Energy
OLS – Ordinary least squares
PGPA – Public Governance, Performance and Accountability Act 2013
PV – Present Value
ROR – Rate of Return
RD – Regression-discontinuity
R&D – Research and Development
RD&E – Research, development and extension
ROI – Return on Investment
SMEs – Small and Medium Enterprises
SIA – Social Impact Assessment

Glossary of terms
Adoption profile: The level of anticipated uptake or use of research knowledge and techniques over time.
Base year: The year in which the impact evaluation is conducted and reported; used as the year of comparison for inflation and discounting calculations.
Benefit-Cost Analysis (BCA): Process for identifying, quantifying and comparing the benefits and costs of an investment, action or policy.
Benefit-Cost Ratio (BCR): Ratio of the present value of benefits to the present value of costs.
Consumer Price Index (CPI): Measure of the average change over time in prices paid by urban consumers for a market basket of consumer goods and services.
Cost-Effectiveness Analysis (CEA): Method that compares interventions by estimating how much it costs to gain a common unit of outcome (e.g. life years gained or deaths prevented).
Counterfactual: The hypothetical situation that would have occurred in the absence of CSIRO's intervention.
Difference-in-difference (DID): Compares the changes between a treatment group and a control group before and after exposure to treatment.
Discount rate: Rate of return used to discount future cash flows back to their present value. CSIRO uses a standard social discount rate of 7%.
Discounting: Technique used to compare costs and benefits that occur in different time periods by accounting for the social time preference.
Double counting: Counting the same impact more than once when it should be counted only once.
End user: Beneficiaries who use developed technologies, products, services or policy recommendations to address specific needs.
Ex-ante: Before the event (prospective or formative).
Ex-post: After the fact (retrospective or summative).
Externality: An indirect impact from research on a 'third party', that is, someone other than the direct users or adopters of the research outputs and applications.
Flow-on effect: A linkage within a system through which initial impacts lead to subsequent impacts within that system.
Gross Domestic Product (GDP): Measure of the value added created through the production of goods and services in a country during a certain time period.
Impact pathway: Description of how impact occurs and can be measured. Includes Inputs, Activities, Outputs, Outcomes and Impacts.
Inflation: Rate of increase in prices over a given period of time.
Innovation: Improvement to existing technology or processes.
Internal Rate of Return (IRR): The percentage yield on an investment.
Most Significant Change (MSC): Qualitative, participatory methodology focused on capturing project participants' stories of significant change or impact.
Net Present Value (NPV): Difference between the present value of benefits and the present value of costs.
Office of Energy Efficiency and Renewable Energy (EERE): Office under the U.S. Department of Energy that aims to make renewable energy cost-competitive with traditional sources of energy.
Ordinary Least Squares (OLS): Method for choosing the unknown parameters in a linear regression model by the principle of least squares.
Primary data: Data collected firsthand through surveys, interviews, focus groups, etc., when secondary data is not available, not relevant or not appropriate.
Private benefits: Profits or cost savings that accrue specifically to one party.
Public benefits: Benefits that accrue to end users of the product or service, or to those who may be affected because of externalities and spillovers.
Quasi-experimental approach: Used to estimate the causal impact of an intervention, but specifically lacks the element of random assignment to treatment or control.
Rate of Return (ROR): Net gain or loss of an investment over a specified time period, expressed as a percentage of the investment's initial cost.
Regression-Discontinuity (RD): Method which aims to determine the causal effects of interventions by exploiting a cutoff or threshold above or below which an intervention is assigned.
Return on Investment (ROI): Calculation of the monetary value of an investment versus its cost.
Secondary data: Existing data sources.
Sensitivity analysis: Investigation of the parameter values and assumptions underlying a model, the degree to which they are subject to potential changes, and their impacts on the conclusions to be drawn from the model.
Small and Medium Enterprises (SMEs): Businesses whose personnel and revenue numbers fall below certain limits.
Social benefits: Private and public benefits combined.
Social Impact Assessment (SIA): Framework that can assess the impacts of a wide range of types of change. Requires a range of different data, including qualitative and quantitative data, depending on the methods being applied.
Social time preference: Principle that, generally, people prefer to receive goods and services sooner rather than later.
Spillover: Occurs when a user from a different sector adopts a technology for use in an unintended way.

Introduction

The centrality of impact for CSIRO

Our purpose is to solve the greatest challenges through innovative science and technology. As one of the world's largest multidisciplinary science and research organisations, we focus on the issues that matter most for Australia's quality of life, the economy and our environment.1

CSIRO, the Commonwealth Scientific and Industrial Research Organisation, was established to produce positive impacts for the people of Australia. CSIRO's origins date back to 1916, with the formation of the Commonwealth Advisory Council for Science and Industry,2 and since that time CSIRO has grown to become one of the largest multidisciplinary science and research organisations in the world.

CSIRO now helps the nation overcome six challenges and turn them to Australia's unique advantage, to help future-proof our quality of life, our economy and our environment. The six challenges are health and wellbeing; food security and quality; a secure Australia and region; resilient and valuable environments; sustainable energy and resources; and future industries. In addition, the organisation manages research facilities for the nation and provides services such as education and outreach, connection to small and medium enterprises (SMEs), and strategic advising. Working from sites across the nation and around the world, the aim of every CSIRO staff member is to solve seemingly impossible problems and create new value and a better future for all Australians.

Box 1: What is impact for CSIRO?
CSIRO defines impact as a positive effect on or benefit to the economy, society and environment, beyond contributions to academic knowledge. For the purposes of CSIRO's impact evaluations, impact is the effect of CSIRO work that is generated after this work has been adopted.
CSIRO was established to produce positive impact.
CSIRO's vision is to create a better future for Australia.
Evaluation provides robust evidence of impact.

Why evaluate impact?

Stating the goal of producing positive impact is not enough. For CSIRO to fulfil its purpose, each year it must provide interested parties (and itself) with robust evidence that this goal is being accomplished. This is the purpose of CSIRO's impact evaluation activities: to provide a firm evidence base of the effects of CSIRO's research and innovation activities on the economy, environment and society.

Industrialised economies are increasingly relying on technology development and deployment to raise productivity and thereby increase economic competitiveness, as well as to address other significant challenges beyond economics. Equally important, the complexity of new technologies combined with the pressure to develop and deploy them quickly mandates more efficient technology-based growth and innovation models. Managing technology development and deployment programs requires not only a solid rationale, but also real-time monitoring and ex-post evaluation of the impacts of these programs.

The main drivers behind CSIRO's increasing interest in evaluating its research impact are represented in the 7 A's of impact evaluation.3 Evidence generated through impact evaluation is provided to relevant parties, including:
• Government, for the purposes of accountability as required under legislation4 and by principles of better practice performance management;
• CSIRO leadership, to improve the alignment of purpose, mission, vision, goals, objectives and outcomes; to steer adaptation in organisational structure, culture, activities and priorities; and to inform future funding allocation towards areas that show the greatest promise;
• CSIRO researchers and business development managers, to support analysis on how to improve CSIRO research and innovation activities; and
• the Australian public, to acclaim the value of activities undertaken, outputs produced and changes made; and to advocate for the vitally important role that science, research and innovation play in ensuring Australia's security and prosperity.

1 https://www.csiro.au/en/about/corporate-governance/corporate-plan/23-24-corporate-plan
2 This body was replaced in 1920 by the Commonwealth Institute of Science and Industry, which was replaced in 1926 by the Commonwealth Council for Scientific and Industrial Research, which was replaced in 1949 by CSIRO.
3 Adapted from Adam et al., 2018.
4 Most particularly, CSIRO's establishing legislation and the Public Governance, Performance and Accountability Act 2013.

Ultimately, the value of an impact evaluation is measured by the strength of the evidence produced and the credibility of the evaluation to its intended audience(s). Most particularly, though, it is demonstrated by the use of the evaluation information to inform and improve future decisions and actions. For these reasons, CSIRO actively seeks to ensure that its research evaluation reports are well used by their intended audiences.
The 7 A's of impact evaluation:
• Accountability – to Parliament, our clients and the public; required by funding bodies and legislation (PGPA Act 2013).
• Acclaim – communication of the value of CSIRO activities, outputs and changes made; greater confidence for stakeholders.
• Advocacy – evidence-based articulation of the value of science and innovation; increase public buy-in.
• Alignment – inform strategy: purpose, mission, vision, goals, objectives, outcomes; One CSIRO.
• Adaptation – inform strategy: identify needs for organisational change; nimbly address societal changes.
• Allocation – inform strategy: inform investment decisions for greater returns; align capabilities to customer needs.
• Analysis – greater awareness of collective action and impact informing future program design and delivery; improvement in performance management.

Why produce an Impact Evaluation Guide?

CSIRO's research activities and their impacts are diverse and occur across many sectors of the economy. Some impacts can be evaluated quantitatively using economic analysis or statistical methodologies, with results expressed in monetary terms. Other types of impacts, especially those relating to social effects, may have to be evaluated qualitatively. Ultimately though, each impact must be assessed within the context of a common framework if a comprehensive understanding of CSIRO's impact and return on investment is to be developed. This Impact Evaluation Guide articulates that common framework; its consistent and rigorous use across CSIRO supports comparability of results from each evaluation across business units and time.

The Guide describes the minimum requirements for all CSIRO impact evaluations, regardless of the purpose of the evaluation or the 'unit of evaluation' (which could be an individual project, subject area, business unit or the whole enterprise). It guides researchers, CSIRO staff and engaged external support to address key relevant questions in a logically consistent manner, to select the appropriate resources and methods in the evaluation of CSIRO research, and to ensure consistency in analysing results.

Why has CSIRO publicly released the Guide?

The Guide has been publicly released because the need to demonstrate impact faces all publicly funded research in Australia. This need was heightened by the introduction of the Public Governance, Performance and Accountability Act 2013, which strengthened the planning, performance and reporting requirements for all Australian Government departments and agencies.

CSIRO believes that it is beneficial for the broader innovation system for Australia's publicly funded research organisations to use a common approach to the assessment of the outcomes and impacts of their research. Doing so will allow the outputs of all such evaluations to be used collectively to demonstrate the significant public benefits that are constantly being generated by public funding for science, research and innovation. The collective results of such evaluations can also be used by funding agencies, government departments and academic analysts in support of improving Australia's innovation system performance.

A secondary motivation for the public release of this Guide is to foster dialogue with CSIRO's peers relating to research impact evaluation. CSIRO seeks to continually improve its own practices and to strengthen its own internal evaluation culture.
Such changes will occur most rapidly and effectively to the degree that CSIRO staff are able to compare their evaluation efforts with those undertaken within other research organisations.

It is important to note that the Guide is not being publicly offered because it advocates a new and different methodology in comparison to those applied elsewhere. On the contrary, the overall approach proposed within the Guide has been chosen to conform with both Australian Government standards and international 'best practice'.

Source materials

As noted, the methodological approaches set out in the Guide have been developed to accord with advice provided by relevant Australian Government departments and agencies. This advice includes:

TITLE | AUTHOR, YEAR
Guidance Note: Cost Benefit Analysis | Department of the Prime Minister and Cabinet, The Office of Impact Analysis, 2023
Resource Management Guide No. 131: Developing good performance information | Department of Finance, 2024
Guide to economic appraisal | Infrastructure Australia, 2021
Environmental Policy Analysis: A Guide to Non-Market Valuation | Productivity Commission Staff Working Paper, 2014
Valuing the Future: the social discount rate in cost–benefit analysis | Productivity Commission Visiting Researcher Paper, 2010
Guidelines for assessing the impacts of ACIAR's research activities | Australian Centre for International Agricultural Research (ACIAR), 2008

Because it seeks to align with Australian Government practice, this Guide is subject to update as developments occur across the Australian Government, and particularly as part of the Australian Public Service Reform Agenda. Evaluators should consult the most current versions of these resources, apply any updated guidance provided therein, and communicate appreciable differences between them and this Guide to CSIRO.

Below are other sources of reference material relevant to this Guide.
• Discount Rates (New Zealand Treasury, 2024)
• The Economic Appraisal of Investment Projects at the EIB: 2nd Edition (European Investment Bank, 2023)
• Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs (U.S. Office of Management and Budget, 2023)
• HM Treasury Green Book (2024) and Magenta Book (2020)
• Evaluating the Realized Impacts of DOE/EERE R&D Programs: Standard Impact Evaluation Method, 4th Edition (O'Connor & Walsh, 2024)
• Measuring research: A guide to research evaluation frameworks and tools (RAND Europe, 2013)
• The Handbook of Practical Program Evaluation, 3rd Edition (Wholey et al., 2010)
• Handbook on the Theory and Practice of Program Evaluation (Link & Vonortas, 2013)

Overview

CSIRO's impact evaluation principles

To ensure consistency in the application of CSIRO's Impact Framework (refer to Appendix A) and to maximise the opportunity to compare evaluation results, CSIRO has adopted a series of core evaluation principles to guide all research impact evaluations conducted by, or on behalf of, CSIRO. The principles are:

1. Impact evaluation5 should be designed to document effective outcomes: the purpose and intended audience must drive the design of the impact evaluation. If appropriation funding was used to conduct the research, then the Australian Government must be considered part of the intended audience.
2. CSIRO is interested in identifying all significant impacts (positive and negative, intended and unintended) of its research interventions using a triple-bottom-line lens, one that considers economic, environmental and social impacts.
Difficulties in evaluating a specific research impact should not discourage its evaluation.
3. As with planning for impact, and monitoring progress towards it, it is important to engage with all relevant parties during the impact evaluation to ensure a more complete investigation of the outcomes and impacts and any associated usage or adoption costs. The value of CSIRO work lies with those who adopt the outputs, and therefore these users must be consulted regarding the extent of their values. Further, value creation is often driven by the collaborations CSIRO enters into with its key research and industry partners. A discussion of the nature and value of the relationships relevant to the research project or program under evaluation should be included in the case study report.
4. CSIRO uses benefit-cost analysis (BCA) as its primary methodology for research impact evaluation and augments this approach with other evaluation methodologies as appropriate, depending on the nature of the projects, outcomes or impacts being evaluated. Other evaluation methodologies may include cost-effectiveness analysis (CEA), real options analysis, social network analysis or other qualitative analyses. Impacts should be measured relative to a baseline or counterfactual in which prevailing trends continued.
5. Where possible, all impacts evaluated should reference the relevant associated CSIRO Impact Categories (described in Appendix B) to ensure later comparability and possible aggregation.
6. When appropriate and feasible, every effort should be made to quantify and monetise all identified outcomes and impacts, both positive and negative. A narrative must be provided to articulate the nature of the outcome or impact along with any assumptions made about it, especially if it is being evaluated using non-market evaluation techniques.
7. All assumptions and key decisions made throughout the evaluation need to be documented in the final Impact Evaluation Report to ensure that the process is transparent, and to enable users of the evaluation findings to know the limits of any future comparison and aggregation across impact evaluations.
8. CSIRO attributes research effort primarily based on a cost share of the total research, development and extension or marketing investment that is necessary to achieve the outputs and outcomes. If other shares are appropriate, they should be agreed through consultation with collaborating organisations.
9. CSIRO uses a standard social discount rate of 7%. All costs and benefits need to be expressed in real dollars (i.e. adjusted for inflation) and then discounted using this social discount rate. The base year for inflation adjustment and discounting should be the same. Costs are assumed to be incurred at the beginning of a period, and benefits accrue at the end of a period (see the worked sketch below).
10. Where it is at all possible, and in the interests of audit, all relevant parties must be asked to validate the quantitative and qualitative descriptions of the outcomes and impacts they provide before finalising the impact evaluation.

5 Within the context of CSIRO's Impact Evaluation Guide, the term 'impact evaluation' is used to refer specifically to ex-post impact evaluation.

CSIRO's impact evaluation process

The recommended steps for conducting a CSIRO impact evaluation are summarised below and detailed throughout the rest of this guide.
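To make the discounting convention in principle 9 concrete, the short Python sketch below uses entirely hypothetical cash flows and CPI figures: nominal amounts are first expressed in real, base-year dollars and then discounted at 7%, with costs treated as incurred at the start of each year and benefits as accruing at the end of each year (one reasonable reading of that convention). This is illustrative only; Appendix I contains the Guide's own worked discounting and inflation examples.

```python
# Minimal sketch of principle 9 (hypothetical figures, not CSIRO data).
# Nominal cash flows are converted to real (base-year) dollars using CPI,
# then discounted at the standard 7% social discount rate.

DISCOUNT_RATE = 0.07

# Hypothetical CPI index by year (base year = 2024).
cpi = {2022: 120.0, 2023: 127.0, 2024: 133.0}
base_year = 2024

def to_real(nominal: float, year: int) -> float:
    """Express a nominal amount in base-year (real) dollars."""
    return nominal * cpi[base_year] / cpi[year]

def present_value(amount: float, years_from_base: float) -> float:
    """Discount (or compound) a real amount to the base year at 7%."""
    return amount / (1 + DISCOUNT_RATE) ** years_from_base

# Hypothetical project: costs in 2022-2023, benefits in 2023-2024 (nominal AUD).
costs = {2022: 1_000_000, 2023: 500_000}
benefits = {2023: 400_000, 2024: 1_800_000}

# Costs at the beginning of each year; benefits at the end of each year
# (hence the extra year in the benefit exponent). Past amounts are
# compounded forward to the base year.
pv_costs = sum(present_value(to_real(c, y), y - base_year) for y, c in costs.items())
pv_benefits = sum(present_value(to_real(b, y), y - base_year + 1) for y, b in benefits.items())

npv = pv_benefits - pv_costs   # Net Present Value
bcr = pv_benefits / pv_costs   # Benefit-Cost Ratio
print(f"PV costs: {pv_costs:,.0f}  PV benefits: {pv_benefits:,.0f}")
print(f"NPV: {npv:,.0f}  BCR: {bcr:.2f}")
```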
Figure 1: CSIRO's impact evaluation process
STEP 1 – Establishing the purpose and audience: Determine what is being sought from the evaluation, for whom, the relative priority, and how the evaluation outcomes will be used.
STEP 2 – Clarifying the background information: Clarify the context by identifying the need CSIRO addresses with the work.
STEP 3 – Identifying the impacts: Determine the impacts to be evaluated, the pathways connecting them back to CSIRO, and the relevant parties engaged throughout those pathways.
STEP 4 – Clarifying the impacts: Identify a credible counterfactual, estimate CSIRO's proportional effort, and establish how much impact has been realised.
STEP 5 – Selecting the appropriate mix of methods: Select relevant economic and non-economic analysis methods for estimating impacts.
STEP 6 – Evaluating the impacts: Measure each impact, including costs, benefits, externalities and distributional effects, with inflation and discounting in mind.
STEP 7 – Calculating measures of economic return: Present an aggregate net present value and benefit-cost ratio across all types of impacts measured and perform a reality check.
STEP 8 – Documenting recommendations for optimising impact: Identify and document lessons learnt, best practices, and barriers to adoption.
STEP 9 – Reporting: Document assumptions, decisions and limitations, write up the findings and disseminate results.

STEP 1: Establishing the purpose and audience

Impact evaluations may be conducted for a range of purposes and to provide information to a range of audiences. The way a particular evaluation is conducted, its unit of evaluation (e.g. project, program, business unit), the data that is collected for it and the methodology used to interrogate that data are all functions of the evaluation's purpose and audience. Step 1 of CSIRO's Impact Evaluation Process is therefore to establish these aspects of the evaluation.

Generally, CSIRO impact evaluations are undertaken for one or more of the four purposes of impact evaluation (i.e. accountability, allocation, analysis and advocacy; see the Introduction and Table 1 below). Considering CSIRO's commitment to evaluation, all relevant parts of CSIRO should gather and store information that could support future impact evaluations. This includes but is not limited to any market or technical publications describing the rationale for action and investment; any analysis results describing or detailing the advantages, disadvantages, costs or benefits of the solution; and basic data about uptake and usage by all relevant parties (e.g. who, when, where, why, how many).

Table 1: Purposes and audiences of CSIRO impact evaluations (7 A's of impact evaluation)
PURPOSE | AUDIENCE
Accountability: To provide evidence that research funding has been used effectively and in line with its initial intent | External regulatory or funding bodies (e.g. Treasury, Australian National Audit Office)
Allocation: To assess progress and inform future allocation of research funding to ensure that resources are used in the best and/or most efficient way | Internal: Executive Team and Executive Science Reviews
Alignment: To position purpose, mission, vision, goals, objectives and outcomes in the same direction | Internal: Executive Team and Executive Science Reviews
Adaptation: To steer change in organisational structures, behaviours and cultures, and activities and priorities | Internal: Executive Team and Executive Science Reviews
Analysis: To understand the reasons for success/failure of research outcomes and identify lessons learnt and areas for improvement | Internal: business unit, program or group reviews
Advocacy: To demonstrate benefits and 'make the case' for a specific research area under the program of work, including CSIRO's social licence to operate in particular fields | Community, industry and other external organisations and the broader public
Acclaim: To compare and recognise the value of activities undertaken, outputs produced and changes made | Community, industry and other external organisations and the broader public
Source: Adapted from Adam et al., 2018

TASK 1: Identify purpose and audience
Identify the purpose and audience of the impact evaluation that is being undertaken. If there are multiple purposes and audiences, determine their relative priority.

STEP 2: Clarifying the background information

All CSIRO research projects and programs are carried out to meet an existing industrial, local, regional, national or global need. Addressing this need might entail (among other things):
• developing or implementing an innovation, new task or capability; and/or
• investigating a technology, material, compound, process, organism or phenomenon.

Because the impacts being evaluated arose from CSIRO addressing the need, they will only be properly understandable when described in that context. For this reason, the evaluation report must provide sufficient details of the need to make it clear why CSIRO was involved. The rationale for CSIRO investment, action or participation must be clearly articulated in the final evaluation report. Because contextual information on the history of the business, technology, task or issue will play an essential role in the final evaluation report, the sources of all these facts, obtained through background research, must be credible and verifiable.

TASK 2: Background
For each unit of evaluation, consider what elements of context must be known for the full significance of the impacts to be properly understood. Undertake thorough background research and carefully reference all factual material derived from this process.

The history of the need CSIRO addressed provides essential context for understanding the impacts. Facts obtained through background research must be credible and verifiable.

STEP 3: Identifying the impacts

Logically, before an impact can be evaluated it must first be identified; and for an impact to be claimed as a 'CSIRO impact' there must be a clear pathway leading from the impact back to CSIRO. Hence, Step 3 of the evaluation process involves identifying the impacts to be evaluated, the pathways connecting them back to the research and innovation activities undertaken within the unit of evaluation, and their broader context.
3.1 Establishing the impact pathways

An identified impact is only suitable for evaluation if a traceable causal relationship can be shown running from the original CSIRO initiative, through the creation of outputs, uptake and adoption outcomes, to the ultimate impacts. This relationship is known as an impact pathway and is encapsulated within CSIRO's Impact Framework, depicted in Figure 2. It consists of inputs (such as funding, staff, infrastructure and intellectual property (IP)), activities (such as research and development (R&D), collaboration and extension), outputs (such as materials, technologies, processes and skills), outcomes (such as the adoption of the outputs) and impacts (economic, social and environmental). Further details on the Impact Framework are provided in Appendix A.

Figure 2: CSIRO's Impact Framework

Multiple pathways to impact
An impact pathway can be a complex chain of events with a range of variables affecting each link in the chain, especially if the project included early-stage, strategic research. Additionally, impacts may be delivered through multiple pathways. These may include commercial, capacity and capability building, advice, policy and regulatory, intellectual property and prospective pathways. Capturing all possible pathways is important in a science and technology innovation space, as policy settings, existing innovation capacity and other factors can present either barriers or opportunities in enabling impact.

Impact statements and plans
Ideally, outcomes and impacts would be planned and anticipated using program management tools (such as CSIRO Impact Statements) at the commencement of the project or program. Projects without previously developed impact pathways can develop them for the first time during an impact evaluation.6 However, projects with previously developed impact pathways will be advantaged by already having collected monitoring information that can inform impact evaluations. Projects guided by an impact plan will also have better opportunities to maximise their outcomes and impacts.

6 Indeed, CSIRO researchers have developed specific methods for this purpose (e.g. Lazarow et al., 2015).

TASK 3.1: Impact pathways
Determine if impact pathways have already been set out for the unit of evaluation.
• If so, then review and update them as required with input from researchers, Business Development staff and other relevant stakeholders.
• If not, develop the relevant pathway. Be sure to consider all possible pathways through which impacts are delivered.

Tracing the causal relationships from research input to impact is imperative and should be incorporated into the project planning phase when possible. CSIRO has developed procedures for mapping impact pathways.

3.2 Identifying impacts across categories

Outcomes and impacts from research can be nuanced and multifaceted. For example, a new fuel efficiency technology may have economic impacts (reduced fuel expenditure), environmental impacts (avoided emissions from fuel combustion) and social impacts (avoided adverse health events because of improved air quality, or environmental justice for communities near particularly intensive use of the predecessor technology). Further, if the technology is embodied in a new product, there may be gross domestic product (GDP), employment and labour income impacts.

Triple-bottom-line impacts
Impacts may be economic, societal or environmental.
Although they may not all be quantifiable, all dimensions of impact are important to CSIRO.
• Economic Impacts: Impacts on an economic system at a local, national or global level, such as changes in revenue, operating costs, profitability, GDP, employment or investment returns.
• Social Impacts: Impacts on the wellbeing of the surrounding and wider community. Social impacts include effects on health, equality, living standards, cohesion, resilience, security and safety practices.
• Environmental Impacts: Impacts on living and non-living natural systems, including ecosystems, land, air and water.

Table 2 provides the current list of subcategories within each of these master categories. A more detailed description of each of the subcategories is provided in Appendix B. This list provides a good starting point for identifying impacts, but it is not exhaustive of all possible impacts. If a project being evaluated has generated impacts that are not on this list, then they should be included.

Table 2: CSIRO's impact categories
Economic impacts: National economic performance; Trade and competitiveness; Productivity and efficiency; Management of risk and uncertainty; Policies and programs; New services, products and markets; Livestock health and prosperity; Protecting existing markets.
Environmental impacts: Air quality; Ecosystem health and integrity (natural capital); Climate; Natural hazards mitigation; Energy generation and consumption; Land quality; Aquatic environments; Built environments.
Social impacts: Health and wellbeing; Access to resources, services and opportunities; Quality of life (material security and livelihoods); Safety and security (cyber, biological, civil and military); Individual and community resilience; Indigenous culture and heritage; Innovation and human capital (creativity and invention); Social cohesion (social inclusion, social capital and social mobility).

Avoiding double counting
Although impacts may be multifaceted, care must be taken not to double count impacts. This is especially true when impacts are being monetised using either market or non-market valuation methods. Continuing the example from above, combining fuel cost savings and avoided expenditures to treat adverse health events is acceptable because the values being combined reflect distinct aspects of the impact. However, combining community members' willingness-to-pay (WTP) to live in a cleaner environment with avoided health expenditures would be double counting, because the WTP estimate likely accounts for the value someone would ascribe to avoiding any health problems they would otherwise face.

TASK 3.2: Identify impacts and their categories
Starting with the provided CSIRO impact category examples, but moving beyond them where necessary, identify the economic, environmental and social impacts of the project being evaluated. Consider both intended and unintended impacts, and both benefits and adverse consequences.

CSIRO identifies impacts using triple-bottom-line categories: economic, social and environmental. All impacts should be identified in the evaluation, regardless of monetisation.

3.3 Identifying relevant parties

As shown in CSIRO's Impact Framework, CSIRO has control over its inputs, activities and outputs, but can only influence outcomes and impacts, either directly or indirectly. This means that CSIRO is unable to deliver its outcomes and impacts in isolation. An impact pathway articulates a systemic view where partners and other relevant parties play a role.
It is thus important to identify the partners and other relevant parties whom CSIRO has worked with throughout its impact pathway. For example:
• Inputs: Who are CSIRO's funders and resource providers?
• Activities: Who is working with CSIRO to mobilise the inputs towards achieving the outputs?
• Outputs: Who are CSIRO's collaborators in delivering the outputs?
• Outcomes: Who are the impact generators taking CSIRO's outputs through to outcomes and helping CSIRO deliver the impacts?
• Impact: Who are the beneficiaries to whom the impacts are targeted?

Identifying public and private beneficiaries
Pay attention to the beneficiaries, either direct or indirect, of the project output. Private benefits are measured in the form of profits or cost savings that accrue specifically to one party, often the innovator. The innovator may be CSIRO or a firm partnering with CSIRO. Public benefits are benefits that accrue to end users of the product or service, or to those who may be affected because of externalities and spillovers. The combination of public and private benefits equals social benefits.

Accounting for benefits transfers
Care should be taken to consider where there are transfers of value between two or more Australian parties. Common transfer payments are sales revenue and royalty payments. If the analysis is conducted from the private perspective, then these are important benefit streams. If the analysis is conducted from the social perspective, revenues and royalties are transfers of value between parties and, therefore, may be signals of value creation but not of net economic gains. The exception is when there are revenues earned by Australian parties from non-Australian sources. Because the evaluation focus is typically on returns to Australian society, royalties to Australian parties (including CSIRO) are only a net benefit to Australia if they come from abroad.

This also relates to the above remarks on double counting. For example, counting royalty payments from an Australian end user of a CSIRO technology along with the monetised benefits experienced by end users would be double counting, because the royalty payment likely reflects at least some portion of the benefit the user gains from the technology.

TASK 3.3: Identify beneficiaries
Identify relevant parties at each point of the impact pathway. Identify all beneficiaries, both direct and indirect, and public and private. Properly account for benefits transfers among beneficiaries.

Transfers between beneficiaries can be important indicators of value even if they do not increase net benefits. Impacts affecting beneficiaries across a supply chain must not be double counted.

STEP 4: Clarifying the impacts

Having established the impacts to be evaluated and that these impacts can be attributed back to CSIRO inputs and activities, Step 4 involves clarifying the impact narrative in the light of (1) what would have happened even with no involvement by CSIRO; (2) the contributions made by other organisations; and (3) how much of the anticipated impact is still to occur. The impact evaluation must also encompass an analysis of any significant negative unintended impacts to ensure a holistic assessment of the initiative's effectiveness.

4.1 Counterfactual

Impact evaluation focuses on those incremental impacts that result from CSIRO's work, which we refer to as net impacts in this section. The net impact is estimated by comparing the observed or expected benefits with what is known as the 'counterfactual'.
The counterfactual is the hypothetical situation that would have occurred in the absence of CSIRO's intervention. It is important to rule out alternative explanations for the cause of observed outcomes or impacts to convincingly establish the degree to which a particular research intervention is responsible for an observed outcome or impact. It must be recognised that the counterfactual may not be static. In the absence of action by CSIRO, prevailing technological trends could mean that progress would have occurred, albeit on a longer time scale, at greater cost or resulting in outputs producing lower efficiency or productivity. Net impacts should be measured relative to a realistic counterfactual.

Counterfactual analysis enables evaluators to attribute cause and effect between the research intervention and the observed or expected outcomes and impacts. By establishing the counterfactual, it is possible to isolate the influence of any alternative explanations to reveal the net impact of CSIRO's research. When the counterfactual cannot be directly observed, it must be approximated with reference to a comparison group or other intelligence. As discussed in Box 2, a range of accepted approaches exists for determining an appropriate comparison group for counterfactual analysis.

Box 2: Data collection for the counterfactual
Ideally, consideration of the counterfactual will commence during the planning phase of a research program. If a baseline is established before research commences, and evidence of the state of the counterfactual is collected alongside ongoing monitoring activities of a treatment group, then the scientific method provides a ready solution to generating a robust counterfactual (through the use of control and treatment groups). If not, then the task of establishing a counterfactual retrospectively is achievable, but is more complicated. Retrospective evaluations are usually conducted after the implementation phase and may exploit existing survey data, although the best evaluations will collect data as close to baseline as possible to ensure comparability of treatment and control groups. One solution might be to look at the situation at the start of the research. Looking at analogous sectors or situations where adoption has not taken place: what are non-adopters doing?

Components
Counterfactual analysis includes:
• an explanation of how changes in the outcomes would have occurred in the absence of a particular program of work;
• a test of the effect of changes in those key variables that define the counterfactual through a sensitivity analysis; and
• the use of control treatments in field experimentation, or a replaced technology identified in adoption surveys, as a starting point in technology-oriented impact evaluations.

When developing a counterfactual, it is essential to identify:
• substitutes that could have led to similar outcomes/impacts; and
• factors outside of CSIRO that may/did influence changes in the outcomes/impacts of interest.

Treatment of substitutes
In a hypothetical world without the specific research intervention under evaluation having been undertaken, the counterfactual may differ from the status quo because new technologies, product varieties and other factors may have become available from other sources (e.g. as a result of other research work being undertaken nationally or internationally, or through learning by doing).
If equivalent substitutes (e.g. technologies, processes) have been developed, then these need to be identified and incorporated into the counterfactual. In this case, CSIRO's research impact would be calculated as the difference between its own research impact and the research impact of the next closest substitute.

Treatment of outside factors
In the same hypothetical world noted above, the counterfactual may also differ from the status quo as key factors outside of CSIRO's influence change over time. Social behavioural change, change in consumer preferences, environmental changes, macroeconomic trends and regulatory changes may affect the outcomes and impacts of interest. If, for instance, a particular program of work increases agricultural production, then this increase is to be considered net of changes driven by other factors that had an influence on production, such as weather, pest outbreaks and changes to work practices.

TASK 4.1: Counterfactual
For each impact under investigation, consider:
• what would have happened without CSIRO's work?
• are there any substitutes that could have led to similar outcomes/impacts?
• have external factors influenced changes in the outcomes/impacts of interest?

4.2 Attribution

The next step is to consider how much of the observed impact is in fact attributable to CSIRO. This includes consideration of the work of collaborating organisations and of new inputs beyond the research intervention under investigation.

Treatment of collaborating organisations
CSIRO often undertakes work in collaboration with other organisations, which includes, for example, sharing capability, funds and intellectual property. Therefore, calculating CSIRO's, or another party's, direct proportional effort towards the realised impact (intended or unintended) requires careful consideration of the key roles of the participating organisations in a program of work. Specifically, this involves apportioning benefits when more than one organisation has participated in generating and adapting a technology or new idea, or where other inputs into additional work were also required before the impact occurred.

CSIRO primarily uses the practice of attributing effort based on a cost share of the total research, development and extension (RD&E) investment that was necessary to achieve the outputs and outcomes. For example, if CSIRO funded 60% of the RD&E and an external organisation funded the remaining 40%, then 60% of the impact would be attributed to CSIRO. Attributing impacts with cost shares is particularly useful if the level of effort (or research or other work) involved across organisations is similar. However, if most of the novel work is undertaken by a single organisation, then the cost share approach may not reflect the actual contribution to the research, and attribution shares may have to be adjusted. To avoid discrepancies across impacts generated from multiple programs of work, criticism from partners, and self-reporting bias by staff members on their own contribution, it is recommended that, where possible, participating organisations agree on the shares (or the apportioning method) to be used for the purpose of the impact evaluation.

Treatment of new inputs
To apply research, or other works, and translate outputs into outcomes or impacts, a number of new inputs may be required. Those new inputs may themselves affect outcomes and impacts. In such cases, the outcomes and impacts of CSIRO's work should be net of the outcomes and impacts associated with those new inputs (see the sketch below).
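Bringing the counterfactual comparison (Section 4.1) together with the cost-share attribution and new-input netting described above, the following minimal Python sketch uses entirely hypothetical figures; the variable names, numbers and the ordering of the adjustments are illustrative assumptions, not prescribed CSIRO method.

```python
# Hypothetical worked example of netting out an impact (illustrative only).

observed_benefit = 10_000_000       # gross benefit observed with the CSIRO technology (AUD/yr)
counterfactual_benefit = 6_500_000  # benefit expected from the next-closest substitute (AUD/yr)

# Net impact relative to the counterfactual (Section 4.1).
net_impact = observed_benefit - counterfactual_benefit

# Attribution by cost share of the total RD&E investment (Section 4.2):
# in this hypothetical case CSIRO funded 60% and a partner organisation 40%.
csiro_cost_share = 0.60
attributed_impact = net_impact * csiro_cost_share

# New inputs required to translate outputs into outcomes (e.g. new equipment)
# are netted off. Whether this netting is applied before or after attribution
# should follow the evaluation's documented assumptions; it is applied after
# attribution here purely for illustration.
new_input_costs = 300_000
csiro_net_impact = attributed_impact - new_input_costs

print(f"Net impact vs counterfactual: {net_impact:,.0f}")
print(f"Attributed to CSIRO (60% cost share): {attributed_impact:,.0f}")
print(f"Net of new input costs: {csiro_net_impact:,.0f}")
```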
For instance, new equipment may need to be purchased to apply a new technology developed by CSIRO and thus translate the technology into increased production. In that case, estimated research impacts (i.e. the value of increased production as a result of CSIRO's research) should be considered net of those additional input costs.
TASK 4.2: Attribution
Were any collaborating organisations critical to achieving the outcomes/impacts?
• If yes, determine their proportional effort, establish a defensible share of impacts attributable to CSIRO and then use that share to calculate 'net' impacts in Step 4.
Were any new inputs, such as new equipment or new skills, critical to achieving the outcomes/impacts?
• If yes, calculate their proportional contribution and then use that proportion to calculate 'net' impacts in Step 4.
A brief numerical sketch combining these adjustments with the counterfactual and substitute adjustments from Task 4.1 is provided after the footnotes below.
Counterfactual analysis isolates the influence of any alternative explanations to reveal the net impact of CSIRO's research
Net impact is CSIRO's impact minus that of the closest substitute
Not all of the impact may be attributable to CSIRO's activities
Attribution shares should be agreed through consultation with collaborating organisations
4.3 Adoption
It is during the uptake and adoption phase (described in CSIRO's Impact Framework as 'outcomes') that CSIRO's work starts to be translated into measurable outcomes and impacts. Uptake and adoption may begin in the form of trials undertaken by CSIRO (i.e. internal use only) or by selected 'next users' (such as industry partners or government bodies, i.e. external use). It is only when outputs are being used externally that work has a practical application to which a realised value can be attached. Otherwise, such valuations are predictions of potential value. Valuation of impacts based purely on internal trial information or an uncertain uptake profile is outside the scope of a purely ex-post7 impact evaluation. However, that type of information may be useful in real options analysis (discussed below), in an ex-ante8 impact evaluation, or in monitoring progress towards impact. Focusing on those research outputs that are being used externally, impact evaluation is frequently undertaken at a point when the adoption level has not yet matured. Following uptake by innovators (refer to Figure 3), the adoption level for a technology may increase rapidly as early adopters are engaged, and then level off as the 'late majority' and 'laggards' adopt the new technology/practice. It is necessary to understand where on the 'adoption profile' the research being evaluated sits in order to assess the likelihood of further adoption over time. It may be possible to develop an uptake and/or adoption profile based on experience with similar research undertaken in the past (within or external to CSIRO). In that case, the impact evaluation should provide justification for the adoption profile being used. Alternatively, mapping of adoption pathways and indicators of progress towards adoption presented in relevant ex-ante evaluations could also be appropriate.
7 An evaluation of the impact attributable to a program of work after the research has begun producing one or more outcomes external to CSIRO, regardless of whether the research activity has been concluded.
8 An evaluation of a body of work which either has not yet started or has started but has yet to deliver any research outputs (and logically, therefore, no resulting outcomes or impacts have occurred).
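Before working through adoption, the clarification steps above can be made concrete. The counterfactual, substitute, new-input and attribution adjustments from Tasks 4.1 and 4.2 reduce in practice to a short chain of arithmetic. The sketch below is illustrative only: the function name and all dollar figures are hypothetical, and real evaluations may apply the adjustments in a different order or use an attribution share agreed with partners rather than the raw cost share.

```python
# Illustrative only: the function and all dollar figures below are hypothetical and
# not drawn from a real CSIRO evaluation. Real evaluations may apply these
# adjustments in a different order, or use an attribution share agreed with partners
# rather than the raw RD&E cost share.

def net_attributed_impact(gross_impact, substitute_impact, new_input_costs, csiro_cost_share):
    """Chain the Step 4 clarifications into a single net figure.

    gross_impact      : value of the observed outcome with CSIRO's intervention ($)
    substitute_impact : value the next-closest substitute would have delivered ($)
    new_input_costs   : cost of new inputs (e.g. equipment) needed to translate
                        outputs into outcomes ($)
    csiro_cost_share  : CSIRO's share of the RD&E investment (0-1)
    """
    incremental = gross_impact - substitute_impact   # relative to the counterfactual
    net_of_inputs = incremental - new_input_costs    # net of new inputs required for adoption
    return net_of_inputs * csiro_cost_share          # apportioned to CSIRO

# Hypothetical example: $10m gross impact, a $4m substitute, $1m of new equipment,
# and CSIRO funding 60% of the RD&E.
print(f"${net_attributed_impact(10e6, 4e6, 1e6, 0.60):,.0f}")  # $3,000,000
```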
Figure 3: Indicative impact adoption profile Source: Rogers, 1995, p. 247 Additionality The impact should be assessed only on the basis of the additional value derived from the program of work over the evaluation time frame. If a program of work builds on a previous body of work undertaken by CSIRO, it is likely that the adoption or uptake rates will be influenced positively or negatively by that preceding work. Previous outputs and experience should be considered as part of CSIRO’s existing capabilities and stock of knowledge. Any capabilities developed previously, as well as other factors influencing adoption of outputs (e.g. seminars, workshops), should be used to refine the assumptions around adoption rates in the impact evaluation. For example, CSIRO and partners have been operating for over 30 years in cotton research and, as a result, research outcomes involving technological improvements in this field are likely to experience faster adoption rates than would occur in fields where CSIRO has not undertaken previous research. Impacts realised to date and going forward may be from the collective work that CSIRO and partners have achieved in this research space over the last decades rather than from one particular program of work. For an impact evaluation, it is important to focus on those specific or recent achievements that are most closely linked to a particular program of work. Market competitors The competitive landscape can also influence the uptake and adoption of CSIRO’s work. Key competitors should be identified as they can present opportunities or barriers for uptake and adoption. The adoption profile should reflect the presence of these market competitors by setting a level of uptake that realistically divides the market between current and future competitors. TASK 4.3: Adoption For each activity under consideration, determine: • whether outputs are being used externally to CSIRO and to what extent; • the likely uptake profile for the outputs of the program; • the influence of previous work undertaken by CSIRO; and • competitors in the market landscape that may affect adoption. If the program builds on previous work, then the evaluation should take this into account STEP 5: Selecting the appropriate mix of methods After determining the main purpose of the impact evaluation (Step 1), identifying all the impacts that will be measured (Step 3) and clarifying their true extent (Step 4), the next step entails selecting the appropriate evaluation approach fit to the expected impacts. The purpose of valuing impacts is to consider whether the benefits of a research program or initiative outweigh its costs, and to allow rigorous and consistent aggregation or comparison across studies. In the context of CSIRO’s triple‑bottom-line impact evaluation, it is important to provide robust measures across all benefits and costs. 
CSIRO's standard approach to impact evaluation9 entails a mix of evaluation approaches depending on the type, measurability and timing of expected impacts, including:
• BCA for impacts that can be assessed in monetary terms;
• CEA for impacts that are not monetised but are quantifiable and can be compared to costs;
• real options analysis for valuing research that has not yet matured but may generate future impacts if used;
• social network analysis for assessing the interrelationships between CSIRO and other innovators; and
• other qualitative analyses for describing impacts that are too broad, intangible or far from being realised to be monetised or quantified with confidence.
5.1 Economic approaches
Benefit-cost analysis (BCA)
BCA enables the comparison of impacts arising from CSIRO activities against the associated costs. The method provides a monetary measure of the current value of the work conducted beyond costs (net present value, NPV) as well as the relative value of the work in comparison to costs (benefit-cost ratio, BCR, or rate of return, ROR). More detail on BCA is available from Office of Impact Analysis (2023) and Boardman et al. (2010). Appendix D also provides a checklist for completing a BCA. In a BCA, it is the change in benefits and costs that results from CSIRO's work that is important, not total benefits and costs. The reference point for the change is the baseline that existed at the start of the research plus the evaluation of what would have happened without the CSIRO work (i.e. the counterfactual). The benefits and costs are measured relative to the counterfactual that has been established for this category of impact.
Social return on investment is a combination of BCA and qualitative evaluation which quantifies and places a monetary value on social impacts. The method enables evaluators to measure social impact against three primary performance indicators: appropriateness, effectiveness and efficiency. Social return on investment is well suited to the CSIRO Impact Framework as it is based on program logic: its focus is on assessing the relationship between inputs and impact (Nicholls et al., 2009; Social Ventures Australia Consulting, 2012).
Cost-effectiveness analysis (CEA)
CEA is useful when benefits are quantifiable but not monetised. The approach entails the same cost assessment components as a BCA, but instead compares costs to non-monetised outcome measures to assess the cost effectiveness with which outcomes are met (Infrastructure Australia, 2021). For example, it can be politically unpopular to use the value of a statistical life to monetise outcomes to human life or health. In this case, CEA can provide results such as the number of lives saved or patients treated per $1000 spent without relying on a monetised estimate of those health benefits. In another example, the long-term monetised impacts of educational programs for primary or secondary school students are difficult, if not impossible, to identify with confidence. Without monetising impacts, CEA can provide the cost per student reached for a range of educational programs. CEA can also be helpful in comparing multiple approaches to achieving the same target outcomes. As an example, CEA can be used to assess the cost effectiveness of various plans to reach agreed greenhouse gas emission reduction targets. Although the value of greenhouse gas emission reductions could be monetised to support a BCA, because the targets have already been agreed, a CEA focuses more directly on the costs of the various programs and initiatives; a brief numerical sketch follows at the end of this section.
9 Note that, in the context of this guide, the term 'impact evaluation' is used to refer specifically to ex-post impact evaluation. Appendix C provides more details about ex-post, process and ex-ante evaluations.
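As a concrete illustration of CEA, the sketch below compares three hypothetical programs targeting the same agreed outcome (tonnes of CO2-equivalent abated). All program names, costs and outcome figures are invented for illustration; the point is simply that cost effectiveness can be reported as a cost per unit of outcome, or as units of outcome per $1,000 spent, without monetising the outcome itself.

```python
# Hypothetical cost-effectiveness comparison of three programs targeting the same
# agreed outcome (tonnes of CO2-equivalent abated). All figures are illustrative.

programs = {
    "Program A": {"cost": 2_500_000, "outcome": 40_000},   # $ spent, t CO2-e abated
    "Program B": {"cost": 1_800_000, "outcome": 24_000},
    "Program C": {"cost": 3_200_000, "outcome": 61_000},
}

for name, p in programs.items():
    cost_per_tonne = p["cost"] / p["outcome"]             # $ per tonne abated
    tonnes_per_1000 = p["outcome"] / (p["cost"] / 1_000)  # tonnes abated per $1,000 spent
    print(f"{name}: ${cost_per_tonne:.2f} per t CO2-e, {tonnes_per_1000:.1f} t per $1,000")
```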
5.2 Non-economic approaches
Real options analysis
Sometimes, impact evaluations are requested before work has fully matured or when adoption has only just begun. In such cases, evaluation can still be undertaken and may include both anticipated and actual outcomes and impacts based on evidence obtained to date. By contrast, in cases in which there is no adoption or evidence of an outcome, impact evaluation cannot be undertaken. In these circumstances, a real options analysis may be appropriate to assess the option value of research, as opposed to evaluating the realised impact of research. The option value is the present value of research derived not from a current impact but from retaining (or opening up) future options to use the research at a later stage, and it can be considered a risk management approach. For example, research into how to address a potential biosecurity hazard creates the option to act in a timely way in the future should that biosecurity threat occur. This option would not be available without the research occurring now. Importantly, a real option can have value now even if the future option is never exercised, in much the same way that an insurance policy can be valuable even though a claim is never made. See Dobes (2010) and AECOM Australia Pty Ltd (2010) for examples of applied real options analysis.
Social network analysis
Social network analysis is another approach to measuring the impact and influence of entities like CSIRO. Studying the patterns of relationships between different groups of innovators and their collaboration on different topics can provide valuable information about influence and the diffusion/uptake of a new idea or technology. Relationships can be measured using surveys, interviews, publication records, patent data and other sources. The utility of network analysis is that it can produce evidence about the linkages between CSIRO and ideas. This information is useful when new technologies are emerging and uptake is not yet strong enough to monetise impacts accurately, but anecdotal evidence suggests that ties to CSIRO are strong. Social network analysis can be paired with bibliometric analysis, as well as other traditional statistical approaches. See Popelier (2018) for a review of the use of social network analysis for evaluation purposes.
Other qualitative analyses
Qualitative assessment involves analysing non-quantifiable data to enable a more comprehensive evaluation of impacts. It enriches an impact analysis in one or more of the following ways:
• Where quantitative data is available, qualitative analysis provides depth and context to measures of outcomes and impacts.
• In scenarios where quantitative data on adoption and/or impact is either inaccessible or does not exist, such as in early-stage projects or ex-ante assessments, qualitative data can provide insights into the complexities and nuances in the impact pathway.
• Many significant social impacts, such as improvements in quality of life, cultural shifts and enhancements in built environments, are inherently unquantifiable and necessitate qualitative evaluation.
Similarly, the subjective experiences of researchers and communities, the societal acceptance of new technologies and the ethical implications of scientific advancements can only be thoroughly assessed through qualitative methods. In summary, qualitative assessment offers a more holistic view of impact beyond measurable indicators, thereby supporting more informed decision-making. See Appendix E for more details on qualitative analysis approaches.
TASK 5: Evaluation approaches
The choice of evaluation approach is driven largely by the quantifiability or monetisability of the associated impacts as well as the timing of when impacts are expected to occur.
STEP 6: Evaluating the impacts
The next step in the evaluation involves measuring the impacts. To complete this task, the following substeps must be taken: (1) estimating costs; (2) estimating benefits; (3) determining externalities, spillovers and flow-on effects on non-users; (4) determining distributional effects on users; and (5) making inflation and currency adjustments as needed.
6.1 Estimating costs
Developing a comprehensive understanding of the cost basis is pivotal for completing many types of economic impact assessments. However, the process of collecting all appropriate costs is not always straightforward, especially for long-term investments and for activities that emerged from multiple programs. Changes in project accounting practices over time and the aggregation of teams, projects and work structures can complicate the development of an accurate cost basis. Early engagement with appropriate teams within CSIRO is essential for developing an accurate understanding of the investment being evaluated. Care should also be taken to understand and document the CSIRO investment rationale to ensure that the accompanying case study report includes a description of where, when and how CSIRO chose to invest.
Research and development (R&D) costs
The R&D cost basis includes the costs incurred by CSIRO and its research partners. These are the input costs incurred to produce the research outputs and include costs associated with such things as staff full-time equivalents (FTE), non-staff FTE, in-kind contributions, equipment/facilities and background IP. In some cases, input costs can be estimated using internal funding, external funding and grants, in other words, the financial resources used to pay for the labour and physical resources noted above. Internal costs within CSIRO that are specific to the body of work being evaluated can be established through finance system project reporting. Other input costs can be established through discussions with research partners. Costs by year are necessary given the discounting procedures required in later steps.
Subtracting usage and adoption costs from benefits
Usage and adoption costs are the costs borne by end users in adopting the research outputs (not the costs associated with developing the outputs). They include costs such as those associated with any trials, further development, market tests or factory retooling required before a new technology can be made available to the market, as well as any marketing, training, extension and other usage costs incurred once the technology is available or that make it more widely available. Usage and adoption costs are subtracted from benefits (the numerator of a BCR) and are not included in the cost basis (the denominator), as sketched below.
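The sketch below illustrates, with hypothetical figures, why the placement of usage and adoption costs matters: netting them from benefits and adding them to the cost basis produce the same NPV but different BCRs, so the convention must be applied consistently across evaluations.

```python
# Hypothetical figures showing why the placement of usage and adoption costs matters.
# Both treatments give the same net position, but only the first follows the
# convention recommended here (net adoption costs off the benefit stream).

pv_gross_benefits = 12_000_000   # present value of benefits to adopters ($)
pv_adoption_costs = 2_000_000    # present value of end-user usage and adoption costs ($)
pv_rd_costs       = 4_000_000    # present value of CSIRO and partner R&D costs ($)

# Recommended: subtract adoption costs from the numerator (benefits).
bcr_recommended = (pv_gross_benefits - pv_adoption_costs) / pv_rd_costs
print(f"BCR, adoption costs netted from benefits: {bcr_recommended:.2f}")    # 2.50

# Not recommended here: adding them to the denominator changes the ratio even though
# NPV (benefits minus all costs) is identical in both cases.
bcr_alternative = pv_gross_benefits / (pv_rd_costs + pv_adoption_costs)
print(f"BCR, adoption costs added to the cost basis: {bcr_alternative:.2f}")  # 2.00
```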
When calculating usage costs, it is preferable that the end user or relevant parties involved in the uptake of research outputs provide this figure, or at least confirm the figures arrived at. It should be noted that usage costs may be significantly higher than the input costs incurred by CSIRO and its research partners in producing the original output. If practical, benefits estimates should be collected net of adoption or usage costs. It is difficult to predict what may happen in the future, and therefore, determining usage costs may be difficult. However, there is usually a body of evidence evaluators may draw upon to support their assumptions. Knowledge of the industry in which the research output is being deployed as well as a strong relationship with end users should assist in determining usage costs. Relevant experience can also be accessed through discussions with sector specialists, business development staff, clients and intermediaries. At times it can be extremely difficult to identify precise estimates of all costs and benefits involved. What is required is as good an estimate as time and resources allow to be prepared, including (if necessary) a range of values for either (or both) costs and benefits, and with caveats about the uncertainties in measurement. This task is greatly assisted if it is approached in a systematic way following the impact pathway developed in Step 3. TASK 6.1: Estimating costs Account for all annual CSIRO R&D costs and for usage and adoption costs where appropriate. Subtract usage and adoption costs from benefits rather than adding them to CSIRO R&D costs. If there is uncertainty in any cost measures, develop a range of estimates that can be used in sensitivity analysis scenarios. 6.2 Estimating benefits Benefits are estimated through a mixed-methods approach depending on the type of data available, including: • quasi-experimental econometric methods (e.g. regression-discontinuity or difference in differences) for assessing impacts using administrative and statistical data (see Appendix F for more detail); • market analysis for estimating economic impacts to consumers or industries (i.e. increases in consumer or producer surplus, see Office of Impact Analysis (2023) for more detail); • non-market valuation methods (i.e. stated or revealed preference methods) for key types of impacts that can be indirectly monetised (e.g. environmental impacts—see Appendix G for more detail); • benefits transfers for cases where impacts have already been monetised in previous research (again, see Appendix G for more detail); • non-monetary quantification for impacts that can be described statistically, even if they cannot be monetised (e.g. people reached, units sold, emissions reduced); and • qualitative methods for any remaining impacts (e.g. surveys, interviews, focus groups, workshops— see Appendix E for more detail). Conducting an impact evaluation using a mixed-methods approach (e.g. identifying market and non-market benefits using both quantitative and qualitative data), provides the most complete possible assessment. Relying on established statistics where possible allows for more comparable results across case studies. Appendix H provides a library of common statistics identified by CSIRO and current case study contractors. 
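Of the quasi-experimental methods listed above, difference in differences is one of the most commonly applied. A minimal sketch with invented data follows; the regions, periods and yield figures are hypothetical, and a real application would use unit-record data and the methods detailed in Appendix F.

```python
# A minimal difference-in-differences (DiD) sketch using hypothetical average yields
# (t/ha) for adopting and non-adopting regions before and after an output is released.
# A real application would use unit-record data and the methods detailed in Appendix F.

treated_before, treated_after = 3.1, 3.9   # adopting regions
control_before, control_after = 3.0, 3.3   # comparison regions, proxy for the counterfactual trend

change_treated = treated_after - treated_before   # change among adopters
change_control = control_after - control_before   # change that would have happened anyway

did_estimate = change_treated - change_control    # change attributable to adoption
print(f"DiD estimate of the yield effect: {did_estimate:.2f} t/ha")  # 0.50 t/ha
```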
TASK 6.2: Estimating benefits A mixed-methods approach maximises the use of available data and enables the most complete estimation of benefits, while relying on common statistics where possible ensures comparable results across case studies. 6.3 Externalities, spillovers and flow-on effects Evaluation should take account of all benefits and costs arising from CSIRO’s work. This means that beyond accounting for the direct effects of research, the wider effects on other areas of the economy, environment and society should also be considered. These take the form of externalities, spillovers and flow-on effects. There are no concrete guidelines on when to include externalities, spillovers or flow-on effects. Their value for inclusion within the analysis should be assessed on a case-by-case basis. Generally, an impact evaluation would include any of these impacts if they are especially sizeable or important for the case under consideration. Importantly, externalities, spillovers and flow-on effects can all be positive, in the form of added benefits, or negative, in the form of added costs. The most fundamental recommendation is that both benefits and the costs need to be afforded equal treatment. Otherwise, the aggregation of evaluation outcomes could lead to misleading results. Externalities An externality is an indirect impact from research on a ‘third party’, that is, someone other than the direct user or adopters of the research outputs and applications. Externalities may be economic, environmental and/or social in nature. Sometimes they are intended as an objective of the research, but often they are unintended. Spillovers A ‘spillover’ occurs when a user from a different sector adopts a technology for use in an application for which the technology was not originally intended. For example, new technologies were developed to produce solar panels. Those technologies were later adopted in the semiconductor industry. Spillover impacts are relevant and should be quantified. Flow-on effects A related, but subtly different concept, is economic flow-on (second round, or multiplier) effects. These refer to linkages within an economic system which ensure that initial impacts will lead to subsequent impacts within the system. Flow-on effects are indirect effects, experienced through adjustments in the economy that occur because of the direct impact of the research. The indirect flow-on effect requires that a direct effect has first occurred. Example An understanding of externalities, spillovers and flow‑on effects can be gained by considering the impacts of research into disease resistance for a new crop variety: • A landholder adopting the new variety may experience higher yields or a reduction in expenditure on pesticide. The landholder would thus experience an increase in producer surplus from greater sales at lower cost. This would be a direct effect of the research experienced by the landholder. • An indirect impact may be the lower pesticide requirements for landholders neighbouring the adopting landholder. This would be classified as a positive economic externality as it would be experienced by someone other than the adopter. • Other indirect impacts might be improved biodiversity outcomes or reduced pesticide pollution in local surface or groundwater. Such impacts would be positive environmental externalities. • In the market, prices for the crop are likely to fall and sales are likely to increase. 
For consumers, there may be an increase in consumer surplus with lower prices and greater consumption. This increase in consumer surplus would represent an indirect flow-on effect upon consumers. The producer is directly affected by the original research, and consumers are also positively affected as the end users in the product's supply chain.
• Changes to prices for pesticides because of lower pesticide demand may also occur, and these would also be classified as economic flow-on effects. Importantly, the decrease in pesticide prices due to decreased demand would represent a positive flow-on effect for landholders purchasing pesticides, but a negative flow-on effect for pesticide producers, resulting in an economic transfer from one party to another rather than a net positive gain.
TASK 6.3: Externalities, spillover and economic flow-on effects on non-users
• Are externalities, spillover and economic flow-on effects on non-users relevant to this evaluation? If yes, include relevant analysis supported by robust evidence.
• Be sure to treat both positive and negative effects equally.
6.4 Distributional effects
An aggregated benefit-cost figure, or some single impact evaluation estimate, may mask a reality of winners and losers across groups of final users, industries or regions. For some impact evaluations, consideration of these winners and losers may be important. Of key importance is whether benefits and costs are experienced differently among different socioeconomic populations. An initiative can generate net benefits while benefiting primarily advantaged groups and neglecting or even harming others based on their income, location or other demographic features. Distributional effects can be analysed qualitatively or quantitatively depending on the type of information that can be identified and the availability of data. The first step in any distributional analysis is to identify the affected subpopulations and determine the costs and benefits accruing to each group. To conduct a quantitative distributional analysis, it is necessary to isolate the types and proportions of costs and benefits accruing to each subpopulation and to identify affected subpopulations by a relevant and measurable factor, such as income, geography or other demographic characteristic. One way to account for distributional impacts among measured subpopulations is to apply income weights to the impacts realised by each subpopulation. Income weights account for the diminishing marginal utility of income, wherein a dollar has a higher perceived value among individuals with lower starting incomes. See Circular A-94 (U.S. OMB, 2023) or the HM Treasury 2022 Green Book (2024) for more detail on applying income weights to conduct a distributional analysis; a brief numerical sketch follows below. When quantitative data on disaggregated impacts is unavailable, qualitative estimates can be used in case studies to assess impacts on relevant parties.
TASK 6.4: Distributional effects on users
Does the impact differ across groups of final users, industries or regions? If yes, assess impacts across groups, including winners and losers and different socioeconomic populations.
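Where quantitative distributional data is available, income weighting can be sketched as follows. The reference income, elasticity value and group figures below are all assumptions for illustration, and the weighting formula shown is one common formulation rather than a prescribed CSIRO approach; the OMB and HM Treasury guidance cited above provides authoritative treatments.

```python
# Hypothetical income-weighted aggregation of benefits across two groups. One common
# formulation weights each group's benefits by (reference_income / group_income) ** eta,
# where eta is the elasticity of marginal utility of income; the reference income,
# eta and all group figures below are assumptions for illustration only.

reference_income = 65_000   # e.g. a national median income ($/yr), illustrative
eta = 1.0                   # elasticity of marginal utility of income (assumed)

groups = {
    "lower-income region":  {"income": 45_000, "benefit": 2_000_000},
    "higher-income region": {"income": 95_000, "benefit": 3_000_000},
}

unweighted = sum(g["benefit"] for g in groups.values())
weighted = sum((reference_income / g["income"]) ** eta * g["benefit"] for g in groups.values())

print(f"Unweighted benefits:      ${unweighted:,.0f}")
print(f"Income-weighted benefits: ${weighted:,.0f}")
```

In this illustration the weighted total falls slightly below the unweighted total because the larger share of benefits accrues to the higher-income group; the reverse would hold if benefits flowed mainly to lower-income groups.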
6.5 Inflation and currency adjustments
Inflation adjustment is recommended using the Consumer Price Index (CPI) available from the Australian Bureau of Statistics (ABS). The base year should be the same year as when the impact evaluation will be conducted and reported. There are two main instances when values need to be adjusted for inflation:
• if the data is an annual time series of historical costs or benefits in nominal terms (e.g. administration or staffing costs); or
• if the data point is a single estimate from a past publication that will be applied to other years (e.g. applying a benefit estimate from a journal article to the current analysis).
If using a constant annual estimate of costs or benefits for future values (e.g. assumed materials costs), then do not adjust for inflation because these values are already in real terms based on current dollars. See Appendix I for a detailed example of inflation calculations. In addition to adjusting for inflation, all values must be converted into Australian dollars (AUD) using currency exchange rates when necessary. Exchange rates between any two nations fluctuate over time and do not directly coincide with changes to inflation. Thus, it is ideal to use historical exchange rates when appropriate before adjusting for inflation. For example, if an impact value drawn from existing literature is in Euros from 2018, then the 2018 exchange rate should be applied to convert from Euros to AUD before inflating the resulting 2018 AUD value to the current AUD value.
TASK 6.5: Inflation and currency adjustments
• Convert all foreign currency values to AUD using historically appropriate exchange rates.
• Inflate nominal values using the CPI.
• Use the publication year as the base year for inflation.
STEP 7: Calculating measures of economic return
7.1 Aggregation of impacts
Programs of work will often yield multiple diverse impacts. Even for programs with clear monetary costs and benefits, there will be other non-monetary costs and benefits that can be included in an impact evaluation. Monetising non-monetary costs and benefits enables the aggregation of diverse impacts into a single set of impact measures. Because this higher level of aggregation is the product of many discrete steps and assumptions, it is even more important that the assumptions made throughout are consistent and that, as much as possible, the same valuation techniques are used for the same types of impacts. Consistency and transparency also help to avoid double counting, especially when research occurs across organisational boundaries (programs, business units, etc.), and especially for subject area reviews.
TASK 7.1: Aggregation
Where possible, present an aggregate NPV and BCR derived from the BCA across the types of impacts measured. Relying on consistent assumptions aids aggregation and prevents double-counting.
7.2 Accounting for the passage of time
It is important that impact assessments appropriately account for relevant project inputs, activities, outputs, outcomes and impacts occurring in both the past and the future. This requires setting an appropriate time frame for each assessment and accounting for the social time preference of consumption through discounting.
Analysis time frame
Assessments should capture costs and benefits as comprehensively as possible over the full duration of the relevant investment. An assessment of a discrete project should track costs and benefits going back to the first year of project funding. An assessment of a long-standing research program may only have budget information available for a near-term portion of its operating history.
In addition to capturing as many years of realised costs and benefits as possible, assessments should include reasonable projections of costs and benefits at least 10 years into the future. Longer-term investments, that is, those initiating a new research program or establishing research infrastructure, may also require projections beyond 10 years to appropriately capture the intended value of the initiative. Box 3 discusses the case of assessing the impact of long‑term research infrastructure investments. Discounting Discounting is a technique used to compare costs and benefits that occur in different time periods. It is a separate concept from inflation and is based on the principle that generally, people prefer to receive goods and services sooner rather than later. This is known as the ‘social time preference’. Box 3: Research Infrastructure Assessments Research infrastructure is of strategic importance to the nation and requires significant and continual investment over a long period. Consequently, research facilities are frequently asked by funding bodies, policymakers and other interest holders to demonstrate their impacts beyond purely scientific benefits. Impact evaluation of research infrastructure takes a different approach from that of individual research projects or programs due to the unique role played by research infrastructure in supporting a wide range of research activities over an extended period. Research infrastructure serves various fields of science, offers multidimensional research opportunities, catalyses innovation and assists in developing new knowledge to underpin decision-making. While research programs are typically assessed over a 10-year period, this time frame may not be suitable for research infrastructure, given that these facilities often represent multi-year strategic investments that may take longer than a decade to generate tangible benefits. Typically, research infrastructure serves the research sector as the initial user and beneficiary. These initial research‑driven benefits lay the foundation for economic, environmental and social impacts that extend well beyond the immediate sphere of scientific inquiry. Therefore, it is recommended that the evaluation approach includes the long‑term research and innovation capacity-building and system enhancing effects of research infrastructure in addition to other social, economic and environmental impacts to which the research infrastructure might contribute. This approach demonstrates both the direct and systemic benefits of research infrastructure within the broader research ecosystem over time. The social discount rate is used to convert all costs and benefits to a ‘present value’ (PV), so that they can be compared. It is important for CSIRO to be able to compare the results of multiple impact evaluations for a range of purposes. To facilitate this, procedural considerations must be standardised, including the use of a standard social discount rate of 7% in all CSIRO impact evaluations. The base year used to discount values should be the same as the base year selected for inflation adjustments and, again, should ideally be the year of publication10. It should be noted that while the latest guidance from the Office of Impact Analysis (2023) has continued to recommend using a social discount rate of 7%, many other nations throughout the globe have been adjusting their recommended discount rates downward. 
The relatively high discount rate recommended by the Australian Government generates more conservative BCA estimates compared to analyses conducted using other global standards. Table 3 summarises the latest discount rate guidance from various offices throughout the world.
Table 3: Discount rate guidance from various offices throughout the world
• Guidance Note: Cost Benefit Analysis (Department of the Prime Minister and Cabinet, Office of Impact Analysis, 2023): base rate of 7%, with sensitivity checks at 3% and 10%.
• Circular No. A-94: Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs (United States Office of Management and Budget, 2023): 2%.
• The Green Book (2022) (HM Treasury, 2023): 3.5% base rate; 1.5% for health impacts.
• The Economic Appraisal of Investment Projects at the EIB (European Investment Bank, 2023): 3.5% for low growth settings or long-term applications; 5% for high growth settings or private sector applications.
• Discount Rates (The Treasury of New Zealand, 2024): 5% for most applications; 6% for telecommunications, media and technology, IT and equipment, and knowledge economy (R&D) applications.
10 This approach harmonises with that adopted by the Office of Impact Analysis (2023), but differing approaches are used by some other publicly funded research agencies, notably ACIAR (ref. Davis et al., 2008 and Council of Rural Research & Development Corporations, 2007). Refer also to Harrison, 2010.
It is important to correctly assign benefits and costs to each year in the analysis period. Unless there is robust rationale to the contrary, costs are placed at the beginning of a period, and benefits are placed at the end of a period. This is both intuitive (an investment catalyses a return) and lends a degree of conservatism to the results because benefits are, in effect, discounted one additional period. See Appendix I for a detailed example of discounting calculations.
TASK 7.2: Accounting for time
• Assess realised project costs and benefits as far back as possible.
• Project future costs and benefits at least 10 years into the future, and farther for long-term investments.
• Use a 7% social discount rate to convert all costs and benefits to 'present value'.
• Use the publication year as the base year for both inflation and discounting.
7.3 Calculating return on investment
Depending on the objectives of the aggregation or comparative exercise, different aggregate measures may be used. For example, a ratio figure such as a BCR is most useful for assessing relative social benefits compared to costs, in the form of how much benefit is derived from each dollar invested. Although the BCR is often used to compare different programs, focusing only on maximising relative gains neglects consideration of the full costs and potential social benefits under consideration. The NPV, in contrast, provides an estimate of the total social benefits realised net of program costs, allowing for selection of the program that maximises total gains. Impacts on GDP can also be used as one measure of how beneficial the CSIRO work, program or business unit has been. The NPV is the difference between the PV of benefits and the PV of costs. If the NPV is positive, one can reasonably assume that the investment was advantageous. The BCR is the ratio of the PV of benefits to the PV of costs. A BCR of 1 indicates a project breaks even from a financial perspective. Any project with a BCR greater than 1 is a successful project as defined in terms of monetised benefits exceeding costs.
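A minimal sketch of these conventions, using hypothetical cash flows expressed in real (inflation-adjusted) dollars, is shown below: costs are discounted from the start of each period and benefits from the end of each period, all at the standard 7% rate with year 0 as the base year. The cash flows, amounts and ten-year horizon are invented for illustration only.

```python
# A minimal sketch of these conventions using hypothetical cash flows expressed in
# real (inflation-adjusted) dollars. Year 0 is the base year; costs are discounted
# from the start of each period and benefits from the end (one extra period).

RATE = 0.07  # CSIRO standard social discount rate

costs    = {0: 1.0, 1: 1.0, 2: 1.0}               # $m, hypothetical R&D investment
benefits = {t: 0.8 for t in range(3, 10)}          # $m, hypothetical adoption benefits, years 3-9

def present_value(flows, rate, extra_period=0):
    """Discount a {year: value} stream back to the base year (year 0)."""
    return sum(v / (1 + rate) ** (t + extra_period) for t, v in flows.items())

def npv(rate):
    return present_value(benefits, rate, extra_period=1) - present_value(costs, rate)

pv_benefits = present_value(benefits, RATE, extra_period=1)
pv_costs = present_value(costs, RATE)
print(f"NPV: ${pv_benefits - pv_costs:.2f}m, BCR: {pv_benefits / pv_costs:.2f}")

# IRR: the discount rate at which the NPV is zero, found here by simple bisection.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
print(f"IRR: {lo:.1%}")
```

With these illustrative figures the sketch reports an NPV of roughly $0.7m, a BCR of about 1.25 and an IRR a little above 11%; Appendix I remains the reference for fully worked calculations.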
One useful interpretation of the BCR is that it represents the dollar benefit accruing for every $1 in cost incurred over the time frame of analysis. For example, a BCR of 3.0 (alternately, 3:1) would mean that over the entire time frame, $3 of benefit accrued for every $1 in cost. The internal rate of return (IRR) on an investment is interpreted as the percentage yield on that investment. In mathematical terms, the IRR is the discount rate that sets the NPV equal to zero or, equivalently, results in a BCR of 1. The IRR can be compared with conventional rates of return for comparable or alternative investments. For example, when using a social discount rate of 7%, an IRR greater than that value would imply returns higher than those needed to account for the time preferences of society. Note: It is not recommended that the NPV formula in Excel be used on a time series of inflation-adjusted net benefits because the formula does not account for differences in the timing of benefits and costs. The Excel NPV formula is also not recommended because it assumes the base year is the first year of data provided, which is typically not appropriate for ex-post analyses. See Appendix I for a detailed example of these calculations.
TASK 7.3: Return on investment
• Calculate the difference between the present value of the streams of costs and benefits (NPV).
• Calculate the ratio of the present value of benefits to the present value of costs (BCR).
• Calculate the percentage yield on the investment (IRR).
7.4 Conducting a sensitivity analysis
At a minimum, the impact evaluation techniques used, as well as the accompanying assumptions and resulting findings, should be discussed with relevant internal and external parties and end users to gauge the credibility of the process and the results generated. If a higher degree of scrutiny is warranted, then a sensitivity analysis should be conducted. Sensitivity analysis, broadly defined, is the investigation of:
• the parameter values and assumptions underlying a model;
• the degree to which they are subject to potential changes; and
• their impacts on conclusions to be drawn from the model11.
A thorough sensitivity analysis informs the audience of the uncertainty around the estimates of costs and benefits, especially the limits of the estimation techniques used to value non-market costs and benefits. Also, given the importance of the counterfactual in establishing the extent of change attributable to the research intervention, it should also undergo some degree of sensitivity analysis. At the very least, it is good practice to gauge the sensitivity of a BCA to the discount rate used by recalculating the NPV with plausible alternative rates of 3% and 10%; a brief sketch follows at the end of this section. Although CSIRO uses a standard social discount rate of 7%, it is still important for users of the evaluation to understand whether the use of other discount rates would have significantly changed the evaluation findings. There is a very large body of literature on procedures and techniques for sensitivity analysis. Suggested approaches to sensitivity analysis are discussed in Appendix J. In addition to sensitivity analysis, where there is genuine doubt about the range of costs and benefits claimed, the impact evaluation reporting should include that range (and, where available, confidence intervals).
TASK 7.4: Conducting a sensitivity analysis
• What were the key assumptions underlying the BCA?
• How do the outcomes of the analysis vary with variations to these key assumptions?
• Detail the limitations of the evaluation.
Sensitivity analysis demonstrates the robustness of the evaluation outcomes
11 Pannell, 1997
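As a concrete illustration of the minimum recommended check, the hypothetical cash flows from the sketch in Section 7.3 can be re-evaluated across the 3%, 7% and 10% discount rates and across a range of benefit assumptions. All figures remain illustrative only.

```python
# A minimal sensitivity check on the discount rate (3%, 7% and 10%) and on the scale
# of the benefit stream, reusing the hypothetical cash flows from the Section 7.3 sketch.

costs = {0: 1.0, 1: 1.0, 2: 1.0}                # $m, real; costs at the start of each period
base_benefits = {t: 0.8 for t in range(3, 10)}  # $m, real; benefits at the end of each period

def npv(benefits, rate):
    pv_b = sum(v / (1 + rate) ** (t + 1) for t, v in benefits.items())
    pv_c = sum(v / (1 + rate) ** t for t, v in costs.items())
    return pv_b - pv_c

for rate in (0.03, 0.07, 0.10):
    for label, scale in (("low benefits", 0.75), ("central", 1.00), ("high benefits", 1.25)):
        scenario = {t: v * scale for t, v in base_benefits.items()}
        print(f"rate {rate:.0%}, {label}: NPV ${npv(scenario, rate):.2f}m")
```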
STEP 8: Documenting recommendations for optimising impact
During the impact evaluation process, it is natural that recommendations for optimising impact will be revealed. These could take the form of lessons learnt, best practices drawn from similar efforts, or barriers to adoption to be addressed.
8.1 Lessons learnt
Ideally, CSIRO staff are constantly learning lessons from the process of carrying out CSIRO projects and programs. Identifying and documenting these lessons helps ensure that ongoing and future work processes are optimised. Thus, it is vital to incorporate lessons learnt into CSIRO impact evaluations. Evaluators can elicit this information through ongoing meetings with CSIRO staff throughout the evaluation process and through interviews with CSIRO staff and other relevant parties.
8.2 Best practices
For some impact evaluations, it is relevant to compare CSIRO initiatives with similar initiatives throughout the world. This is particularly relevant when assessing the impact of large-scale CSIRO programs and infrastructure. A comparative assessment of global best practices not only identifies the areas in which CSIRO is excelling, but also areas in which CSIRO initiatives could be improved to optimise impact moving forward.
8.3 Barriers to adoption
While modelling adoption of CSIRO research, technology and infrastructure for the impact evaluation process, barriers to optimised adoption may be identified. Some barriers may be external to CSIRO activities, such as those resulting from policy or regulations. However, other barriers may be within CSIRO's sphere of influence, such as those related to accessibility or usability. Identifying and documenting these barriers in an impact evaluation can highlight additional opportunities to optimise future impact.
TASK 8: Optimising impact
Throughout the evaluation, identify and document recommendations for optimising ongoing and future impact, including lessons learnt, best practices and barriers to adoption.
Documenting recommendations for optimising impact identified through the impact evaluation process provides valuable insight to CSIRO staff working on the evaluated initiatives
STEP 9: Reporting impact evaluation findings
The primary purpose of undertaking an evaluation is to inform internal and external audiences of the impacts (both expected and delivered) from investments in CSIRO activities, as well as any lessons that may have been learnt. Consequently, it is essential that evaluation reports be readable and 'user-friendly'. To ensure readability, the report should be drafted using language that can be understood by a non-technical audience.
9.1 Documenting assumptions and decisions
CSIRO Impact Evaluation Principle 7 states that: All assumptions and key decisions made throughout the evaluation need to be documented in the final Impact Evaluation Report to ensure that the process is transparent, and to enable users of the evaluation findings to know the limits of any future comparison and aggregation across impact evaluations. Ensuring that the assumptions and decisions underlying an impact evaluation are properly documented is of the greatest importance, as the longer-term utility of the evaluation entirely depends upon it.
This is because (as noted earlier) future aggregation of the outcomes of past evaluations can only occur to the degree that the assumptions underlying these evaluations are known to be consistent. Therefore, once the costs and benefits have been calculated, externalities and distributional effects factored in, and discounting has taken place, a retrospective glance should be made across the entire process, and all aspects of the analyses and calculations should be carefully recorded. Evaluators should provide in the final BCA: • documentation of the build-up of all benefits streams, although information specific to end users may need to be blinded or only presented in aggregate; • base year for inflation and discounting; • sources of exchange rates and the CPI used for inflation; • time series of real benefits, real costs and net benefits used in final BCA calculations; • time series of discounted (present value) benefits, costs and net benefits; and • return on investment (ROI) measures, including NPV, BCR and IRR. Evaluators should also document software tools and assumptions employed in the cash flow analysis and ensure that the spreadsheets for all calculations are submitted along with the evaluation report to CSIRO. Finally, the report should clearly acknowledge the evaluation’s limitations, including that: • the evaluation does not constitute a full assessment of the project; • in some cases, quantitative assessment is difficult; and • while the extent and value of non-market costs and benefits may have been crucial within a particular evaluation, there are limits to the accuracy of non-market estimation techniques. TASK 9.1: Documenting assumptions and decisions Ensure that all assumptions and key decisions made throughout the evaluation process are documented in the final evaluation report. Clearly acknowledge the limitations of the evaluation. Documentation is essential if the evaluation is to be useful in the long term 9.2 Maintain a consistent report structure When drafting a CSIRO impact evaluation report, it is desirable to use a structure that mimics, as much as possible, the structure of the evaluation process itself. Doing so confers a range of benefits, including that it: • makes it easier to ensure that no steps of the process have been omitted; • ensures that all CSIRO evaluation reports follow a common structure (and so their outcomes are easier to aggregate); and • makes it easier for regular readers of CSIRO’s impact evaluations to find the information they are seeking. Quantitative impact assessment structure Of course, every evaluation has unique features and so each evaluation report will need to be drafted accordingly. That said, ideally quantitative assessment reports will be structured using the following elements: 1. Executive summary 2. Purpose and audience 3. Background 4. Methods 5. Impact pathway a. Inputs / Activities / Outputs / Outcomes / Impacts 6. Clarification of the impacts a. Counterfactual / Attribution / Adoption 7. Evaluation of the impacts a. Costs / Benefits / Externalities, spillovers, flow-ons / Distributional effects 8. Measures of economic return a. Aggregation of research impacts / Return on investment / Sensitivity analysis 9. Optimising impact a. Lessons learnt / Best practices / Barriers to adoption 10. Limitations 11. Conclusion Qualitative impact assessment structure Qualitative assessment reports are not as likely to benefit from adhering to the same structure as quantitative assessments. 
The same guidance above applies, in that every evaluation has unique features that need to be accommodated structurally throughout the report. Still, ideally, qualitative assessment reports will be structured using the following elements: 1. Executive summary 2. Purpose and audience 3. Background 4. Methods 5. Impact pathway a. Inputs / Activities / Outputs / Outcomes / Impacts 6. Evaluation of the impacts a. Counterfactual / Attribution / Adoption b. Economic / Social / Environmental 7. Optimising impact a. Lessons learnt / Best practices / Barriers to adoption 8. Measuring impact a. Data gaps / Recommended impact measures 9. Limitations 10. Conclusion 9.3 Refer to exemplary impact assessments While drafting the report, it can be helpful to refer to evaluation reports that have been recently prepared by or for CSIRO. Doing so can provide an improved understanding of how to report on separate impacts, or of cases where a range of evaluation methodologies have been employed. This Guide is accompanied online with a link to past CSIRO evaluations (https://www.csiro.au/en/about/Corporate- governance/Ensuring-our-impact). Exemplar impact evaluation reports can be found here: • CSIRO (2020) Microencapsulation technology: Impact evaluation; and • CSIRO (2021) Dual-purpose canola impact case study. Relevant impact evaluations are also available from: • ACIAR Impact Analyses, which are undertaken annually for a small collection of projects; and • IFPRI Impact Analyses.12 9.4 Form a communications plan As noted in the introduction, the value of an impact evaluation is ultimately measured by how much its outputs are used by the intended audiences. For this reason, finalisation of the evaluation report should be followed by communication activities to ensure that the report and its contents become available to the full range of parties who are likely to have an interest in it. To assist with this process, a communications plan may be useful. In the longer term, it is also useful to assess how successful these communication and dissemination efforts have been. TASK 9: Reporting • Ensure that the language used within the report is appropriate for a non-expert audience. • Follow the recommended report structure as much as possible and appropriate based on the project being evaluated. • Refer to pre-existing case studies before and during the drafting of the report. • Consider developing and implementing a communications strategy to ensure that the information is well used by relevant parties. • Evaluate the success of dissemination efforts. Use of a standard structure aids readability and aggregation The evaluation has been successful to the degree that its outcomes are used 12 Note: An ex-post impact assessment of IFPRI’s GRP22 program, water resource allocation: Productivity and environmental impacts (Bennett, 2013) provides a useful example of the application of mixed methods. References Adam P, Ovseiko P, Grant J, Graham K, Boukhris O, Dowd A, Balling G, Christensen R, Pollitt A, Taylor M, Sued O, Hinrichs-Krapels S, Solans-Domènech M & Chorzempa H (2018) ISRIA statement: Ten-point guidelines for an effective process of research impact assessment. Health Research Policy and Systems. 16. 10.1186/s12961-018-0281-5. AECOM Australia Pty Ltd (2010) Coastal Inundation at Narrabeen Lagoon Optimising adaptation investment. Department of Climate Change and Energy Efficiency, Canberra Baker R & Ruting B (2014) Environmental policy analysis: A guide to non-market valuation. Staff Working Paper. 
Productivity Commission, Canberra. Becker HA & Vanclay F (editors) (2003) The international handbook of social impact assessment: conceptual and methodological advances. Edward Elgar Publishing, Cheltenham Bennett JW (2013) An ex-post impact assessment of IFPRI’s GRP22 Program, Water Resource Allocation: Productivity and Environmental Impacts. International Food Policy Research Institute: Washington Boardman EA, Greenberg DH, Vining AR & Weimer DL (2010) Cost–benefit analysis: concepts and practice, 4th edition. Pearson Prentice Hall, New Jersey Bureau of Rural Sciences (BRS) (2005) Socio-economic impact assessment toolkit: a guide to assessing the socio- economic impacts of marine protected areas in Australia. BRS, Canberra. Callaway B (2023) Difference-in-differences for policy evaluation. In: Zimmermann, K.F. (eds) Handbook of Labor, Human Resources and Population Economics. Springer, Cham. Clear Horizon (2014) ‘Oh, the places you’ll go’: Three advances in the Most Significant Change Technique. Clear Horizon Blog. 22 May 2014. Online: http://www. clearhorizon.com.au/discussion/advancesinmsc/ Clear Horizon (2015a) Why should I use the Most Significant Change technique? Clear Horizon blog. 3 June 2015. Online: http://www.clearhorizon.com. au/discussion/why-use- msc/#ixzz3dllQtGrQ Clear Horizon (2015b) What’s changing in how we use the Most Significant Change technique? Clear Horizon Blog. 19 January 2015. Online: http://www. clearhorizon. com.au/training-mentoring/courses/ most-significant- change/#ixzz3dlkhgO5x Coakes S (1999) Social impact assessment: a policy maker’s guide to developing social impact assessment programs. Bureau of Rural Sciences, Canberra. Coakes S & Fenton M (1999) The application of social assessment in the Australian Regional Forest Agreement process. International Forestry Review, vol. 1, no. 1, pp. 11-6. Council of Rural Research & Development Corporations (2007) Guidelines for evaluation. CRR&DCs, Canberra CSIRO (2020) Microencapsulation technology: Impact evaluation. Available online: https://www.csiro.au/-/media/ About/Files/Impact-case-studies/Full-Reports/CSIRO--- Microencapsulation_2020-PUBLIC.pdf CSIRO (2021) Dual-purpose canola impact case study. Available online: https://www.csiro.au/-/media/About/Files/ Impact-case-studies/Full-Reports/CSIRO-Dual-Purpose- Canola-Impact-Case-Study-FINAL_-EXTERNAL-version-2.pdf Davies R & Dart J (2005) The ‘Most Significant Change’ (MSC) Technique. A guide to its use. Available online: http:// www.mande.co.uk/docs/MSCGuide.pdf Davis J, Gordon J, Pearce D & Templeton D (2008) Guidelines for assessing the impacts of ACIAR’s research activities. ACIAR Impact Assessment Series Report No. 58. Australian Centre for International Agricultural Research, Canberra Department of the Prime Minister and Cabinet, The Office of Impact Analysis (OIA) (2023) Guidance note: Cost benefit analysis. OIA, Canberra https://oia.pmc.gov.au/resources/ guidance-assessing-impacts/cost-benefit-analysis Dobes L (2010) Notes on applying real options to climate change adaptation measures, with examples from Vietnam. Research Report No. 75. Environmental Economics Research Hub Research Reports. European Investment Bank (EIB) (2023) The economic appraisal of investment projects at the EIB: 2nd Edition. Available at https://www.eib.org/en/publications/20220169- the-economic-appraisal-of-investment-projects-at-the-eib Franks D (2012) Social impact assessment of resource projects. Mining for Development: Guide to Australian Practice. 
International Mining for Development Centre, Queensland Giancolo EAL, Eichler M, Muensterer O, Strauch K & Blettner M (2020). Methods for evaluating causality in observational studies. Deutsches Arzteblatt International, 117(7): 101-107. Harrison M (2010) Valuing the Future: the social discount rate in cost-benefit analysis. Visiting Researcher Paper. Productivity Commission, Canberra. Available at https:// ssrn.com/abstract=1599963 HM Treasury (2020) the Magenta Book (2011). Updated 1 April 2020. Available at https://www.gov.uk/government/ publications/the-magenta-book HM Treasury (2024) the Green Book (2022). Updated 16 May 2024. Available at https://www.gov.uk/government/ publications/the-green-book-appraisal-and-evaluation-in- central-government/the-green-book-2020 Infrastructure Australia (2021) Guide to economic appraisal. ISBN: 978-1-925352-56-6 Kelemen E, García-Llorente M, Pataki G, Martín-López B & Gómez-Baggethun E (2014) Non-monetary techniques for the valuation of ecosystem service. In: Potschin, M & Jax K (eds): OpenNESS Reference Book. EC FP7 Grant Agreement no. 308428. Available at: www.openness-project.eu/library/ reference-book Lazarow N, Meharg S, Butler JRA, Connor J, Liu S, Duggan K & Roth C (2015) Impact evaluation methods. DFAT-CSIRO Research for Development Alliance. CSIRO Land and Water Business Unit, Canberra Link AN & Vonortas NS (editors) (2013) Handbook on the theory and practice of program evaluation. Edward Elgar, London. New Zealand Treasury (2024) Discount rates. Updated 8 April 2024. Available at https://www.treasury.govt.nz/ information-and-services/state-sector-leadership/guidance/ reporting-financial/discount-rates Nicholls J, Lawlor E, Neitzert E & Goodspeed T (2009) A guide to social return on investment. Cabinet Office, Office of the Third Sector, London Office of Impact Analysis (2023) Guidance note: Cost benefit analysis. Department of the Prime Minister and Cabinet, Canberra. Pannell DJ (1997) Sensitivity analysis of normative economic models: Theoretical framework and practical strategies. Agricultural Economics 16: 139-152 Popelier L (2018). A scoping review on the current and potential use of social network analysis for evaluation purposes. Evaluation, 24(3), 325-352. https://doi. org/10.1177/1356389018782219 RAND Europe (2013) Measuring research: A guide to research evaluation frameworks and tools. Available at https://www.rand.org/content/dam/rand/pubs/ monographs/MG1200/MG1217/RAND_MG1217.pdf Rogers EM (1995) Diffusion of innovations (3rd edition). Free Press, New York Social Ventures Australia Consulting (2012) Social return on investment: Lessons learned in Australia. Investing in Impact Partnership. Online http://www.socialventures.com.au/ Stuart EA (2010) Matching methods for causal inference: A review and a look forward. Statistical Science, 25(1): 1-21. O’Connor AC, Walsh AC (2024) Evaluating the realized impacts of DOE/EERE R&D programs: Standard impact evaluation method, 4th Edition (2024). Prepared for: U.S. Department of Energy (DOE), Office of Energy Efficiency and Renewable Energy (EERE). U.S. Office of Management and Budget (OMB) (2023) Guidelines and discount rates for benefit-cost analysis of federal programs. Circular No. A-94. Available at https:// www.whitehouse.gov/wp-content/uploads/2023/11/ CircularA-94.pdf Wholey et al. (editors) (2010) The handbook of practical program evaluation (3rd edition). 
Jossey-Bass Publishers, San Francisco.
Zawadski R, Grill JD, Gillen DL & the Alzheimer's Disease Neuroimaging Initiative (2023) Frameworks for estimating causal effects in observational settings: Comparing confounder adjustment and instrumental variables. BMC Medical Research Methodology, 23(122).
APPENDIX A: CSIRO's Impact Framework
CSIRO's approach to planning, monitoring and evaluating impact is built on the concept that, in order to assess the value of research, it must be possible to track the process from inputs to impacts. CSIRO's logic model, the CSIRO Impact Framework shown in Figure A1, is used to articulate 'pathways to impact'. It identifies the inputs and activities required to deliver research outputs, and the uptake and adoption outcomes that will need to occur to eventually lead to the desired impacts. Each of these components may be understood as follows:
• Inputs: Resources applied to deliver activities, such as people, equipment, funding, etc.
• Activities: Actions taken or work performed through which inputs, technical assistance and other types of resources are mobilised with the intention of achieving specific outputs (e.g. technology development, education, engagement).
• Outputs: The research solutions, services and capacities that result from the completion of activities within a research portfolio or project (e.g. publications, patents, prototypes, training packages, students trained, reports).
• Outcomes: The intended or desired medium-term effects/changes expected to be realised from the successful delivery of research outputs (e.g. adoption of new techniques, process and behavioural changes, new products, licences/IP sold; this component is also called 'uptake' in some CSIRO examples).
• Impact: An effect on, change or benefit to the economy, environment or society beyond contributions to academic knowledge. Impacts include wider economic, environmental and social impacts such as increased economic activity, productivity improvement, water savings, reduced emissions, and improved health and wellbeing.
Figure A1: CSIRO's Impact Framework. The figure depicts the pathway from Inputs and Activities (planned work that CSIRO can control), through Outputs (intended results that CSIRO can also control), to Outcomes (subject to CSIRO's direct influence) and Impacts (subject to indirect influence), with engagement and feedback loops connecting all stages. Example inputs include staff and non-staff FTE and dollar-value estimates drawing on appropriation funding, external funding, grants, in-kind contributions and equipment/facilities; example activities include research/technology developments, education, industry engagement (including with small and medium enterprises) and international engagement.
Note: Although depicted as a linear process, just as science is serendipitous and agile in execution with multiple feedback loops and engagement, the Framework should also be operationalised as you would execute the research.
The Framework articulates a systemic view in which partners and other relevant parties play a role. While CSIRO controls its inputs, activities and outputs, CSIRO is unable to deliver its outcomes and impacts in isolation and can only influence them (directly or indirectly). Identifying the partners and other relevant parties with whom CSIRO has worked, or is working, from Inputs, Activities, Outputs and Outcomes through to Impacts is thus crucial.
Although depicted as a linear process for the sake of simplicity, it should be understood that, just as science is often serendipitous and agile in execution with multiple feedback loops and engagement at all stages, the Framework should be operationalised as you would execute the research.
Figure A2 provides a worked example of an impact pathway for a research project using the Framework (an illustrative data-structure sketch follows the figure).
Figure A2: Economic impact of a hypothetical CSIRO hydrogen project
Inputs (your planned work): a 7-person CSIRO project team from the Energy Business Unit; 2 in-kind researchers from the University of Technology Sydney; 3 external industry funding partners; background IP; infrastructure and equipment.
Activities (your planned work): a pilot project to explore and develop novel value chain pathways for hydrogen; research into social licence to operate in the hydrogen industry context; industry engagement; communication activities.
Outputs (your intended results): next-generation, sustainable hydrogen production technologies; hydrogen distribution and utilisation technology prototypes; tools for establishing a social licence to operate for the hydrogen industry; journal articles for review.
Outcomes (your intended results): CSIRO technologies for the production and distribution of hydrogen employed to increase the volume of sustainably sourced hydrogen moving through the value chain, with a focus on exporting 'green' hydrogen to major international markets.
Impact (your intended results): establishment of a sustainable and viable hydrogen export industry in Australia.
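For evaluation teams that keep pathway information in code or spreadsheets rather than in diagrams, the five components can be captured in a simple data structure. The sketch below is illustrative only and is not part of the Guide; the class and field names are assumptions, and the content loosely paraphrases the hypothetical Figure A2 example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ImpactPathway:
    """Minimal representation of the five components of an impact pathway."""
    inputs: List[str] = field(default_factory=list)
    activities: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    outcomes: List[str] = field(default_factory=list)
    impacts: List[str] = field(default_factory=list)

    def summary(self) -> str:
        """Return a one-line-per-stage summary, useful for workshop handouts."""
        stages = [("Inputs", self.inputs), ("Activities", self.activities),
                  ("Outputs", self.outputs), ("Outcomes", self.outcomes),
                  ("Impacts", self.impacts)]
        return "\n".join(f"{name}: {'; '.join(items) if items else 'TBD'}"
                         for name, items in stages)

# Worked example loosely based on Figure A2 (hypothetical hydrogen project).
hydrogen = ImpactPathway(
    inputs=["7-person CSIRO project team", "2 in-kind university researchers",
            "3 external industry funding partners", "background IP",
            "infrastructure and equipment"],
    activities=["pilot project on hydrogen value chain pathways",
                "social licence to operate research", "industry engagement"],
    outputs=["sustainable hydrogen production technologies",
             "distribution and utilisation prototypes", "journal articles"],
    outcomes=["technologies adopted to grow sustainably sourced hydrogen exports"],
    impacts=["establishment of a viable Australian hydrogen export industry"],
)
print(hydrogen.summary())
```

Recording the pathway in this structured form makes it easier to check that relevant parties and benefits have been identified at every stage before evaluation begins.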
APPENDIX B: CSIRO's impact categories
Table B1: Economic impact categories
National economic performance: The capability to influence or change economy-wide impacts (i.e. at the macroeconomic level), such as unemployment, national income, GDP, inflation and price levels.
Trade and competitiveness: The capability of trade-exposed firms to succeed in international competition against leading international competitors.
Productivity and efficiency: The capability to influence or change the production of products and services, such as risk, profitability and productivity aspects, and the sustainability of the production and consumption system. This also includes the capability to influence or change performance measures related to supply chain members.
Management of risk and uncertainty: The capacity for rapid innovation at scale to reduce the risk of damage or lost opportunity (in the form of early warnings or early identification of opportunities).
Policies and programs: The capability to influence or change the coordination and governance of social, economic and environmental policies and programs, for example, better return on investment and reduction in green and red tape.
New services, products and markets: The capability to develop new products and services through technological and organisational innovations, including in the following areas: Food, Soil and Water, Transport, Cybersecurity, Energy and Resources, Manufacturing, Environmental Change and Health.
Livestock health and prosperity: The capacity to reduce the likelihood of invasive animal diseases that have the potential to cause significant harm to the economy from entering, emerging, establishing or spreading within Australia.
Protecting existing markets: The capacity to maintain and/or increase returns from existing market access.
Table B2: Environmental impact categories
Air quality: The degree to which the quality of the air in a particular place has changed.
Ecosystem health and integrity (natural capital): The variety of, and connections between, plant and animal life in the world or in a particular habitat. The focus is on plants and animals within an area and how they interact with each other and with other elements such as climate, water and soil, as well as the ecosystem services provided to protect ecosystems and biodiversity, and the related concept of natural capital.
Climate: The focus is on atmospheric, land and ocean patterns and the changes in these over time.
Natural hazards mitigation: Steps taken to contain or reduce the effects of anticipated or realised disastrous events (such as drought, flood, fire, lightning, storms of various levels and types, tornado, storm surge, tsunami, volcanic eruption, earthquake and landslide).
Energy generation and consumption: The creation of energy using various technologies and processes and its effect on the environment; the effect of the use of created energy and the benefits of efficiency measures.
Land quality: Land use and management and their effects on soil and the surrounding environment, including actions taken to rehabilitate land after production processes.
Aquatic environments: Changes in the quality and abundance of marine and freshwater resources; water systems, availability, quality, access and management.
Built environments: The human-made surroundings in which people live, work and recreate on a day-to-day basis, ranging from buildings and parks to supporting infrastructure such as water supply or energy networks.
Table B3: Social impact categories
Health and wellbeing: The degree to which the physical and mental health and wellbeing of people in a particular place has changed.
Access to resources, services and opportunities: Access to new or improved knowledge, improved knowledge management and participation in social and economic life.
Quality of life (material security and livelihoods): The degree of wealth and material comfort available.
Safety: Protection from dangerous materials, products or processes.
Security (e.g. cyber, biological, civil and military): Physical and psychological protection against an external threat; protection from an actual or perceived threat from an internal or external combatant that would affect the greater society.
Resilience: The capacity to withstand or recover from loss or adversity at societal, national, regional and individual levels.
Indigenous culture and heritage: Indigenous tradition, the history of an Indigenous party in an area and/or evidence, of archaeological or historic significance, of Indigenous occupation.
Innovation and human capital (creativity and invention): The capacity to contribute to a society in terms of the production of inventions, design and cultural programs, as well as embodying knowledge, inspiration, and aesthetic and symbolic value. Human capital is productive wealth embodied in labour, skills and knowledge.
Social cohesion (social inclusion, social capital and social mobility): The OECD defines a cohesive society as one which 'works towards the wellbeing of all its members, fights exclusion and marginalisation, creates a sense of belonging, promotes trust, and offers its members the opportunity of upward social mobility'.
APPENDIX C: Evaluation types
Ex-post ("after the fact", retrospective or summative) impact evaluation is defined by CSIRO as: an evaluation of the impact attributable to a program of work after the research has begun producing one or more outcomes external to CSIRO, regardless of whether the research activity has been concluded. In a practical sense, ex-post evaluations are still forward-looking in that it is necessary to combine an evaluation of delivered outcomes and impacts with an estimate of the future impact of unrealised outcomes over potentially many years into the future.
Ex-ante ("before the event", prospective or formative) impact evaluation is defined by CSIRO as: an evaluation of a body of work which either has not yet started or has started but has yet to deliver any research outputs (and logically, therefore, no resulting outcomes or impacts have occurred). This type of analysis is useful in considering whether a project should be undertaken or in comparing alternative prospective projects aimed at common objectives. Ex-ante evaluation can also aid the planning and development phase of a project by placing some rigour around the identification and, where possible, quantification of the expected benefits to be derived.
Assessments carried out during the implementation of a program are termed monitoring, progress, life-of-project or program evaluations. They are used as a measure of accountability for funding, to gauge performance to date, and to provide guidance for the future allocation of funds and information for project selection. While evaluations are point-in-time assessments of observed results for attribution to specific research activities, monitoring is an ongoing assessment of results within the context of a predefined framework of intended results (an impact plan). Monitoring provides important evidence for evaluations. If monitoring is not done throughout the life of the project, then articulating impact retrospectively and finding corroborating evidence to back up claims can be difficult.
These various types of impact evaluations are summarised within Table C1.
Table C1: Comparison of evaluation types
Ex-ante (formative, prospective)
• Conducted during the decision-making process prior to investment
• Based on projected values to inform investment choices
• Forward-looking assessment of the likely future outcomes and impacts of a new project
• Aids design of strategy and project plan, including informing uptake and adoption strategy
• Includes a 'baseline study' which will aid later evaluations; a baseline study identifies all relevant conditions that exist before the CSIRO works take place
Progress / monitoring
• Conducted during the lifetime of the project
• Useful in deciding whether a project should be extended or investment redirected
• Provides information to improve performance
Ex-post (summative, retrospective)
• Undertaken towards the end of the implementation phase of work, when the work has produced outputs and those outputs have produced outcomes, or potential scenarios of outcomes are well known
• Based on observed and projected values
• Counterfactual provides an estimate of what would have transpired without the CSIRO work and builds on the baseline established prior to commencement
• Determines the extent to which anticipated outcomes were produced
• Provides information about the value of the project to inform future investment decisions
APPENDIX D: Benefit–cost analysis checklist
This checklist is intended to aid evaluators in adhering to the best practices recommended throughout the CSIRO Impact Evaluation Guide. The focus is on the modelling process involved in conducting a rigorous benefit–cost analysis of CSIRO projects and programs. The checklist is intended as a complement to the detailed Guide, not a replacement. Defer to the Guide for additional details and examples articulating the impact evaluation process.
Impact pathway
☐ Include all 5 steps of the Impact Pathway: Inputs, Activities, Outputs, Outcomes, Impacts.
☐ Categorise benefits into triple-bottom-line impact categories: economic, environmental, social.
☐ Identify relevant parties across all 5 steps of the Impact Pathway.
Clarifying the impacts
☐ Establish the counterfactual scenario against which project/program impacts are compared.
☐ Determine the attribution of benefits to CSIRO based on the proportion of CSIRO's financial and in-kind contributions, or another appropriate ratio backed by relevant partners and experts.
☐ Identify a realistic level of adoption, accounting for the adoption profile, the influence of previous CSIRO work and the presence of market competitors.
Estimating costs
☐ Measure all R&D costs incurred by CSIRO and outside partners, if available, across all relevant years of the project or program.
☐ Additional usage and adoption costs should be subtracted from the estimated benefits of use rather than added to project costs.
Estimating benefits
☐ Use an appropriate mix of methods to estimate benefits across CSIRO's triple bottom line.
☐ Estimate any sizeable or important externalities, spillovers or flow-on effects. When including these indirect effects, be sure to treat positive and negative effects equally.
☐ If possible, estimate the distributional effects realised across impacted populations.
Inflation adjustments
☐ Adjust nominal values occurring in different years for inflation before discounting. For detailed guidance on when to adjust for inflation, see Table J1.
Table J1: When to adjust for inflation
1. Annual time series of past costs or benefits in nominal terms (what was spent or earned that year): adjust the time series for inflation.
2. Constant annual estimate of costs or benefits for either past or future values: do nothing; this time series is already in real dollars.
3. Value estimate from a specific year (e.g. from a past publication) that will be used as an annual value estimate: inflate the value from the year it was provided to the current year and apply that inflated value to all relevant years.
☐ Use the CPI from the ABS unless there is a strong reason to use a different inflation index.
☐ Use the December CPI, since Australia's financial year runs from July through June. See Table J2 for examples.
Table J2: Financial year and CPI
FY2018 (July 2017 – June 2018): Dec-2017 CPI = 112.1
FY2019 (July 2018 – June 2019): Dec-2018 CPI = 114.1
FY2020 (July 2019 – June 2020): Dec-2019 CPI = 116.2
FY2021 (July 2020 – June 2021): Dec-2020 CPI = 117.2
FY2022 (July 2021 – June 2022): Dec-2021 CPI = 121.3
☐ Calculate the inflation factor for each year using the equation below:
(Inflation Factor)y = CPIb / CPIy, where y is the year for which the factor is being calculated and b is the base year.
☐ Set the base year for inflation as the year of report publication.
Currency adjustments
☐ Convert all foreign currency into AUD. It is recommended that you use historical exchange rates matching the year of the data collected and then inflate to current AUD using the CPI from the ABS.
Discounting
☐ Use the same base year for both discounting and inflation.
☐ Use a 7% social discount rate for your main scenario.
☐ Calculate separate discount factors for costs and benefits using the equation below. Costs are discounted at the beginning of each period; benefits are discounted at the end of each period. This is because costs typically occur up-front before benefits are received. See Table J3 for examples.
Discount Factor = 1 / (1 + d)^t = (1 + d)^(-t), where d is the discount rate and t is the period.
Table J3: Example of discounting streams of benefits and costs
2018–2019: costs period -3 (discount factor 1.23); benefits period -2 (discount factor 1.14)
2019–2020: costs period -2 (discount factor 1.14); benefits period -1 (discount factor 1.07)
2020–2021: costs period -1 (discount factor 1.07); benefits period 0 (discount factor 1.00)
2021–2022 (base year): costs period 0 (discount factor 1.00); benefits period 1 (discount factor 0.93)
2022–2023: costs period 1 (discount factor 0.93); benefits period 2 (discount factor 0.87)
2023–2024: costs period 2 (discount factor 0.87); benefits period 3 (discount factor 0.82)
Calculating return on investment (ROI)
☐ Include as many years of realised benefits and costs in the analysis as are available and relevant. Project future benefits and costs forward at least 10 years, and longer if analysing long-term investments.
☐ Sum the annual discounted benefits and costs derived above across all analysis years to calculate the PV Benefits and PV Costs, respectively.
☐ Calculate relevant ROI measures, including the BCR and NPV. The IRR and/or Payback Period may also be useful depending on the report context. See Table J4 for the equation and interpretation of each measure; a worked sketch follows this section of the checklist.
Table J4: ROI measures
BCR = PV Benefits / PV Costs. The dollar value returned in benefits for each dollar invested; a BCR greater than 1 indicates a positive ROI.
NPV = PV Benefits − PV Costs. The total benefits accrued beyond project costs; an NPV greater than 0 indicates a positive ROI.
IRR = the discount rate at which PV Benefits = PV Costs. The interest rate that would generate the realised benefits from the level of investment made; an IRR above the social discount rate (7%) indicates a strong ROI.
Payback Period = the first year at which PV Benefits > PV Costs. The time before the accrued benefits begin to outweigh the level of investment.
☐ Always provide ROI measures based on 10 years of projections of benefits and costs. When longer-term assessments are appropriate, provide complementary ROI measures for the appropriate period and detail why returns take longer than 10 years to accrue.
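To make the conventions in Tables J3 and J4 concrete, the following minimal, self-contained Python sketch (illustrative only and not part of the Guide; the function names and the toy cost and benefit streams are assumptions) discounts costs at the start of each period and benefits at the end, then derives the four ROI measures.

```python
from typing import Dict, Optional

def discount_factor(rate: float, period: int) -> float:
    """Discount Factor = (1 + d)^(-t); negative periods compound past values forward."""
    return (1.0 + rate) ** (-period)

def roi_measures(costs: Dict[int, float], benefits: Dict[int, float],
                 rate: float = 0.07) -> Dict[str, Optional[float]]:
    """ROI measures from real (inflation-adjusted) annual costs and benefits.

    Both dictionaries are keyed by period relative to the base year (0 = base year).
    Costs are discounted at the start of each period (factor at t) and benefits at
    the end (factor at t + 1), matching the convention in Table J3.
    """
    pv_costs = sum(v * discount_factor(rate, t) for t, v in costs.items())
    pv_benefits = sum(v * discount_factor(rate, t + 1) for t, v in benefits.items())

    # Payback period: first period by which cumulative discounted net benefits turn positive.
    cumulative, payback = 0.0, None
    for t in sorted(set(costs) | set(benefits)):
        cumulative += benefits.get(t, 0.0) * discount_factor(rate, t + 1)
        cumulative -= costs.get(t, 0.0) * discount_factor(rate, t)
        if payback is None and cumulative > 0:
            payback = t

    # IRR: the discount rate at which PV Benefits = PV Costs, found by bisection.
    def net_pv(r: float) -> float:
        return (sum(v * discount_factor(r, t + 1) for t, v in benefits.items())
                - sum(v * discount_factor(r, t) for t, v in costs.items()))

    irr = None
    lo, hi = 1e-9, 10.0
    if net_pv(lo) > 0 > net_pv(hi):
        for _ in range(200):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if net_pv(mid) > 0 else (lo, mid)
        irr = (lo + hi) / 2

    return {"PV Costs": pv_costs, "PV Benefits": pv_benefits,
            "NPV": pv_benefits - pv_costs,
            "BCR": pv_benefits / pv_costs if pv_costs else None,
            "IRR": irr, "Payback period": payback}

# Toy example: three years of costs leading up to the base year, ten years of benefits.
costs = {-2: 1.5, -1: 1.5, 0: 1.5}          # real $m
benefits = {t: 2.0 for t in range(0, 10)}   # real $m
print(roi_measures(costs, benefits))
```

Because the base year (period 0) is set explicitly, this sidesteps the Microsoft Excel NPV and IRR timing pitfalls listed under 'Common errors to avoid' below.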
Sensitivity analysis
☐ Run modelling scenarios in which determining assumptions are altered to reflect the likely range of potential impacts. For example, model a low, medium and high scenario based on varying assumptions about adoption and impact.
☐ Apply alternate discount rates of 3% and 10% to the various scenarios. Report the low scenario at a 10% discount rate (most conservative) and the high scenario at a 3% discount rate (most optimistic). The main analysis will be the medium scenario at a 7% discount rate.
Reporting
☐ Ensure that all modelling steps, assumptions and study limitations are clearly documented in the final written report.
☐ Keep the Excel workbook used for modelling clean, organised and well documented for auditing and replication purposes.
☐ Carefully cite sources for all assumptions and values. Include citations in the final Excel workbook and the written report.
☐ Follow the recommended report structure for quantitative analyses as closely as possible:
1. Executive summary
2. Purpose and audience
3. Background
4. Methods
5. Impact pathway
 a. Inputs / Activities / Outputs / Outcomes / Impacts
6. Clarification of the impacts
 a. Counterfactual / Attribution / Adoption
7. Evaluation of the impacts
 a. Costs / Benefits / Externalities, spillovers, flow-ons / Distributional effects
8. Measures of economic return
 a. Aggregation of research impacts / Return on investment / Sensitivity analysis
9. Optimising impact
 a. Lessons learnt / Best practices / Barriers to adoption
10. Limitations
11. Conclusion
Common errors to avoid
☐ Avoid double counting benefits realised across a supply chain or incurred as transfers between beneficiaries in the same area of analysis.
☐ Avoid Microsoft Excel's NPV formula: it discounts the first cash flow by a full period (treating the first value as occurring at the end of period 1 rather than in the desired base year), so additional discounting adjustments are needed if it is used.
☐ Similarly, use Microsoft Excel's IRR formula with caution: it assumes cash flows occur at regular intervals starting from the first value, which may not match the separate cost and benefit timing conventions recommended here.
APPENDIX E: Qualitative analysis approaches
The following methods allow for the qualitative valuation of social impacts (cf. Kelemen et al. 2014).
Social impact assessment (SIA)
SIA is a framework that can be used to assess the impacts of a wide range of types of change, from a proposal to build a new freeway to a proposal to change access to a natural resource such as water, a forest or the ocean (Becker & Vanclay 2003; BRS 2005; Coakes 1999; Coakes & Fenton 1999; Franks 2012). The method requires a range of different data, both qualitative and quantitative, depending on the methods being applied. The main data needs relate to assessing the direct and indirect effects of proposed changes. This can be done using a variety of data sources, the most common types being:
• secondary data – existing data sources can be used to identify the broad level and nature of potential impacts; and
• primary data – collected through surveys, interviews, focus groups, etc. if secondary data are not available, not relevant or not appropriate (e.g. not at the right scale).
Most significant change (MSC)
MSC is a qualitative, participatory methodology focused on capturing project participants' stories of significant change or impact (Clear Horizon 2014, 2015a, 2015b; Davies & Dart 2005). MSC involves collecting and documenting stories from a range of participants. Each story represents the storyteller's interpretation of impact. These stories are then collated, reviewed and discussed by participants in a participatory, systematic and transparent manner. This process leads to a collective agreement on what have been the most significant changes, or impacts, of a project or program.
Qualitative measurement approaches
Some of the most common ways of gathering qualitative data for impact assessments include:
i. Discovery workshop: a meeting organised by the evaluators to engage relevant parties, such as core members of the R&D team and industry partners, at the early stages of an evaluation. This workshop serves to collect essential information, request pertinent reports and clarify the objectives and scope of the evaluation.
ii. Interviews with relevant parties: one-on-one conversations with relevant parties who are directly or indirectly affected by the project or who are experts in the field of research. The process helps gather in-depth insights about their experiences and their perceptions or expectations of the project's impact.
iii. Surveys: versatile data collection tools used in impact evaluation to assess the effects of scientific interventions on relevant parties. They can be designed to obtain both quantitative and qualitative data through structured questions that explore deeper insights, including expert opinions and personal experiences.
iv. Desktop literature review: the systematic collection and analysis of existing research papers, evaluations, case studies, technical reports and other material pertinent to the project and its impacts.
v. Impact workshops/focus groups: guided discussions with a group of participants to explore their attitudes, beliefs and reactions to a specific subject. The discussions are designed to gather insights on adoption and anticipated impacts among potential users or beneficiaries.
APPENDIX F: Quasi-experimental econometric methods
Quasi-experimental methods replicate some of the benefits of experimental designs without relying on the random assignment of individuals to treated and control groups. Experimental methods are rarely possible in research on program implementation and are often undesirable, as they purposefully exclude individuals from receiving the benefits of the targeted intervention. Instead, quasi-experimental approaches rely on practical comparison groups that are as similar as possible to the groups exposed to the intervention. Two common quasi-experimental approaches are difference-in-differences (DID) and regression discontinuity (RD).
Difference-in-differences (DID)
DID requires data for both treated and control groups before and after exposure to the treatment or intervention. DID estimates the effect of the treatment by comparing the changes in outcomes between the groups before and after exposure. DID estimates the average treatment effect on the treated: the mean difference between post-treatment outcomes and potential comparison outcomes among the treated group. See Callaway (2023) for more information on DID assumptions and applications; a minimal worked sketch follows.
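As an illustration only (not part of the Guide), the snippet below computes the basic two-group, two-period DID estimate from a long-format dataset; the column names (`outcome`, `treated`, `post`) and the example figures are assumptions.

```python
import pandas as pd

# Long format: one row per unit and period. Values are invented for illustration.
df = pd.DataFrame({
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],
    "post":    [0, 0, 1, 1, 0, 0, 1, 1],
    "outcome": [10, 10, 15, 15, 9, 9, 11, 11],
})

# Mean outcome in each of the four cells (treated x post).
cell_means = df.groupby(["treated", "post"])["outcome"].mean()

# DID = (change over time in the treated group) - (change over time in the control group).
treated_change = cell_means[(1, 1)] - cell_means[(1, 0)]
control_change = cell_means[(0, 1)] - cell_means[(0, 0)]
did_estimate = treated_change - control_change
print(f"DID estimate of the average treatment effect on the treated: {did_estimate:+.2f}")

# Equivalent regression: outcome ~ treated + post + treated:post, where the
# interaction coefficient is the DID estimate (and allows standard errors,
# covariates and clustering in a fuller analysis).
```

In practice the parallel-trends assumption should be checked against pre-intervention data; Callaway (2023) discusses diagnostics and extensions such as staggered adoption.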
Regression discontinuity (RD)
RD methods are useful when a threshold decision determines whether a person is exposed to an intervention. Those around the threshold can be considered similar, with assignment into the treated group effectively random. Those just below the threshold of acceptance are treated as the control group, while those just above the threshold form the treatment group. Once treatment and control groups are assigned using RD, the same DID methods described above are applied to these samples, assessing the difference in outcome measures before and after treatment.
Another type of RD analysis looks at a single population over time without a separate control group. In this case, time is the selection variable and the intervention event is the threshold for selection. Outcomes are compared for the same group of individuals just before and just after exposure to the intervention. Because this approach does not include a separate control group, it is more difficult to reliably attribute observed effects to the intervention rather than to some other concurrent event. See Gianicolo et al. (2020) for more information on RD assumptions and applications.
Other methods
Other quasi-experimental approaches include instrumental variables, propensity score matching and coarsened exact matching. These methods all attempt to account for or reduce confounding differences between potential treatment and control groups by using observable characteristics found in the data. See Zawadski et al. (2023) and Stuart (2010) for overviews of these methods.
Another alternative is to carry out an ordinary least squares (OLS) regression on groups that were and were not exposed to the intervention, even if these groups are too different to be considered viable treatment and control groups. To reduce the confounding effects of these group differences, the model needs to include as many potentially confounding factors as possible so that their effects can be isolated from the treatment effect (a minimal sketch follows). This approach is still limited, however, by unmeasurable group differences.
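The following is a minimal sketch of the regression-with-controls approach described above, using simulated data (illustrative only; the variable names, data-generating values and the statsmodels formula are assumptions, not CSIRO tooling).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Simulated data: firm size confounds both programme participation and the outcome.
firm_size = rng.normal(50, 15, n)
participated = (firm_size + rng.normal(0, 10, n) > 55).astype(int)
productivity = 2.0 * participated + 0.1 * firm_size + rng.normal(0, 1, n)
df = pd.DataFrame({"productivity": productivity,
                   "participated": participated,
                   "firm_size": firm_size})

# Naive comparison overstates the effect because participants tend to be larger firms.
naive = df.groupby("participated")["productivity"].mean().diff().iloc[-1]

# OLS with the confounder included isolates the treatment effect (about 2.0 here).
model = smf.ols("productivity ~ participated + firm_size", data=df).fit()
print(f"Naive difference in means: {naive:.2f}")
print(f"OLS estimate controlling for firm size: {model.params['participated']:.2f}")
```

Unlike the quasi-experimental designs above, this only removes bias from confounders that are actually measured, which is the limitation noted in the text.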
APPENDIX G: Non-market valuation
A range of methods are available to enable monetisation of research impacts, even when those impacts relate to non-market goods and services. Monetising environmental and social impacts involves presenting the magnitude of these impacts in real dollar figures, but it does not automatically turn them into economic impacts. In practice, working with most of the methods outlined below requires experience and good knowledge of the specific impacts; expert input is usually required, often from experts external to CSIRO.
Benefits associated with non-market goods or services can be monetised in three broad ways:
1. monetisation based on choices observed or revealed through other transactions, also known as revealed preference methods;
2. monetisation based on choices elicited from individuals in hypothetical scenarios, also known as stated preference methods; and
3. monetisation based on previous valuation studies (i.e. the benefit transfer approach).
These monetisation methods aim to elicit the additional value or willingness-to-pay for additional and otherwise intangible benefits (e.g. improvements in levels of comfort or environmental quality), or the willingness to accept compensation for a reduction in those benefits due to new technologies or services provided.
Table E1 and Table E2 provide the definition and typical applications of these methods. The tables also outline some advantages, limitations and recommendations on their use. A more detailed discussion of the general issue of non-market valuation is provided by Baker & Ruting (2014). This paper also provides a CSIRO example of the use of non-market valuation methods (refer to p. 84).
Table E1: Advantages and limitations of common revealed preference methods
Revealed preference methods use data from actual events or observed market transactions to construct monetary values. They can be used for direct use or indirect use values.
Hedonic pricing
Description: Used to value impacts that relate to externalities through their effect on another market, such as property prices. For example, the impact of research improving environmental amenity can be measured through differences in residential property prices between sites with the improved amenity and equivalent sites without it.
Advantages: A defensible and objective approach, as data are based on real market transactions. Data on other markets, such as property prices, are often readily available.
Disadvantages: Requires a rich dataset to isolate the impacts of externalities, and it can be difficult to find equivalent control sites. Affected assets may not be directly associated with the research outcome.
Common uses: Typically used when the impacts of research relate to the quality of a place and changes in real estate prices (an illustrative sketch follows Table E1).
Travel cost method
Description: Uses how much people pay to travel, and the time they allocate, to experience a place as the value of the place and its attributes.
Advantages: Yields objective data on how much people are willing to pay, based on real market transactions.
Disadvantages: Costly and time consuming, as it requires collection of visitors' expenditure data through survey techniques. Provides an estimate of the minimum willingness-to-pay but is limited to attributes that stimulate travel. Value estimates relate to past decision-making not affected by current or future changes.
Common uses: Typically used when the impacts of research relate to environmental amenities or cultural activities that attract visitors; as such, often applied to value attributes influencing tourism or recreational values. Also used to assess one aspect of change in social values associated with changes in environmental condition.
Productivity-based approach
Description: Used to value impacts that change one or more of the inputs into the production process.
Advantages: As above for revealed preference methods (i.e. based on real market transactions). Easy to apply if all inputs into production are known and the value chain is understood.
Disadvantages: Requires quality data from existing markets that disaggregate the various inputs into production. Producers may limit access to confidential production information.
Common uses: Use if the impact of research changes one or more of the inputs into production; ideally, estimate the change in producer surplus.
Replacement cost approach
Description: Damage cost avoided, replacement cost and substitute cost approaches are variations on the same theme, in which an impact is valued as the costs that the impact has avoided.
Advantages: The alternatives are often well understood and quantified, and may have been the traditional way of doing something before the new research arrived.
Disadvantages: Danger of overstating costs avoided when the cost avoided relates to an unrealistic alternative.
Common uses: Best used when the cost avoided is realistically something society would pay to avoid, especially where research is changing a traditional activity.
Ecosystem services valuation
Description: Values the services provided by ecosystems. This approach is similar to the replacement cost approach in that the non-market values of the goods and services provided by the environment are estimated by evaluating the services the environment provides, or could provide, in place of man-made, market-based capital or effort. Applies mainly to use values.
Advantages: Can provide reasonable estimates for cases where improved ecosystem function can replace current investments in capital or inputs.
Disadvantages: Cannot provide a full estimate of the value of the environment because it is limited to the market value of ecosystem functions rather than the full set of values people enjoy from environmental integrity; see the other revealed and stated preference tools for this.
Common uses: Used to estimate the value of improved wetlands as a replacement for some water treatment plants, or the value of biodiversity from mixed cropping and shelter belts, whose integrated pest management value replaces higher pesticide use under mono-cropping.
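To illustrate the hedonic pricing entry in Table E1, the sketch below (illustrative only and not part of the Guide; the dataset, column names and coefficient values are assumptions) recovers the price premium associated with an improved amenity from simulated property sales.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400

# Simulated sales: price depends on dwelling characteristics plus a premium
# for properties near the improved environmental amenity.
bedrooms = rng.integers(2, 6, n)
land_m2 = rng.normal(600, 150, n)
near_amenity = rng.integers(0, 2, n)
log_price = (12.5 + 0.12 * bedrooms + 0.0005 * land_m2
             + 0.06 * near_amenity + rng.normal(0, 0.08, n))
sales = pd.DataFrame({"log_price": log_price, "bedrooms": bedrooms,
                      "land_m2": land_m2, "near_amenity": near_amenity})

model = smf.ols("log_price ~ bedrooms + land_m2 + near_amenity", data=sales).fit()
premium = np.expm1(model.params["near_amenity"])  # approximate % uplift in price
print(f"Estimated price premium for the improved amenity: {premium:.1%}")
```

With real transaction data, the estimated premium multiplied by the number of affected properties gives a first-pass estimate of the amenity benefit; the disadvantages noted above (rich data requirements, comparable control sites) still apply.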
Table E2: Advantages and limitations of common stated preference methods
Stated preference methods use data elicited through surveys, asking respondents to place an economic value on the benefits or losses associated with a research output for which there may not be a market. Surveys need to be carefully designed, as they usually involve presenting hypothetical scenarios, which need to remain plausible and relevant to affected respondents. They can be used for non-market direct use or non-use values.
Contingent valuation
Description: Elicits respondents' willingness-to-pay for goods/services from research outcomes in a specific context. It can also be tailored to quantify the compensation that people are willing to accept if goods or services are not provided.
Advantages: A powerful tool to value intangible benefits where no markets exist, e.g. benefits for health and environmental services. Values obtained are relevant to societal preferences in Australia. Can also be used to estimate non-use or existence values, e.g. preserving biodiversity.
Disadvantages: Resource intensive, as it needs well-designed surveys and a rigorous data collection process. Sampling should be carefully planned to ensure representativeness of the target population. Responses to contingent valuation studies are particularly sensitive to the framing of questions. Because the questions are hypothetical, results can be subject to several biases or inaccurate claims.
Common uses: Can be used across a wide range of impacts, even where no revealed preferences are available. Most commonly applied where a major program of work is anticipated to have substantial health or environmental benefits and where specific valuation is required.
Discrete choice modelling
Description: Focuses on estimating willingness-to-pay for specific attributes of research outcomes that directly influence the respondent's level of enjoyment. Examples of attributes include safety, water quality, biodiversity, information provided and price. This method also provides trade-off estimates, which can be used to quantify the compensation that respondents should be provided for decreasing a specific attribute.
Advantages: As above (contingent valuation). Unlike contingent valuation, choice modelling forces people to consider trade-offs, which may elicit more realistic hypothetical responses.
Disadvantages: As above (contingent valuation). Also requires collection of large samples to be statistically reliable. Results are sensitive to the choices posed to subjects.
Common uses: Same as contingent valuation, but use when valuation of specific attributes is required. Public agencies in the health sector have increasingly commissioned projects involving choice modelling techniques for the valuation and monetisation of service delivery features that rely on key values, such as the value of statistical life. The Office of Impact Analysis (formerly the OBPR) provides guidance on estimating the value of statistical life and the value of a statistical life year.
Other stated preference approaches, including experiments, contingent behaviour and direct preference mapping, are characterised by similar pros and cons to those above and require similar expert ability to undertake.
A decision tree for the use of these methods when dealing with environmental impacts is provided at Figure E1.
Figure E1: Selecting a non-market valuation method – initial questions
• What types of values do people hold for the non-market environmental outcome? For non-use values, consider stated preference methods. For use values, ask: are reliable data available for related market behaviour (such as travel or house purchases)? If yes, consider revealed preference methods; if no, consider stated preference methods.
• If considering revealed preference methods: is the non-market outcome associated with visits to a recreational site? If yes, consider the travel cost method. If no, is the outcome likely to be reflected in the price of a market good (such as house prices or wages)? If yes, consider hedonic pricing; if no, consider other methods, such as stated preference or averting behaviour.
• If considering stated preference methods: is the policy change a package of several non-market attributes that could take on different combinations, are estimates needed for the value of each attribute, can the attributes be varied independently, and do people value each attribute separately? If yes, consider choice modelling; if no, consider contingent valuation.
APPENDIX H: Library of common statistics
Each entry lists the statistic, its update schedule, the level and/or type of data, the data source and a link to the data source.
Economy – Financials
1. Discount rate – Follows national standards – 7% (3%–10%) – CSIRO EIA Guidelines
2. Price adjustment index – Quarterly (use December) – National CPI – Australian Bureau of Statistics (ABS) – https://www.abs.gov.au/statistics/economy/price-indexes-and-inflation/consumer-price-index-australia/latest-release#data-downloads
3. Exchange rates – Monthly – Historical monthly exchange rates – Reserve Bank of Australia – https://www.rba.gov.au/statistics/historical-data.html
4. Tax rate – Annually – Case specific based on relevant tax bracket/industry – Australian Taxation Office – https://www.ato.gov.au/Rates/
5. Cost of borrowing – Monthly (1 month lag) – National lenders' interest rates – Reserve Bank of Australia – https://www.rba.gov.au/statistics/interest-rates/
6. Total production – Quarterly (use December) – National GDP – ABS – https://www.abs.gov.au/statistics/economy/national-accounts/australian-national-accounts-national-income-expenditure-and-product
Human life & health
7. Value of statistical life – Irregular – $5.3m (2022 dollars) – Office of Impact Analysis – https://oia.pmc.gov.au/resources/guidance-assessing-impacts/value-statistical-life
8. Health data – Varies by source – Behaviours and risk factors; health conditions, disability and deaths; health and welfare services – Australian Institute of Health and Welfare – https://www.aihw.gov.au/reports-data
Population & labour
9. Population data – Varies by source – National, state, territory – ABS – https://www.abs.gov.au/statistics/people/population
10. Labour costs – Quarterly – Industry-level wage price index – ABS – https://www.abs.gov.au/statistics/economy/price-indexes-and-inflation/wage-price-index-australia/latest-release
11. Labour force – Monthly and quarterly – Employment, industry and hours worked at national, state and territory levels by age, sex and education – ABS – https://www.abs.gov.au/statistics/labour/employment-and-unemployment/labour-force-australia-detailed/latest-release
12. Labour safety data – Annually and biannually (financial year time lag) – Work-related fatalities, health and compensation at national, state and territory levels by industry, occupation and mechanism – Safe Work Australia – https://data.safeworkaustralia.gov.au/interactive-data/topic
Environment
13. Weather and climate data – Daily – Regional rainfall, temperature and solar exposure – Bureau of Meteorology – http://www.bom.gov.au/climate/data/index.shtml?bookmark=136
14. Great Barrier Reef data – Updates vary by source – Reef weather, monitoring, crown-of-thorns starfish outbreaks, bleaching – Australian Institute of Marine Science – https://www.aims.gov.au/data
Tourism
15. Domestic tourism – Monthly – Trips and expenditure at national, state and territory levels – Tourism Research Australia (Australian Trade and Investment Commission) – https://www.tra.gov.au/domestic/monthly-snapshot/monthly-snapshot
16. International tourism – Monthly – Trips and expenditure at national, state and territory levels – Tourism Research Australia (Australian Trade and Investment Commission) – https://www.tra.gov.au/international/international-monthly-snapshot/monthly-snapshot
Agriculture
17. Farm data – Annually – Production, land, costs and sales at national, state, regional and industry levels – Australian Bureau of Agricultural and Resource Economics and Sciences (ABARES) – https://www.agriculture.gov.au/abares/data/farm-data-portal
18. Agricultural industry – Quarterly – Commodities and trade at national, state and industry levels – ABARES – https://www.agriculture.gov.au/abares/research-topics/agricultural-outlook/data
Emissions
19. Emissions inventories – Quarterly – National, state, territory, economic sector – Australia's National Greenhouse Accounts – https://www.greenhouseaccounts.climatechange.gov.au/
20. Emissions factors – Irregularly (via legislation) – Case specific based on relevant technology – Federal Register of Legislation: National Greenhouse and Energy Reporting (Measurement) Determination 2008, Schedule 1 – https://www.legislation.gov.au/Details/F2022C00737
21. Australian carbon credit units (ACCUs) – Annually, sometimes biannually – Average price per tonne of abatement, number of projects, tonnes of carbon abatement by abatement method – Clean Energy Regulator – https://www.cleanenergyregulator.gov.au/ERF/auctions-results
Energy
22. Energy production – Annually, September (1 FY lag) – Case specific based on relevant industry/technology – Department of Climate Change, Energy, the Environment and Water (DCCEEW) – https://www.energy.gov.au/publications/australian-energy-update-2022
23. Energy demand – Annually, September (1 FY lag) – Consumption, exports, imports based on relevant industry/technology – DCCEEW – https://www.energy.gov.au/publications/australian-energy-update-2022
Petroleum
24. Petroleum production – Monthly (1 month lag) – Case specific based on relevant industry/technology – DCCEEW – https://www.energy.gov.au/publications/australian-petroleum-statistics-2023
25. Petroleum demand – Monthly (1 month lag) – Sales, exports, imports based on relevant industry/technology – DCCEEW – https://www.energy.gov.au/publications/australian-petroleum-statistics-2023
26. Petroleum prices – Monthly (1 month lag) – National prices for LPG, premium and regular unleaded, and diesel – DCCEEW – https://www.energy.gov.au/publications/australian-petroleum-statistics-2023
Electricity & gas
27. Electricity generation – Annually, September (1 FY lag) – Case specific based on relevant technology – DCCEEW – https://www.energy.gov.au/publications/australian-energy-update-2022
28. Electricity and gas demand – Current/forecast in real time; historic updated monthly – State, territory – Australian Energy Market Operator (AEMO) – https://www.aemo.com.au/energy-systems/electricity/national-electricity-market-nem/data-nem/aggregated-data
29. Electricity and gas forecasting – Annually – Consumption and demand at national, state and territory levels – AEMO – http://forecasting.aemo.com.au/
30. Electricity and gas prices – Current/forecast in real time; historic updated monthly – State, territory – AEMO – https://www.aemo.com.au/energy-systems/electricity/national-electricity-market-nem/data-nem/aggregated-data
Water
31. Water prices – Annually, October (1 FY lag) – National – ABS – https://www.abs.gov.au/statistics/environment/environmental-management/water-account-australia/latest-release
32. Water demand overall – Annually, October (1 FY lag) – National water consumption – ABS – https://www.abs.gov.au/statistics/environment/environmental-management/water-account-australia/latest-release
33. Agricultural water use – Annually – Water use at regional, state and territory levels by crop type – ABS – https://www.abs.gov.au/statistics/industry/agriculture/water-use-australian-farms/latest-release
APPENDIX I: Discounting and inflation examples
Here we provide a detailed example of discounting and inflation calculations for a hypothetical BCA case study. The general assumptions for this hypothetical case study are that:
• the report is being written in 2022; and
• the program being modelled started in 2018.
Following the recommended guidelines, the base year for both discounting and inflation is FY2022, benefits and costs are projected forward 10 years, and a discount rate of 7% is applied.
Discounting and inflation
Inflation is first applied to nominal values to determine real values. Because the CSIRO financial year spans July through June, the December CPI available from the ABS should be used to calculate the inflation factor for each year. The equation for calculating the inflation factor in each period is:
(Inflation Factor)y = CPIb / CPIy, where y is the year for which the factor is being calculated and b is the base year.
After adjusting for inflation, discounting is conducted by applying a discount factor to annual values.
The equation for determining the discount factor for each period, or distance from the base year, is:
Discount Factor = 1 / (1 + d)^t = (1 + d)^(-t), where d is the discount rate and t is the period.
Different discount factors are calculated for costs and benefits. For costs, the base year is period 0, so the discount factor in the base year is 1. For benefits, the base year is period 1, so the discount factor in the base year is 0.93. This is because costs are assumed to be incurred at the beginning of each period while benefits are accrued at the end of each period.
Costs
In the hypothetical case study, program costs include provided budget information for annual administrative/staffing costs plus a rough estimate of $2.5 million spent on materials over 5 years. The provided budget information for annual administrative/staffing costs needs to be adjusted for inflation to bring the nominal values to real 2022 dollars. In contrast, absent additional information, the $2.5 million can be split evenly across the 5 years with no inflation adjustment.
Benefits
In the hypothetical example, two benefit categories are being considered. First, the program is anticipated to have positive impacts on Australian GDP from 2018 beyond the end of the analysis time frame. The estimated value of this impact came from a 2022 report and is in 2022 dollars. The value can be applied to each year without being adjusted for inflation, even though it is being applied as early as 2018, because it is already in 2022 dollars.
In addition, the program is anticipated to have environmental benefits in the form of eagle conservation. The best available estimate of the annual economic contribution of eagle-based tourism to Australia is $2 million, from a 2018 study. This environmental benefit estimate needs to be adjusted for inflation from 2018 dollars to 2022 dollars. After adjusting for inflation, the same annual benefit can be applied to each year in which it is expected to occur. Note that even though the example assumes the first year of benefits will be in 2023, 2022 values are still applied because that is the base year for the report.
Benefits across categories are then added together before being compared to program costs. Even though the benefits accrue to Australia in different ways, estimating their values monetarily converts them into comparable units that can be combined.
Discounting
After inflation adjustments have been made and benefits have been aggregated across categories, discounting is conducted on both benefits and costs by multiplying the estimates for each year by the associated discount factors. This provides annual discounted benefits and annual discounted costs. Subtracting discounted costs from discounted benefits for each year provides the annual discounted net benefits. Summing the discounted benefits, discounted costs and discounted net benefits across all years provides the PV Benefits, PV Costs and NPV. The BCR is then the PV Benefits divided by the PV Costs.
To calculate the NPV for each year, add the annual discounted net benefits across years up to that year. Hence, the NPV in year 1 is just the annual discounted net benefit for that year, while the NPV for year 2 is the sum of the discounted net benefits for years 1 and 2, and so on. The same can be done to calculate the PV Benefits or PV Costs for each year. The Payback Period is the first year in which the NPV is positive. This is the first year in which the PV Benefits outweigh the PV Costs.
In the hypothetical example, the PV Benefits is $42.88 million, the PV Costs is $10 million, the NPV is $32.88 million, the BCR is 4.29, and the Payback Period is 2023, or 5 years after the project start. The full set of calculations is set out below (all dollar values in $ million, 2022 dollars); a short cross-check sketch follows the table.
Year | CPI | CPI factor | Nominal admin & staffing costs | Real admin & staffing costs | Real materials cost | Total real costs | Real economic benefits | Real environmental benefits | Total real benefits | Period | Discount factor (costs) | Discount factor (benefits) | Discounted costs | Discounted benefits | Discounted net benefits | Cumulative NPV
FY2018 | 112.1 | 1.082 | 1.00 | 1.08 | 0.50 | 1.58 | 0.00 | – | 0.00 | -4 | 1.31 | 1.23 | 2.07 | 0.00 | -2.07 | -2.07
FY2019 | 114.1 | 1.063 | 1.10 | 1.17 | 0.50 | 1.67 | 0.00 | – | 0.00 | -3 | 1.23 | 1.14 | 2.05 | 0.00 | -2.05 | -4.12
FY2020 | 116.2 | 1.044 | 1.20 | 1.25 | 0.50 | 1.75 | 3.00 | – | 3.00 | -2 | 1.14 | 1.07 | 2.01 | 3.21 | 1.20 | -2.92
FY2021 | 117.2 | 1.035 | 1.30 | 1.35 | 0.50 | 1.85 | 3.00 | – | 3.00 | -1 | 1.07 | 1.00 | 1.97 | 3.00 | 1.03 | -1.89
FY2022 | 121.3 | 1.000 | 1.40 | 1.40 | 0.50 | 1.90 | 3.00 | – | 3.00 | 0 | 1.00 | 0.93 | 1.90 | 2.80 | 0.90 | -0.99
FY2023 | – | 1 | – | – | – | – | 3.00 | 2.16 | 5.16 | 1 | 0.93 | 0.87 | 0.00 | 4.51 | 4.51 | 3.52
FY2024 | – | 1 | – | – | – | – | 3.00 | 2.16 | 5.16 | 2 | 0.87 | 0.82 | 0.00 | 4.21 | 4.21 | 7.73
FY2025 | – | 1 | – | – | – | – | 3.00 | 2.16 | 5.16 | 3 | 0.82 | 0.76 | 0.00 | 3.94 | 3.94 | 11.67
FY2026 | – | 1 | – | – | – | – | 3.00 | 2.16 | 5.16 | 4 | 0.76 | 0.71 | 0.00 | 3.68 | 3.68 | 15.35
FY2027 | – | 1 | – | – | – | – | 3.00 | 2.16 | 5.16 | 5 | 0.71 | 0.67 | 0.00 | 3.44 | 3.44 | 18.79
FY2028 | – | 1 | – | – | – | – | 3.00 | 2.16 | 5.16 | 6 | 0.67 | 0.62 | 0.00 | 3.21 | 3.21 | 22.00
FY2029 | – | 1 | – | – | – | – | 3.00 | 2.16 | 5.16 | 7 | 0.62 | 0.58 | 0.00 | 3.00 | 3.00 | 25.00
FY2030 | – | 1 | – | – | – | – | 3.00 | 2.16 | 5.16 | 8 | 0.58 | 0.54 | 0.00 | 2.81 | 2.81 | 27.81
FY2031 | – | 1 | – | – | – | – | 3.00 | 2.16 | 5.16 | 9 | 0.54 | 0.51 | 0.00 | 2.62 | 2.62 | 30.43
FY2032 | – | 1 | – | – | – | – | 3.00 | 2.16 | 5.16 | 10 | 0.51 | 0.48 | 0.00 | 2.45 | 2.45 | 32.88
TOTAL | – | – | 6.00 | 6.25 | 2.50 | 8.75 | 39.00 | 21.60 | 60.60 | – | – | – | 10.00 (PV Costs) | 42.88 (PV Benefits) | 32.88 (NPV) | –
Notes: Multiply the total real costs by the discount factor for costs to get the annual discounted costs. Multiply the total real benefits by the discount factor for benefits to get the annual discounted benefits. The totals across all years provide the PV Costs, PV Benefits and NPV. The BCR is the PV Benefits divided by the PV Costs. The Payback Period is 2023.
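As a cross-check on the worked example (illustrative only; the inputs are taken from the assumptions above, and the variable names are not CSIRO tooling), the snippet below reproduces a handful of the tabulated values.

```python
# Assumptions from the worked example (Appendix I).
BASE_CPI = 121.3          # Dec-2021 CPI, base year FY2022
RATE = 0.07               # social discount rate
cpi = {2018: 112.1, 2019: 114.1, 2020: 116.2, 2021: 117.2, 2022: 121.3}

def inflation_factor(year: int) -> float:
    """(Inflation Factor)_y = CPI_base / CPI_y."""
    return BASE_CPI / cpi[year]

def discount_factor(period: int) -> float:
    """Discount Factor = (1 + d)^(-t); costs use period t, benefits use t + 1."""
    return (1 + RATE) ** (-period)

# Eagle-tourism benefit: $2m in 2018 dollars inflated to 2022 dollars (~$2.16m).
print(round(2.0 * inflation_factor(2018), 2))                # 2.16

# FY2020: nominal admin & staffing cost $1.20m plus $0.50m materials (already real).
real_costs_fy2020 = 1.20 * inflation_factor(2020) + 0.50
print(round(real_costs_fy2020, 2))                           # 1.75 (total real costs)
print(round(real_costs_fy2020 * discount_factor(-2), 2))     # 2.01 (discounted costs)
print(round(3.00 * discount_factor(-1), 2))                  # 3.21 (discounted benefits)
```

Small differences from the table can arise from rounding of intermediate values; the workbook should carry full precision and round only for presentation.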
APPENDIX J: Sensitivity analysis
BCA relies on assumptions. A sensitivity analysis is an explicit analysis of the sensitivity of the impact evaluation findings to these assumptions. The amount of effort devoted to this task should reflect:
• the purpose of the evaluation (i.e. advocacy, accountability, allocation or analysis);
• the requirements of the audience (e.g. a client might require some degree of sensitivity testing); and/or
• the specific nature of the project (e.g. evaluating the impact of research commissioned to inform public policy development might require a higher degree of scrutiny to assist in the uptake and adoption of the research).
Box F1 provides guidance on the two main approaches to sensitivity analysis (an illustrative sketch follows the box).
Box F1: Conducting a sensitivity analysis
Partial sensitivity analysis
This approach varies one assumption (or parameter or number) at a time, holding all else constant. For example, if the value of life plays an important role in the analysis, an average value of $3.5 million for the value of a statistical life (VSL) might be used in the base case.
Partial sensitivity analysis would involve testing a range of values for the VSL, from $3 million to $15 million, without changing any other assumptions, and then reporting the results. The same process would be applied to test the effect of other uncertain parameters, such as the sensitivity of the BCA to the discount rate used, returning each time to the base case figures for everything except the number in question. Extreme-case sensitivity analysis This approach varies all uncertain parameters simultaneously, picking the values for each parameter that yield either the best- or worst-case scenario. If project impacts are strongly positive even under worst-case assumptions, it strengthens the perceived value of the project. Conversely, if calculated impacts are modest even when using the most favourable assumptions, the project is unlikely to be successful. Which approach? Both approaches are useful. Partial sensitivity analysis is most useful when there are only a handful of critical assumptions, while extreme-case sensitivity analysis is more useful in cases of greater uncertainty. The choice of which approach to use will depend upon the number and type of assumptions made as well as the expectations of the evaluation’s audience. Source: Wholey et al. (2010)
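To make the two approaches in Box F1 concrete, the sketch below is illustrative only and not part of the Guide; the toy NPV model, the scenario values and the parameter names are assumptions chosen purely to show the mechanics of a partial sweep and an extreme-case comparison.

```python
def npv(vsl_m: float, adoption: float, rate: float,
        lives_saved_per_year: float = 0.3, cost_m: float = 5.0,
        years: int = 10) -> float:
    """Toy model: NPV ($m) where the annual benefit is lives saved x VSL x adoption."""
    annual_benefit = lives_saved_per_year * vsl_m * adoption
    pv_benefits = sum(annual_benefit / (1 + rate) ** (t + 1) for t in range(years))
    return pv_benefits - cost_m

# Partial sensitivity analysis: vary the VSL only, holding all else at base-case values.
for vsl in (3.0, 3.5, 5.3, 10.0, 15.0):
    print(f"VSL ${vsl}m -> NPV {npv(vsl, adoption=0.6, rate=0.07):+.1f}m")

# Extreme-case sensitivity analysis: worst and best combinations of the uncertain inputs.
worst = npv(vsl_m=3.0, adoption=0.3, rate=0.10)
best = npv(vsl_m=15.0, adoption=0.9, rate=0.03)
print(f"Worst case NPV {worst:+.1f}m, best case NPV {best:+.1f}m")
```

In a real evaluation the same sweep would be run over the workbook's determining assumptions (for example the adoption profile, attribution share and discount rate), reporting the low scenario at a 10% discount rate and the high scenario at 3%, as the checklist in Appendix D recommends.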