The event gathered 165 participants from 20 countries (31 presenters/moderators and 134 other participants). 38 per cent of the participants were guests from abroad, and the remaining 102 were Lithuanians. The number of conference participants was similar to that of the 2011 conference (then there were 167 participants, of whom 55 were foreigners and 112 Lithuanians). The evaluation conference of 2013, which was also a Lithuanian EU Council Presidency event, attracted a record number of participants: 192 guests (88 foreigners and 104 representatives of Lithuania).
This year's conference was devoted to the use of evaluation results and to evaluating the impact of evaluation itself. The conference programme consisted of plenary and parallel sessions as well as an expert discussion on challenges and solutions in promoting the use of evaluation results.
A summary of the conference proceedings is presented below.
Measures to promote the use of evaluation results. Conference participants stressed the role of evaluation contractors (specialists) in “translating” evaluation results into language that is understandable to policy makers. Such “translation” means delivering relevant evidence (evaluation results) at the right moment of the political agenda. Although the use of evidence and evaluation depends largely on the general culture of public management, attention to evidence and evaluation results is currently increasing in the EU as well as in international and national institutions. The use of evaluation results and the implementation of their recommendations are determined not only by quality, but also by the organization of the evaluation process, the involvement of interested stakeholders, and the ability to communicate relevant evaluation results in a way that inspires and encourages stakeholders to act (several conference speakers emphasized the importance of such psychological factors).
Measures to increase the benefits of evaluation use, and warnings about the quality and reliability of evaluation results. With the growing number of evaluations and their results, the importance of evaluation synthesis (systematic reviews) and metaevaluations is increasing. Both in Lithuania and in other countries this receives rather little attention; it is therefore necessary that each new evaluation be initiated only after an assessment of the results already available and of the gaps in the evidence. Speakers drew attention to the methodological limitations of evaluations. For instance, it is dangerous to give prominence to quantitative evaluation results, as they oversimplify reality and are not always based on valid and verifiable assumptions. Because of a lack of knowledge, it is difficult for contractors and policy makers to understand and assess such limitations. Evaluations of Cohesion Policy are rather too focused on the impact of EU fund investments, while the broader benefits of EU membership are left unevaluated. It is important to evaluate not only the demand-side effect of EU fund investments (which in many cases is temporary and ends with the end of financial support), but also the supply-side effect, which concerns whether the investments contributed to greater efficiency, whether the changes are sustainable, etc.
Limitations of evaluation to keep in mind and act on. Evaluations do not have to be exclusively quantitative and methodologically advanced, because such evaluations require very comprehensive, high-quality data, which is not always available. Sometimes qualitative research is enough to highlight tendencies, understand the situation and offer solutions. Limitations of data quality and availability are common even in international organizations that pay a lot of attention to gathering and verifying data. What matters for increasing the impact and benefit of evaluation is not the complexity of the evaluation method but the reduction of evaluation fragmentation. Fragmentation can be understood as an insufficient number of evaluations relative to the scope of investments, or as too strong an orientation towards the financial source (for example, a fund) rather than the policy goal (for example, so far there are no evaluations, at the EU or country level, of the combined impact of investments from different funds on the development of specific territories, for instance rural ones).
First plenary session. What factors influence evaluation use and impact? This session discussed two main questions of evaluation use. Bastiaan de Laat from the European Investment Bank stressed that the use of evaluation results is still problematic. Although good evaluation technique may increase the credibility of an evaluation, it is not the main factor that determines evaluation uptake. Several other factors come into play, e.g. timeliness, senior management support and the evaluation process itself. Systems ‒ and notably incentives ‒ for the concrete follow-up of recommendations are also important. Ramūnas Dilba from the Lithuanian Ministry of Finance and Lina Kriaučiūnienė from Visionary Analytics presented conclusions about the benefits and impact of evaluations carried out in Lithuania. The main finding was that around half of evaluation recommendations could be considered high quality, and 70 per cent of them were implemented. Most evaluations contributed to better management and administration of EU co-funded programmes; however, the role of new knowledge in designing better programmes (i.e. reflection on objectives and implementation instruments) was considerably smaller. Karol Olejniczak from Warsaw University highlighted the importance of knowledge brokering in the use of evaluation results. In the context of Cohesion Policy, evaluation units play the role of knowledge brokers. As intermediaries between evidence producers and users, evaluation units are in charge of accumulating and steering knowledge flows. This capacity, although promising, is difficult to develop because it requires experimentation and real-life experience.
Second plenary session. How to conduct evaluation to make it reliable and influential? The first presenter of the second plenary session, prof. David Gough from University College London, drew attention to the fact that there is little investment in the synthesis of primary research. In order to improve the decision-making process, it is necessary to consider all available data and thus to carry out primary research synthesis. One form of evidence synthesis is metaevaluation ‒ the synthesis of previously undertaken research findings and the preparation of a systematic review. In his presentation prof. D. Gough reviewed techniques of evidence synthesis and evaluation use, drawing on examples ranging from the London 2012 Olympics to the development of practice guidance in the UK National Health Service. The second presenter, Grzegorz Gorzelak from Warsaw University, discussed the experience of the new EU member states in implementing Cohesion Policy. The main findings show that in almost all the EU states there was evident progress in developing infrastructure (communication, environment). On the other hand, the impact of Cohesion Policy on the economic potential and innovativeness of the new EU member states was less evident. It is recommended that future Cohesion Policy investments be devoted more to the creation of innovative economic structures and the development of research. According to the third presenter, Andrea Saltelli from the University of Bergen, the prevailing modern positivistic model of science for policy, known as ‘evidence-based policy’, is based on dramatic simplifications and compressions of the available perceptions of the state of affairs and possible explanations. This model can therefore result in seriously flawed prescriptions.
Evidence-based policy should be replaced by robust policy, where robustness is tested with respect to feasibility (compatibility with processes outside human control); viability (compatibility with processes under human control, in relation to both the economic and technical dimensions); and desirability (compatibility with a plurality of normative considerations relevant to a plurality of actors).
Third plenary session. Evidence-based policy making and dissemination of evaluation results in the real world: challenges and solutions. The first presenter of the session, Tony Verheijen, representative of the World Bank, emphasized that effective engagement with governments is essential in order to implement public policy goals (reaching more people with better health, education and infrastructure services, etc.). The creation of national strategies embodying public policy goals should be based on evidence. The World Bank proposes a new evidence-based policy tool ‒ the Systematic Country Diagnostic (SCD) ‒ which provides an in-depth analysis of the critical constraints to sustainable and inclusive economic development. This tool provides an evidence-based platform for discussion of national priorities. While country strategies themselves remain the outcome of a discussion of client needs and interests and of what the World Bank as an institution can provide, there is early evidence that the introduction of the SCD can create a different dynamic in that discussion. The authors of the second presentation, Zsuzsanna Lonti and Sorin Dan from the OECD, drew upon the OECD's vast experience in collecting, using and disseminating evidence on government performance in OECD member and non-member countries. The presentation introduced the key OECD instruments for supporting public governance and illustrated how the OECD uses these instruments to foster government performance. They include tools such as Government at a Glance and the Public Governance Review series. Government at a Glance offers a unique international comparison of more than 50 government indicators that cover the entire policy process, from input to outcome indicators.
The Public Governance Reviews consist of analyses of selected public governance issues and explore, among other things, how the centre of government promotes and supports sound performance management, evidence-based policy making, open government, integrity, procurement, HRM and budgetary management. The authors also stressed that the exchange of good practice examples could foster countries' learning and development.
Session A: Counterfactual impact evaluation: challenges, results and lessons. The first speaker of the session, Dovilė Žvalionytė, presented the results of a counterfactual impact evaluation implemented in collaboration with the Lithuanian Ministry of Social Security and Labour. The evaluation, funded by the European Commission, revealed that wage subsidies have a positive impact on employment and income after the end of the project, whereas vocational training has no positive impact on either the unemployment level or income. D. Žvalionytė also drew attention to the challenge evaluators face in presenting the results of methodologically rigorous counterfactual impact evaluations. Attila Beres from Equinox Consulting presented counterfactual impact evaluation results for two types of subsidies ‒ repayable and non-repayable ‒ for small and medium-sized enterprises. The evaluation, carried out in two regions of Hungary, confirmed the conclusions of some previous research that repayable subsidies for business are more effective. A. Beres also emphasized that both types of subsidies have more significant effects in less developed regions. Normunds Strautmanis, representative of the Latvian Ministry of Finance, presented a counterfactual evaluation, carried out together with the European Commission's Centre for Research on Impact Evaluation (CRIE), of the impact of employee training on raising the competitiveness of enterprises in Latvia, as well as the essential conditions for successful evaluation. According to Strautmanis, both the contractor and the evaluator should understand well the context of evaluations and the methods applied, and the availability of micro data should be guaranteed at the intervention planning phase. The last speaker of the session, Giulia Santangelo from the CRIE, devoted some of her presentation to introducing the CRIE. Speaking about the evaluation by the Latvian Ministry of Finance, G.
Santangelo stressed that it was complicated by the lack of important data. The importance of the amount and quality of data provided to evaluators for successful counterfactual impact evaluation was also emphasized in the discussion. The moderator of the session, Santiago Loranca Garcia from European Commission DG EMPL, drew attention to the need for quality control of counterfactual impact evaluations.
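The counterfactual logic discussed in Session A can be illustrated with a minimal sketch. The example below uses difference-in-differences, one common counterfactual technique; it is not drawn from any of the evaluations presented, and all figures are hypothetical. A real evaluation of the kind discussed would rest on micro data and carefully constructed control groups.

```python
# Minimal sketch of a counterfactual impact estimate via
# difference-in-differences (DiD). Hypothetical, illustrative figures only.

def did_estimate(treated_before, treated_after, control_before, control_after):
    """Return the DiD estimate of an intervention's impact.

    Each argument is the mean outcome (e.g. monthly income) for the
    treated or control group, before or after the intervention.
    """
    treated_change = treated_after - treated_before
    control_change = control_after - control_before
    # The control group's change approximates what would have happened
    # to the treated group without the intervention (the counterfactual).
    return treated_change - control_change

# Hypothetical example: mean monthly income of subsidised vs. unsubsidised workers.
impact = did_estimate(treated_before=600, treated_after=700,
                      control_before=610, control_after=650)
print(impact)  # 100 - 40 = 60: estimated impact attributable to the subsidy
```

The subtraction of the control group's change is what distinguishes the counterfactual estimate from a simple before-and-after comparison, which is why speakers stressed the availability of micro data on both participants and comparable non-participants.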
Session B: Impact evaluation of public investment on economic growth and cohesion: methods, results and lessons. Session B was opened by Jerzy Pienkowsky from European Commission DG REGIO. He drew attention to the fact that in recent years a number of papers have used econometric approaches to address the impact of Cohesion Policy (CP) funds on economic growth and convergence. However, only a small number of studies use good-quality data on CP transfers. Moreover, some important variables, such as national policies or the quality of governance, are largely excluded from the regressions, and the conclusions on CP drawn by these studies are not well developed and are sometimes contradictory between studies. The second presenter of the session, Biagio Perretti from Basilicata University, discussed the results and impact of Cohesion Policy in the 2007–2013 period on the less developed regions of the South of Italy (Mezzogiorno). Despite EU support, the gap in economic performance between Mezzogiorno and Central and Northern Italy did not narrow. The speaker emphasized that programming activities in Mezzogiorno are rather slow, and the absorption of funds is dramatically delayed. According to B. Perretti, in the future the programming of Cohesion Policy funds should be faster and based on evaluations, and the implementation of the policy should be guided by examples of good practice. The session was concluded by the presentation of Andreas Lillig from European Commission DG AGRI and Hannes Wimmer from the European Evaluation Helpdesk for Rural Development. The speakers highlighted that a core element of DG AGRI's evaluation practice is the systematic recourse to independent external expertise. The presentation described the purpose of the Common Monitoring and Evaluation System, provided an overview of its components (common intervention logic, indicators, evaluation questions and data sources, etc.)
and assessed how these components will help to overcome the evaluation challenges.
Session C: Evaluating the impact of R&D investment: methods, results and lessons. Three evaluations were presented in this session. Klaudijus Maniokas from ESTEP Vilnius focused on the results of an evaluation of the impact of public investment on territorial cohesion in Lithuania and raised the question of whether EU structural support has moved Lithuania towards an economy generating high economic value. The evaluation analysed macro-, meso- and micro-level factors that affect Lithuania's competitiveness indicators. Pijus Krūminas from the Research and Higher Education Monitoring and Analysis Centre (MOSTA) presented an evaluation of R&D investment. The study analysed the return on investment and the impact of the High Technology Development Programme (HTDP) for 2011‒2013 and the IntelektasLT instrument. Counterfactual analysis was used to evaluate the impact of the instrument on the number of enterprise employees, supplemented with a theory-based evaluation method. The third speaker of the session, Peter Varnai from Technopolis Group, presented an evaluation of a capacity-building programme for the International Union Against Tuberculosis and Lung Disease financed by the UK Department for International Development (DFID). The evaluation aimed to analyse the overall effectiveness and impact of the programme. Its findings informed policy makers about the drawbacks of the programme and contributed to the design of its next five years. The discussion focused on the specific methodologies presented in the evaluations and their effect on the evaluations' conclusions. The impact of joining the Common Market on the competitiveness of the Lithuanian economy was also discussed.
Session D: Enhancing and evaluating the impact of evaluation: instruments, results and lessons. The session analysed the possibilities for increasing the impact of evaluations during their implementation, and tools that may increase the use of evaluations. Krišjanis Veitners from the Latvian Evaluation Society presented a comprehensive theoretical model, based on academic studies, of how to learn from the evaluation process at each of its stages. Using this model, K. Veitners analysed several evaluations conducted in Latvia and concluded that although the potential to learn from evaluations was large, not all opportunities were used. The audience of the session agreed that the potential of evaluation is usually not used to its full extent. There are several reasons for this: the lack of time to go deeper into the various aspects and results of evaluations, an unsuitable organizational culture, and the fact that many contractors are not themselves users of evaluations. Katre Eljas-Taal, a Technopolis Group representative from Estonia, presented a tool which has helped to increase the quality of evaluations in Estonia. On the initiative of evaluators, a short document was prepared which generalised the practice of each evaluation phase. Both contractors and evaluators now use this document, and it helps to resolve various questions related to the implementation of evaluation. Participants of the session agreed that such a tool is useful for implementing a high-quality evaluation. Razvan Ionescu from the Ministry of Regional Development and Public Administration of Romania reviewed evaluation findings and their use, based on the relevance of recommendations and the extent of their implementation, from individual evaluations conducted in the last five years within the Managing Authority for the Operational Programme Administrative Capacity Development (OP ACD). Tomasz Kupiec from Kozminski University in Warsaw and EGO S.C.
presented an analysis of evaluation use covering almost 50 evaluations of three regional programmes in Poland. The analysis revealed various aspects and factors of evaluation use and their influence on increasing evaluation impact. According to the presenter, one of the most important necessary (but not sufficient) factors for the quality and use of evaluation is a dominant management position in the contracting institution that contributes to a culture favourable to evaluation. This view was supplemented by the opinion from the audience that evaluation results will be used only if the contractor has a real, rather than merely formal, need for the evaluation.
Discussion “Evidence-based policy making and dissemination of evaluation results in the real world: challenges and solutions”. At the beginning of the discussion, representatives of European Commission DG REGIO and DG EMPL gave an overview of the challenges and requirements for evaluations in 2014‒2020. The speakers emphasized that, in order to promote the use of evaluation results, verbal communication is more important than large-scale publicity of the report. Nevertheless, publicity of evaluation reports and public consultations with society are also necessary. The discussion moderator Klaudijus Maniokas summarised the contents of the conference and distinguished three groups of questions: about the ways to use evidence; about the results of Cohesion Policy evaluation and the implementation of recommendations; and about the role of international organizations and evaluation units in promoting the use of evidence. Bastiaan de Laat noted that the use of evaluations could be improved not only by raising questions about the evaluation object, but also by asking how the evaluation results will be used. Andrea Saltelli stressed that evidence and evaluation results do not speak for themselves: evaluation units must act as mediators and translators, translating evaluation results into the language of policy makers. Tony Verheijen from the World Bank noted that the use of evidence depends on its quality as well as on the ability to properly present to policy makers evidence that is not always in line with political priorities. Zsuzsanna Lonti from the OECD expressed the view that gathering high-quality data is a huge challenge, but “gold standard” methodologies are not always needed; approximate evidence is sometimes enough. Kai Stryczinski emphasized that the methods of counterfactual impact evaluation are complex, but theory-based evaluations, which the European Commission also requires, are even more complicated.
Implementing a high-quality theory-based evaluation is more difficult than a counterfactual one. It was also stressed that at the beginning of an evaluation it is necessary to synthesize the available evidence (not only the results of previous evaluations). Finally, Bastiaan de Laat noted that as an evaluation culture takes shape, participation in the evaluation process is no longer a burden but an opportunity to learn.