Seminar 9: Evaluating the Performance of Competition Authorities (4 December)

For this seminar, we have a look at two recent contributions to the question of how to assess the performance of competition agencies: the OECD study is quite wide-ranging and is used by several NCAs to benchmark their performance.  Bill Kovacic is a former commissioner of the FTC and has extensive experience using various assessment methods.  The chapter by Lodge and Wegrich gives a wider perspective on ‘regulation inside government’ – see if it adds perspectives to the other two papers.



Lodge and Wegrich Managing Regulation chapter 6 (to be distributed)

OECD New indicators of competition law and policy in 2013 for OECD and non-OECD countries ECO/WKP(2013)96 (link)

Kovacic ‘How Does Your Competition Agency measure Up?’ (2011) 7(1) European Competition Journal 25

12 comments on “Seminar 9: Evaluating the Performance of Competition Authorities (4 December)”

  1. Marita says:

    Bearing in mind that it is extremely difficult (if not impossible) to compare the effectiveness of different competition agencies, I remain sceptical about the ability of the OECD’s indicators to measure the strength of various competition regimes well. They might be slightly better at comparing the scope of those regimes, since many of the questions adding up to the discrete indicators concern tools at the disposal of individual agencies. However, the indicators do not allow for the assessment of the quality of competition agencies’ work or the impact their decisions have on the market. Any such assessment will inevitably be subjective to an extent, and this is already visible at the level of selecting the indicators. It is not surprising that OECD countries scored better, since the assessment was performed according to OECD standards of what constitutes good competition policy. By this I do not want to suggest any bad faith on the part of the OECD, just to point out that no institution holds THE answer to what constitutes the ‘best’ standard.

    At the same time, I agree with Kovacic that having a clear standard of assessment would be beneficial for assessing the work of competition agencies. My worry, though, is that use of the indicators will have an effect similar to counting the number of initiated/finished cases, which is to distort the aims of the agency and to negatively affect the elements of its work which are less readily quantifiable.

    I also remain sceptical about the usefulness of ‘peer review’, which is seen by Kovacic as one step towards improving the assessment of competition agencies. From experience I can tell that a peer review, at least in its formal form, will hardly be informative of the agency’s performance. Indeed, things would need to be very bad for an agency to be critical of another one in circumstances where they retain a close relationship (a problem acknowledged by Lodge and Wegrich). At the European level, it would also involve a certain level of political awkwardness for one NCA to be openly critical of another.

  2. Christopher Johnson says:

    This week I have found myself largely agreeing with the pieces I have read. They are well explained (although the two articles are extremely repetitive) and any assertions are sufficiently qualified when necessary. Therefore, my comment this week is rather tangential.

    Lodge and Wegrich suggest that one approach that may be adopted to overcome the problems inherent in managing government agencies is “contrived randomness” where uncertainty and unpredictability in rules and enforcement combine to prevent opportunism.

    The “contrived randomness” idea reminded me of a recent visit to Berlin. On a historical city tour I was told how, in order to prevent defections, the soldiers guarding the East German side of the wall would be put into pairs randomly before each shift. The idea of this being that the lack of knowledge of one another would act to keep both individuals in check. Although this was effective, I am told that it had a negative impact upon morale.

  3. Kayahan says:

    Any approach seeking to establish a methodology for evaluating institutional performance should be sensitive to the administrative culture of the country under observation. The OECD study in this regard suffers from the ordinary weakness of quantitative approaches to evaluating social phenomena, namely that it cannot elaborate on the correlations between actual performance and perceptions of it – not that the OECD argues otherwise. The study may be a valuable tool in determining general policy objectives at the higher levels of government and inter-governmental cooperation, but it is not useful in determining the successes or failures of individual national systems. Kovacic is in a better position to suggest a methodology to assess the performance of the institutions he has been a part of, whose culture, politics and language he knows (Lodge and Wegrich agree that, in governmental control over public bodies, politics matters above all). Kovacic’s proposals might not be applicable in any other jurisdiction (even his general suggestion that evaluation should be more sensitive to results than to action might be difficult to implement in countries whose public bodies cannot easily procure the expertise for detailed impact assessment), but this does not reduce the potential effectiveness his methods might have in the US.

    Lodge and Wegrich’s chapter is quite helpful in analysing the pros and cons of different methods of governmental ‘regulation’ of public bodies. I think their conclusion that methods other than oversight should be used jointly is generally correct, as using separate control mechanisms will increase the redundancy of information, reducing the possibility of false conclusions. However, the individual pros and cons of the methods the authors point out could change from country to country, even from institution to institution in the same country. But this problem, I think, is inevitable, as probably nothing short of an ethnography of the relevant institutions would precisely determine the benefits and limitations of control methods for a given institution; therefore contributions from ex-employees of the relevant public bodies, like Kovacic’s, are very valuable for policy makers, as these are located at the best level of observation possible for the task at hand.

  4. Maria H says:

    I mostly agree with Kovacic’s article. His points are quite clear and relatively straightforward: we need competition agencies with a long-term focus whose effectiveness is measured from an outcome perspective, instead of a purely activity- and numbers-based approach. His suggestions for how this could be achieved, or at least improved, seem logical.

    I find, however, the way in which he builds his argument at the beginning of the article odd. To me, the shortcomings of the agencies, but more importantly, the shortcomings of the tests that are currently in place are not well discussed. “Many commonly used techniques for evaluating agency performance have serious flaws.” – As far as I can see, he then mentions only one technique: looking at the number of cases an agency brings forward.

  5. Maria C says:

    I have found the three readings for this session most interesting. I would like to comment on some of the ideas I have found in Lodge and Wegrich’s chapter. Like them, I agree that an optimal ‘Regulation inside Government’ should rely on a combination of different approaches. They present three alternative strategies to traditional oversight: competition, mutuality and contrived randomness. However, I have found it difficult to imagine the extent to which some of these could be applied to evaluate the performance of competition authorities.
    First of all, I find it difficult to imagine to what extent “contrived randomness”, if consisting of surprise inspections, can be useful to assess the efficiency of a competition authority, or even worse, “volatile or inscrutable standards” to scrutinize the performance of competition authorities, when the goals competition agencies pursue should be “clear” and “well-specified”, as argued in Kovacic’s paper. If this indicates the need to define with clarity what is expected from competition authorities, applying “inscrutable standards” to evaluate their performance may be counterproductive. I have found it very difficult to think of other ways to “surprise” competition authorities, given that most of their work relies on economic and legal analysis in preparation for litigation or policy advocacy, and in this sense, it is difficult to introduce a certain element of ‘surprise’ into their evaluation.
    I also fail to see how useful “competitive pressures” – such as league tables – could be for the purpose of evaluating the effectiveness of competition authorities beyond ‘national pride’. On the one hand, because the functions and objectives of competition authorities – while sometimes in collision with the functions and objectives of other government agencies – are specific to them, and therefore, they do not strictly ‘compete’ in the performance of their objectives with other government agencies. On the other, because the cases each national authority is entitled to control or study do not depend on competition among different national authorities, but on the definition of the geographical market and other elements foreseen in highly convergent rules.
    I have more hope, nevertheless, in ‘mutuality’, understood as ‘rules set through participative processes’. In this sense, Kovacic also highlights the need to develop a set of common criteria, arguing that “such a framework facilitates the assessment of agency performance across different eras, and international acceptance of standards promotes a deeper understanding of individual systems and permits comparisons across jurisdictions”.

  6. Theodosia says:

    I found this week’s reading material extremely useful and beneficial. I particularly enjoyed reading the article by W. Kovacic et al, which analyses how a competition agency should function and how it should be evaluated. The article is built on the argument that “brilliant theory without skillful implementation is a bad match”. I totally agree with this point of view. Indeed, a competition agency should not only issue guidelines and hire brilliant competition experts. It should do more than that. In this regard the article gives a valuable insight into the variety of goals an effective and efficient agency should aim to achieve. According to the authors, one central focus of attention should be the measurement of competition agency effectiveness. Indeed, as they maintain, without consistent, meaningful performance measures, it is difficult to make sound judgments about agency quality. In this respect, as the authors underline, a single-minded focus on prosecution events may divert the agency’s attention from other policy instruments that might be better suited to solving a specific competition policy problem. Such instruments might be the preparation of reports or guidelines, or the organization of workshops. The authors also suggest that agencies should focus on the long-term quality of their services. Long-term investments in capability – in human talent and institutional knowledge – may strengthen the agency’s reputation for excellence and may also inspire citizens’ confidence in government by showing that public institutions truly “work”. According to the authors, these long-term goals should be clearly defined so that transparency is ensured and public discussion is facilitated. Finally, the article suggests that competition agencies should establish networks with academia and with research institutes focusing on competition law and economics.
While I was reading the article I was wondering to what extent the Commission follows the suggestions of the authors. My impression is that the Commission invests more in achieving a deterrent effect than in increasing transparency and promoting public dialogue. In this regard, I consider that the Commission focuses more on its enforcement activity and less on informing consumers and market participants about how competition rules should apply. In my opinion, the Commission invests less in transparency in order to reinforce deterrence.

  7. Agnieszka says:

    Last Thursday the European Parliament voted on a resolution concerning the digital market, which included the following words:

    9. Stresses the need to ensure a level playing field for companies operating in the digital single market in order for them to be able to compete; calls, therefore, on the Commission to properly enforce EU competition rules in order to prevent excessive market concentration and abuse of dominant position and to monitor competition with regard to bundled content and services;

    10. Notes that a level playing field for companies in the digital single market must be ensured in order to guarantee a vibrant digital economy in the EU; stresses that a thorough enforcement of EU competition rules in the digital single market will be determinant for the growth of the market, consumer access and choice and competitiveness in the long term; highlights the importance of affording consumers the same protection online as they enjoy in their traditional markets;

    Granted, the resolution is a non-binding document and only expresses the opinion of the European legislature. Still, it has been widely interpreted as being oriented against Google, with wide coverage (e.g. The Economist front page), including comments linking the ‘aggressive’ European stance to difficulties in TTIP negotiations. This goes a long way to show that for certain large-scale cases, the competition authorities operate in a very complicated framework, where various interests and policy objectives interact, and these can go beyond welfare and efficiency analysis (a point which we have made in the seminar before, but the case warrants its repetition). The fallout from a simple resolution goes a long way to show that competition policy does not exist in a vacuum, especially in novel and globalized sectors, such as the digital market, where new questions are raised. This also highlights the difficulties which arise in trying to set benchmarks for the effectiveness of competition policy, especially across jurisdictions (OECD paper).

    The case of Google, and the role that competition policy is expected to play there, raises another interesting issue in the context of the Kovacic et al paper, which makes a proposal concerning the necessity of reporting on individual cases both from the perspective of the particular outcome achieved and the way a given case forms part of a general policy strategy and direction. The interplay between individual case handling and overall strategy is a difficult one, especially within a single institution, where one-track mindedness often occurs and separating micro- and macro-scale approaches is very costly. Beyond questions of the legitimacy and appropriateness of the Kovacic proposal within a given institutional framework, a very practical question can therefore be posed as well: how to set up within an institution an appropriate balance between case-by-case approaches and long-term strategic planning, and how the interplay between the two elements should be evaluated. DG COMP has separate units for strategic planning and sectoral case handling – but does it work?

  8. Jonathan says:

    It is very difficult to measure the effectiveness of competition agencies, especially when there is a lack of well-defined and broadly accepted standards for determining how to evaluate a competition agency (Kovacic).

    I agree with Kovacic that counting the number of cases an agency has begun does not indicate its effectiveness, because it does not tell us much about the effect of the cases and includes small, insignificant cases in the number. His solution is for agencies to participate in networks such as the International Competition Network to identify characteristics of effective performance and to establish common evaluation methodologies. Kovacic further explains that in order to measure the efficiency of an agency, you should give two grades: one to measure the agency by present-day standards, and a second to assess the agency’s contributions to policies and analytical concepts over time. However, giving an “incomplete” grade in the present does not really help us measure effectiveness right now. Of course, in hindsight we can see how effective the agency has been, but what if we need to know the effectiveness of the agency right now? I also agree with Marita with regard to the idea of expanded reliance on peer review. Agencies have relationships with each other and will not like to negatively review their peer agencies. I do believe that improved data collection and disclosure will improve the quality of assessments of agencies’ effectiveness; however, why would an agency that is not being as productive as its peers want to disclose that information?

  9. Elias says:

    Lodge and Wegrich point out that the multiplication of free-standing regulatory bodies controlling governmental activity contributed to the emergence of the debate on regulating inside the government. In this regard, they mention for instance the institution of the Ombudsman. Interestingly, the institution of the European Ombudsman, which controls the compliance of EU institutions with the principle and fundamental right of “good administration” (Art. 41 of the Charter of Fundamental Rights of the European Union), has in recent years investigated several cases against the EU Commission within the field of competition law. For instance, in the Intel case, the European Ombudsman blamed the Commission for its improper handling of exculpatory evidence. In 2014, the European Ombudsman also criticised DG Competition and its former Commissioner Almunia for the handling of a complaint alleging the grant of unlawful State aid to four Spanish football clubs.

    Although some aspects of good administration are reflected in the OECD indicator for “probity of investigation” (e.g. procedural fairness, accountability), this dimension is largely absent from the OECD report and the Kovacic article. In my view, an indicator which takes into account the principle of good administration, going beyond mere legal issues such as the procedural rights of parties in competition cases, could perhaps enrich the evaluation of the performance (only in terms of effectiveness?) of competition authorities.

  10. I think that the OECD report is a very valuable tool for getting an overview of, or doing research on, the competition systems that are out there. Assuming best conditions, the indicators can show how agencies compare to each other.
    Also, both the excerpt from Lodge and Wegrich and the paper by Kovacic represent a valuable addition to the literature, in terms of general agency/regulation literature (L&W) and more specific literature (Kovacic). However, I couldn’t help but notice that I always came back to the independent-agency literature. (I might be biased by my LL.B thesis in that regard.) While it can also be debated how far different agencies can be compared, at least they should all be independent agencies. Literature such as Kovacic’s almost always deals with agencies that are in some way seen as independent (though as he looks at the FTC and DOJ, maybe the DOJ should be excluded in that regard).
    This is important because, while comparing these agencies in totally different environments is already hard enough, without independence the comparison is likely to be even more unreliable. Independence is also measured by the OECD indicators, though to me it is not clear how the scores are calculated, and it is notable that there is no separate measurement for the EU / the EU competition regime.
    While there is no hard evidence (and maybe it would even be unprovable) that policy choices / politics by the executive (such as the Commission / DG Comp) influence the cases pursued by an agency such as DG Competition, when listening to the speeches by Commissioners, and especially those heading DG Comp, this does not seem far-fetched: competition law was first heralded as a tool to integrate the market, and then step by step was supposedly also seen as a tool for other ends. The latest resolution by the EP (as alluded to by Agnieszka above) is only the latest, and perhaps worst, tip of the proverbial iceberg.
    This importance of independence is also found in all of today’s readings, though in different forms, ranging from keeping a balance inside the agency (and towards the outside) (Lodge & Wegrich), over the OECD measurements, to the progressive approach of Kovacic, which already includes consumer protection objectives in the measurement of agency performance – which seems only logical, but definitely does not make keeping independence, balance and measuring agency performance any easier.

  11. Marcos says:

    Kovacic’s paper gives an account of how the activity of competition agencies can be measured. What I would like to highlight in his article is the suggestion that agencies should improve their processes of data collection and disclosure. In my opinion, this is an important factor that would contribute to the quality of the evaluation of an agency’s activities, since the mere “reporting of aggregate measures of activity” seems to be insufficient to determine the effectiveness of the agency. After all, if knowing the number of cases initiated by the agency does not fully grasp the impact of the agency’s activity, more data must be made available in order to better assess it. Kovacic then suggests a template that would contain some of the essential information related to the enforcement of the law. A second suggestion would be to involve external actors that have some knowledge of how the agency operates (law firms, academics, etc.). This seems very interesting, considering that peer review might be viewed with suspicion, and even the author is very cautious when it comes to the subjectivity of peer review, given the institutional environment and how things can be perceived (bundling as a pro-competitive/anticompetitive measure).

  12. Noguier Alice says:

    I really like the way Kovacic, Hollman and Grant’s paper stresses the crucial importance of the implementation of competition law by competition authorities. Indeed, what use is a good law if it is badly implemented?
    Competition agencies have great power in the sense that their enforcement choices have a decisive influence on the evolution of the law. This is especially true in Europe, where competition law enforcement is mainly public (by contrast, this influence is smaller in the US, where private enforcement is clearly dominant). This is why it is so important to assess the quality and efficiency of their activity.
    However, measuring the efficiency of competition authorities’ activity is not easy, essentially for two main reasons: the objectivity of the evaluation and its cost. Indeed, it is impossible to find a completely objective criterion that could be used to assess the action of competition authorities in a very transparent manner. It is difficult to isolate the effect of authorities’ actions on the economy. Moreover, the entities in charge of the evaluation cannot be truly objective. Whether it is in-house counsel, economic consultancies or law firms, the examiners will always be more or less biased. Finally, evaluating competition agencies is costly both in terms of time and money. It means that a part of the budget devoted to competition authorities has to be spent on evaluation rather than on their implementation activities.
    Nevertheless, as the authors of the paper stress, an evaluation is needed even if it is not perfect. Multinational networks seem to be one of the best means of elaborating evaluation techniques.
