Econometric Methods for Impact Evaluation

Training course organized by IPC and supported by the Department of Statistics of the University of Brasília (UnB) and the Federal Court of Accounts (TCU)

Duration: 9 April to 2 July 2008

Time and schedule: 9.00am – 12.00pm

April: 9th, 16th, 23rd and 30th
May: 7th, 14th, 21st and 28th
June: 4th, 11th and 25th
July: 2nd

 

Location: Instituto Serzedello Corrêa (ISC) – SEPN, Quadra 514, Bloco B, Lote 7, Asa Norte, Brasilia, DF, Brazil.

 

Registration: Requests should be submitted by e-mail (ipc@ipc-undp.org) by 7 April 2008

 

Audience: 60 representatives of IPC, IPEA, the University of Brasília, Brazilian government agencies and ministries, and UN agencies in Brasília

Reference material is available for download at http://www.ipc-undp.org/evaluation/praticas/

Introduction:

In recent decades, a growing literature in economics and other social sciences has addressed the quantitative empirical analysis of causal relationships. At the same time, policymakers have become increasingly interested in evaluating current and new policies and programmes in order to improve the efficiency and effectiveness of socioeconomic outcomes. In this context, this training course provides a basic introduction to methods for the impact evaluation of policies and programmes. Its modules discuss how methods for estimating causal relationships differ from other statistical methods; the role of experiments in impact evaluation; threats to the internal validity of studies; and the limits to extrapolating results beyond the evaluated context. Each econometric method commonly used in impact analyses is described, and the course also includes practical modules that demonstrate the computational application of each method presented by the lecturers.

 

Course Programme

 

1st Session: Causal Analysis and the Fundamental Problem of Impact Evaluation

9 April 2008

 

Lecturer: Rafael Perez Ribas, IPC

 

Basic Bibliography:

Heckman, J. J. (2008). ‘Econometric Causality,’ Cemmap Working Paper 1/08, IFS, London.

Dowd, B. and R. Town (2002). ‘Does X really cause Y?’ Academy Health, Robert Wood Johnson Foundation, HCFO program, Washington D.C. 24p.

 

Supplementary Bibliography:

Moffitt, R. (2005). ‘Remarks on the Analysis of Causal Relationships in Population Research,’ Demography 42 (1): 91-108.

Heckman, J. J. and E. J. Vytlacil (2007). ‘Econometric Evaluation of Social Programs, Part I: Causal Models, Structural Models and Econometric Policy Evaluation,’ Handbook of Econometrics, v. 6B, pp. 4779-4874.

Heckman, J. J. (2000). ‘Causal Parameters and Policy Analysis in Economics: A Twentieth Century Retrospective,’ Quarterly Journal of Economics 115 (1): 45-97.

Rubin, D. B. (1974). ‘Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies,’ Journal of Educational Psychology 66 (5), 688-701.

Holland, P. W. (1986). ‘Statistics and Causal Inference,’ Journal of the American Statistical Association 81 (396): 945-960.

Rubin, D. B. (1986). ‘Statistics and Causal Inference: Comment: Which Ifs Have Causal Answers,’ Journal of the American Statistical Association 81 (396): 961-962.

Cox, D. R. (1986). ‘Statistics and Causal Inference: Comment,’ Journal of the American Statistical Association 81 (396): 963-964.

Glymour, C. (1986). ‘Statistics and Causal Inference: Comment: Statistics and Metaphysics,’ Journal of the American Statistical Association 81 (396): 964-966.

Granger, C. (1986). ‘Statistics and Causal Inference: Comment,’ Journal of the American Statistical Association 81 (396): 967-968.

Holland, P. W. (1986). ‘Statistics and Causal Inference: Rejoinder,’ Journal of the American Statistical Association 81 (396): 968-970.

Cox, D. R. (1992). ‘Causality: Some Statistical Aspects,’ Journal of the Royal Statistical Society, Series A, v. 155 (2): 291-301.

 

  • Practical Exercises on the computer (see the sketch below)
    • Presentation of Stata resources;
    • The Stata for Windows interface;
    • Using Stata interactively;
    • Recording output in log files;
    • Using do-files;
    • Basic commands;
    • Presentation of the databases used during the course;
    • Exercise describing the contents of the databases.
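
A minimal do-file sketch of this first practical, assuming a hypothetical dataset course_data.dta and illustrative variable names:

    * open a log file, load a dataset and inspect it (file and variable names are illustrative)
    log using session1.log, replace
    use "course_data.dta", clear
    describe                     // variables, labels and storage types
    summarize                    // basic descriptive statistics
    tabulate region, missing     // frequency table of a categorical variable
    log close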

 

2nd Session: Experimental Evaluations

16 April 2008

Lecturer: Fábio Veras Soares, IPC

 

Basic Bibliography:

Duflo, E., R. Glennerster, M. Kremer (2008). ‘Using Randomization in Development Economics Research: A Toolkit,’ Handbook of Development Economics, v. 4, forthcoming.

 

Supplementary Bibliography:

LaLonde, R. J. (1986). ‘Evaluating the Econometric Evaluations of Training Programs with Experimental Data,’ American Economic Review 76 (4): 604-620.

Heckman, J. J. (1996). ‘Randomization as an Instrumental Variable,’ Review of Economics and Statistics 78 (2): 336-341.

Heckman, J. J., J. Smith, N. Clements (1997). ‘Making the Most Out of Programme Evaluations and Social Experiments: Accounting for Heterogeneity in Programme Impacts,’ Review of Economic Studies 64 (4): 487-535.

Behrman, J. and Hoddinott, J. (2001). ‘Programme Evaluation with Unobserved Heterogeneity and Selective Implementation: The Mexican ‘PROGRESA’ Impact on Child Nutrition,’ Oxford Bulletin of Economics and Statistics 67 (4): 547-569.

Moffitt, R. A. (2004). ‘The Role of Randomized Field Trials in Social Science Research,’ American Behavioral Scientist 47 (5): 506-540.

Splawa-Neyman, J. (1923). ‘On the Application of Probability Theory to Agricultural Experiments. Essays on Principles. Section 9,’ Annals of Agriculture Science, pp. 1-51, translated and edited by D. M. Dabrowska and T. P. Speed from the Polish original, Statistical Science 5 (4): 465-472, 1990.

 

  • Practical Exercises on the computer (see the sketch below)
    • Descriptive analyses of data;
    • Mean-comparison and distribution-comparison tests.
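
A sketch of the comparisons an experimental evaluation starts from, assuming an outcome y and a randomized treatment indicator treat (both names illustrative):

    summarize y, detail              // distribution of the outcome
    bysort treat: summarize y        // descriptive statistics by experimental group
    ttest y, by(treat)               // mean-comparison test between treatment and control
    ksmirnov y, by(treat)            // two-sample Kolmogorov-Smirnov test of equal distributions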

 

3rd Session: Introduction to Quasi-Experimental Methods

23 April 2008

Lecturer: Rafael Perez Ribas, IPC

 

Basic Bibliography:

Blundell, R. and M. C. Dias (2000). ‘Evaluation Methods for Non-Experimental Data,’ Fiscal Studies 21 (4): 427-468.

Ravallion, M. (2001). ‘The Mystery of the Vanishing Benefit: An Introduction to Impact Evaluation,’ World Bank Economic Review 15 (1): 115-140.

Heckman, J. J. (1990). ‘Varieties of Selection Bias,’ American Economic Review 80 (2): 313-318.

 

Supplementary Bibliography:

Rubin, D. B. (1977). ‘Assignment to Treatment Group on the Basis of a Covariate,’ Journal of Educational Statistics 2 (1): 1-26.

Heckman, J. J., R. Robb Jr. (1985). ‘Alternative Methods for Evaluating the Impact of Interventions: An Overview,’ Journal of Econometrics 30 (1-2): 239-267.

Angrist, J. D. and A. B. Krueger (1999). ‘Empirical Strategies in Labor Economics,’ Handbook of Labor Economics, v. 3, chap. 23, pp. 1277-1366.

Heckman, J. J., H. Ichimura, J. Smith, P. Todd (1998). ‘Characterizing Selection Bias Using Experimental Data,’ Econometrica 66 (5): 1017-1098.

 

  • Practical Exercises on the computer (see the sketch below)
    • Impact estimation using linear regression;
    • Impact estimation using non-linear models (probit, logit, etc.);
    • Estimating impact heterogeneity.
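
A sketch of these regression-based estimates, with y a continuous outcome, y_bin a binary outcome, treat the treatment dummy and x1, x2 covariates (all names illustrative):

    regress y treat x1 x2              // linear regression: the coefficient on treat is the impact estimate
    probit y_bin treat x1 x2           // non-linear model for a binary outcome
    gen treat_x1 = treat*x1            // interaction term
    regress y treat x1 treat_x1 x2     // impact heterogeneity along x1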

 

4th Session: Difference-in-Differences Methods

30 April 2008

Lecturer: Bruno César Pino de Oliveira Araújo, IPEA

 

Basic Bibliography:

Meyer, B. D. (1994). ‘Natural and Quasi-Experiments in Economics,’ Technical Working Paper 170, NBER, Cambridge MA. Published in Journal of Business & Economic Statistics 13 (2): 151-161.

 

Supplementary Bibliography:

Ashenfelter, O. and D. Card (1985). ‘Using the Longitudinal Structure of Earnings to Estimate the Effect of Training Programs,’ Review of Economics and Statistics 67 (4): 648-660.

Athey, S. and G. Imbens (2006). ‘Identification and Inference in Nonlinear Difference-in-Differences Models,’ Technical Working Paper 280, NBER, Cambridge MA. Published in Econometrica 74 (2): 431-497.

Bertrand, M., E. Duflo, S. Mullainathan (2002). ‘How Much Should We Trust Differences-in-Differences Estimates?’ Working Paper 8841, NBER, Cambridge MA. Published in Quarterly Journal of Economics 119 (1): 249-275.

 

  • Practical Exercises on the computer (see the sketch below)
    • Estimation of differences;
    • Estimation of differences and fixed-effects models;
    • Estimation of cross-sectional difference-in-differences models;
    • Estimating impact heterogeneity using a difference-in-differences model.
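
A sketch of these estimators, assuming a two-period panel identified by id and year, a post dummy for the second period and a treat dummy for the treated group (names illustrative):

    gen treat_post = treat*post                // equals 1 only for treated units in the post period
    regress y treat post treat_post x1         // cross-sectional difference-in-differences
    xtset id year
    xtreg y post treat_post x1, fe             // difference-in-differences with unit fixed effects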


 

5th Session: Matching

7 May 2008

Lecturer: Bruno César Pino de Oliveira Araújo, IPEA

 

Basic Bibliography:

Abadie, A., D. Drukker, J. L. Herr, G. W. Imbens (2004). ‘Implementing Matching Estimators for Average Treatment Effects in Stata,’ Stata Journal 4 (3): 290-311.

 

Supplementary Bibliography:

Heckman, J. J., H. Ichimura, P. Todd (1997). ‘Matching as an Econometric Evaluation Estimator: Evidence from Evaluating a Job Training Program,’ Review of Economic Studies 64(4): 605-654.

Abadie, A. and G. Imbens (2002). ‘Simple and Bias-Corrected Matching Estimators for Average Treatment Effects,’ Technical Working Paper 283, NBER, Cambridge MA.

Rubin, D. B. (1973). ‘Matching to Remove Bias in Observational Studies,’ Biometrics 29 (1): 159-183.

Rubin, D. B. (1979). ‘Using Multivariate Matched Sampling and Regression Adjustment to Control Bias in Observational Studies,’ Journal of American Statistical Association 74 (366): 318-328.

Quade, D. (1982). ‘Nonparametric Analysis of Covariance by Matching,’ Biometrics 38 (3): 597-611.

Ñopo, H. (2002). ‘Matching as a Tool to Decompose Wage Gaps,’ IZA Discussion Paper 981, IZA, Bonn, Germany.

 

  • Practical Exercises on the computer (see the sketch below)
    • Matching applications using ‘nnmatch’;
    • Estimating matching with difference-in-differences;
    • Tests of robustness for matching models.
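
A sketch using the user-written nnmatch command of Abadie et al. (2004), which must be installed separately; variable names are illustrative and the options follow the Stata Journal article listed above, so the exact syntax may differ across versions:

    nnmatch y treat x1 x2, tc(att) m(4)                   // nearest-neighbour matching estimate of the ATT
    nnmatch y treat x1 x2, tc(att) m(4) biasadj(x1 x2)    // bias-corrected matching estimator
    gen dy = y_post - y_pre                               // change in the outcome between the two periods
    nnmatch dy treat x1 x2, tc(att) m(4)                  // matching combined with difference-in-differences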

 

6th Session: Propensity Score and the PSM Method

14 May 2008

Lecturer: Bruno César Pino de Oliveira Araújo, IPEA


 

Basic Bibliography:

Becker, S. O. and A. Ichino (2002). ‘Estimation of average treatment effects based on propensity score,’ Stata Journal 2 (4): 358-377.

 

Supplementary Bibliography:

Rosenbaum, P. R. and D. B. Rubin (1983). ‘The Central Role of the Propensity Score in Observational Studies for Causal Effects,’ Biometrika 70 (1): 41-55.

Rosenbaum, P. R. and D. B. Rubin (1984). ‘Reducing Bias in Observational Studies Using Subclassification on the Propensity Score,’ Journal of the American Statistical Association 79 (387): 516-524.

Rosenbaum, P. R. and D. B. Rubin (1985). ‘Constructing a Control Group Using Multivariate Matched Sampling Methods that Incorporate the Propensity Score,’ American Statistician 39 (1): 33-38.

Dehejia, R. H. and S. Wahba (2002). ‘Propensity Score-Matching Methods for Nonexperimental Causal Studies,’ Review of Economics and Statistics 84 (1): 151-161.

Smith, J. and P. Todd (2003). ‘Does Matching Overcome LaLonde’s Critique of Nonexperimental Estimators?’ Working Paper 20035, CIBC Human Capital and Productivity Project, University of Western Ontario. Published in Journal of Econometrics 125 (1-2): 305-353, 2005.

Dehejia, R. H. (2005). ‘Practical propensity score matching: a reply to Smith and Todd,’ Journal of Econometrics 125 (1-2): 355-364.

Smith, J. and P. Todd (2005). ‘Rejoinder,’ Journal of Econometrics 125 (1-2): 365-375.

Dehejia, R. H. (2005). ‘Does Matching Overcome LaLonde’s Critique of Nonexperimental Estimators? A Postscript,’ unpublished.

Lee, W. S. (2006). ‘Propensity Score Matching and Variations on the Balancing Test,’ Melbourne Institute of Applied Economic and Social Research, University of Melbourne, Australia.

Blundell, R. and M. C. Dias (2000). ‘Evaluation Methods for Non-Experimental Data,’ Fiscal Studies 21 (4): 427-468.

 

  • Practical Exercises on the computer (see the sketch below)
    • Estimation of the Propensity Score;
    • Tests of the balancing property;
    • Graphical analysis of the Propensity Score;
    • Estimation of PSM with different matching techniques.
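
A sketch based on the user-written pscore and att* commands of Becker and Ichino (2002); they must be installed separately, the exact options may vary by version, and variable names are illustrative:

    pscore treat x1 x2, pscore(ps) blockid(block) comsup          // estimate the score and test the balancing property
    twoway (kdensity ps if treat==1) (kdensity ps if treat==0)    // graphical check of common support
    attnd y treat x1 x2, pscore(ps) comsup                        // nearest-neighbour PSM estimate of the ATT
    attk y treat x1 x2, pscore(ps) comsup                         // kernel matching estimate of the ATT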


 

7th Session: Other Methods Based on the Propensity Score

21 May 2008

Lecturer: Rafael Perez Ribas, IPC

 

Basic Bibliography:

Hirano, K. and G. W. Imbens (2001). ‘Estimation of Causal Effects using Propensity Score Weighting: An Application to Data on Right Heart Catheterization,’ Health Services & Outcomes Research Methodology 2 (3-4): 259-278.

 

Supplementary Bibliography:

Hirano, K., G. W. Imbens, G. Ridder (2003). ‘Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score,’ Econometrica 71 (4): 1161-1189.

Abadie, A. (2005). ‘Semiparametric Difference-in-Differences Estimators,’ Review of Economic Studies 72 (250): 1-19.

Wooldridge, J. M. (2002). ‘Inverse Probability Weighted M-Estimators for Sample Selection, Attrition, and Stratification,’ Portuguese Economic Journal 1 (2): 117-139.

Wooldridge, J. M. (2004). ‘Inverse Probability Weighted Estimation for General Missing Data Problems,’ CeMMAP Working Paper 05/04, IFS, London. Published in Journal of Econometrics 141 (2): 1281-1301, 2007.

Lemieux, T. (2002). ‘Decomposing Changes in Wage Distributions: A Unified Approach,’ Canadian Journal of Economics 35 (4): 646-688.

Firpo, S. (2004). ‘Efficient Semiparametric Estimation of Quantile Treatment Effects,’ Econometric Society 2004 North American Summer Meetings 605, Econometric Society. Published in Econometrica 75 (1): 259-276, 2007.

 

  • Practical Exercises on the computer (see the sketch below)
    • Estimation of the semi-parametric estimator for cross-sectional data;
    • Estimation of the semi-parametric difference-in-differences estimator;
    • Estimation of Propensity Score-weighted regressions;
    • Estimation for sample subgroups.
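
A sketch of propensity-score weighting for the ATT in the spirit of Hirano and Imbens (2001) and Abadie (2005); variable names are illustrative:

    probit treat x1 x2
    predict ps, pr                            // estimated propensity score
    gen w = treat + (1-treat)*ps/(1-ps)       // weight 1 for treated units, ps/(1-ps) for controls (ATT)
    regress y treat [pweight=w]               // weighted regression: the coefficient on treat estimates the ATT
    gen dy = y_post - y_pre
    regress dy treat [pweight=w]              // weighted difference-in-differences
    regress y treat [pweight=w] if female==1  // the same estimator within a subgroup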

 

8th Session: Instrumental Variables

28 May 2008

Lecturer: Guilherme Issamu Hirata, IPC

 

Basic Bibliography:

Heckman, J. J. (1997). ‘Instrumental Variables: A Study of Implicit Behavioral Assumptions Used in Making Program Evaluations,’ Journal of Human Resources 32 (3): 441-462.

Altonji, J. G., T. E. Elder, C. R. Taber (2002). ‘An Evaluation of Instrumental Variable Strategies for Estimating the Effects of Catholic Schooling,’ Working Paper 9358, NBER, Cambridge MA. Published in Journal of Human Resources 40 (4): 791-821, 2005.

 

Supplementary Bibliography:

Heckman, J. J. (1996). ‘Randomization as an Instrumental Variable,’ Review of Economics and Statistics 78 (2): 336-341.

Angrist, J., G. W. Imbens, D. B. Rubin (1993). ‘Identification of Causal Effects Using Instrumental Variables,’ Technical Paper 136, NBER, Cambridge MA. Published in Journal of the American Statistical Association 91 (434): 444-455, 1996.

Angrist, J. D. and A. B. Krueger (2001), ‘Instrumental Variables and the Search for Identification: From Supply and Demand to Natural Experiments’, Journal of Economic Perspectives 15 (4): 69-85.

Angrist, J. D. and G. W. Imbens (1999). ‘Comment on James J. Heckman, “Instrumental Variables: A Study of Implicit Behavioral Assumptions Used in Making Program Evaluations”,’ Journal of Human Resources 34 (4): 823-827.

Angrist, J. D. and G. Imbens (1995). ‘Two-Stage Least Squares Estimation of Average Causal Effects in Models with Variable Treatment Intensity,’ Journal of the American Statistical Association 90 (430): 431-442.

Imbens, G. W. and D. B. Rubin (1997). ‘Bayesian Inference for Causal Effects in Randomized Experiments with Noncompliance,’ Annals of Statistics 25 (1): 305-327.

Imbens, G. W. and J. D. Angrist (1994). ‘Identification and Estimation of Local Average Treatment Effects,’ Econometrica 62 (2): 467-475.

Ichimura, H. and C. Taber (2001). ‘Propensity-Score Matching with Instrumental Variables,’ American Economic Review 91 (2): 119-124.

 

  • Practical Exercises on the computer (see the sketch below)
    • Estimation of IV with continuous endogenous variable;
    • Estimation of IV with endogenous dummy;
    • Estimation of two-step models and variance corrections;
    • Estimation of the endogenous switching model (‘treatreg’);
    • Estimation of PSM with IV.
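
A sketch of these IV estimates, with z an assumed instrument for the treatment dummy treat (all names illustrative); treatreg is Stata's command for the endogenous treatment-effects model:

    ivreg y x1 x2 (treat = z)                         // two-stage least squares
    treatreg y x1 x2, treat(treat = z x1 x2)          // endogenous switching model, maximum likelihood
    treatreg y x1 x2, treat(treat = z x1 x2) twostep  // two-step variant with corrected standard errors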

 

9th Session: Regression Discontinuity Design (RDD)

4 June 2008

Lecturer: Guilherme Issamu Hirata, IPC

 

Basic Bibliography:

Imbens, G. W. and T. Lemieux (2007). ‘Regression Discontinuity Designs: A Guide to Practice,’ Working Paper 13039, NBER, Cambridge MA. Published in Journal of Econometrics 142 (2): 615-635, 2008.

Van der Klaauw, W. (2002). ‘Estimating the Effect of Financial Aid Offers on College Enrollment: A Regression-Discontinuity Approach,’ International Economic Review 43 (4): 1249-1287.

 

Supplementary Bibliography:

Hahn, J., P. Todd, W. Van der Klaauw (2001). ‘Identification and estimation of treatment effect with a regression-discontinuity design,’ Econometrica 69 (1): 201-209.

Lee, D. S. and D. Card (2006). ‘Regression Discontinuity Inference with Specification Error,’ Technical Working Paper 322, NBER, Cambridge MA. Published in Journal of Econometrics 142 (2): 655-674, 2008.

DiNardo, J. and D. S. Lee (2004). ‘Economic impacts of unionization on private sector employers: 1984-2001,’ Working Paper 10598, NBER, Cambridge MA.

Buddelmeyer, H. and E. Skoufias (2003). ‘An Evaluation of the Performance of Regression Discontinuity Design on PROGRESA,’ IZA Discussion Paper 827, IZA, Bonn, Germany.

Almond, D. and J. J. Doyle Jr. (2008). ‘After Midnight: A Regression Discontinuity Design in Length of Postpartum Hospital Stays,’ Working Paper 13877, NBER, Cambridge MA.

 

  • Practical Exercises on the computer (see the sketch below)
    • Estimation of Linear RDD;
    • Estimation of Nonparametric RDD;
    • Selection of an optimal bandwidth;
    • Estimation of Sharp and Fuzzy RDDs.
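
A sketch of local linear RDD estimates, assuming a running variable score, an illustrative cutoff of 40 and an illustrative bandwidth of 5:

    gen z = score - 40                          // centre the running variable at the cutoff
    gen above = (z >= 0) if !missing(z)         // assignment indicator
    gen above_z = above*z                       // allow different slopes on each side of the cutoff
    regress y above z above_z if abs(z) <= 5    // sharp design: local linear regression within the bandwidth
    ivreg y z (treat = above) if abs(z) <= 5    // fuzzy design: assignment instruments actual treatment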


 

10th Session: Estimation of Multiple Treatment Effects and Dosage Effects

11 June 2008

Lecturer: Fábio Veras Soares, IPC

 

Basic Bibliography:

Imai, K. and D. A. van Dyk (2003). ‘Causal Inference with General Treatment Regimes: Generalizing the Propensity Score,’ Published in Journal of the American Statistical Association 99 (467): 854-866, 2004.

Hirano, K. and G. W. Imbens (2004). ‘The Propensity Score with Continuous Treatments,’ Published in A. Gelman and X.-L. Meng, Applied Bayesian Modeling and Causal Inference from Incomplete-Data Perspectives, 2004.

 

Supplementary Bibliography:

Lechner, M. (1999). ‘Identification and Estimation of Causal Effects of Multiple Treatments Under the Conditional Independence Assumption,’ IZA Discussion Paper 91, IZA, Bonn, Germany.

Brand, J. E. and Y. Xie (2007). ‘Identification and Estimation of Causal Effects with Time-Varying Treatments and Time-Varying Outcomes,’ Published in Sociological Methodology 37 (1): 393-434, 2007.

Flores, C. A. (2005). Estimation of Dose-Response Functions and Optimal Treatment Doses with a Continuous Treatment, PhD Dissertation, University of California at Berkeley, 199p.

 

  • Practical Exercises on the computer (see the sketch below)
    • Implementation of the Propensity Score for multiple treatments;
    • Implementation of the Generalized Propensity Score (GPS);
    • Tests of the balancing property for the GPS.
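
A sketch under the assumptions of the readings above: a multinomial model for the propensity score with multiple treatment levels, and the normal-density generalized propensity score of Hirano and Imbens (2004) for a continuous dose (variable names illustrative):

    mlogit level x1 x2                              // level codes the multiple treatment categories (e.g. 0, 1, 2)
    predict p0 p1 p2                                // predicted probability of each treatment level
    regress dose x1 x2                              // model for the continuous treatment
    predict dose_hat, xb
    gen gps = normalden(dose, dose_hat, e(rmse))    // GPS under a normal model for the dose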

 

11th Session: Problems of Contamination and Internal Validity

25 June 2008

Lecturers: Fábio Veras Soares and Rafael Perez Ribas, IPC

 

Basic Bibliography:

Heckman, J. J., N. Hohmann, J. Smith, M. Khoo (2000). ‘Substitution and Dropout Bias in Social Experiments: A Study of an Influential Social Experiment,’ Quarterly Journal of Economics 115 (2): 651-694.

Miguel, E. and M. Kremer (2004). ‘Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities,’ Econometrica 72 (1): 159-217.

 

Supplementary Bibliography:

Barrera-Osorio, F., M. Bertrand, L. L. Linden, F. Perez-Calle (2008). ‘Conditional Cash Transfers in Education: Design Features, Peer and Sibling Effects: Evidence from a Randomized Experiment in Colombia,’ Working Paper 13890, NBER, Cambridge MA.

Heckman, J. J. (1991). ‘Randomization and Social Policy Evaluation,’ Technical Working Paper 107, NBER, Cambridge MA. Published in C. F. Manski and I. Garfinkel, Evaluating Welfare and Training Programs, Harvard University Press, Massachusetts, 1992.

 

  • Practical Exercises on the computer
    • Review of the previous practical exercises.

 

12th Session: Problems of External Validity (Scaling Up)

2 July 2008

Lecturers: Fábio Veras Soares and Rafael Perez Ribas, IPC

 

Basic Bibliography:

Duflo, E. (2003). ‘Scaling Up and Evaluation,’ ABCDE Annual World Bank Conference on Development Economics, Bangalore, May.

 

Supplementary Bibliography:

Heckman, J. J., L. Lochner, C. Taber (1998). ‘General-Equilibrium Treatment Effects: A Study of Tuition Policy,’ American Economic Review 88 (2): 381-386.

 

  • Practical Exercises on the computer
    • Review of the previous practical exercises.

 

 

Supporting bibliography:

Cameron, A. C. and P. K. Trivedi (2005). Microeconometrics: Methods and Applications, Cambridge University Press, New York.

Wooldridge, J. M. (2002). Econometric Analysis of Cross Section and Panel Data, MIT Press, Cambridge MA.

Heckman, J. J., R. J. LaLonde, J. A. Smith (1999). ‘The Economics and Econometrics of Active Labor Market Programs,’ Handbook of Labor Economics, v. 3, chap. 31, pp. 1865-2097.

Baker, J. L. (2000). Evaluating the Impact of Development Projects on Poverty: A Handbook for Practitioners, World Bank, Washington D.C., 225p.

 

Bibliography on other topics not addressed in the course:

Hotz, V. J., C. H. Mullin, S. G. Sanders (1997). ‘Bounding Causal Effects Using Data From a Contaminated Natural Experiment: Analysing the Effects of Teenage Childbearing,’ Review of Economic Studies 64 (4): 575-603.

Manski, C. F. (1990). ‘Nonparametric Bounds on Treatment Effects,’ American Economic Review 80 (2): 319-323.

Bourguignon, F., F. H. G. Ferreira, P. G. Leite (2002). ‘Ex-ante Evaluation of Conditional Cash Transfer Programs: The Case of Bolsa Escola,’ William Davidson Working Paper 516, University of Michigan Business School.

ADB (2007). Poverty Impact Analysis: Selected Tools and Applications, Parts 1 and 2.

 

Link to NBER (Imbens & Wooldridge) course: http://www.nber.org/minicourse3.html