February 15, 2017

Fixed Effects vs Difference-in-Differences

TL;DR: When you have longitudinal data, you should use fixed effects or ANCOVA rather than difference-in-differences, since a difference-in-differences specification will spit out incorrect variance estimates. If the data is from a randomized trial, ANCOVA is probably a better bet. In the past, trying to understand when to use fixed effects and when to use difference-in-differences (DiD) always made me feel like an idiot. It seemed like I was missing something really obvious that everyone else was getting. Read more
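
To make the distinction concrete, here is a minimal sketch (not from the post itself; the data layout and the names unit, post, treat, and y are assumed) of roughly how the three specifications might look on a two-period panel in Python with statsmodels:

```python
# Sketch only: assumes long-format panel data with columns
#   unit  - unit identifier
#   post  - 0 at baseline, 1 at endline
#   treat - 1 if the unit is in the treatment group
#   y     - outcome
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("panel.csv")  # hypothetical file name

# Difference-in-differences: the treat x post interaction is the effect estimate
did = smf.ols("y ~ treat * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["unit"]}
)

# Unit fixed effects: unit dummies absorb the time-invariant treat main effect
fe = smf.ols("y ~ treat:post + post + C(unit)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["unit"]}
)

# ANCOVA: endline outcome on treatment, controlling for the baseline outcome
baseline = panel.loc[panel["post"] == 0, ["unit", "y"]].rename(columns={"y": "y_baseline"})
endline = panel.loc[panel["post"] == 1].merge(baseline, on="unit")
ancova = smf.ols("y ~ treat + y_baseline", data=endline).fit(cov_type="HC1")

print(did.params["treat:post"], fe.params["treat:post"], ancova.params["treat"])
```

In a balanced two-period panel the DiD and fixed-effects point estimates coincide; the TL;DR's concern is with how the specifications differ in their variance estimates and, for randomized trials, in precision relative to ANCOVA.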

August 31, 2016

Web Scraping 101

More and more organizations are publishing their data on the web. This is great, but websites often don’t offer an option to download a clean and complete dataset. In this situation, you have two options. First, you (or some unlucky intern) can hunker down and spend a week wearing out the ‘c’ and ‘v’ keys on your keyboard as you copy and paste ad nauseam from the website into an Excel spreadsheet. Read more
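
The second option, presumably the subject of the post, is to script the extraction. As a minimal sketch (the URL and table structure here are placeholders), fetching a page and dumping an HTML table to a clean CSV takes only a few lines of Python with requests and BeautifulSoup:

```python
# Sketch only: the URL is a placeholder and the page is assumed to hold
# its data in a single HTML <table>.
import csv
import requests
from bs4 import BeautifulSoup

url = "https://example.org/data-table"  # hypothetical page
response = requests.get(url, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
table = soup.find("table")  # grab the first table on the page

rows = []
for tr in table.find_all("tr"):
    cells = [cell.get_text(strip=True) for cell in tr.find_all(["th", "td"])]
    if cells:
        rows.append(cells)

with open("scraped_data.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(rows)
```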

July 4, 2016

Multiple Hypothesis Testing

This week, I volunteered to read and summarize one of the articles for IDinsight’s tech team’s book club. The topic for this week is multiple hypothesis testing, and the article I volunteered to summarize is “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects” by Michael Anderson. Read more
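
The corrections discussed in this literature include family-wise error rate and false discovery rate adjustments. As a simpler stand-in (not Anderson’s resampling-based procedure; the p-values below are made up for illustration), standard versions of both adjustments are available in statsmodels:

```python
# Sketch only: Holm's step-down method controls the family-wise error rate;
# Benjamini-Hochberg controls the false discovery rate. The p-values are
# hypothetical per-outcome values, not results from any study.
from statsmodels.stats.multitest import multipletests

p_values = [0.003, 0.021, 0.045, 0.18, 0.62]

reject_holm, p_holm, _, _ = multipletests(p_values, alpha=0.05, method="holm")
reject_bh, p_bh, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for raw, holm, bh in zip(p_values, p_holm, p_bh):
    print(f"raw={raw:.3f}  holm-adjusted={holm:.3f}  BH-adjusted={bh:.3f}")
```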

April 9, 2014

About Me

I am an independent consultant and researcher with a background in impact evaluations. In my free time, I like climbing things.


Response to Blattman's Post on Why What Works Is The Wrong Question

Last week, Chris Blattman published a long blog post titled “Why ‘what works?’ is the wrong question: evaluating ideas not programs.” In the blog post, which was adapted from a talk he gave at DFID, Blattman argues that a) impact evaluations should focus on deeper, theory-driven questions rather than just whether a program works or not and b) researchers should design impact evaluations to allow for generalizability by paying attention to context and running multiple evaluations in multiple contexts. Read more
