Contributor 
Stephen B. Fawcett

Print Resources

Campbell, D. T., and J. C. Stanley. Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally, 1963, 1966.

Fawcett, S. B., et al. Community Tool Box Curriculum Module 12: Evaluating the Initiative. Work Group for Community Health and Development, University of Kansas, 2008.

Roscoe, J. T. Fundamental Research Statistics for the Behavioral Sciences. New York: Holt, Rinehart, and Winston, 1969.

Shadish, W., T. Cook, and D. Campbell. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton Mifflin, 2002.

Online Resources 

A Second Look at Research in Natural Settings is a web version of a PowerPoint presentation by Graziano and Raulin.

Research Methods is a text by Dr. Christopher L. Heffner that focuses on the basics of research design and the critical analysis of professional research in the social sciences, covering the process from developing a theory and selecting and testing subjects to performing statistical analysis and writing the research report.

Bridging the Gap: The Role of Monitoring and Evaluation in Evidence-Based Policy Making is a document provided by UNICEF that aims to improve the relevance, efficiency, and effectiveness of policy reforms by strengthening the use of monitoring and evaluation.

Interrupted Time Series Quasi-Experiments is an essay by Gene Glass of Arizona State University on time-series experiments and the distinction between experimental and quasi-experimental approaches.

Research Methods Knowledge Base is a comprehensive web-based textbook that provides relatively simple explanations of how statistics work, how and when specific statistical operations are used, and how to interpret the resulting data.

The Magenta Book - Guidance for Evaluation provides an in-depth look at evaluation. Part A is designed for policy makers: it sets out what evaluation is and what the benefits of good evaluation are, explains in simple terms what good evaluation requires, and outlines straightforward steps policy makers can take to make a good evaluation of their intervention more feasible. Part B is more technical and is aimed at analysts and interested policy makers: it discusses in more detail the key steps to follow when planning and undertaking an evaluation, how to answer evaluation research questions using different evaluation research designs, and approaches to interpreting and assimilating evaluation evidence.