Cambridge Handbook Of Experimental Political Science


Cambridge Handbook of Experimental Political Science

Edited by

James N. Druckman
Donald P. Green
James H. Kuklinski
Arthur Lupia

Table of Contents

Contributors
List of Tables
List of Figures

1. Experimentation in Political Science
   James N. Druckman, Donald P. Green, James H. Kuklinski, and Arthur Lupia

I. Designing Experiments

2. Experiments: An Introduction to Core Concepts
   James N. Druckman, Donald P. Green, James H. Kuklinski, and Arthur Lupia
3. Internal and External Validity
   Rose McDermott
4. Students as Experimental Participants: A Defense of the "Narrow Data Base"
   James N. Druckman and Cindy D. Kam
5. Economics vs. Psychology Experiments: Stylization, Incentives, and Deception
   Eric S. Dickson

II. The Development of Experiments in Political Science

6. Laboratory Experiments in Political Science
   Shanto Iyengar
7. Experiments and Game Theory's Value to Political Science
   John H. Aldrich and Arthur Lupia
8. The Logic and Design of the Survey Experiment: An Autobiography of a Methodological Innovation
   Paul M. Sniderman
9. Field Experiments in Political Science
   Alan S. Gerber

III. Decision Making

10. Attitude Change Experiments in Political Science
    Allyson L. Holbrook
11. Conscious and Unconscious Information Processing with Implications for Experimental Political Science
    Milton Lodge, Charles Taber, and Brad Verhulst

12. Political Knowledge
    Cheryl Boudreau and Arthur Lupia

IV. Vote Choice, Candidate Evaluations, and Turnout

13. Candidate Impressions and Evaluations
    Kathleen M. McGraw
14. Media and Politics
    Thomas E. Nelson, Sarah M. Bryner, and Dustin M. Carnahan
15. Candidate Advertisements
    Shana Kushner Gadarian and Richard R. Lau
16. Voter Mobilization
    Melissa R. Michelson and David W. Nickerson

V. Interpersonal Relations

17. Trust and Social Exchange
    Rick K. Wilson and Catherine C. Eckel
18. An Experimental Approach to Citizen Deliberation
    Christopher F. Karpowitz and Tali Mendelberg
19. Social Networks and Political Context
    David W. Nickerson

VI. Identity, Ethnicity, and Politics

20. Candidate Gender and Experimental Political Science
    Kathleen Dolan and Kira Sanbonmatsu
21. Racial Identity and Experimental Methodology
    Darren Davis
22. The Determinants and Political Consequences of Prejudice
    Vincent L. Hutchings and Spencer Piston
23. Politics from the Perspective of Minority Populations
    Dennis Chong and Jane Junn

VII. Institutions and Behavior

24. Experimental Contributions to Collective-Action Theory
    Eric Coleman and Elinor Ostrom
25. Legislative Voting and Cycling
    Gary Miller

26. Electoral Systems and Strategic Voting (Laboratory Election Experiments)
    Rebecca B. Morton and Kenneth C. Williams
27. Experimental Research on Democracy and Development
    Ana L. De La O and Leonard Wantchekon

VIII. Elite Bargaining

28. Coalition Experiments
    Daniel Diermeier
29. Negotiation and Mediation
    Daniel Druckman
30. The Experiment and Foreign Policy Decision Making
    Margaret G. Hermann and Binnur Ozkececi-Taner

IX. Advanced Experimental Methods

31. Treatment Effects
    Brian J. Gaines and James H. Kuklinski
32. Making Effects Manifest in Randomized Experiments
    Jake Bowers
33. Design and Analysis of Experiments in Multilevel Populations
    Betsy Sinclair
34. Analyzing the Downstream Effects of Randomized Experiments
    Rachel Milstein Sondheimer
35. Mediation Analysis Is Harder than It Looks
    John G. Bullock and Shang E. Ha

Afterword

36. Campbell's Ghost
    Donald R. Kinder

Index

Contributors

John H. Aldrich, Duke University
Cheryl Boudreau, University of California, Davis
Jake Bowers, University of Illinois at Urbana-Champaign
Sarah M. Bryner, The Ohio State University
John G. Bullock, Yale University
Dustin M. Carnahan, The Ohio State University
Dennis Chong, Northwestern University
Eric Coleman, Florida State University
Darren Davis, University of Notre Dame
Ana L. De La O, Yale University
Eric S. Dickson, New York University
Daniel Diermeier, Northwestern University
Kathleen Dolan, University of Wisconsin, Milwaukee
Daniel Druckman, George Mason University
James N. Druckman, Northwestern University
Catherine C. Eckel, University of Texas at Dallas
Shana Kushner Gadarian, University of California, Berkeley
Brian J. Gaines, University of Illinois at Urbana-Champaign
Alan S. Gerber, Yale University
Donald P. Green, Yale University
Shang E. Ha, Brooklyn College, City University of New York
Margaret G. Hermann, Syracuse University
Allyson L. Holbrook, University of Illinois at Chicago
Vincent L. Hutchings, University of Michigan
Shanto Iyengar, Stanford University
Jane Junn, University of Southern California
Cindy D. Kam, Vanderbilt University
Christopher F. Karpowitz, Brigham Young University
Donald R. Kinder, University of Michigan
James H. Kuklinski, University of Illinois at Urbana-Champaign
Richard R. Lau, Rutgers University
Milton Lodge, Stony Brook University
Arthur Lupia, University of Michigan
Rose McDermott, Brown University
Kathleen M. McGraw, The Ohio State University
Tali Mendelberg, Princeton University
Melissa R. Michelson, California State University, East Bay
Gary Miller, Washington University in St. Louis
Rebecca B. Morton, New York University
Thomas E. Nelson, The Ohio State University
David W. Nickerson, University of Notre Dame
Elinor Ostrom, Indiana University
Binnur Ozkececi-Taner, Hamline University
Spencer Piston, University of Michigan

Kira Sanbonmatsu, Rutgers University
Betsy Sinclair, University of Chicago
Paul M. Sniderman, Stanford University
Rachel Milstein Sondheimer, United States Military Academy
Charles Taber, Stony Brook University
Brad Verhulst, Stony Brook University
Leonard Wantchekon, New York University
Kenneth C. Williams, Michigan State University
Rick K. Wilson, Rice University

List of Tables

4-1. Sampling distribution of bT, single treatment effect
9-1. Approximate Cost of Adding One Vote to Candidate Vote Margin
9-2. Voter Mobilization Experiments Prior to 1998 New Haven Experiment
11-1. A Schematic Figure of the Racial IAT Using Pleasant and Unpleasant Words and Euro-American and Afro-American Stereotype Words
25-1. Testing the Uncovered Set with Previous Majority Rule Experiments - Bianco et al. (2006)
26-1. Forsythe et al. (1993) Payoff Schedule
26-2. Dasgupta et al. (2008) Payoff Schedule
32-1. Balance tests for covariates adjusted for blocking in the blocked thirty-two-city study
32-2. Balance tests for covariates in the blocked eight-city study
32-3. Balance tests for covariates adjusted by post-stratification in the blocked thirty-two-city study
33-1. ITT Effects
34-1. Classification of Target Population in Downstream Analysis of Educational Intervention

List of Figures

1-1. Experimental Articles in the APSR
4-1. Sampling distribution of bT, single treatment effect
4-2. Sampling distribution of bT, heterogeneous treatment effects
4-3. Sampling distributions of bT and bTZ, heterogeneous treatment effects
6-1. Race of Suspect Manipulation
6-2. The Facial Similarity Manipulation
9-1. Graphical Representation of Treatment Effects with Noncompliance
10-1. Pretest-Posttest Control Group Design (Campbell and Stanley 1963, 13)
10-2. Pretest-Posttest Multiple Experimental Condition Design
10-3. Posttest-Only Control Group Design
10-4. Posttest-Only Multiple Experimental Group Design
11-1. Spreading Activation in a Sequential Priming Paradigm for Short and Long SOA
15-1. Methodological Consequences of Differences Between Observational and Experimental Studies of Candidate Advertisements
21-1. Example of Experimental Design for Racial Identity
24-1. A Prisoner's Dilemma Game
24-2. Contributions in a Public-Goods Game
24-3. A Common-Pool Resource Game
25-1. Outcomes of Majority Rule Experiments without a Core - Fiorina and Plott (1978)
25-2. Majority Rule with Issue-by-Issue Voting - McKelvey and Ordeshook (1984)
25-3. The Effect of Backward and Forward Agendas - Wilson (2008b)
25-4. The Effect of Monopoly Agenda Setting - Wilson (2008b)
25-5a. A Sample Majority-Rule Trajectory for Configuration 1 - Bianco et al. (2008)
25-5b. A Sample Majority-Rule Trajectory for Configuration 2 - Bianco et al. (2008)
25-6a. The Uncovered Set and Outcomes for Configuration 1 - Bianco et al. (2008)
25-6b. The Uncovered Set and Outcomes for Configuration 2 - Bianco et al. (2008)
25-7. Senatorial Ideal Points and Proposed Amendments for the Civil Rights Act of 1964 - Jeong et al. (2009)
26-1. Median Voter Theorem
28-1. Potential Coalitions and Their Respective Payoffs
32-1. The efficiency of paired and unpaired designs in simulated turnout data
32-2. Graphical assessment of balance on distributions of baseline turnout for the thirty-two-city experiment data
32-3. Post-stratification adjusted confidence intervals for the difference in turnout between treated and control cities in the thirty-two-city turnout experiment
32-4. Covariance adjustment in a simple random experiment
32-5. Covariance adjustment in a blocked random experiment
32-6. Covariance-adjusted confidence intervals for the difference in turnout between treated and control cities in the thirty-two-city turnout experiment data
33-1. Multilevel Experiment Design
36-1. Number of Articles Featuring Experiments Published in The American Political Science Review, 1906-2009

Acknowledgements

This volume has its origins in the American Political Science Review's special 2006 centennial issue celebrating the evolution of the study of politics. For that issue, we proposed a paper that traced the history of experiments within political science. The journal's editor, Lee Sigelman, responded to our proposal with a mix of skepticism – for example, asking about the prominence of experiments in the discipline – and encouragement. We moved forward and eventually published a paper in the special issue, and there is no doubt it was much better than it would have been absent Lee's constant constructive guidance. Indeed, Lee, who himself conducted some remarkably innovative experiments, pushed us to think about what makes political science experiments unique relative to the other psychological and social sciences. It was this type of prodding that led us to conceive of this Handbook. Sadly, Lee did not live to see the completion of the Handbook, but we hope it approaches the high standards he always set. We know we are not alone in saying that Lee is greatly missed.

Our first task in developing the Handbook was to generate a list of topics and possible authors; we were overwhelmed by the positive responses to our invitations to contribute. While we leave it to the reader to assess the value of the book, we can say that the experience of assembling this volume could not have been more enjoyable and instructive, thanks to the authors.

Nearly all of the authors attended a conference held at Northwestern University (in Evanston, IL, USA) on May 28-29, 2009. We were extremely fortunate to have an exceptionally able group of discussants take the lead in presenting and commenting on the chapters; we deeply appreciate the time and insights they provided. The discussants included Kevin Arceneaux, Ted Brader, Ray Duch, Kevin Esterling, Diana Mutz, Mike Neblo, Eric Oliver, Randy Stevenson, Nick Valentino, and Lynn Vavreck. Don Kinder played a special role at the conference, offering his overall assessment at the end of the proceedings. A version of these thoughts appears as the volume's Afterword.

We also owe thanks to the more than thirty graduate students who attended the conference, met with faculty, and offered their perspectives. These students (many of whom became professors before the publication of the volume) included Lene Aarøe, Emily Alvarez, Christy Aroopala, Bernd Beber, Toby Bolsen, Kim Dionne, Katie Donovan, Ryan Enos, Brian Falb, Mark Fredrickson, Fernando Garcia, Ben Gaskins, Seth Goldman, Daniel Hidalgo, Samara Klar, Yanna Krupnikov, Thomas Leeper, Adam Levine, Peter Loewen, Kristin Michelitch, Daniel Myers, Jennifer Ogg Anderson, Spencer Piston, Josh Robison, Jon Rogowski, Mark Schneider, Geoff Sheagley, Alex Theodoridis, Catarina Thomson, Dustin Tingley, Brad Verhulst, and Abby Wood.

We thank a number of others who attended the conference and offered important comments, including, but not limited to, David Austen-Smith, Traci Burch, Fay Cook, Jeremy Freese, Jerry Goldman, Peter Miller, Eugenia Mitchelstein, Ben Page, Jenn Richeson, Anne Sartori, Victor Shih, and Salvador Vazquez del Mercado.

The conference would not have been possible without the exceptional contributions of a number of individuals. Of particular note are the many staff members of Northwestern's Institute for Policy Research. We thank the Institute's director, Fay Cook, for supporting the conference, and we are indebted to Patricia Reese for overseeing countless logistics. We also thank Eric Betzold, Arlene Dattels, Sarah Levy, Michael Weis, and Bev Zack. A number of Northwestern's political science Ph.D. students also donated their time to ensure a successful event, including Emily Alvarez, Toby Bolsen, Brian Falb, Samara Klar, Thomas Leeper, and Josh Robison. We also thank Nicole, Jake, and Sam Druckman for their patience and help in ensuring everything at the conference was in place.

Of course, neither the conference nor the production of the volume would have been possible without generous financial support, and we gratefully acknowledge the National Science Foundation (SES-0851285), Northwestern University's Weinberg College of Arts and Sciences, and the Institute for Policy Research.

Following the conference, authors engaged in substantial revisions and, along the way, a number of others provided instructive comments, including Cengiz Erisen, Jeff Guse, David Llanos, and the anonymous press reviewers. We also thank the participants in Druckman's graduate experimental class who read a draft of the volume and commented on each chapter; these impressive students included Emily Alvarez, Toby Bolsen, Brian Falb, Samara Klar, Thomas Leeper, Rachel Moskowitz, Taryn Nelson, Christoph Nguyen, Josh Robison, and Xin Sun. We have no doubt that countless others offered advice (of which we, as the editors, are not directly aware), and we thank them for their contributions. A special acknowledgement is due to Samara Klar and Thomas Leeper, who have probably read the chapters more than anyone else and, without fail, have offered helpful advice and skillful coordination. Finally, it was a pleasure working with Eric Crahan and Jason Przybylski at Cambridge University Press.

We view this Handbook as a testament to the work of many scholars (a number of whom are authors in this volume) who set the stage for experimental approaches in political science. While we cannot be sure what many of them will think of the volume, we do hope it successfully addresses a question raised by an editor's (Druckman's) son, who was 7 when he asked, "why is political 'science' a 'science' since it doesn't do things that science does, like run experiments?"

--James N. Druckman, Donald P. Green, James H. Kuklinski, and Arthur Lupia

Introduction

1. Experimentation in Political Science
James N. Druckman, Donald P. Green, James H. Kuklinski, and Arthur Lupia i

In his 1909 American Political Science Association presidential address, A. Lawrence Lowell advised the fledgling discipline against following the model of the natural sciences: "We are limited by the impossibility of experiment. Politics is an observational, not an experimental science" (Lowell 1910, 7). The lopsided ratio of observational to experimental studies in political science over the one hundred years since Lowell's statement arguably affirms his assessment. The next hundred years are likely to be different. The number and influence of experimental studies are growing rapidly as political scientists discover ways of using experimental techniques to illuminate political phenomena.

The growing interest in experimentation reflects the increasing value that the discipline places on causal inference and empirically guided theoretical refinement. Experiments facilitate causal inference through the transparency and content of their procedures, most notably the random assignment of observations (also known as subjects or experimental participants) to treatment and control groups. Experiments also guide theoretical development by providing a means for pinpointing the effects of institutional rules, preference configurations, and other contextual factors that might be difficult to assess using other forms of inference. Most of all, experiments guide theory by providing stubborn facts – that is to say, reliable information about cause and effect that inspires and constrains theory.

Experiments bring new opportunities for inference along with new methodological challenges. The goal of the Cambridge Handbook of Experimental Political Science is to help scholars more effectively pursue experimental opportunities while better understanding the challenges. To accomplish this goal, the Handbook offers a review of basic definitions and concepts, compares experiments with other forms of inference in political science, reviews the contributions of experimental research, and presents important methodological issues. It is our hope that discussing these topics in a single volume will help facilitate the growth and development of experimentation in political science.

1. The Evolution and Influence of Experiments in Political Science

Social scientists answer questions about social phenomena by constructing theories, deriving hypotheses, and evaluating these hypotheses by empirical or conceptual means. One way to evaluate hypotheses is to intervene deliberately in the social process under investigation. An important class of interventions is known as experiments. An experiment is a deliberate test of a causal proposition, typically with random assignment to conditions. ii Investigators design experiments to evaluate the causal impacts of potentially informative explanatory variables.

While scientists have conducted experiments for hundreds of years, modern experimentation made its debut in the 1920s and 1930s. It was then that, for the first time, social scientists began to use random assignment to allocate subjects to control and treatment groups. iii One can find examples of experiments in political science as early as the 1940s and 1950s. The first experimental paper in the American Political Science Review (APSR) appeared in 1956 (Eldersveld 1956). iv In that study, the author randomly assigned potential voters to a control group that received no messages, or to treatment groups that received messages encouraging them to vote via personal contact (phone calls or personal visits) or via a mailing. The study showed that more voters in the personal contact treatment groups turned out to vote than those in either the control group or the mailing group; personal contact caused a relative increase in turnout.
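The causal logic of a design like Eldersveld's (random assignment to conditions, then a comparison of group outcome rates) can be sketched in a few lines of Python. This is a minimal illustration only: the subject counts, base turnout rate, and treatment effects below are invented for the sketch and do not reproduce Eldersveld's actual data.

```python
import random

random.seed(42)

# Hypothetical subject pool; the condition labels echo Eldersveld's design
# (control, personal contact, mailing), but every number here is assumed.
subjects = list(range(600))
random.shuffle(subjects)

# Random assignment: each subject is equally likely to land in any
# condition, so the three groups differ only by chance.
groups = {
    "control": subjects[0:200],
    "personal_contact": subjects[200:400],
    "mail": subjects[400:600],
}

# Simulated turnout: a base rate plus an assumed boost per treatment.
BASE_RATE = 0.30
ASSUMED_EFFECTS = {"control": 0.0, "personal_contact": 0.20, "mail": 0.05}

def simulate_turnout(condition):
    """Return a 0/1 turnout indicator for each subject in the condition."""
    p = BASE_RATE + ASSUMED_EFFECTS[condition]
    return [1 if random.random() < p else 0 for _ in groups[condition]]

turnout = {condition: simulate_turnout(condition) for condition in groups}

# Because assignment was random, a simple difference in group means
# estimates each treatment's causal effect on turnout.
control_mean = sum(turnout["control"]) / len(turnout["control"])
for condition in ("personal_contact", "mail"):
    mean = sum(turnout[condition]) / len(turnout[condition])
    print(f"{condition}: turnout={mean:.2f}, "
          f"estimated effect={mean - control_mean:+.2f}")
```

The key design point is that randomization, not any modeling assumption, is what licenses the difference-in-means comparison: systematic differences between groups can arise only from the treatments themselves.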
A short time after Eldersveld's study, an active research program using experiments to study international conflict resolution began (e.g., Mahoney and Druckman 1975; Guetzkow and Valadez 1981), and, later, a periodic but now extinct journal, The Experimental Study of Politics, began publication (also see Brody and Brownstein 1975).

These examples are best seen as exceptions, however. For much of the discipline's history, experiments remained on the periphery. In his widely cited methodological paper, Lijphart (1971) states, "The experimental method is the most nearly ideal method for scientific explanation, but unfortunately it can only rarely be used in political science because of practical and ethical impediments" (684). In their oft-used methods text, King, Keohane, and Verba (1994) provide virtually no discussion of experimentation, stating only that experiments are helpful insofar as they "provide a useful model for understanding certain aspects of non-experimental design" (125).

A major change in the status of experiments in political science occurred during the last decades of the twentieth century. Evidence of the change is visible in Figure 1-1, which comes from a content analysis of the discipline's widely regarded flagship journal, the APSR. The figure shows a sharp increase, in recent years, in the number of articles using a random assignment experiment. In fact, more than half of the 71 experimental articles that appeared in the APSR during its first 103 years were published after 1992. Other signs of the rise of experiments include the many graduate programs now offering courses on experimentation, National Science Foundation support for experimental infrastructure, and the proliferation of survey experiments in both privately and publicly supported studies. v

[Figure 1-1 here]

Experimental approaches have not been confined to single subfields or approaches. Instead, political scientists have employed experiments across fields and have drawn on and developed a notable range of experimental methods. These sources of diversity make a unifying Handbook particularly appealing for the purpose of facilitating coordination and communication across varied projects.

2. Diversity of Applications

Political scientists have implemented experiments for various purposes and to address a variety of issues. Roth (1995) identifies three non-exclusive roles that experiments can play, and a cursory review makes clear that political scientists employ them in all three ways. First, Roth describes "searching for facts," where the goal is to "isolate the cause of some observed regularity, by varying details of the way the experiments were conducted. Such experiments are part of the dialogue that experimenters carry on with one another" (22). These types of experiments often complement observational research (i.e., work not employing random assignment) by arbitrating between conflicting results derived from observational data. "Searching for facts" describes many experimental studies that attempt to estimate the magnitudes of causal parameters, such as the influence of racial attitudes on policy preferences (Gilens 1996) or the price elasticity of demand for public and private goods (Green 1992).

A second role entails "speaking to theorists," where the goal is "to test the predictions [or the assumptions] of well articulated formal theories [or other types of theories]. Such experiments are intended to feed back into the theoretical literature – i.e., they are part of a dialogue between experimenters and theorists" (Roth 1995, 22). The many political science experiments that assess the validity of claims made by formal modelers epitomize this type of correspondence (e.g., Ostrom, Walker, and Gardner 1992; Morton 1993; Fréchette, Kagel, and Lehrer 2003). vi

The third usage is "whispering in the ears of princes," which facilitates "the dialogue between experimenters and policy-makers ... [The] experimental environment is designed to resemble closely, in certain respects, the naturally occurring environment that is the focus of interest for the policy purposes at hand" (Roth 1995, 22). Cover and Brumberg's (1982) field experiment examining the effects of mail from members of the U.S. Congress on their constituents' opinions exemplifies an experiment that whispers in the ears of legislative "princes."

Although political scientists might share rationales for experimentation with other scientists, their attention to focal aspects of politically relevant contexts distinguishes their efforts. This distinction parallels the use of other modes of inference by political scientists. As Druckman and Lupia (2006) argue, "[c]ontext, not methodology, is what unites our discipline. Political science is united by the desire to understand, explain, and predict important aspects of contexts where individual and collective actions are intimately and continuously bound" (109). The environment in which an experiment takes place is thus of particular importance to political scientists.

And, while it might surprise some, political scientists have implemented experiments in a wide range of contexts. Examples can be found in every subfield. Applications to American politics include not only topics such as media effects (e.g., Iyengar and Kinder 1987), mobilization (e.g., Gerber and Green 2000), and voting (e.g., Lodge, McGraw, and Stroh 1989), but also studies of congressional and bureaucratic rules (e.g., Eavey and Miller 1984; Miller, Hammond, and Kile 1996). The field of international relations, in some ways, lays claim to one of the longest ongoing experimental traditions with its many studies of foreign policy decision making (e.g., Geva and Mintz 1997) and international negotiations (e.g., D. Druckman 1994). Related work in comparative politics explores coalition bargaining (e.g., Riker 1967; Fréchette et al. 2003) and electoral systems (e.g., Morton and Williams 1999); recently, scholars have turned to experiments to study democratization and development (Wantchekon 2003), culture (Henrich et al. 2004), and identity (e.g., Sniderman, Hagendoorn, and Prior 2004; Habyarimana et al. 2007). Political theory studies include explorations into justice (Frohlich and Oppenheimer 1992) and deliberation (Simon and Sulkin 2001).

Political scientists employ experiments across subfields and for a range of purposes. At the same time, many scholars remain unaware of this range of activity, which limits the extent to which experimental political scientists have learned from one another. For example, scholars studying coalition formation and international negotiations experimentally can benefit from talking to one another, yet there is little sign of engagement between the respective contributors to these literatures. Similarly, there are few signs of collaboration among experimental scholars who study different kinds of decision making (e.g., foreign policy decision making and voting decisions). Of equal importance, scholars within specific fields who have not used experiments may be unaware of when and how experiments can be effective. A goal of this Handbook is to provide interested scholars with an efficient and effective way to learn about a broad range of experimental applications, how these applications complement and supplement non-experimental work, and the opportunities and challenges inherent in each type of application.

3. Diversity of Experimental Methods

The most apparent source of variation in political science experiments is where they are conducted. To date, most experiments have been implemented in one of three contexts: laboratories, surveys, and the field. These types of experiments differ in terms of where participants receive the stimuli (e.g., messages encouraging them to vote): in a controlled setting; in the course of a phone, in-person, or web-based survey; or in a naturally occurring setting such as the voter's home (i.e., in the course of everyday life, and often without the participants' knowledge). vii

Each type of experiment presents methodological challenges. For example, scholars have long bemoaned the artificial settings of campus-based laboratory experiments and the widespread use of student-aged subjects. While experimentalists from other disciplines have examined the implications of running experiments "on campus," this literature is not often cited by political scientists (e.g., Dipboye and Flanagan 1979; Kardes 1996; Kühberger 1998; Levitt and List 2007). Some political scientists claim that the problems of campus-based experiments can be overcome by conducting experiments on representative samples. This may be true. However, the conditions under which such changes produce more valid results have not been broadly examined (see, e.g., Greenberg 1987). viii

Survey experiments, while not relying on campus-based "convenience samples," also raise questions about external validity. Many survey experiments, for example, expose subjects to phenomena they might have also encountered prior to participating in an experiment, which can complicate causal inference (Gaines, Kuklinski, and Quirk 2007).

Field experiments are seen as a way to overcome the artificiality of other types of experiments. In the field, however, there can be less control over what experimental stimuli subjects observe. It may also be more difficult to get people to participate, due to an inability to recruit subjects or to subjects' unwillingness to participate as instructed once they are recruited.

Besides where they are conducted, another source of diversity in political science experiments is the extent to which they follow experimental norms in neighboring disciplines, such as psychology and economics. This diversity is notable because psychological and economic approaches to experimentation differ from each other. For example, where psychological experiments often include some form of deception, economists consider it taboo. Psychologists rarely pay subjects for specific actions they undertake during an experiment; economists, on the other hand, often require such payments (Smith 1976). Indeed, the inaugural issue of Experimental Economics stated that submissions that used deception or did not pay participants for their actions would not be accepted for publication. ix

For psychologists and economists, differences in experimental traditions reflect differences in their dominant paradigms. Since most political scientists seek first and foremost to inform political science debates, norms about what constitutes a valid experiment in economics or psychology are not always applicable. So, for any kind of experiment, an important question to ask is: which experimental method is appropriate?

The current debate about this question focuses on more than the validity of the inferences that different experimental approaches can produce. Cost is also an issue. Survey and field experiments, for example, can be expensive, and some scholars question whether the added cost of such endeavors (compared to, say, campus-based laboratory experiments) is justifiable. Such debates are leading more scholars to evaluate the conditions under which particular types of experiments are cost-effective. With the evolution of these debates has come the question of whether the immediate costs of field
