Effective Literacy and English Language Instruction for English Learners in the Elementary Grades


IES PRACTICE GUIDE
WHAT WORKS CLEARINGHOUSE

Effective Literacy and English Language Instruction for English Learners in the Elementary Grades

NCEE 2007-4011
U.S. DEPARTMENT OF EDUCATION


IES PRACTICE GUIDE

Effective Literacy and English Language Instruction for English Learners in the Elementary Grades

December 2007 (Format revised)

Russell Gersten (Chair), RG Research Group and University of Oregon
Scott K. Baker, Pacific Institutes for Research and University of Oregon
Timothy Shanahan, University of Illinois at Chicago
Sylvia Linan-Thompson, The University of Texas at Austin
Penny Collins, University of California at Irvine
Robin Scarcella, University of California at Irvine

NCEE 2007-4011
U.S. DEPARTMENT OF EDUCATION

This report was prepared for the National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, under Contract ED-02-CO-0022 by the What Works Clearinghouse, a project of a joint venture of the American Institutes for Research and The Campbell Collaboration, and Contract ED-05-CO-0026 by Optimal Solutions Group, LLC.

Disclaimer

The opinions and positions expressed in this practice guide are the authors' and do not necessarily represent the opinions and positions of the Institute of Education Sciences or the United States Department of Education. This practice guide should be reviewed and applied according to the specific needs of the educators and education agency using it and with full realization that it represents only one approach that might be taken, based on the research that was available at the time of publication. This practice guide should be used as a tool to assist in decision-making rather than as a "cookbook." Any references within the document to specific education products are illustrative and do not imply endorsement of these products to the exclusion of other products that are not referenced.

U.S. Department of Education
Margaret Spellings, Secretary

Institute of Education Sciences
Grover J. Whitehurst, Director

National Center for Education Evaluation and Regional Assistance
Phoebe Cottingham, Commissioner

December 2007
(The content is the same as the July 2007 version, but the format has been revised for this version.)

This report is in the public domain. While permission to reprint this publication is not necessary, the citation should be:

Gersten, R., Baker, S.K., Shanahan, T., Linan-Thompson, S., Collins, P., & Scarcella, R. (2007). Effective Literacy and English Language Instruction for English Learners in the Elementary Grades: A Practice Guide (NCEE 2007-4011). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee

This report is available on the IES website at http://ies.ed.gov/ncee.

Alternate Formats

On request, this publication can be made available in alternate formats, such as Braille, large print, audio tape, or computer diskette. For more information, call the Alternate Format Center at (202) 205-8113.

Contents

Preamble from the Institute of Education Sciences  v
About the authors  vii
Disclosure of potential conflicts of interest  ix
Introduction  1
  The What Works Clearinghouse standards and their relevance to this guide  3
  Effective instruction for English learners  4
  Overview  4
  Scope of the practice guide  4
Checklist for carrying out the recommendations  7
Recommendation 1. Screen for reading problems and monitor progress  9
Recommendation 2. Provide intensive small-group reading interventions  15
Recommendation 3. Provide extensive and varied vocabulary instruction  19
Recommendation 4. Develop academic English  23
Recommendation 5. Schedule regular peer-assisted learning opportunities  28
Appendix. Technical information on the studies  31
  Recommendation 1. Screen for reading problems and monitor progress  31
  Recommendation 2. Provide intensive small-group reading interventions  32
  Recommendation 3. Provide extensive and varied vocabulary instruction  33
  Recommendation 4. Develop academic English  35
  Recommendation 5. Schedule regular peer-assisted learning opportunities  36
References  38

List of tables

Table 1. Institute of Education Sciences Levels of Evidence  2
Table 2. Recommendations and corresponding level of evidence to support each  6

Preamble from the Institute of Education Sciences

What is a practice guide?

The health care professions have embraced a mechanism for assembling and communicating evidence-based advice to practitioners about care for specific clinical conditions. Variously called practice guidelines, treatment protocols, critical pathways, best practice guides, or simply practice guides, these documents are systematically developed recommendations about the course of care for frequently encountered problems, ranging from physical conditions such as foot ulcers to psychosocial conditions such as adolescent development.1

Practice guides are similar to the products of expert consensus panels in reflecting the views of those serving on the panel and the social decisions that come into play as the positions of individual panel members are forged into statements that all are willing to endorse. However, practice guides are generated under three constraints that typically do not apply to consensus panels. The first is that a practice guide consists of a list of discrete recommendations that are intended to be actionable. The second is that those recommendations taken together are intended to be a coherent approach to a multifaceted problem. The third, which is most important, is that each recommendation is explicitly connected to the level of evidence supporting it, with the level represented by a grade (for example, high, moderate, or low).

The levels of evidence, or grades, are usually constructed around the value of particular types of studies for drawing causal conclusions about what works. Thus, one typically finds that the top level of evidence is drawn from a body of randomized controlled trials, the middle level from well designed studies that do not involve randomization, and the bottom level from the opinions of respected authorities. Levels of evidence can also be constructed around the value of particular types of studies for other goals, such as the reliability and validity of assessments.

Practice guides can also be distinguished from systematic reviews or meta-analyses, which use statistical methods to summarize the results of studies obtained from a rule-based search of the literature. Authors of practice guides seldom conduct the types of systematic literature searches that are the backbone of a meta-analysis, though they take advantage of such work when it is already published. Instead, they use their expertise to identify the most important research with respect to their recommendations, augmented by a search of recent publications to assure that the research citations are up-to-date. Further, the characterization of the quality and direction of the evidence underlying a recommendation in a practice guide relies less on a tight set of rules and statistical algorithms and more on the judgment of the authors than would be the case in a high-quality meta-analysis. Another distinction is that a practice guide, because it aims for a comprehensive and coherent approach, operates with more numerous and more contextualized statements of what works than does a typical meta-analysis.

Thus, practice guides sit somewhere between consensus reports and meta-analyses in the degree to which systematic processes are used for locating relevant research and characterizing its meaning. Practice guides are more like consensus panel reports than meta-analyses in the breadth and complexity of the topics they address. Practice guides are different from both consensus reports and meta-analyses in providing advice at the level of specific action steps along a pathway that represents a more or less coherent and comprehensive approach to a multifaceted problem.

1. Field & Lohr (1990).

Practice guides in education at the Institute of Education Sciences

The Institute of Education Sciences (IES) publishes practice guides in education to bring the best available evidence and expertise to bear on the types of systemic challenges that cannot currently be addressed by single interventions or programs. Although IES has taken advantage of the history of practice guides in health care to provide models of how to proceed in education, education is different from health care in ways that may require that practice guides in education have somewhat different designs. Even within health care, where practice guides now number in the thousands, there is no single template in use. Rather, one finds descriptions of general design features that permit substantial variation in the realization of practice guides across subspecialties and panels of experts.2 Accordingly, the templates for IES practice guides may vary across practice guides and change over time and with experience.

One unique feature of IES-sponsored practice guides is that they are subjected to rigorous external peer review through the same office that is responsible for independent review of other IES publications. A critical task of the peer reviewers of a practice guide is to determine whether the evidence cited in support of particular recommendations is up-to-date and that studies of similar or better quality that point in a different direction have not been ignored. Peer reviewers also are asked to evaluate whether the evidence grades assigned to particular recommendations by the practice guide authors are appropriate. A practice guide is revised as necessary to meet the concerns of external peer reviews and gain the approval of the standards and review staff at IES. The external peer review is carried out independent of the office and staff within IES that instigated the practice guide.

The steps involved in producing an IES-sponsored practice guide are, first, to select a topic, informed by formal surveys of practitioners and requests. Next is to recruit a panel chair who has a national reputation and up-to-date expertise in the topic. Third, the chair, working with IES, selects a small number of panelists to coauthor the practice guide. These are people the chair believes can work well together and have the requisite expertise to be a convincing source of recommendations. IES recommends that at least one of the panelists be a practitioner with experience relevant to the topic being addressed. The chair and the panelists are provided a general template for a practice guide along the lines of the information provided here. The practice guide panel works under a short deadline of six to nine months to produce a draft document. It interacts with and receives feedback from staff at IES during the development of the practice guide, but its members understand that they are the authors and thus responsible for the final product.

Because practice guides depend on the expertise of their authors and their group decisionmaking, the content of a practice guide is not and should not be viewed as a set of recommendations that in every case depends on and flows inevitably from scientific research. It is not only possible but also likely that two teams of recognized experts working independently to produce a practice guide on the same topic would generate products that differ in important respects. Thus, consumers of practice guides need to understand that they are, in effect, getting the advice of consultants. These consultants should, on average, provide substantially better advice than an individual school district might obtain on its own because the authors are national authorities who have to achieve consensus among themselves, justify their recommendations with supporting evidence, and undergo rigorous independent peer review of their product.

Institute of Education Sciences

2. American Psychological Association (2002).

About the authors

Dr. Russell Gersten is executive director of Instructional Research Group, a nonprofit educational research institute, as well as professor emeritus in the College of Education at the University of Oregon. He currently serves as principal investigator for the What Works Clearinghouse on the topic of instructional research on English language learners. He is currently principal investigator of two large Institute of Education Sciences projects involving randomized trials in the areas of Reading First professional development and reading comprehension research. His main areas of expertise are instructional research on English learners, mathematics instruction, reading comprehension research, and evaluation methodology. In 2002 Dr. Gersten received the Distinguished Special Education Researcher Award from the American Educational Research Association's Special Education Research Division. Dr. Gersten has more than 150 publications in scientific journals, such as Review of Educational Research, American Educational Research Journal, Reading Research Quarterly, Educational Leadership, and Exceptional Children.

Dr. Scott Baker is the director of Pacific Institutes for Research in Eugene, Oregon. He specializes in early literacy measurement and instruction in reading and mathematics. Dr. Baker is co-principal investigator on two grants funded by the Institute of Education Sciences, and he is the codirector of the Oregon Reading First Center. Dr. Baker's scholarly contributions include conceptual, qualitative, and quantitative publications on a range of topics related to students at risk for school difficulties and students who are English learners.

Dr. Timothy Shanahan is professor of urban education at the University of Illinois at Chicago (UIC) and director of the UIC Center for Literacy. He was president of the International Reading Association until May 2007. He was executive director of the Chicago Reading Initiative, a public school improvement project serving 437,000 children, in 2001–02. He received the Albert J. Harris Award for outstanding research on reading disability from the International Reading Association. Dr. Shanahan served on the White House Assembly on Reading and the National Reading Panel, a group convened by the National Institute of Child Health and Human Development at the request of Congress to evaluate research on successful methods of teaching reading. He has written or edited six books, including Multidisciplinary Perspectives on Literacy, and more than 100 articles and research studies. Dr. Shanahan's research focuses on the relationship of reading and writing, school improvement, the assessment of reading ability, and family literacy. He chaired the National Literacy Panel on Language Minority Children and Youth and the National Early Literacy Panel.

Dr. Sylvia Linan-Thompson is an associate professor and Fellow in the Mollie V. Davis Professorship in Learning Disabilities at The University of Texas at Austin, and director of the Vaughn Gross Center for Reading and Language Arts. She is associate director of the National Research and Development Center on English Language Learners, which is examining the effect of instructional practices that enhance vocabulary and comprehension for middle school English learners in content areas. She has developed and examined reading interventions for struggling readers who are monolingual English speakers, English learners, and bilingual students acquiring Spanish literacy.

Dr. Penny Collins (formerly Chiappe) is an assistant professor in the Department of Education at the University of California, Irvine. Her research examines the development of reading skills for children from linguistically diverse backgrounds and the early identification of children at risk for reading difficulties. She is involved in projects on effective instructional interventions to promote academic success for English learners in elementary, middle, and secondary schools. Dr. Collins is on the editorial boards of Journal of Learning Disabilities and Educational Psychology. Her work has appeared in Applied Psycholinguistics, Journal of Educational Psychology, Journal of Experimental Child Psychology, and Scientific Studies of Reading.

Dr. Robin Scarcella is a professor in the School of Humanities at the University of California, Irvine, where she also directs the Program of Academic English/ESL. She has taught English as a second language in California's elementary and secondary schools and colleges. She has written many research articles, appearing in such journals as The TESOL Quarterly and Studies in Second Language Acquisition, as well as in books. Her most recent volume, Accelerating Academic English, was published by the University of California.

Disclosure of potential conflicts of interest

Practice guide panels are composed of individuals who are nationally recognized experts on the topics about which they are rendering recommendations. IES expects that such experts will be involved professionally in a variety of matters that relate to their work as a panel. Panel members are asked to disclose their professional involvements and to institute deliberative processes that encourage critical examination of the views of panel members as they relate to the content of the practice guide. The potential influence of panel members' professional engagements is further muted by the requirement that they ground their recommendations in evidence that is documented in the practice guide. In addition, the practice guide is subjected to independent external peer review prior to publication, with particular focus on whether the evidence related to the recommendations in the practice guide has been appropriately presented.

The professional engagements reported by each panel member that appear most closely associated with the panel recommendations are noted below.

Dr. Gersten, the panel chair, is a coauthor of a forthcoming Houghton Mifflin K–6 reading series that includes material related to English learners. The reading series is not referenced in the practice guide.

Dr. Baker has an author agreement with Cambium Learning to produce an instructional module for English learners. This module is not written and is not referenced in the practice guide.

Dr. Linan-Thompson was one of the primary researchers on intervention studies that used the Proactive Reading curriculum, and she developed the ESL adaptations for the intervention. Linan-Thompson coauthored the research reports that are described in the guide.

Dr. Shanahan receives royalties on various curricula designed for elementary and middle school reading instruction, including Harcourt Achieve Elements of Reading Fluency (Grades 1–3); Macmillan McGraw-Hill Treasures (Grades K–6); and AGS Globe-Pearson AMP (Grades 6–8). None of these products, though widely used, are aimed specifically at the English learner instructional market (the focus of this practice guide). Macmillan publishes a separate program aimed at the English learner population. Shanahan is not involved in that program.

Dr. Scarcella provides ongoing teacher professional development services on academic vocabulary through the University of California Professional Development Institutes that are authorized by the California State Board of Education.

Introduction

The goal of this practice guide is to formulate specific and coherent evidence-based recommendations for use by educators addressing a multifaceted challenge that lacks developed or evaluated packaged approaches. The challenge is effective literacy instruction for English learners in the elementary grades. At one level, the target audience is a broad spectrum of school practitioners—administrators, curriculum specialists, coaches, staff development specialists, and teachers. At another level, a more specific objective is to reach district-level administrators with a practice guide that will help them develop practice and policy options for their schools. The guide includes specific recommendations for district administrators and indicates the quality of the evidence that supports these recommendations.

Our expectation is that a superintendent or curriculum director could use this practice guide to help make decisions about policy involving literacy instruction for English learners in the elementary grades. For example, we include recommendations on curriculum selection, sensible assessments for monitoring progress, and reasonable expectations for student achievement and growth. The guide provides practical and coherent information on critical topics related to literacy instruction for English learners.

We, the authors, are a small group with expertise on various dimensions of this topic. Several of us are also experts in research methodology. The range of evidence we considered in developing this document is vast, from expert analyses of curricula and programs, to case studies of seemingly effective classrooms and schools, to trends in the National Assessment of Educational Progress data, to correlational studies and longitudinal studies of patterns of typical development. For questions about what works best, high-quality experimental and quasi-experimental studies, such as those meeting the criteria of the What Works Clearinghouse, have a privileged position (www.whatworks.ed.gov). In all cases we pay particular attention to patterns of findings that are replicated across studies.

Although we draw on evidence about the effectiveness of specific programs and practices, we use this information to make broader points about improving practice. In this document we have tried to take a finding from research or a practice recommended by experts and describe how the use of this practice or recommendation might actually unfold in school settings. In other words, we aim to provide sufficient detail so that a curriculum director would have a clear sense of the steps necessary to make use of the recommendation.

A unique feature of practice guides is the explicit and clear delineation of the quality—as well as quantity—of evidence that supports each claim. To do this, we adapted a semistructured hierarchy suggested by the Institute of Education Sciences. This classification system uses both the quality and quantity of available evidence to help determine the strength of the evidence base in which each recommended practice is grounded (see table 1).

Strong refers to consistent and generalizable evidence that an approach or practice causes better outcomes for English learners or that an assessment is reliable and valid.

Moderate refers either to evidence from studies that allow strong causal conclusions but cannot be generalized with assurance to the population on which a recommendation is focused (perhaps because the findings have not been sufficiently replicated) or to evidence from studies that are generalizable but have more causal ambiguity than offered by experimental designs (such as statistical models of correlational data or group comparison designs where equivalence of the groups at pretest is uncertain). For the assessments, moderate refers to high-quality studies from a small number of samples that are not representative of the whole population.

Low refers to expert opinion based on reasonable extrapolations from research and theory on other topics and evidence from studies that do not meet the standards for moderate or strong evidence.

Table 1. Institute of Education Sciences Levels of Evidence

Strong

In general, characterization of the evidence for a recommendation as strong requires both studies with high internal validity (i.e., studies whose designs can support causal conclusions), as well as studies with high external validity (i.e., studies that in total include enough of the range of participants and settings on which the recommendation is focused to support the conclusion that the results can be generalized to those participants and settings). Strong evidence for this practice guide is operationalized as:

• A systematic review of research that generally meets the standards of the What Works Clearinghouse (see http://ies.ed.gov/ncee/wwc/) and supports the effectiveness of a program, practice, or approach with no contradictory evidence of similar quality; OR
• Several well-designed, randomized, controlled trials or well-designed quasi-experiments that generally meet the standards of the What Works Clearinghouse and support the effectiveness of a program, practice, or approach, with no contradictory evidence of similar quality; OR
• One large, well-designed, randomized, controlled, multisite trial that meets the standards of the What Works Clearinghouse and supports the effectiveness of a program, practice, or approach, with no contradictory evidence of similar quality; OR
• For assessments, evidence of reliability and validity that meets the Standards for Educational and Psychological Testing.

Moderate

In general, characterization of the evidence for a recommendation as moderate requires studies with high internal validity but moderate external validity, or studies with high external validity but moderate internal validity. In other words, moderate evidence is derived from studies that support strong causal conclusions but where generalization is uncertain, or studies that support the generality of a relationship but where the causality is uncertain. Moderate evidence for this practice guide is operationalized as:

• Experiments or quasi-experiments generally meeting the standards of the What Works Clearinghouse and supporting the effectiveness of a program, practice, or approach with small sample sizes and/or other conditions of implementation or analysis that limit generalizability, and no contrary evidence; OR
• Comparison group studies that do not demonstrate equivalence of groups at pretest and therefore do not meet the standards of the What Works Clearinghouse but that (a) consistently show enhanced outcomes for participants experiencing a particular program, practice, or approach and (b) have no major flaws related to internal validity other than lack of demonstrated equivalence at pretest (e.g., only one teacher or one class per condition, unequal amounts of instructional time, highly biased outcome measures); OR
• Correlational research with strong statistical controls for selection bias and for discerning influence of endogenous factors and no contrary evidence; OR
• For assessments, evidence of reliability that meets the Standards for Educational and Psychological Testing but with evidence of validity from samples not adequately representative of the population on which the recommendation is focused.

Low

In general, characterization of the evidence for a recommendation as low means that the recommendation is based on expert opinion derived from strong findings or theories in related areas and/or expert opinion buttressed by direct evidence that does not rise to the moderate or strong levels. Low evidence is operationalized as evidence not meeting the standards for the moderate or high levels.

Source: American Educational Research Association, American Psychological Association, and National Council on Measurement in Education (1999).

The What Works Clearinghouse standards and their relevance to this guide

In terms of the levels of evidence indicated in table 1, we rely on the What Works Clearinghouse (WWC) Evidence Standards to assess the quality of evidence supporting educational programs and practices. The WWC addresses evidence for the causal validity of instructional programs and practices according to WWC Standards. Information about these standards is available on the What Works Clearinghouse website (http://ies.ed.gov/ncee/wwc/). The technical quality of each study is rated and placed into one of three categories:

(a) Meets Evidence Standards for randomized controlled trials and regression discontinuity studies that provide the strongest evidence of causal validity;

(b) Meets Evidence Standards with Reservations for all quasi-experimental studies with no design flaws and randomized controlled trials that have problems with randomization, attrition, or disruption; and

(c) Does Not Meet Evidence Screens for
