# Mathematics Instruction for Students With Learning Disabilities: A Meta-Analysis of Instructional Components

Russell Gersten (Instructional Research Group), David J. Chard (Southern Methodist University), Madhavi Jayanthi (Instructional Research Group), Scott K. Baker (Pacific Institutes for Research and University of Oregon), Paul Morphy (Vanderbilt University), and Jonathan Flojo (University of California at Irvine)

Source: Review of Educational Research, September 2009, Vol. 79, No. 3, pp. 1202-1242. Published by the American Educational Research Association. Stable URL: https://www.jstor.org/stable/40469093. DOI: 10.3102/0034654309334431. © 2009 AERA. http://rer.aera.net. This content downloaded from 208.95.48.49 on Sat, 29 May 2021 13:47:33 UTC; all use subject to https://about.jstor.org/terms.

## Abstract

The purpose of this meta-analysis was to synthesize findings from 42 interventions (randomized control trials and quasi-experimental studies) on instructional approaches that enhance the mathematics proficiency of students with learning disabilities.
We examined the impact of four categories of instructional components: (a) approaches to instruction and/or curriculum design, (b) formative assessment data and feedback to teachers on students' mathematics performance, (c) formative data and feedback to students with LD on their performance, and (d) peer-assisted mathematics instruction. All instructional components except for student feedback with goal-setting and peer-assisted learning within a class resulted in significant mean effects ranging from 0.21 to 1.56. We also examined the effectiveness of these components conditionally, using hierarchical multiple regressions. Two instructional components provided practically and statistically important increases in effect size: teaching students to use heuristics, and explicit instruction. Limitations of the study, suggestions for future research, and applications for improvement of current practice are discussed.

KEYWORDS: mathematical education, special education, meta-analysis.

Current prevalence estimates for students with learning disabilities (LD) and deficits in mathematics competencies typically range from 5% to 7% of the school-age population (Geary, 2003; Gross-Tsur, Manor, & Shalev, 1996; L. S. Fuchs et al., 2005; Ostad, 1998). When juxtaposed with the well-documented inadequate mathematics performance of students with learning disabilities (Bryant, Bryant, & Hammill, 2000; Cawley, Parmar, Yan, & Miller, 1998; Geary, 2003), these estimates highlight the need for effective mathematics instruction based on empirically validated strategies and techniques. Until recently, mathematics instruction was often treated as an afterthought in the field of instructional research on students with learning disabilities.
A recent review of the ERIC literature base (Gersten, Clarke, & Mazzocco, 2007) found that the ratio of studies on reading disabilities to mathematics disabilities and difficulties was 5:1 for the decade 1996-2005. This was a dramatic improvement over the ratio of 16:1 in the prior decade.

During the past 5 years, two important bodies of research have emerged and helped crystallize mathematics instruction for students with learning disabilities. The first, which is descriptive, focuses on student characteristics that appear to underlie learning disabilities in mathematics. The second, which is experimental and the focus of this meta-analysis, addresses instructional interventions for students with learning disabilities.

We chose to conduct a meta-analysis of interventions for students with learning disabilities, sorting studies by major types of instructional variables, rather than conduct a historical, narrative review of the various intervention studies. Although three recent research syntheses (Kroesbergen & Van Luit, 2003; Swanson & Hoskyn, 1998; Xin & Jitendra, 1999) involving meta-analytic procedures target aspects of instruction for students experiencing mathematics difficulties, major questions remain unanswered.

Swanson and Hoskyn (1998) investigated the effects of a vast array of interventions on the performance of adolescents with LD in areas related to academics, social skills, or cognitive functioning. They conducted a meta-analysis of experimental intervention research on students with LD. Their results highlight the beneficial impact of cognitive strategies and direct instruction models in many academic domains, including mathematics. Swanson and Hoskyn organized studies based on whether there was a measurable outcome in a target area and whether some type of treatment was used to influence performance.
Swanson and Hoskyn were able to calculate the effectiveness of interventions on mathematics achievement for students with LD but did not address whether the treatment was an explicit focus of the study. A study investigating a behavior-modification intervention, for example, might examine impacts on both reading and mathematics performance. The link between the intervention and math achievement would be made even though the focus of the intervention was not to improve mathematics achievement per se. Thus, the Swanson and Hoskyn meta-analysis only indirectly investigated the effectiveness of mathematics interventions for students with LD.

The other two relevant syntheses conducted so far investigated math interventions directly (i.e., math intervention was the independent variable) but focused on a broader sample of participants experiencing difficulties in mathematics. Xin and Jitendra (1999) conducted a meta-analysis on word problem solving for students with high-incidence disabilities (i.e., students with learning disabilities, mild mental retardation, and emotional disturbance) as well as students without disabilities who were at risk for mathematics difficulties. Xin and Jitendra examined the impacts associated with four instructional techniques: representation techniques (diagramming), computer-assisted instruction, strategy training, and "other" (i.e., no instruction such as attention only, use of calculators, or instruction not included in other categories such as keyword or problem sequence). They included both group design and single-subject studies in their meta-analysis; the former were analyzed using standard mean change, whereas the latter were analyzed using percentage of nonoverlapping data (PND).
For group design studies, they found computer-assisted instruction to be most effective, and representation techniques and strategy training superior to "other."

Kroesbergen and Van Luit (2003) conducted a meta-analysis of mathematics interventions for elementary students with special needs (students at risk, students with learning disabilities, and low-achieving students). They examined interventions in the areas of preparatory mathematics, basic skills, and problem-solving strategies. They found interventions in the area of basic skills to be most effective. In terms of method of instruction for each intervention, direct instruction and self-instruction were found to be more effective than mediated instruction. Like Xin and Jitendra (1999), Kroesbergen and Van Luit included both single-subject and group design studies in their meta-analysis; however, they did not analyze data from these studies separately. We have reservations about the findings because the data-analytic procedures used led to inflated effect sizes in the single-subject studies (Busse, Kratochwill, & Elliott, 1995).

Neither of the two meta-analyses (i.e., Kroesbergen & Van Luit, 2003; Xin & Jitendra, 1999) focused specifically on students with learning disabilities. We believe there is relevant empirical support for a research synthesis that focuses on mathematical interventions conducted for students with learning disabilities. Our reasoning was most strongly supported by a study by D. Fuchs, Fuchs, Mathes, and Lipsey (2000), who conducted a meta-analysis in reading to explore whether students with LD could be reliably distinguished from students who were struggling in reading but were not identified as having a learning disability. D. Fuchs et al. found that students with LD performed significantly lower than low-achieving students without LD.
The average effect size differentiating these groups was 0.61 standard deviation units (Cohen's d), indicating that the achievement gap between the two groups was substantial. Given this evidence of differentiated performance between students with LD and low-achieving students without LD, we felt it was necessary to synthesize mathematical interventions conducted with students with LD specifically.

Our intent was to analyze and synthesize research using parametric statistical procedures (i.e., calculating effect sizes using Hedges' g). Calculating Hedges' g for studies with single-subject designs would result in extremely inflated effect sizes (Busse et al., 1995), making valid mean effect size calculations impossible. Because there is no known statistical procedure for validly combining single-subject and group design studies, we limited our meta-analysis (as do most researchers) to studies utilizing randomized control trials (RCTs) or high-quality quasi-experimental designs (QEDs).

## Purpose of the Meta-Analysis

The purpose of this study was to synthesize RCTs and quasi-experimental research on instructional approaches that enhance the mathematics performance of school-age students with learning disabilities. We included only RCTs and quasi-experimental designs in which there was at least one treatment and one comparison group, evidence of pretest comparability for QEDs, and sufficient data with which to calculate effect sizes.

## Method

### Selection of Studies: Literature Review

In this study, we defined mathematical interventions as instructional practices and activities designed to enhance the mathematics achievement of students with LD. We reviewed all studies published from January 1971 to August 2007 that focused on mathematics interventions to improve the mathematics proficiency of school-age students with LD.
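The effect size metric referenced above, Hedges' g, is not defined in the text. As a reference sketch (the formula below is the conventional definition of the statistic, not taken from this article): for a treatment group and a comparison group with means $\bar{X}_T$ and $\bar{X}_C$, standard deviations $s_T$ and $s_C$, and sample sizes $n_T$ and $n_C$,

```latex
g = \left(1 - \frac{3}{4(n_T + n_C) - 9}\right)
    \cdot \frac{\bar{X}_T - \bar{X}_C}{s_p},
\qquad
s_p = \sqrt{\frac{(n_T - 1)\,s_T^{2} + (n_C - 1)\,s_C^{2}}{n_T + n_C - 2}}
```

The leading factor is Hedges' small-sample correction, which shrinks the uncorrected standardized mean difference toward zero when groups are small. Note that computing $g$ requires group means, standard deviations, and sizes from each study, which is why the inclusion criteria below demand sufficient data with which to calculate effect sizes.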
Two searches for relevant studies were undertaken. The first covered 1971 to 1999; the second extended the time period to August 2007.

The 1971 to 1999 search began with a literature review using the ERIC and PsycINFO databases. The following combinations of descriptors were used: mathematics achievement, mathematics education, mathematics research, elementary education, secondary education, learning disabilities, and learning problems. We also conducted a systematic search of Dissertation Abstracts International and examined the bibliographies of research reviews on instructional intervention research involving students with learning disabilities (i.e., Kroesbergen & Van Luit, 2003; Maccini & Hughes, 1997; Mastropieri, Scruggs, & Shiah, 1991; Miller, Butler, & Lee, 1998; Swanson & Hoskyn, 1998; Swanson, Hoskyn, & Lee, 1999) for studies that may not have been retrieved by the computerized searches. Finally, we conducted a manual search of major journals in special, remedial, and elementary education (Journal of Special Education, Exceptional Children, Journal of Educational Psychology, Journal of Learning Disabilities, Learning Disability Quarterly, Remedial & Special Education, and Learning Disabilities Research & Practice) to locate relevant studies. This search identified 579 studies. Of this total, 194 studies were selected for further review based on analysis of the title, keywords, and abstract. Of these 194 studies, 30 (15%) met our criteria for inclusion in the meta-analysis.

We updated the search to August 2007 using a similar but streamlined procedure. For this search, we used the terms mathematics and LD or arithmetic and LD, and we excluded dissertations. The second search resulted in an additional pool of 494 potential studies. We narrowed this set to 38 by reviewing the title, keywords, and abstract. Of the 38 studies, 14 (37%) met the criteria for inclusion in the meta-analysis.
Thus, the two searches resulted in a total of 44 research studies. During the first search, two of the authors determined whether a study met the criteria for inclusion using a consensus model; any disagreements were reconciled. During the second round, this determination was made by the senior author. Another author independently examined 13 of the 38 studies (approximately one third). Interrater reliability for inclusion decisions was 84.6% (calculated by dividing the number of agreements by the sum of agreements and disagreements, then multiplying by 100). The authors initially disagreed on the inclusion of 2 of the 13 studies; however, after discussion they reached consensus.

To ensure that studies from both searches were included using the same criteria, we randomly selected 20% of the studies (N = 9) and calculated interrater reliability. A research assistant not involved in this project determined whether each study should be included based on the inclusion criteria. All 9 studies met the criteria for inclusion; interrater reliability was 100%.

### Criteria for Inclusion

Three criteria were used to determine study inclusion.

#### Focus of the Study

First, the focus of the study had to be an evaluation of the effectiveness of a well-defined method (or methods) for improving mathematics proficiency. This could be done in the following ways: (a) specific curricula or teaching approaches were used to improve mathematics instruction (e.g., teacher use of "think-aloud" learning strategies, use of real-world examples), (b) various classroom organizational or activity structures were used (e.g., peer-assisted learning), or (c) formative student assessment data were used to enhance instruction (e.g., curriculum-based measurement data, goal setting with students using formative data).
Studies that only examined the effect of test-taking strategies on math test scores, taught students computer-programming logic, or focused on computer-assisted instruction (i.e., technology) were not included. We felt that computer-assisted instruction would be more appropriate for a meta-analysis in the area of technology.

#### Design of the Study

Second, studies were included if strong claims of causal inference could be made, namely, randomized controlled trials or quasi-experimental designs. We noted whether the study was an RCT or a QED based on the presence of random assignment to the intervention conditions. Studies using single-case designs were not included because they are difficult to analyze using meta-analysis. Quasi-experiments were included if students were pretested on a relevant mathematics measure and one of the following three conditions was met: (a) researchers in the original study adjusted posttest performance using appropriate analysis of covariance (ANCOVA) techniques, (b) authors provided pretest data so that effect sizes could be calculated using the Wortman and Bryant (1985) procedure, or (c) if posttest scores could not be adjusted statistically for pretest performance differences, there was documentation showing that no significant differences (
