Talking Ed: SMU Experts Connecting Science and Practice

EVALUATING THE NATION’S REPORT CARD

Since 1969 the National Assessment of Educational Progress – known as “the Nation’s Report Card” – has provided the only continuing measurement of what American schoolchildren know and can do. Students report information on everything from how many magazine subscriptions their households receive to how many books they read to how much TV they watch. Such background data may help determine why school programs pass or fail.

The report’s current format, however, barely scratches the surface of the information available from the survey, says an SMU statistician who is refining the analysis of what schools are doing well and what isn’t working. “There’s a lot of data that isn’t being examined closely” simply because it doesn’t bear directly on the Report Card’s primary assessment, says Lynne Stokes, professor of statistical science in Dedman College of Humanities and Sciences and an expert in surveys and sampling.

The Nation’s Report Card obtains an average overall estimate for different groups, not measurements of individual students, Stokes says. “It doesn’t pin down everything a student in a given school or state knows. But all together, the data provides a solid estimate of who has either a basic or proficient grasp of academic skills.”

To give precise results for states and some urban districts – as well as for individual demographic groups such as African Americans, whites, and Hispanics – NAEP samples differentially. Such a sample may include more minority students than their share of an area’s population, so that the survey captures enough of that group to produce an accurate statewide estimate.

“It’s a nonrandom sample, because some people are given a higher chance of being selected than others,” Stokes says. “In an overall analysis, if you give all these data sets equal weight, you’ll have too many of one survey group and not enough of another.” For example, to obtain a reliable sample of students from sparsely populated Wyoming, the Report Card selects far more participants than Wyoming’s share of the U.S. population would suggest. Giving Wyoming’s data the same weight as that of, say, Texas would therefore produce an inaccurate picture of American children overall.
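A minimal sketch of that weighting idea, with scores, sample sizes, and weights invented purely for illustration (none of these numbers are NAEP data): each sampled student’s weight is the inverse of his or her chance of selection, so a weighted average restores each state to its true share of the national picture.

```python
import numpy as np

# Hypothetical toy data: a small oversampled state and a large undersampled one.
wyoming_scores = np.array([242, 238, 251, 247])   # oversampled relative to population
texas_scores = np.array([239, 233, 245])          # undersampled relative to population

# A survey weight is 1 / (probability of selection): each sampled Wyoming student
# "stands for" relatively few peers, each sampled Texas student for many more.
wyoming_weights = np.full(wyoming_scores.size, 120.0)
texas_weights = np.full(texas_scores.size, 9000.0)

scores = np.concatenate([wyoming_scores, texas_scores])
weights = np.concatenate([wyoming_weights, texas_weights])

# Equal weighting over-counts the oversampled state; the weighted mean does not.
print(f"Unweighted mean: {scores.mean():.1f}")
print(f"Weighted mean:   {np.average(scores, weights=weights):.1f}")
```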

To give each data set its proper weight, Stokes and SMU statistician Ian Harris are using a relatively new method called multilevel analysis, which “can help us learn how to measure things like proficiency variation from school to school, or what school characteristics are associated with higher or lower scores,” she says. “Do schools in states that have high-stakes testing do better than schools that don’t? Is the gap between blacks and whites narrowing in states that have certain kinds of policies more than in others? If you’re trying to reduce a gap, you have to understand why the gap is there. That’s what we’re working to answer.”
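A minimal sketch of a multilevel (random-intercept) model in that spirit, fit to simulated rather than real NAEP data and using the statsmodels library rather than the researchers’ own analysis: students are nested within schools, and the model separates school-to-school variation in scores from the effect of a hypothetical school-level policy flag.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate toy data: 40 schools, 25 students each. Every school gets its own
# baseline (a random intercept) plus a hypothetical "high-stakes testing" flag.
rng = np.random.default_rng(0)
n_schools, n_students = 40, 25
school = np.repeat(np.arange(n_schools), n_students)
high_stakes = np.repeat(rng.integers(0, 2, n_schools), n_students)
school_effect = np.repeat(rng.normal(0, 8, n_schools), n_students)
score = 250 + 5 * high_stakes + school_effect + rng.normal(0, 15, school.size)

df = pd.DataFrame({"score": score, "school": school, "high_stakes": high_stakes})

# Random-intercept model: a fixed effect for the policy, a random effect per school.
# The "Group Var" line in the summary estimates the school-to-school variance.
result = smf.mixedlm("score ~ high_stakes", df, groups=df["school"]).fit()
print(result.summary())
```

Extending such a model with student-level background variables – reading habits, demographics, and so on – is what allows analysts to ask the gap questions Stokes describes.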

Stokes, who received her M.S. and Ph.D. degrees in mathematical statistics from the University of North Carolina-Chapel Hill, joined SMU in 2001 and serves as associate editor of Survey Methodology and the Journal of the American Statistical Association; she also is chair-elect of the American Statistical Association’s Council on Sections. She and Harris are conducting their Nation’s Report Card research under a two-year grant from the U.S. Department of Education’s NAEP Secondary Analysis Program.

From SMU Research 2006 and reprinted with permission. For more information about SMU Research, contact editor Susan White.

New SAT Spells Trouble for Students with Dyslexia

If students with dyslexia have not been retested within the last three to five years, the College Board, which administers the SAT, will not grant them extra time on the new SAT.

“This is going to cause a real crisis for families,” says Karen Vickery, director of SMU’s Learning Therapy Program. “Most students with learning problems are tested only once during their school years, generally in the early grades. In addition, most school districts will not cover the costs of retesting.”