Is a consolation prize better than no prize at all? That's the question American educators might ponder with this week's release of the PISA 2012 problem-solving-assessment results. In its very first iteration, the computer-based test was administered to a subsample of the students assessed in PISA's core subjects: just over 6,000 U.S. students took the core PISA tests, and 1,273 of those also took the problem-solving test (see the U.S. snapshot). In all, about 85,000 fifteen-year-olds in forty-four countries participated. The test measured students' creative problem-solving skills with real-life problems, such as "an unfamiliar vending machine or a malfunctioning electronic device." (This is no April Fools' Day joke.) These tasks are quite different from the problem-solving questions on the core assessments, which are more academic in nature.

The results were closely correlated with math, science, and reading scores: Singapore, South Korea, and Japan topped the list; the U.S. was just above the OECD average; and Finland, Canada, and Australia fell in between. The one potential bright spot for the U.S., and the source of its consolation prize, is our "relative performance," defined as the difference between a country's observed problem-solving score and the score one would expect based on its PISA core-subject results. The U.S. had the fourth-highest relative performance and was one of only nine countries whose problem-solving scores were higher than expected by a statistically significant margin.

The trophy, however, is made of cheap plastic. According to the OECD, in countries with low overall performance, like the U.S., these higher-than-expected scores may indicate that schools are leaving students with unrealized potential. What a shame.
SOURCE: OECD, PISA 2012 Results: Creative Problem Solving (OECD, April 2014).