This is a guest post by Tom Vander Ark that was originally posted on EdReformer.
Fordham is launching a series of working papers on digital learning. Rick Hess makes an important contribution with the first paper, focused on quality.
For the hundred experts who contributed to Digital Learning Now, this was the thorniest issue. To a person, they expressed interest in quality but wrestled with the limitations and barriers of the input-driven approaches common to education. The final report points to outcome-oriented approaches but doesn't provide much detail.
Hess makes a solid contribution by outlining input-oriented, outcome-driven, and market-based approaches to promoting quality. He makes clear the shortcomings of applying input controls to digital learning. Teacher certification strategies don't seem to add much value, and attempts to certify teachers in online and blended learning strategies would remain hopelessly out of date with best practice. Applying a textbook review process to dynamic and adaptive digital content libraries would dampen innovation, limit access, and do little to assure quality.
Hess is more hopeful about outcome-driven approaches. But, as Cisco's John Behrens told me Friday, we're still operating from a data-poverty mindset. I think John would find the outcome section of the paper an example of attempting to use old testing strategies to measure new learning experiences. An example of a data-poverty mindset is relying on one multiple-choice test for a variety of purposes. To get outcome accountability right, we need to start from a data-abundance mindset, assuming that there will be more and better data soon, and build dynamic systems updated every year as new data sources and new correlation strategies are developed.
Hess points to merit badges as an example of a promising learning certification strategy. There are two strategies for moving badging beyond matriculation management to quality control:
1) A badge system could fit within an end-of-course system (where end-of-course exams are available on demand or frequently scheduled). A semester course might include 6-12 badges signifying mastery of subskills. This would be the simplest and fastest approach, but students would be subjected to duplicative testing.
2) A badge system (or any comprehensive instruction/assessment system) could replace end-of-course exams for districts/networks that petition the state. Because a badge system would provide about 100 times more formative and benchmark data than traditional end-of-course exams, it should be possible to produce sufficient comparability.
The latter approach could lead to an assessment marketplace where a state (or region) has several approved assessment frameworks. As next-generation platform ecosystems emerge, each will develop a comprehensive assessment system, support individual progress models, offer some kind of recognition system (e.g., badges or dashboards), and maintain sophisticated student profiles. Instead of 50 different thin state testing systems, we may see several dozen rich assessment systems emerge. And once most schools shift to personal digital learning, with lots of embedded assessment, states can shift to adaptive assessment and matrix sampling strategies for lighter-weight and cheaper quality monitoring.
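To make the matrix-sampling idea concrete, here is a minimal sketch of how it might work: no single student answers the whole item bank, but pooled responses across students still cover every item, so a school-level estimate can be produced with far less testing time per student. The item bank, form size, and simulated responses below are hypothetical.

```python
import random
from statistics import mean

# Hypothetical item bank: 60 standards-aligned items, far more than any
# one student would be asked to answer on a single occasion.
ITEM_BANK = [f"item_{i:02d}" for i in range(60)]
FORM_SIZE = 12  # each student sees only a short form

def build_forms(item_bank, form_size):
    """Split the item bank into non-overlapping short forms."""
    shuffled = random.sample(item_bank, len(item_bank))
    return [shuffled[i:i + form_size] for i in range(0, len(shuffled), form_size)]

def administer(students, forms, prob_correct):
    """Assign each student one form and simulate responses.

    prob_correct maps item -> chance a typical student answers correctly
    (a stand-in for real response data).
    """
    responses = {}  # item -> list of 0/1 responses pooled across students
    for idx, student in enumerate(students):
        form = forms[idx % len(forms)]
        for item in form:
            responses.setdefault(item, []).append(
                1 if random.random() < prob_correct[item] else 0)
    return responses

def school_estimate(responses):
    """Estimate school-level proficiency as the mean proportion correct per item."""
    return mean(mean(r) for r in responses.values() if r)

if __name__ == "__main__":
    random.seed(7)
    students = [f"s{i}" for i in range(200)]
    forms = build_forms(ITEM_BANK, FORM_SIZE)
    prob_correct = {item: random.uniform(0.4, 0.9) for item in ITEM_BANK}
    pooled = administer(students, forms, prob_correct)
    print(f"{len(forms)} forms of {FORM_SIZE} items; "
          f"estimated school proficiency: {school_estimate(pooled):.2f}")
```

Each student answers only a fraction of the bank, yet the pooled data still covers every standard, which is what makes this kind of monitoring lighter-weight and cheaper than testing everyone on everything.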
The paper also reviews a variety of market-driven approaches to quality, including the user rating systems now common for restaurants, music, and books. Market- and learning-driven quality schemes will require a portable student record. When a full motivational profile drives a smart recommendation engine, students will be empowered to make better learning choices in a policy environment with choice down to the course/unit level and fractional funding.
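As a rough illustration of how a motivational profile might drive a recommendation engine over a portable student record, here is a toy sketch; the profile fields, content catalog, and scoring weights are invented for this example and do not describe any real platform.

```python
# Hypothetical learner profile and content catalog; field names and weights
# are illustrative only.
profile = {
    "mastered_standards": {"6.RP.1", "6.RP.2"},
    "target_standard": "6.RP.3",
    "prefers": {"game": 0.8, "video": 0.5, "text": 0.2},  # motivational profile
}

catalog = [
    {"title": "Ratio Blaster",    "standard": "6.RP.3", "format": "game",  "rating": 4.6},
    {"title": "Ratios Explained", "standard": "6.RP.3", "format": "video", "rating": 4.1},
    {"title": "Unit Rate Reader", "standard": "6.RP.3", "format": "text",  "rating": 4.4},
    {"title": "Fraction Frenzy",  "standard": "5.NF.1", "format": "game",  "rating": 4.8},
]

def score(item, profile):
    """Blend standards alignment, motivational fit, and user ratings."""
    if item["standard"] != profile["target_standard"]:
        return 0.0  # only recommend content for the next standard to master
    motivation = profile["prefers"].get(item["format"], 0.3)
    return 0.6 * motivation + 0.4 * (item["rating"] / 5)

recommendations = sorted(catalog, key=lambda i: score(i, profile), reverse=True)
for item in recommendations[:3]:
    print(f"{score(item, profile):.2f}  {item['title']} ({item['format']})")
```

A real engine would learn those weights from engagement and outcome data rather than hard-coding them, but the basic flow of profile plus record in, ranked choices out, is the same.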
Both outcome accountability and user-driven quality measures will be based on Core-aligned micro-assessments. They are months, not years, away and are coming from two sources. First, most new content incorporates assessment, either embedded (e.g., learning games like MangaHigh.com) or end-of-unit (as is common with K12, Connections, Apex, Compass, e2020, etc.). Second, new assessment systems, like those announced by Pearson, HMH, and McGraw, include Core-aligned sequences of online math and English assessments. Some of these are benchmark assessments and some are adaptive.
Tagging schemes like the recently announced CC-AEP are bringing some uniformity on the content side. What would help next:
- definition of a common portable electronic student record (a minimal sketch follows this list)
- a couple of great Core-aligned gradebooks that (in addition to the portable record) store and track all standards-aligned feedback and learning artifacts
- a couple of examples of family-managed learner profiles that combine gradebook data and feedback from informal learning sources
- R&D on motivational profiles (using learner profile data)
- R&D on recommendation engines
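As promised above, here is a minimal sketch of what a common portable electronic student record might contain: standards-aligned evidence from many sources plus earned badges, queryable for mastery. The field names are hypothetical, not a proposed standard.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical schema for a portable electronic student record; field names
# are illustrative only, not a proposed standard.

@dataclass
class StandardEvidence:
    standard_id: str          # e.g., a Common Core code like "6.RP.3"
    source: str               # course, learning game, tutor, informal provider, etc.
    score: float              # normalized 0-1 mastery estimate
    assessed_on: date

@dataclass
class Badge:
    name: str
    issuer: str
    standards: List[str]
    earned_on: date

@dataclass
class PortableStudentRecord:
    student_id: str           # stable identifier that travels with the student
    evidence: List[StandardEvidence] = field(default_factory=list)
    badges: List[Badge] = field(default_factory=list)

    def mastery(self, standard_id: str) -> float:
        """Best mastery estimate for a standard across all sources."""
        scores = [e.score for e in self.evidence if e.standard_id == standard_id]
        return max(scores, default=0.0)
```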
Public/private partnerships and market-shaping mechanisms like prizes would help on the last three or four items.
Finally, rather than relying solely on a year-end multiple-choice test, outcome accountability systems will require new ways to compare big data sets and new definitions of comparability. As noted above, personal digital learning demands a new definition of comparability: not as sameness (tactical comparability), but as correlated data sets (strategic comparability). The same test on the same day made sense when we dealt with a small n (a few dozen questions), but with a giant n we can use data-mining strategies and sampling techniques to monitor and compare academic progress.
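One hedged sketch of what strategic comparability might look like in practice: instead of requiring identical tests, a state could check how strongly two approved assessment systems' scores correlate for students who appear in both data sets. The scores below are invented for illustration.

```python
from math import sqrt

# Hypothetical scores for the same students under two approved assessment
# systems (say, a badge-based system and a traditional end-of-course exam).
system_a = {"s1": 0.82, "s2": 0.55, "s3": 0.91, "s4": 0.40, "s5": 0.73, "s6": 0.66}
system_b = {"s1": 0.78, "s2": 0.60, "s3": 0.88, "s4": 0.35, "s5": 0.70, "s6": 0.71}

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

shared = sorted(set(system_a) & set(system_b))
r = pearson([system_a[s] for s in shared], [system_b[s] for s in shared])
print(f"Correlation across {len(shared)} shared students: r = {r:.2f}")
```

A consistently high correlation on overlapping samples, sustained over time, is one way different systems could be accepted as comparable without being the same.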
Practically speaking, states should continue to authorize statewide online learning providers based on their track record of producing academic results. They should adopt growth models to measure value-added. With the help of the RttT-funded state assessment consortia, states should implement assessment frameworks that evolve over time to incorporate more data as it becomes widely available. Performance contracting and outcome monitoring, not a new round of input barriers, will be key to quality learning.
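For readers unfamiliar with growth models, here is a minimal residual-gain sketch of the idea: regress current scores on prior scores and treat a provider's mean residual as a rough value-added estimate. The providers and scores are made up, and real models control for far more student characteristics.

```python
# Minimal sketch of a residual-gain growth model; data are invented and a
# real value-added model would include many more controls.
students = [
    # (provider, prior_score, current_score)
    ("Online Provider A", 520, 560),
    ("Online Provider A", 480, 530),
    ("Online Provider A", 600, 625),
    ("Online Provider B", 510, 515),
    ("Online Provider B", 495, 505),
    ("Online Provider B", 605, 600),
]

prior = [p for _, p, _ in students]
current = [c for _, _, c in students]
n = len(students)

# Ordinary least squares fit: current ~ intercept + slope * prior
mean_p, mean_c = sum(prior) / n, sum(current) / n
slope = (sum((p - mean_p) * (c - mean_c) for _, p, c in students)
         / sum((p - mean_p) ** 2 for p in prior))
intercept = mean_c - slope * mean_p

# Value-added: how far each provider's students land above or below the
# growth predicted from their prior scores.
value_added = {}
for provider, p, c in students:
    residual = c - (intercept + slope * p)
    value_added.setdefault(provider, []).append(residual)

for provider, residuals in value_added.items():
    print(f"{provider}: mean value-added = {sum(residuals)/len(residuals):+.1f}")
```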
Tom Vander Ark is the founding blogger of EdReformer, CEO of Open Education Solutions, and a partner in Learn Capital, an early-stage learning venture fund. Tom is a former public school superintendent, grant-maker, and business executive. He chairs iNACOL and is a director of several nonprofits including LA's Promise and Strive for College.