There has been increasing attention on skills considered important for success in school, work, and life: so-called "21st century skills". An extensive literature, both technical and non-technical, suggests that skills such as problem-solving and resilience predict future outcomes in school and in the workplace. If such skills are important for individuals to master, they are also important to assess.

However, though a substantial body of research has examined the relationships between particular skills and outcomes, the constructs of interest are often measured by a traditional means of assessment: self-reported questionnaire responses. Self-reports of skills and abilities are prone to bias from at least two sources. First, respondents may score themselves higher on skills and abilities that are culturally valued (cultural desirability). Second, respondents' answers may be influenced by their perceptions of the skill levels of those around them (reference bias). Both issues can produce misleading responses to self-report items.

How, then, can we measure such skills? Stealth assessment, in which participants are unknowingly scored on skills and abilities while undertaking intentionally structured tasks, may be a response to the sorts of bias described above. The claim is that a more authentic inventory of skills can be obtained when participants are asked to exhibit those skills within a highly structured environment and when data collection is unobtrusive, meaning it does not interrupt the structured task.

This makes intuitive sense, at least from a conceptual point of view. Of the 21st century skills considered important for success in school, work, and life, some can be labelled cognitive, some non-cognitive, and some cross-cutting (skills that include both cognitive and non-cognitive aspects). Across these three types, some skills are amenable to (and probably best measured by) "traditional" paper-and-pencil or computer-adaptive testing, while others are not. For example, it seems evident that cognitive ability in reading or mathematics can be reliably and validly measured through traditional means of assessment, by presenting domain-specific tasks and scoring individuals on their performance. Even skills (or self-perceptions) such as self-efficacy (the belief in one's capacity to execute the behaviors needed to succeed at certain tasks) are probably amenable to simply asking individuals about their self-perceptions. It is less evident, however, that traditional assessments capture reliable and valid evidence on non-cognitive or cross-cutting skills that must be exhibited to be observed. Skills that remain latent until they are enacted to accomplish a task in a given situation, such as teamwork or resilience, may therefore be good candidates for highly structured simulations and stealth assessment.

RTI International has been working to fill this gap by developing CurrentMobile, a suite of game-based assessment modules that measure skills critical for participation in school, work, and life. The assessment modules allow users to exhibit (rather than self-report) a set of skills within authentic scenarios drawn from home and work situations. Data collected by the games can be combined with other cognitive data (such as reading and mathematics assessments) to produce a holistic view of the applied skills and competencies held by young people. The assessment uses tablet technology and is designed for use with youth in peri-urban settings in low- and middle-income contexts.

The assessment modules were pilot tested in Rabat, Morocco in May 2016. We recruited 100 young people (15 years old) to join one of 10 assessment sessions held over the course of a week. During the sessions, participants played both game modules, completed a mathematics and reading assessment, and responded to a questionnaire covering many of the same employability skills measured by the game modules. Prior to participation, all participants were required to have their parents complete a parent version of the employability skills questionnaire and return it to the session (the response rate for the parent questionnaire was 100%). The purpose of the pilot was to test whether the games functioned as intended in a live setting and whether participants understood the games and could use the technology, to identify any bugs that needed to be fixed before further testing or use, and to test scoring approaches based on data obtained in the field.

Currently, the modules attempt to elicit evidence on several facets of problem-solving, task completion, and time management. The modules are continuing to be tested and refined, and additional modules will be added over time.

To learn more about the work of the Education Technologies and Training team, see our list of publications.

About the Expert

Lee Nordstrum was a Research Education Analyst in the International Education Division of RTI International's International Development Group until April 2018. He holds a PhD and an MPhil in Education Research from the University of Cambridge, England. His research has focused on education finance and school fees, improvement science in education, and value-added teacher evaluation mechanisms.