From the cognitive reasoning sample test
What does the "pattern-recognition matrix" cognitive-reasoning question test?
Note on framing: This is the cog_sample_4 item-level explainer for the AIEH cognitive-reasoning sample-test family. The item is a matrix-style abstract-reasoning probe in the lineage of Raven’s Progressive Matrices (Raven 1938; Raven, Raven & Court 2003), the most-cited applied measure of fluid intelligence (Gf) in the cognitive-abilities literature.
This item presents a 3×3 grid of abstract figures with the bottom-right cell missing. Each row and each column exhibits a transformation pattern: the figures change in shape, fill, rotation, count, or position according to a rule that is consistent across the row and column. The candidate must identify the missing figure that completes both the row pattern and the column pattern simultaneously. The scenario probes abstract pattern recognition under multi-rule constraint — the canonical operationalization of fluid intelligence (Gf) in the Carroll three-stratum taxonomy.
What this question tests
Matrix-reasoning items target a specific cognitive construct: the ability to infer abstract rules from limited examples and apply them under simultaneous multiple-constraint pressure. The skill is not about memorized patterns or domain knowledge — by design, the figures are abstract and the rules are novel — but about the on-the-fly rule-induction capacity that defines fluid intelligence. Strong respondents identify multiple candidate rules per row and column, evaluate which rule generates a consistent prediction across all observed cells, and select the missing figure that satisfies the joint constraint.
The construct’s predictive validity for job performance is well-documented. Schmidt & Hunter’s 1998 meta-analysis identified general mental ability (g) — of which Gf is a primary stratum-II component — as the single strongest predictor of job performance across occupational levels, with corrected validity coefficients of approximately 0.65 for complex professional and managerial roles and 0.40–0.50 for medium-complexity roles. Matrix-reasoning items are the most-validated applied probe of Gf because they isolate the rule-induction skill from language-comprehension and acquired-knowledge confounds that contaminate verbal reasoning items.
The skill matters in modern work because most knowledge-work tasks involve novel-rule application: a new tool’s behavior, a new dataset’s structure, a new business process’s constraints, a new framework’s API conventions. Workers who score high on matrix reasoning typically acquire mastery of new-rule systems faster than workers who score low, even controlling for prior domain experience. This pattern — sometimes called transfer learning capacity in the educational-psychology literature — is what matrix items operationalize as a hiring signal.
Why this is the right answer (concrete worked example)
The correct answer is the figure that satisfies the joint row-rule and column-rule simultaneously. The reasoning process proceeds in three structured steps that strong respondents apply reflexively.
A worked illustration: suppose the matrix presents shapes that vary by shape category (circle, square, triangle) across columns and by fill density (empty, half-filled, fully-filled) across rows. Row 1 reads: empty-circle, empty-square, empty-triangle. Row 2 reads: half-filled-circle, half-filled-square, half-filled-triangle. Row 3 reads: fully-filled-circle, fully-filled-square, [?].
Step 1: identify the row rule. Each row holds fill density constant while shape category varies through circle → square → triangle. Row 3’s fill density is “fully filled” throughout.
Step 2: identify the column rule. Each column holds shape category constant while fill density varies through empty → half-filled → fully-filled. Column 3’s shape category is “triangle” throughout.
Step 3: apply both rules to the missing cell. Row 3, Column 3 must be fully-filled (from the row rule) AND a triangle (from the column rule). The correct answer is a fully-filled triangle.
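The three steps above can be sketched directly in code. This is a minimal Python illustration; the (fill, shape) tuple encoding is ours for exposition and not part of any assessment tooling.

```python
# Encode each cell of the worked example as a (fill, shape) pair.
FILLS = ["empty", "half-filled", "fully-filled"]   # fill varies down columns
SHAPES = ["circle", "square", "triangle"]          # shape varies across rows

# Build the 3x3 matrix, then blank the bottom-right cell.
matrix = [[(fill, shape) for shape in SHAPES] for fill in FILLS]
matrix[2][2] = None

# Step 1: the row rule -- fill density is constant within Row 3.
row_fill = matrix[2][0][0]      # "fully-filled"

# Step 2: the column rule -- shape category is constant within Column 3.
col_shape = matrix[0][2][1]     # "triangle"

# Step 3: the missing cell must satisfy both constraints at once.
answer = (row_fill, col_shape)
print(answer)  # ('fully-filled', 'triangle')
```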
The reasoning generalizes to harder matrix items where the rules involve multiple simultaneous transformations (rotation + count + fill, or shape + size + position) or where the rules are not pure row/column rules but diagonal or compositional patterns. Strong respondents apply the same three-step structure: identify candidate rules, test each against observed cells for consistency, generate the prediction that satisfies the maximal consistent rule set.
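That generalized structure can be expressed as a filter over candidate rules: keep every rule consistent with all observed cells, then let a surviving rule generate the prediction. Below is a sketch over the worked example's toy rule space; the rule names and rule set are illustrative, not drawn from any real item bank.

```python
# A toy "rule" maps a (row, col) position to a predicted (fill, shape) cell.
FILLS = ["empty", "half-filled", "fully-filled"]
SHAPES = ["circle", "square", "triangle"]

candidate_rules = {
    "fill-by-row, shape-by-col": lambda r, c: (FILLS[r], SHAPES[c]),
    "fill-by-col, shape-by-row": lambda r, c: (FILLS[c], SHAPES[r]),
    "constant circle":           lambda r, c: (FILLS[r], "circle"),
}

# All eight observed cells of the worked example (bottom-right is missing).
observed = {(r, c): (FILLS[r], SHAPES[c])
            for r in range(3) for c in range(3) if (r, c) != (2, 2)}

# Keep only the rules consistent with every observed cell.
consistent = [name for name, rule in candidate_rules.items()
              if all(rule(r, c) == cell for (r, c), cell in observed.items())]

# Generate the prediction for the missing cell from a surviving rule.
prediction = candidate_rules[consistent[0]](2, 2)
print(consistent, prediction)
```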
The item discriminates strong from weak respondents not on whether the right rule is identifiable in principle — both populations can usually verbalize the rule when shown the answer — but on whether the candidate can generate the rule under time pressure with no guidance. The generation gap is the Gf signal.
What the wrong answers reveal
Matrix-reasoning distractors are constructed to map to specific failure modes:
- A figure that satisfies the row rule but violates the column rule. A respondent who picks this distractor has identified one rule but failed to check it against the other. The pattern signals partial Gf competence: rule-identification is intact but multi-constraint-checking is incomplete.
- A figure that satisfies the column rule but violates the row rule. The mirror failure of the previous pattern: the respondent has identified the column rule but not the row rule. Both single-rule failures indicate that the respondent attempted the item but did not complete the joint-constraint evaluation.
- A figure that matches a salient surface feature from Row 3 (e.g., a fully-filled circle). A respondent who picks this distractor has pattern-matched on superficial similarity rather than rule-application. The pattern signals weak Gf: the respondent is reasoning by analogy-to-adjacent-cell rather than by rule-induction.
- A figure that does not appear in the matrix at all (e.g., an empty pentagon). A respondent who picks this distractor has either misread the question or has no consistent rule-induction process. The pattern signals either careless reading or fundamental Gf failure on this item.
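Using the worked example's encoding, the distractor-to-failure-mode mapping reads as a simple lookup. The labels here are illustrative shorthand for the failure modes above, not AIEH's internal taxonomy.

```python
# Candidate answers keyed by (fill, shape); values are hypothetical
# failure-mode labels matching the distractor analysis above.
failure_modes = {
    ("fully-filled", "triangle"): "correct: joint constraint satisfied",
    ("fully-filled", "square"):   "row rule only: column check skipped",
    ("half-filled", "triangle"):  "column rule only: row check skipped",
    ("fully-filled", "circle"):   "surface match to adjacent Row 3 cell",
    ("empty", "pentagon"):        "no consistent rule-induction process",
}

print(failure_modes[("fully-filled", "square")])
```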
How the sample test scores you
In the AIEH 5-item cognitive-reasoning sample test, this item contributes one of five datapoints aggregated into the single cognitive_reasoning score via the W3.2 normalize-by-count threshold. Binary scoring per item: full credit for the joint-constraint-satisfying answer, zero credit for any distractor.
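A minimal sketch of that aggregation, assuming "normalize-by-count" means a simple proportion-correct over the five binary items; the function name and that reading of the W3.2 step are our assumptions.

```python
def cognitive_reasoning_score(item_results: list[bool]) -> float:
    """Hypothetical sketch: binary items averaged, per the
    normalize-by-count interpretation assumed here."""
    # 1 for the joint-constraint-satisfying answer, 0 for any distractor.
    return sum(item_results) / len(item_results)

# A 5-item sample test with this matrix item answered correctly:
print(cognitive_reasoning_score([True, True, False, True, True]))  # 0.8
```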
Data Notice: Sample-test results are directional indicators only. Matrix-reasoning items are among the most-validated cognitive-ability probes in the psychometric literature, but a 5-item sample is too short to reliably distinguish moderate from strong Gf performance. For a verified Skills Passport credential, take the full 50-item assessment.
The full assessment probes deductive reasoning, inductive reasoning, probabilistic reasoning, base-rate reasoning, causal reasoning, and matrix-style abstract reasoning across 50 items. See the scoring methodology for how cognitive-reasoning scores map onto the AIEH 300–850 Skills Passport scale, and the cognitive ability in hiring overview for context on Gf’s predictive validity for job performance.
Related concepts
- Raven’s Progressive Matrices. The canonical matrix-reasoning instrument, developed by John C. Raven in 1938 and refined through multiple revisions. Raven’s matrices are the most-cited applied measure of fluid intelligence and the prototype for matrix-style items across modern cognitive-ability assessments.
- Carroll’s three-stratum theory. The dominant contemporary cognitive-abilities taxonomy. Fluid reasoning (Gf) sits at stratum II (broad abilities) alongside crystallized intelligence (Gc), processing speed (Gs), and short-term memory (Gsm). Matrix items are the highest-loading Gf indicators in factor-analytic studies.
- Working-memory capacity. A cognitive construct closely related to Gf — strong respondents on matrix items typically also score high on working-memory tasks because rule-induction requires holding multiple candidate rules in active memory. The relationship is empirically strong but theoretically distinct.
- The Flynn effect. The well-documented phenomenon of rising IQ test scores across generations, particularly pronounced on matrix-style abstract-reasoning items. The effect indicates that matrix performance is not purely fixed by genetics but responds to environmental factors including formal education and exposure to abstract problem-solving.
- Schmidt & Hunter 1998 cognitive-ability validity. The meta-analysis establishing cognitive ability’s predictive validity for job performance. Matrix items are a high-fidelity probe of the underlying Gf construct that drives much of the validity coefficient.
For context on cognitive-ability validity in hiring, see the cognitive ability in hiring overview, the skills-based hiring evidence page, and the assess page for the full assessment workflow.
Sources
- Carroll, J. B. (1993). Human Cognitive Abilities: A Survey of Factor-Analytic Studies. Cambridge University Press.
- Raven, J. C. (1938). Progressive Matrices: A Perceptual Test of Intelligence. H. K. Lewis.
- Raven, J., Raven, J. C., & Court, J. H. (2003). Manual for Raven’s Progressive Matrices and Vocabulary Scales. Pearson Assessment.
- Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology. Psychological Bulletin, 124(2), 262–274.