Lange, R., & Rindermann, H. (2026). Reading sub-competence correlations: Does the reading-specific factor represent a specific learning or metacognitive strategy? Evidence from 74 countries and regions in PISA 2009. Intelligence & Cognitive Abilities. https://doi.org/10.65550/001c.155776
  • Figure C1. Alternative factor models explaining the correlations among reading literacy subscales, and mathematics and science total scores

Abstract

A study of PISA data from 2009 revealed significant, positive, and high latent correlations between reading sub-competences, as well as between these sub-competences and mathematical and/or science competences, in nearly all of the 74 countries and federal states (.39 ≤ rlatent ≤ .96; overall mean rlatent = .83). These high to very high latent correlations indicate a weak empirical discriminability of the PISA competences. Moreover, as GDP per capita increases across countries and federal states, the latent correlations between these competences and/or subscales tend to become stronger.
Consistent with previous studies, these latent correlations could be statistically attributed to a general factor (PISA-g) and a reading-specific factor (RspecF) within a bi-factor model for each country and federal state. The standardized loadings of the reading competence subscales on PISA-g were stronger than those on RspecF (mean λPISA-g = .82 vs. mean λRspecF = .47). A close examination of the content of the PISA tasks and their cognitive demands (e.g., information acquisition, reasoning) indicates that PISA-g closely resembles a general cognitive ability. Furthermore, PISA-g demonstrated a consistent correlation pattern with intelligence-related student characteristics (e.g., parental education) in the majority of the countries and federal states examined. Contrary to expectations, the reading-related student characteristics (e.g., learning or metacognitive strategies) did not correlate significantly more strongly with RspecF than with PISA-g in most, if not all, cases. The precise nature of RspecF remains ambiguous, as no student characteristic correlated strongly with it while simultaneously exhibiting a non-significant correlation with PISA-g. A comparison showed slightly higher correlations of the reading-related characteristics with reading competence than with mathematics and science in most cases. Consequently, the discriminant validity of the PISA competences was found to be low. The results suggest that PISA primarily assesses a general cognitive ability.

1. The relevance of a general factor and a reading-specific factor

The Programme for International Student Assessment (PISA) “assesses the extent to which students near the end of compulsory education have acquired some of the knowledge and skills that are essential for full participation in modern societies, with a focus on reading, mathematics and science” (OECD, 2010d, p. 18). In previous PISA studies, strong latent correlations have been identified between the mathematics, science, and reading literacy scales (e.g., .81 ≤ rlatent ≤ .90 in PISA 2015; OECD, 2017, p. 247) as well as between the reading subscales (e.g., .88 ≤ rlatent ≤ .94 in PISA 2000; Artelt & Schlagmüller, 2004, p. 174, German data set). A multitude of factors may influence the correlations between the PISA domains (reading, mathematics, and science) or their subdomains (i.e., subscales). These include, for instance, the similarity of cognitive demands across PISA items (Rindermann & Baumeister, 2015), substantial variability in student competences (Prenzel et al., 2001), and test-taking behavior such as test motivation or guessing (Borger et al., 2025; He et al., 2025; Michaelides et al., 2024). Additionally, the use of latent rather than manifest correlations is a contributing factor (OECD, 2012). “Latent correlations [rlatent] are unbiased estimates of the true correlation between the underlying latent variables. As such they are not attenuated by the unreliability of the measures and will generally be higher than the typical product moment correlations [rmanifest] that have not been disattenuated for unreliability” (OECD, 2012, p. 194). For instance, rlatent is .12 to .24 units greater than rmanifest for correlations between reading literacy subscales and between competence domains (reading, mathematics, science) on total scale level, based on German PISA 2000 data (see Artelt & Schlagmüller, 2004, pp. 171, 174).
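The disattenuation logic quoted above can be illustrated with the classic Spearman correction for unreliability. The sketch below is a minimal illustration of that formula, not the PISA scaling procedure itself, and the reliability and correlation values are invented for the example rather than taken from PISA reports:

```python
# Minimal sketch of Spearman's correction for attenuation:
# a latent (disattenuated) correlation exceeds the manifest one
# because measurement unreliability is divided out.
import math

def disattenuate(r_manifest: float, rel_x: float, rel_y: float) -> float:
    """Correct an observed correlation for the unreliability of both measures."""
    return r_manifest / math.sqrt(rel_x * rel_y)

# Illustrative (non-PISA) values: an observed correlation of .70 between
# two subscales whose reliabilities are .85 and .80.
r_latent = disattenuate(0.70, 0.85, 0.80)
print(round(r_latent, 2))  # about .85, i.e. roughly .15 units higher
```

With perfectly reliable measures (reliabilities of 1.0), the formula returns the manifest correlation unchanged, which is why only imperfect reliability opens a gap between rmanifest and rlatent.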

Past nested-factor model analyses have shown that individual differences in and correlations between the total scales, subscales, and items of the three dimensions—reading, mathematics, and science—can be attributed to a general factor (g) (e.g., Brunner, 2008; Brunner et al., 2013; Pokropek, Marks, & Borgonovi, 2022; Pokropek, Marks, Borgonovi, et al., 2022). Moreover, previous research has indicated that nested-factor models provide superior model fit compared with one-factor models. In nested-factor models, one or more independent competence-specific factors (e.g., reading, mathematics, or science) are postulated in conjunction with a general factor. The general factor (g) is variously referred to as, for example, reasoning ability (Brunner, 2008), general student achievement (Brunner et al., 2013), intelligence (Rindermann, 2007), general cognitive abilities (Kampa et al., 2021), or general academic ability (Pokropek, Marks, Borgonovi, et al., 2022). The designation of this factor as reasoning ability or intelligence can be supported by the following research results:

  1. PISA items include, to some extent, cognitive demands relevant to intelligence, such as comprehension, reasoning, and problem-solving (Baumert et al., 2009; Rindermann & Baumeister, 2015).

  2. Significant, positive, and moderate to strong latent or manifest correlations have been reported between the PISA competences (i.e., reading, mathematics, and science) and intelligence test results, as well as between the latter and reading or mathematics sub-competences, based on German, Polish, and Latin American PISA data (e.g., .42 < rlatent or manifest ≤ .86; see Brunner, 2008; Flores-Mendoza et al., 2021; Kampa et al., 2021; Knoche & Lind, 2004; Kriegbaum & Spinath, 2016; Leutner et al., 2005, 2006; Pokropek, Marks, & Borgonovi, 2022; Rajchert et al., 2014; Wirth et al., 2005).

  3. Substantial to high loadings (λ) have been observed for intelligence test items from Raven’s Standard Progressive Matrices (Jaworowska et al., 2000) and for the PISA 2009 reading, mathematics, and science items on a general factor (e.g., .08 ≤ λ ≤ .81, mean λ = .50; see Pokropek, Marks, & Borgonovi, 2022) as well as for the intelligence subtests word and figure analogies from the Cognitive Ability Test (Heller & Perleth, 2000) and the PISA 2000 mathematics and reading literacy subscales (e.g., .48 ≤ λ ≤ .75; see Brunner, 2008).

  4. A similar pattern of correlations has been identified for the general factor (g) and intelligence with various student characteristics (e.g., reading enjoyment, number of books at home; see Brunner, 2006, 2008; Pokropek, Marks, & Borgonovi, 2022). However, the research conducted by Pokropek, Marks, and Borgonovi (2022) concerning gender and weekly learning time in the humanities subject area exhibited inconsistencies.

The correlational relationships between g and parental education or delayed schooling as intelligence-relevant variables (e.g., Rindermann, 2018) have yet to be examined.

An assumed reading-specific factor is one that exclusively refers to the domain of reading literacy. Therefore, it should correlate closely with reading-related student characteristics (Pokropek, Marks, & Borgonovi, 2022). In previous analyses, correlations were identified between a reading-specific factor and several reading-related student variables, including reading enjoyment, verbal self-concept, and German grade (e.g., Brunner, 2006, 2008; Pokropek, Marks, & Borgonovi, 2022). However, the precise nature of this factor has remained unclear, given the weak to moderate strength of these correlations. Nevertheless, its correlations with the use of memorization, elaboration, and control strategies, as well as with students’ ability to accurately assess the usefulness of effective strategies for dealing with textual information, have not yet been investigated. These strategies, especially those dealing with text, may be regarded as reading-specific student characteristics, as they are designed to facilitate the learning and comprehension of texts (OECD, 2010c).

The objective of this study is to provide a more comprehensive description of the general PISA factor (i.e., PISA-g) and the reading-specific factor. To this end, a correlation analysis will be conducted between these factors and intelligence-related and reading-specific student characteristics, respectively. The PISA 2009 data set is particularly well suited to this purpose because it includes reading-related variables such as memorization, elaboration, and control strategies—variables that are not available in more recent PISA cycles, including PISA 2018 and 2022.

2. Research questions and analytical approach

The following research questions will guide our study:

  1. Are there similarities in terms of task content and cognitive demands among the reading literacy subscales and the mathematics and science competence total scales?

  2. How strong are the latent correlations among these competences and subscales?

  3. To what extent can these latent correlations be attributed to a general factor (PISA-g) or to a factor specific to reading?

  4. What do a general factor and a reading-specific factor represent?

To address these research questions, the following analytical approach is employed: First, the theoretical conception of the three competences—reading, mathematics, and science—is presented, along with a description of exemplary item demands. These competences (for reading, its subscales) are then compared with respect to possibly similar task requirements. To theoretically derive and describe a general factor explaining the correlations between the competences and sub-competences in PISA 2009, the demands of the PISA items are compared with two comprehensive definitions of intelligence: one by Gottfredson (1997a, 1997b) and the other by Rindermann (2018), as well as with the demands of an intelligence test used in the Polish national extension of PISA 2009 (Pokropek, Marks, & Borgonovi, 2022). Next, a nested-factor model and a one-factor model, based on the reading competence subscales, mathematics competence, and science competence, are computed, and their model fit is compared to determine the most appropriate factor model for the subsequent analyses. To develop and refine a conceptualization of PISA-g and a reading-specific factor, their correlations with student characteristics relevant to intelligence or specific to reading are examined. Finally, the reading specificity of the reading-related student characteristics is examined.
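The model-comparison step described above can be sketched in miniature with an information criterion such as the AIC, which trades off fit against the number of estimated parameters. The log-likelihoods and parameter counts below are purely hypothetical stand-ins for the fit statistics an SEM package would return; they only demonstrate the comparison logic:

```python
# Hedged sketch of comparing a one-factor model against a nested-factor
# (bi-factor) model via AIC; lower AIC indicates the preferred model.
def aic(log_lik: float, n_params: int) -> float:
    """Akaike information criterion: 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_lik

# Hypothetical values: the nested-factor model adds specific-factor
# loadings (more parameters) but fits the data noticeably better.
aic_one_factor = aic(log_lik=-15240.0, n_params=15)
aic_nested     = aic(log_lik=-15180.0, n_params=20)

print(aic_one_factor, aic_nested)
print(aic_nested < aic_one_factor)  # nested-factor model preferred here
```

Because the one-factor model is nested within the bi-factor model, a likelihood-ratio (chi-square difference) test would also be applicable; the AIC comparison is used here only as the simplest self-contained illustration.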

3. Item demands and examples of their similarities across different domains in PISA 2009

3.1. Exemplary (item) requirements and similarities among reading literacy subscales

“Accessing and retrieving involves going to the information space provided and navigating in that space to locate and retrieve one or more distinct pieces of information” (OECD, 2010b, p. 35). For the items related to access and retrieval, the information provided must be recognized in the question. This information is matched, either literally or synonymously, with the other information provided, for example, in a text, table, chart, graph, timetable, or a combination thereof (OECD, 2010b). In addition, the information must be compared with information already available in memory. When searching for synonymous content, mental categorization processes may be employed to identify the information being requested (OECD, 2010d). For example, in Question 3 regarding Balloon, students were asked the following: “Vijaypat Singhania used technologies found in two other types of transport. Which types of transport?” (OECD, 2010d, p. 99). However, the term “transport” was not explicitly present in the corresponding figure, but rather the statements “Aluminium construction, like airplanes” (OECD, 2010d, p. 99) and “Vijaypat Singhania wore a space suit during the trip” (OECD, 2010d, p. 99). Therefore, students had to recognize that these two statements implicitly referred to two types of transportation (OECD, 2010d). The information sought may not be explicitly included in the text; rather, it must be inferred (Artelt et al., 2001). The extent of necessary inferences depends on the explicitness of the (semantic) correspondence between the given pieces of information in the question and those provided in the text (Adams & Wu, 2003). Accessing and retrieving primarily requires understanding at the sentence level (OECD, 2019). However, depending on the difficulty of the Access and Retrieve items, understanding larger parts of the text may be necessary (Artelt et al., 2001).
According to Schnotz and Dutke (2004), coherence formation is a mental process that enables the comprehension of consecutive sentences. For specific Access and Retrieve items (e.g., Questions 2 and 3 on Brushing Your Teeth in Table B1 in Appendix B; OECD, 2010d, p. 92), local coherence formation may therefore be necessary (Baumert et al., 2009; Schnotz & Dutke, 2004). Generating semantic connections between sentences or larger text parts involves drawing conclusions (reasoning) and using knowledge of various kinds (Baumert et al., 2009). The same applies to Access and Retrieve items that require the comprehension of depictive representations (Schnotz, 2014; Schnotz & Dutke, 2004), such as diagrams and graphs (OECD, 2010b, 2012). More difficult access and retrieval tasks may require the use of “knowledge of text structures and features” (OECD, 2010d, p. 59).

“Integrating and interpreting involves processing what is read to make internal sense of a text. … Integrating focuses on demonstrating an understanding of the coherence of the text. It can range from recognising local coherence between a couple of adjacent sentences, to understanding the relationship between several paragraphs, to recognising connections across multiple texts” (OECD, 2010b, p. 36). These relationships include “problem-solution, cause-effect, category-example, equivalency, compare-contrast, and understanding whole-part relationships” (OECD, 2010d, p. 61). For example, comparing means finding similarities between information, while contrasting focuses on identifying the differences between them (OECD, 1999b). Interpreting refers to “the process of making meaning from something that is not stated” (OECD, 2010d, p. 61). For example, it involves recognizing a not explicitly mentioned relationship or deducing (inferring) “the connotation of a phrase or a sentence” (OECD, 2010d, p. 61). Items in the Integrate and Interpret subscale may require a range of cognitive processes, including: identifying similarities or differences by comparing or contrasting information; drawing conclusions or making inferences; and understanding texts, diagrams, tables, graphs, or their components (Baumert et al., 2009; OECD, 2010b, 2010d; Schnotz & Dutke, 2004). They may also involve abstract thinking, such as engaging with abstract texts (OECD, 2010b) or generating abstract categories for interpretation (see proficiency level 6 of Integrate and Interpret, OECD, 2010d, p. 63) as well as generalizing subtle nuances in language (OECD, 2010b).

“Reflecting and evaluating involves drawing upon knowledge, ideas or attitudes beyond the text in order to relate the information provided within the text to one’s own conceptual and experiential frames of reference” (OECD, 2010b, p. 37). Reflecting focuses on drawing upon one’s own experiences or knowledge in order to make comparisons, contrasts, or hypotheses (OECD, 2010b). Evaluating is the formation of a judgment based on formal or substantive knowledge of the world or on personal experience (OECD, 2010d). The more extensive and in-depth the assumed understanding of a text must be for reflection and evaluation, the more cognitively demanding the task becomes (Adams & Wu, 2003). As the Reflect and Evaluate items can also refer to diagrams, graphs, tables, and other representations (OECD, 2010b, 2012), understanding them may be necessary to solve the corresponding items. This involves the coherence formation process associated with reasoning such as drawing inferences and conclusions, and applying different types of knowledge (Baumert et al., 2009; Schnotz & Dutke, 2004). In order to reflect on and evaluate the form of a text, it is relevant to have “knowledge of text structure, the style typical of different kinds of texts and register” (OECD, 2010d, p. 67).

According to the PISA 2009 Assessment Framework, the reading competence subscales are considered to be interdependent: Retrieval serves as a prerequisite for the interpretation and integration of information, and interpretation, in turn, is required for subsequent reflection and evaluation (OECD, 2010b). This implies that a reading item may also necessitate characteristics of another subscale (see Table B1). Thus, it measures this subscale to a certain extent as well. For example, in Question 3 of The Play’s The Thing (see Table B1), students were asked, “What were the characters in the play doing just before the curtain went up?” (OECD, 2010d, p. 108). In order to answer this question, it is first necessary to locate the relevant passage of text. Subsequently, integration and interpretation can be applied to arrive at the correct response (OECD, 2010d).

Certain Access and Retrieve items may also require elements of Integration and Interpretation, particularly when establishing semantic coherence between two adjacent sentences is necessary to identify the relevant information (see Table B1: Questions 2 and 3 regarding Brushing Your Teeth; OECD, 2010d, p. 92). Furthermore, specific Reflect and Evaluate items may be solved by accessing and retrieving information or by integrating and interpreting it. For example, in Question 4 concerning Brushing Your Teeth (see Table B1), the following was asked: “Why is a pen mentioned in the text?” (OECD, 2010d, p. 92). The answer to this question could be found by accessing and retrieving the relevant information in the given text (OECD, 2010d). In Question 4 regarding Balloon, it was asked “What is the purpose of including a drawing of a jumbo jet in this text?” (OECD, 2010d, p. 100). The jumbo jet was utilized as a reference point for the determination of the achieved height of the balloon (OECD, 2010d). To identify the intended purpose, it was also possible to integrate and interpret the non-continuous text information by means of an altitude comparison between the jumbo jet and the balloon.

Comparisons and abstract thinking may be required by Integrate and Interpret items as well as by Reflect and Evaluate items. Abstract thinking, in terms of categorizing information, and comparisons also play a role in the Access and Retrieve items when the given task information must be compared semantically with the information provided in the text (e.g., Question 3 regarding Balloon). Furthermore, the ability to draw conclusions and inferences (reasoning), as well as the comprehension—to various degrees—of the text, its parts, and other representations (e.g., diagrams, graphs, and tables), are necessary for reading items from different reading competence subscales. In summary, the description of the reading literacy subscales, their conceptual interdependence, and sample reading items (e.g., OECD, 2010d, pp. 91–111) imply partly similar demands (e.g., reasoning, abstract thinking, comparing, and understanding) across reading items from different reading literacy subscales. This may favor correlations between these subscales in PISA 2009.

3.2. Mathematics and science literacy requirements and exemplary commonalities with reading literacy subscales

In PISA 2009, the mathematical competence (mathematical literacy) was defined as “the capacity of an individual to formulate, employ and interpret mathematics in a variety of contexts. It includes reasoning mathematically and using mathematical concepts, procedures, facts and tools to describe, explain and predict phenomena” (OECD, 2010d, p. 23). To participate successfully in the mathematization process of a problem, students must retrieve and/or apply knowledge from different mathematical content areas (i.e., Space and Shape, Change and Relationships, Quantity, and Uncertainty). In addition, the following eight postulated mathematical abilities are employed: mathematical thinking and reasoning; argumentation; communication; modeling; problem posing and solving; representation; the use of symbolic, formal, and technical language and operations; and the use of aids and tools (see Table B2 in Appendix B). The PISA mathematics tasks were designed to engage one or more of those mathematical abilities (OECD, 2004). However, they were not assessed separately as subscales but were instead regarded collectively as constituting the PISA mathematical competence (OECD, 2010b). The PISA 2009 mathematics items require, for instance, working with and understanding different representations of mathematical situations and objects (e.g., text, diagrams, graphs, charts, tables, and algebraic representations; OECD, 2010a); problem solving (OECD, 2010b); drawing conclusions and inferences (i.e., reasoning; Baumert et al., 2009; see also Jakubowski, 2013); insight and generalization (OECD, 2010b); spatial reasoning (e.g., Jakubowski, 2013; see Space and Shape in OECD, 2010b); planning and implementing solution strategies (see the Reflection cluster in OECD, 2010b); and recalling and applying knowledge from different mathematical content areas (OECD, 2010b).

In PISA 2009, the science competence (scientific literacy) referred to “the extent to which an individual possesses scientific knowledge and uses that knowledge to identify questions, acquire new knowledge, explain scientific phenomena and draw evidence-based conclusions about science-related issues; …” (OECD, 2010d, p. 23). The PISA 2009 science items were taken from the PISA 2006 science literacy subscales Identifying Scientific Issues, Explaining Scientific Phenomena, and Using Scientific Evidence (OECD, 2009a, 2012). A depiction of the key characteristics of these subscales can be found in Table B3 in Appendix B. The PISA 2009 science items may necessitate, for example, recalling and applying knowledge about science (i.e., scientific inquiry and explanations) and of science (i.e., physical, living, technological, and earth/space systems); inductive reasoning (i.e., “reasoning from detailed facts to general principles”, OECD, 2010b, p. 137) and deductive reasoning (i.e., “reasoning from the general to the particular”, OECD, 2010b, p. 137); integrated and critical thinking; transforming representations (e.g., “data to table, tables to graphs”, OECD, 2010b, p. 137); generating and communicating data-based explanations and arguments; applying mathematical skills, knowledge, and processes; generalization and gaining insight to form conclusions, judgments, and explanations; abstract thinking (e.g., for working with abstract concepts, models, and ideas; OECD, 2010b, 2010d); and comprehending texts, graphs, diagrams, tables, photographs, or combinations thereof (OECD, 2010b; Schnotz & Dutke, 2004).

A comparison of item requirements across domains has yielded the following exemplary similarities:

Reading-related reflecting and evaluating may play a role in certain mathematical and science items if, for instance, evaluating evidence (e.g., data) or conclusions, or drawing primarily on prior knowledge for giving explanations and arguments is required (see levels 4–6 of Using Scientific Evidence, and levels 1–6 of Explaining Phenomena Scientifically, OECD, 2009a; see mathematical abilities of argumentation and modeling; OECD, 2010a; see proficiency level 5 of Quantity, levels 4–6 of Space and Shape, levels 5–6 of Relationships and Change, and level 5 of Uncertainty; OECD, 2005).

The reading-related accessing and retrieving, as well as integrating and interpreting, may be relevant for specific mathematical and science items. Both processes can be applied to such items to identify solution-related information that is included explicitly or implicitly in the given information material. For example, certain mathematics and science items involve locating and retrieving (i.e., extracting) relevant information from various representations, such as texts, graphs, tables, or diagrams (see level 2 of Change and Relationships, levels 2–3 and 5 of Quantity, and level 5 of Uncertainty; OECD, 2005; see levels 1 and 3–4 of Using Scientific Evidence; OECD, 2009a). With regard to the integration and interpretation of information, specific mathematical items necessitate the comprehension and interpretation of text for the formulation of a mathematical model (see proficiency level 4 of Quantity in OECD, 2005); for the solution of a geometrical problem (see level 4 of Space and Shape; OECD, 2005); or for the calculation of probabilities (see levels 4–5 of Uncertainty; OECD, 2005). Furthermore, for specific mathematics items, it is also necessary to understand or interpret diagrams, tables, graphs, and other representations (see levels 1–3 and 5 of Quantity, levels 3–4 of Change and Relationships, and levels 3 and 5 of Uncertainty; OECD, 2005; see also Jakubowski, 2013; see mathematical abilities representation and communication in OECD, 2010a).

Certain science items may also require reading-related integration and interpretation to identify relationships (e.g., cause–effect relationships) and to recognize the manipulated variable or the change (see levels 1 and 3 of Explaining Phenomena Scientifically, and levels 1–5 of Identifying Scientific Issues in OECD, 2009a). Moreover, understanding and interpreting diagrams, tables, or graphs is necessary for specific science items. These items, for instance, demand data-based explanations or arguments, comparing bar heights in a diagram or columns of a table, or identifying trends (i.e., patterns) in data sets (see levels 1–5 of Explaining Scientific Phenomena; OECD, 2009a). For this purpose, comparing numbers, understanding them, and recognizing numerical patterns (skills from the mathematical content area Quantity; OECD, 2010b) may be necessary. Thus, a certain degree of mathematical competence is essential for such science items. This requirement similarly applies to specific reading items that involve interpreting numbers in diagrams, tables, or graphs.

In summary, the reading, mathematics, and science items refer to different representations of information, such as texts, diagrams, graphs, and tables, in partly similar ways. In this regard, the mathematics and science items presuppose a certain degree of reading literacy, as it encompasses understanding, reflecting on, engaging with, and using such representations (OECD, 2010b). The PISA 2009 reading, mathematics, and science items may impose some similar demands, such as reasoning (e.g., drawing conclusions and inferences); identifying similarities and differences between information by comparing or contrasting; working with and understanding different representations (e.g., text, diagrams, tables, and graphs); and abstract thinking. Insight and generalization are also required for certain mathematics and science items. Consequently, these overlapping requirements may contribute to higher correlations between the reading literacy subscales, mathematical literacy, and science literacy.

4. Derivation of a general factor and a reading-specific factor, and their relationships with student characteristics

The PISA item demands, as delineated in Sections 3.1 and 3.2, can be compared with the definitions of intelligence to identify differences and similarities. Gottfredson (1997a, 1997b) described the construct of intelligence as follows:

“Intelligence is a very general mental capability that … involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—‘catching on,’ ‘making sense’ of things, or ‘figuring out’ what to do” (Gottfredson, 1997a, p. 13). “These sorts of mental processes—contrasting, abstracting, inferring, finding salient similarities and differences—are the building blocks of intelligence as manifested in reasoning, problem solving, and grasping new concepts with facility” (Gottfredson, 1997b, p. 96).

Rindermann (2018, p. 43) provided a comprehensive definition of intelligence, offering detailed descriptions of core components (i.e., problem solving, reasoning, abstract thinking, and understanding) mentioned by Gottfredson (1997a, 1997b) as well as additional characteristics such as “…the ability to change cognitive perspectives, to make plans and use foresight” (see Appendix A1). However, the definition of intelligence is not uniform in the literature (Gottfredson, 1997b). Consequently, depending on the definition adopted, the overlap between the cognitive demands of PISA items and intelligence may vary.

With regard to the aforementioned definitions of intelligence, the PISA 2009 items exhibit intelligence-related requirements such as reasoning (e.g., drawing conclusions and inferences), identifying similarities and differences by comparing or contrasting, abstract thinking, understanding, problem-solving, and the planning of solution strategies (see Sections 3.1–3.2). This conclusion is further supported by the findings of Pokropek, Marks, and Borgonovi (2022). In their study, the PISA competences of Polish students were assessed using PISA 2009 items, alongside intelligence as measured by Raven’s Standard Progressive Matrices (Jaworowska et al., 2000). In this intelligence test, visual elements (i.e., abstract figures) are presented in a matrix arrangement, with one field left blank. Students must identify the underlying construction principle of the matrix and select the appropriate element to fill the empty field from several given options. Raven’s intelligence test items require non-verbal reasoning, inductive or analogical spatial thinking, visual comparisons, and abstraction or identification of rules based on similarities and differences between the graphical elements (Heller et al., 1998; H. W. Krohne & Hock, 2015; Pokropek, Marks, & Borgonovi, 2022). Substantial standardized factor loadings (e.g., λ > .50; Urban & Mayerl, 2014) of the PISA 2009 reading, science, and mathematics items, as well as of the Raven’s items, were demonstrated on a general factor (.08 ≤ λ ≤ .81, mean λ = .50; Pokropek, Marks, & Borgonovi, 2022). Thus, the correlations among these items can be partially attributed to a general cognitive factor (i.e., PISA-g). The cognitive processes relevant to intelligence (e.g., abstraction, reasoning, comprehension) involve the information contained in the PISA items and the application of the students’ available knowledge (Rindermann, 2018; see Sections 3.1–3.2). 
Therefore, the general cognitive factor encompasses intelligence and the intelligent use of given information and available knowledge (Rindermann, 2018).

In accordance with previous research (see Section 1), it is assumed that, for each country or federal state participating in PISA 2009, the latent correlations between the reading literacy subscales can be additionally attributed to a reading-specific factor (RspecF) within a nested-factor model. This factor should be associated with reading-specific student characteristics. In the subsequent section, the expected associations of PISA-g and RspecF with intelligence-related and reading-related student variables are described.
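Under a nested-factor (bi-factor) model with orthogonal factors, the model-implied latent correlation between two standardized indicators is the product of their general-factor loadings plus, when both indicators load on the same specific factor, the product of their specific-factor loadings. The sketch below illustrates this decomposition; the loading values are assumed for illustration only, chosen close to the mean loadings reported in the abstract (λPISA-g ≈ .82, λRspecF ≈ .47):

```python
# Hedged sketch: model-implied correlations in an orthogonal bi-factor model.
def implied_r(lg_i: float, lg_j: float, ls_i: float = 0.0, ls_j: float = 0.0) -> float:
    """General-factor contribution plus (if both indicators share a
    specific factor) the specific-factor contribution."""
    return lg_i * lg_j + ls_i * ls_j

# Two reading subscales share both PISA-g and RspecF loadings...
r_reading_pair = implied_r(0.82, 0.82, 0.47, 0.47)   # about .89
# ...whereas a reading subscale and mathematics share only PISA-g.
r_reading_math = implied_r(0.82, 0.82)               # about .67

print(round(r_reading_pair, 2), round(r_reading_math, 2))
```

The decomposition makes the paper's logic concrete: the general factor alone already implies high correlations across domains, while RspecF adds a surplus only among the reading subscales, which is exactly what its correlations with reading-specific student characteristics are meant to interpret.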

Parents’ education may reflect their intelligence to a certain extent (Steinmayr et al., 2010), which they pass on to their children through genes (Haworth et al., 2008). In addition to genetic influences, environmental influences and the interaction of both also play a role in producing a correlation between parental education and children’s intelligence (Steinmayr et al., 2010). For example, educated parents may create a cognitively stimulating learning environment for their children through their educational and cultural practices, which can support children’s intelligence development (Rindermann, 2018). Past studies have shown significant positive correlations between parents’ educational level and their children’s intelligence (e.g., r = .36, p < .001, Rindermann & Ceci, 2018; r = .31, p < .05, Schaffner et al., 2004; r = .42 (mother) and r = .45 (father), p < .001, Ganzach, 2014; see also Lemos et al., 2011). A higher number of books at home indicates more frequent reading by children and/or their parents (Rindermann & Ceci, 2018). Continuous reading of books may promote intelligence (Schaffner, 2009), and in turn, higher intelligence can also facilitate reading (Peng et al., 2019; Peng & Kievit, 2020). In previous studies, significant positive associations were found between the number of books and the intelligence of children or adolescents (e.g., r = .40, p < .01, Brunner, 2008; r = .32, p < .05, Schaffner et al., 2004; r = .25, p < .001, Rindermann & Ceci, 2018). If the general PISA factor (PISA-g) represents an intelligence-like construct, then, like intelligence itself, it should show a significant positive correlation with father’s or mother’s education and with the number of books at home. Since these student characteristics are associated with reading-related activities and attitudes (e.g., McElvany et al., 2009), significant positive correlations between them and the reading-specific factor are also expected.

School attendance supports students’ intelligence development (e.g., Ceci, 1991; Ritchie & Tucker-Drob, 2018) and knowledge acquisition (Rindermann, 2011). However, students’ educational progress—and consequently, their development of intelligence and knowledge—may be delayed by factors such as class repetition (e.g., Ehmke et al., 2008; Jimerson et al., 1997; J. A. Krohne et al., 2004), delayed school enrollment, and absenteeism from class (e.g., Ceci, 1991). As a result, students may be at different grade levels, have received varying amounts of cumulative school support, and differ in their level of intelligence. Moreover, delayed enrollment and grade repetition are consequences of lower intelligence and weaker student performance. Students who repeat grades tend to be low achievers and generally demonstrate lower intelligence than non-repeaters (e.g., dIntelligence: repeater vs. non-repeater = –0.32 to –0.36, based on German PISA data; see Ehmke et al., 2008, 2010). Accordingly, PISA-g—representing an intelligence-like ability—is expected to correlate negatively with delayed schooling. Furthermore, delayed schooling may also be associated with a lagged acquisition of reading-related skills, knowledge, and attitudes within the school context. Consequently, a negative correlation with the reading-specific factor is anticipated.

Reading enjoyment may foster more frequent and extensive reading, which in turn enhances text comprehension (Artelt et al., 2010; Möller & Schiefele, 2004) and, thereby, contributes to higher intelligence (Schaffner, 2009). At the same time, higher intelligence may also facilitate text comprehension and promote reading enjoyment through the perception of one’s own competence (Artelt et al., 2010). Previous studies have identified a significant and positive correlation between reading enjoyment and students’ intelligence (e.g., r = .29, p < .01, Brunner, 2008; r = .16, p ≤ .001, Pokropek, Marks, & Borgonovi, 2022). Reading enjoyment may also be associated with reading-specific abilities and knowledge (Artelt et al., 2010; Pokropek, Marks, & Borgonovi, 2022). In this context, a significant and positive correlation was observed between students’ reading enjoyment and the reading-specific factor (e.g., r = .20, p < .01, Brunner, 2008; r = .13, p < .01, Pokropek, Marks, & Borgonovi, 2022). A positive correlation between both factors (PISA-g and RspecF) and the enjoyment of reading is therefore expected.

“Memorisation strategies refer to the memorisation of texts and contents in all their details and repeated reading. … Elaboration strategies refer to the transfer of new information to prior knowledge, out-of-school context and personal experiences. … Control strategies mean to formulate control questions about the purpose of a task or a text and its main concepts. It also means to self-supervise current study activities, particularly whether the reading material was understood” (OECD, 2010c, p. 48). Elaboration and control strategies are generally classified as deep learning strategies, whereas memorization is typically regarded as a surface-level approach to learning. The correlation between the use of these reading-related learning strategies (OECD, 2010c) and both PISA-g and RspecF will be examined.

It is not possible to draw conclusions about the appropriateness of the chosen learning strategy or the quality of its execution based on reports of its use (Artelt et al., 2010). This limitation was one of the reasons why PISA 2009 assessed students’ knowledge about appropriate strategies for understanding and learning from texts (Artelt et al., 2010). For this purpose, students’ awareness (i.e., the correct assessment of the usefulness) of effective strategies for understanding and remembering text information, as well as for summarizing it, were taken into account (Artelt et al., 2010; OECD, 2010c). The correlations between these student characteristics and the factors PISA-g and RspecF will be examined.

In summary, PISA-g and RspecF are expected to exhibit significant positive correlations with maternal and paternal education, the number of books at home, and enjoyment of reading. Moreover, significant negative correlations with delayed schooling are anticipated. In addition, significant correlations are predicted between the factors and the use of memorization, elaboration, and control strategies, as well as with students’ correct assessment of the usefulness of effective strategies for understanding and remembering text information and for summarizing it. These five variables, together with the number of books at home and the enjoyment of reading, all of which concern engagement with texts, are considered reading-specific student characteristics. Consequently, they are expected to show significantly stronger correlations with RspecF than with PISA-g.

5. Method

5.1. Sample, test design and the competence modeling procedure

PISA’s target population comprises students who are approximately 15 years old at the time of the assessment and attend at least seventh grade or higher (OECD, 2012). For 73 of the 74 countries and federal states participating in PISA 2009, an approximately representative sample of schools, followed by students within those schools, was randomly selected from the target population. Russia employed a three-stage sampling design, with geographical areas, schools, and students constituting the first, second, and third stages, respectively (OECD, 2012). The final student sample sizes across countries and federal states ranged from 329 (Liechtenstein) to 38,250 (Mexico) (OECD, 2012). In Costa Rica, Georgia, Himachal Pradesh and Tamil Nadu (two Indian states), Malaysia, Malta, Mauritius, Miranda (a Venezuelan state), Moldova, and the United Arab Emirates, the PISA study was conducted in 2010 (Walker, 2011).

In PISA 2009, students were randomly assigned to one of 13 test booklets. All students completed a portion of the reading items. Depending on the test booklet, a subset of the mathematics and/or science items, or only the reading items, was also administered. As reading literacy was the major domain in PISA 2009, it was assessed with a sufficient number of items to allow for the estimation of not only an overall reading score but also of subscale scores (OECD, 2010b, 2012). The reading literacy subscales, as well as the overall mathematics and science scores, were estimated using a multi-step modeling procedure that took into account various factors, such as item difficulty, students’ item responses, and background information (OECD, 2012). For each student, a set of plausible values was generated for reading sub-competences, mathematical literacy, and scientific literacy. PISA uses plausible values because they yield more reliable estimates of population parameters than single test scores. To ensure unbiased estimation, it is necessary to utilize all five plausible values and the final student weight in the analyses. Plausible values (PVs) can be regarded as multiple imputations. Consequently, a statistical parameter (e.g., a mean value) must be calculated for each of the five PVs. These five parameter estimates are then averaged to provide an accurate estimate for the subpopulation or population (OECD, 2012). Furthermore, the incorporation of the 80 replication weights is imperative for obtaining precise standard error estimates (OECD, 2009b).
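The combination rule described above can be sketched in a few lines of Python. This is an illustrative simplification, not PISA's operational code: the function name `pisa_estimate`, the synthetic inputs, and the use of Fay's factor k = 0.5 for the 80 replicate weights (as described in the PISA data-analysis manuals) are assumptions of the sketch.

```python
import numpy as np

def weighted_mean(values, weights):
    """Weighted mean of a statistic; stands in for any PISA estimate."""
    return np.average(values, weights=weights)

def pisa_estimate(pvs, final_wt, rep_wts, fay_k=0.5):
    """Combine five plausible values (multiple-imputation logic) and
    derive a standard error from the replicate weights (Fay's BRR)."""
    M = len(pvs)                 # number of plausible values (5 in PISA)
    G = rep_wts.shape[1]         # number of replicate weights (80 in PISA)
    # 1) Compute the statistic once per plausible value, then average.
    per_pv = np.array([weighted_mean(pv, final_wt) for pv in pvs])
    theta = per_pv.mean()
    # 2) Sampling variance via the replicate weights (shown for the first PV).
    reps = np.array([weighted_mean(pvs[0], rep_wts[:, g]) for g in range(G)])
    v_sampling = ((reps - per_pv[0]) ** 2).sum() / (G * (1 - fay_k) ** 2)
    # 3) Imputation (measurement) variance across the five plausible values.
    v_imputation = per_pv.var(ddof=1)
    se = np.sqrt(v_sampling + (1 + 1 / M) * v_imputation)
    return theta, se
```

In Mplus this combination is automated; the sketch only makes the averaging and weighting steps explicit.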

5.2. Measures, handling missing data, and statistical comparison of correlations

For each country and federal state, five plausible values were available for each reading competence subscale as well as for the total scales of mathematical competence and science competence per student (OECD, 2012). Parental education was measured by the number of years of schooling completed, which was estimated based on the highest reported level of the International Standard Classification of Education (ISCED; OECD, 1999a) for each parent (OECD, 2012): “[He/she] did not complete ISCED level 1” (OECD, 2012, p. 359) = 3 years; ISCED 1 (primary education) = 4 years; ISCED 2 (lower secondary) = 10 years; ISCED 3A/B/C (upper secondary/vocational or pre-vocational upper secondary) or ISCED 4 (non-tertiary post-secondary) = 13 years; ISCED 5B (vocational tertiary) = 15 years; and ISCED 5A/6 (theoretically oriented tertiary/post-graduate) = 18 years. To assess the number of books at home, respondents were given several response options (OECD, 2010b). The numerical values assigned to the categories are as follows: 1 = 0–10 books, 2 = 11–25 books, 3 = 26–100 books, 4 = 101–200 books, 5 = 201–500 books, and 6 = more than 500 books. Students’ delayed schooling was operationalized as the difference, in years, between their age and grade level. This difference may increase as a result of later enrollment and grade repetitions.
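The ISCED-to-years recoding just described amounts to a simple lookup. The sketch below uses hypothetical key names; only the year values are taken from the text.

```python
# Years of schooling imputed from the highest reported ISCED level,
# following the coding described above (key names are illustrative).
ISCED_YEARS = {
    "below ISCED 1": 3,   # did not complete primary education
    "ISCED 1": 4,         # primary education
    "ISCED 2": 10,        # lower secondary
    "ISCED 3": 13,        # upper secondary (3A/B/C)
    "ISCED 4": 13,        # non-tertiary post-secondary
    "ISCED 5B": 15,       # vocational tertiary
    "ISCED 5A": 18,       # theoretically oriented tertiary
    "ISCED 6": 18,        # post-graduate
}

def parental_years_of_schooling(highest_isced_level: str) -> int:
    """Map a parent's highest reported ISCED level to years of schooling."""
    return ISCED_YEARS[highest_isced_level]
```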

The assessment of reading enjoyment was conducted using a set of 11 items, the detailed descriptions of which can be found in Appendix A2. The students provided responses to these items using a four-point rating scale (1 = strongly disagree, 2 = disagree, 3 = agree, and 4 = strongly agree). Students’ responses to the negatively phrased items were reverse coded. Subsequently, item response theory (IRT) modeling was employed to facilitate item scaling. Consequently, weighted likelihood estimation (WLE; Warm, 1989) scores were obtained, representing students’ scores on the index of reading enjoyment. These WLE scores were standardized using the OECD mean and the corresponding standard deviation. Higher positive standardized scores are indicative of higher levels of reading enjoyment relative to the OECD mean (OECD, 2012).

The scale for the use of control strategies was generated based on five items. With respect to the scales concerning the utilization of elaboration strategies and memorization strategies, a total of four items were employed for each scale (OECD, 2012). Item descriptions can be found in Appendix A2. A four-point rating scale (1 = almost never, 2 = sometimes, 3 = often, and 4 = almost always) was provided for responses to all the items specific to the strategies mentioned above. The WLE scores of the students were obtained for each scale related to the aforementioned strategies and subsequently standardized using the respective OECD mean and standard deviation. Consequently, higher positive standardized scores on these scales indicate a more frequent use of control, elaboration, and memorization strategies compared to the corresponding OECD mean (OECD, 2012).

The scale for assessing the usefulness of effective strategies for understanding and remembering text information, as well as for writing a text summary, was developed based on students’ judgments of the usefulness of the given strategies (OECD, 2010c). For assessing the usefulness of these strategies, a six-point rating scale was employed, ranging from 1 (not useful at all) to 6 (very useful). Students’ responses were expected to align with the rank order defined by reading experts. The students’ final scores were standardized using the OECD mean and the corresponding standard deviation (OECD, 2012). Therefore, standardized scores greater than zero indicate a more accurate assessment of the usefulness of effective strategies for understanding and remembering text information (or for writing a text summary) by students compared to the OECD mean. Additional details about the specific strategies and the two corresponding scales are provided in Appendix A2.

Missing values were identified for the 74 countries and federal states, weighted in percent, for the following variables: delayed schooling (range: 0.00% to 4.00%); the number of books at home (0.08% to 12.11%); enjoyment of reading (0.44% to 9.34%); the use of memorization strategies (0.07% to 10.63%), control strategies (0.11% to 10.73%), and elaboration strategies (0.11% to 11.79%); the assessment of the usefulness of effective strategies for understanding and remembering text information (0.70% to 23.97%) and for writing a text summary (0.70% to 25.79%); and maternal education (0.23% to 18.35%) and paternal education (0.38% to 21.90%). No missing values were observed for the reading literacy subscales, mathematical literacy, and science literacy.

The factor models and correlations were estimated using the maximum likelihood estimation method. To address the issue of missing values on student characteristics, the full information maximum likelihood (FIML) estimation method was employed to calculate the correlations between factors or competences and these student characteristics. The computations were conducted using Mplus (Version 7.4; Muthén & Muthén, 2015), which also takes into account PISA’s requirements for unbiased analyses (i.e., the final student weight, replicate weights, and all plausible values; OECD, 2009b).

The statistical comparison of correlations was performed using the web front-end of Comparing Correlations (cocor; Diedenhofen & Musch, 2015), which can be accessed via http://comparingcorrelations.org. The test statistic for comparing correlations was calculated according to the method proposed by Meng et al. (1992). If the sign of a correlation is taken into account, a statistical comparison may, for example, declare a weakly negative or even a positive coefficient significantly "stronger" than a strongly negative coefficient. However, the strength of a correlation is determined exclusively by the magnitude of the coefficient, irrespective of its direction. Therefore, the absolute values of the correlation coefficients were employed in the statistical comparison of correlations with negative or opposite signs.
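The authors used the cocor web front-end rather than custom code, but the Meng et al. (1992) z test for two dependent correlations that share one variable can be sketched as follows (standard-library Python; the function names are illustrative).

```python
import math

def fisher_z(r):
    """Fisher's r-to-z transformation."""
    return 0.5 * math.log((1 + r) / (1 - r))

def meng_z_test(r1, r2, r12, n):
    """Meng, Rosenthal & Rubin (1992): compare r(X, Y1) with r(X, Y2),
    where r12 is the correlation between Y1 and Y2; returns (z, p)."""
    rbar_sq = (r1 ** 2 + r2 ** 2) / 2
    f = min((1 - r12) / (2 * (1 - rbar_sq)), 1.0)   # f is capped at 1
    h = (1 - f * rbar_sq) / (1 - rbar_sq)
    z = (fisher_z(r1) - fisher_z(r2)) * math.sqrt((n - 3) / (2 * (1 - r12) * h))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-tailed
    return z, p
```

For the directional hypotheses in this study, the two-tailed p would be halved when the observed difference matches the predicted direction.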

6. Results

6.1. The correlations between competences and/or subscales, and evaluating factor models

In 74 countries and federal states, moderate to very strong, positive, and significant latent correlations were shown between the reading competence subscales and the total scales of mathematical and science competence (.39 ≤ r ≤ .96, each p < .001, two-tailed test; see Table 1 for overall results and Table B4 in Appendix B for country- and state-specific results). According to Cohen (1988), |r| ≈ .10, .30, and .50 are regarded as the lower thresholds of small, moderate, and large correlations, respectively. In these countries and federal states, nearly all latent correlations were of a relevant size, indicating favorable conditions for extracting factors in the nested-factor model and the one-factor model (i.e., r > .50; Kline, 2012). However, in Azerbaijan, the latent correlations between the total scales of mathematics and science, and between the reading subscales and mathematics, ranged from .39 to .48. These values are below .50, suggesting inadequate conditions for factor extraction.

Table 1.The latent correlations between the reading literacy subscales and the total scales of mathematical and scientific literacy
| Statistic | Math × Science | Math × AcRe | Math × InIn | Math × ReEv | Science × AcRe | Science × InIn | Science × ReEv | AcRe × InIn | AcRe × ReEv | InIn × ReEv |
|---|---|---|---|---|---|---|---|---|---|---|
| Overall mean^a | .86 | .74 | .76 | .73 | .77 | .80 | .77 | .91 | .87 | .92 |
| Median | .86 | .74 | .77 | .73 | .78 | .80 | .78 | .92 | .89 | .92 |
| Min. | .48 | .39 | .43 | .39 | .61 | .63 | .59 | .81 | .71 | .81 |
| Max. | .92 | .81 | .83 | .81 | .86 | .87 | .85 | .96 | .95 | .96 |

Note. N = 74 countries and federal states. The latent correlations between the variables were estimated for each country and federal state using the five weighted plausible values of the students. The computations were conducted within the framework of a structural equation model, employing the maximum likelihood estimation method. For each country and federal state, the root mean square error of approximation (RMSEA) and the standardized root mean square residual (SRMR) were both found to be zero. Three reading literacy subscales (Access and Retrieve; Integrate and Interpret; Reflect and Evaluate) and the overall scales for mathematics and science were utilized for computing correlations between them. All correlations were found to be significant (p < .001, two-tailed test).
a The mean correlations between the reading subscales and mathematics or science, as well as between the last two, were computed as follows: The correlation coefficients were transformed into z-values using Fisher’s r-to-z transformation and weighted by the final student weights of the countries and federal states. Subsequently, the overall weighted z-means were transformed back to r.
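The averaging rule in the table note (footnote a) can be written out explicitly; a minimal sketch using only the standard library:

```python
import math

def weighted_mean_correlation(rs, weights):
    """Average correlations via Fisher's r-to-z: transform each r to z,
    take the weighted mean, and back-transform the result to r."""
    zs = [math.atanh(r) for r in rs]          # r-to-z transformation
    z_mean = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return math.tanh(z_mean)                  # back-transform z to r
```

Averaging directly on r would underestimate strong correlations, which is why the note prescribes the detour through z.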

Exploratory analyses revealed that the correlations between the reading competence subscales and the total scales of mathematical and science competence tended to be stronger in countries and federal states with higher gross domestic product per capita (GDPpc) and weaker in those with lower GDPpc. The relationships between these Fisher’s z-transformed correlations and GDPpc varied in strength, ranging from a weak correlation (r = .17 between the Fisher’s z of r(AcRe*InIn) and GDPpc, p > .05, two-tailed test) to a strong correlation (r = .48 between the Fisher’s z of r(Science*ReEv) and GDPpc, p < .05; see Table B5 in Appendix B). All correlations were stronger when the natural logarithm of GDPpc was used instead of GDPpc itself (e.g., for the Fisher’s z of r(AcRe*InIn): r = .17 with GDPpc vs. r = .33 with ln(GDPpc); overall mean correlation: .31 for GDPpc vs. .51 for ln(GDPpc)). The relationship between GDPpc and the strength of the correlations between the reading subscales, mathematical competence, and/or science competence is better described by a non-linear model than by a linear model. In comparison with linear regression, quadratic, cubic, and logarithmic regressions yielded higher explained variance when the Fisher’s z-transformed correlations were regressed on GDPpc. However, when the natural logarithm of GDPpc was employed, the linear model was also appropriate, as it accounted for a proportion of variance similar to that of the non-linear models (see R² values in Table B5 in Appendix B).
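The R²-based model comparison just described can be reproduced on synthetic data; the sketch below assumes hypothetical GDPpc values (in thousands) and an assumed logarithmic relationship, and only the comparison logic mirrors the text.

```python
import numpy as np

def r_squared(y, y_hat):
    """Proportion of variance explained by the fitted values."""
    ss_res = ((y - y_hat) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

def compare_fits(gdp_pc, z_corr):
    """R² of linear, quadratic, and cubic fits on GDPpc, and of a
    linear fit on ln(GDPpc), for Fisher-z-transformed correlations."""
    gdp_pc, z_corr = np.asarray(gdp_pc, float), np.asarray(z_corr, float)
    models = {
        "linear": (gdp_pc, 1),
        "quadratic": (gdp_pc, 2),
        "cubic": (gdp_pc, 3),
        "log-linear": (np.log(gdp_pc), 1),
    }
    return {name: r_squared(z_corr, np.polyval(np.polyfit(x, z_corr, deg), x))
            for name, (x, deg) in models.items()}
```

If the true relation is logarithmic, the log-linear fit should match or beat the polynomial fits on raw GDPpc, which is the pattern reported above.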

In each of the 74 countries and federal states, the single-factor model (see Figure C1 in Appendix C) exhibited a poor fit to the data, as evidenced by the root mean square error of approximation (RMSEA), which varied between .10 and .34 (see Table 2 and Table B6). This range exceeded the .05 threshold, indicating a poor fit between the model and the observed data (e.g., Urban & Mayerl, 2014). Furthermore, the lower and upper bounds of the 90% confidence intervals for the RMSEA (90% CIRMSEA) went beyond the lower limit cutoff (LL90%CI = .05) and upper limit cutoff (UL90%CI = .08) proposed by Urban and Mayerl (2014). The fit measures of the one-factor models improved when a correlation between the residuals of mathematical competence and science competence was additionally specified (see Table 2 and Figure C1). Consequently, the RMSEA values ranged from .00 to .11 for these factor models in the 74 countries and federal states under consideration (see RMSEA for 1F adj. in Table 2 and Table B6). The standardized root mean square residuals (SRMR) varied from .001 to .017, in comparison to the range of .03 to .06 observed for the one-factor model without residual correlation. Lower SRMR (or RMSEA) values are indicative of a superior model fit, with zero representing a perfect fit (Urban & Mayerl, 2014).

Table 2.Fit indices for the one-factor models (1F), the one-factor models adjusted (1F adj.), and the bi-factor models (BiF)
| Statistic | AIC (BIC): 1F | AIC (BIC): 1F adj. | AIC (BIC): BiF | ΔAIC (ΔBIC): 1F – 1F adj. | ΔAIC (ΔBIC): 1F – BiF | ΔAIC (ΔBIC): 1F adj. – BiF | RMSEA: 1F | RMSEA: 1F adj. | RMSEA: BiF | SRMR: 1F | SRMR: 1F adj. | SRMR: BiF |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Overall mean | 527,361 (527,443) | 523,210 (523,320) | 523,134 (523,310) | 4,152 (4,112) | 4,227 (4,133) | 79^a (86^a) | .22 | .04 | .02 | .04 | .005 | .003 |
| Median | 270,521 (270,618) | 268,852 (268,956) | 268,753 (268,871) | 2,221 (2,214) | 2,269 (2,249) | 38 (26) | .22 | .04 | .02 | .04 | .004 | .001 |
| Min. | 17,760 (17,817) | 17,618 (17,679) | 17,621 (17,690) | 142 (138) | 139 (127) | –57 (–677) | .10 | .00 | .00 | .03 | .001 | .000 |
| Max. | 2,045,482 (2,045,610) | 2,032,735 (2,032,872) | 2,032,395 (2,033,549) | 16,565 (15,556) | 16,819 (15,794) | 340 (255) | .34 | .11 | .10 | .06 | .017 | .05 |

Note. N = 74 countries and federal states. The model fit indices of the one-factor model, the adjusted one-factor model, and the bi-factor model are presented. The model calculation was performed separately for each country and federal state using students’ five weighted plausible values and the maximum likelihood estimation method. The overall mean values for the model fit measures were calculated across all countries and federal states and weighted by the final student weights.
a The differences in AIC and BIC values between the adjusted one-factor model and the bi-factor model comprised both positive and negative values. Therefore, the mean absolute difference was calculated by taking the absolute values of these differences.

The nested-factor models (i.e., the bi-factor model; see Figure C1) obtained a good to perfect fit in 68 countries and federal states, with RMSEA values ranging from .00 to .05 (see Table B6), which were at or below the recommended threshold of .05. For five of the six remaining countries and states, the RMSEA value was .06, while for Liechtenstein it was .10. An RMSEA value below .08 is considered indicative of acceptable model fit (Urban & Mayerl, 2014). Across all countries and federal states, the SRMR values of the bi-factor models ranged from .00 to .05, exhibiting a good to perfect model fit. An exception emerged for the Korean and Shanghai PISA data: the nested-factor models showed negative residual variance for the reading subscale Integrate and Interpret in these cases, indicating that these models were not appropriate. Consequently, this residual variance was set to zero to enable further analysis. Additionally, a starting value for the variance of PISA-g and values for fixed parameters (i.e., intercepts, variances, and residual variances) derived from prior factor model estimations were employed. The subsequent bi-factor models yielded adequate fits, with RMSEA values of .05 for Korea and .04 for Shanghai. However, a further adjusted one-factor model, incorporating correlations between the residuals of mathematics and science competence, as well as between Access and Retrieve and Reflect and Evaluate, provided a better fit to the Korean and Shanghai PISA data (e.g., Korea: RMSEA = .03 for the further adjusted one-factor model vs. .05 for the bi-factor model; see Table B6). The difference between this model and the bi-factor model exceeded 10 points (e.g., Urban & Mayerl, 2014) on the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), favoring the further adjusted one-factor model (e.g., Korea: ΔAIC = –426, ΔBIC = –326 relative to the bi-factor model).

Across all countries and federal states, the bi-factor model and the adjusted one-factor model provided an enhanced fit compared with the unadjusted single-factor model. This was evidenced by non-overlapping RMSEA confidence intervals and substantial improvements in AIC and BIC, exceeding 10 points (Urban & Mayerl, 2014). In 58 (AIC) and 46 (BIC) out of 74 cases, the bi-factor model’s information criterion was at least 11 points lower than that of the adjusted one-factor model, suggesting a superior fit (see positive changes in AIC or BIC in Table B6). In the remaining cases (excluding Korea), no significant differences in AIC values were observed between these two models. With respect to BIC, the adjusted one-factor model showed a better fit than the bi-factor model only in Kazakhstan, Liechtenstein, and Mexico. Overall, the bi-factor model exhibited superior fit in the majority of cases and was therefore selected for subsequent analyses of PISA-g and RspecF.

In all countries and federal states—except Azerbaijan, with respect to mathematical competence and the reading subscale Reflect and Evaluate—the reading literacy subscales, mathematics literacy, and science literacy total scales were found to measure the same latent construct, namely the general factor PISA-g, thereby demonstrating convergent construct validity (e.g., Kline, 2016). This was indicated by their standardized loadings (λ) on this factor, all of which were at least .70 (Hair et al., 2018). When a less restrictive factor loading threshold of .50 (e.g., Urban & Mayerl, 2014) was applied, construct validity was also supported for mathematical competence and Reflect and Evaluate in Azerbaijan. The overall mean of the standardized factor loadings for the science literacy total scale, the mathematical literacy total scale, and the reading literacy subscales on PISA-g was .87 (λMath on PISA-g = [.57, .94], λScience on PISA-g = [.85, .99], λReading subscales on PISA-g = [.69, .90]; see Table 3). In 69, 37, and 59 out of 74 countries and states, respectively, the standardized factor loadings of the reading subscales Access and Retrieve, Integrate and Interpret, and Reflect and Evaluate on the reading-specific factor (RspecF) were below .50 (see Table B7 in Appendix B). In these cases, the reading subscales failed to demonstrate adequate convergent validity for RspecF. The grand mean of the standardized factor loadings for the reading literacy subscales on this factor was .47 (λAcRe on RspecF = [.33, .63]; λInIn on RspecF = [.39, .65], λReEv on RspecF = [.36, .56]; see Table 3).

Table 3.Factor loadings and explained variance for reading literacy subscales, mathematical literacy and scientific literacy in the bi-factor model
| Statistic | λ(PISA-g): Math | λ(PISA-g): Science | λ(PISA-g): AcRe | λ(PISA-g): InIn | λ(PISA-g): ReEv | λ(RspecF): AcRe | λ(RspecF): InIn | λ(RspecF): ReEv | % var. by PISA-g: Math | % var. by PISA-g: Science | % var. PISA-g/RspecF/total: AcRe | % var. PISA-g/RspecF/total: InIn | % var. PISA-g/RspecF/total: ReEv |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Mean | .91 | .96 | .81 | .84 | .81 | .44 | .50 | .46 | 82.81 | 92.16 | 65.61/19.36/84.97 | 70.56/25.00/95.56 | 65.61/21.16/86.77 |
| Median | .91 | .96 | .82 | .84 | .81 | .45 | .50 | .46 | 82.81 | 92.16 | 67.24/20.25/88.53 | 70.56/24.51/96.05 | 65.61/21.16/89.07 |
| Min. | .57 | .85 | .71 | .75 | .69 | .33 | .39 | .36 | 32.49 | 72.25 | 50.41/10.89/70.93 | 56.25/15.21/84.34 | 47.61/12.96/71.46 |
| Max. | .94 | .99 | .90 | .90 | .88 | .63 | .65 | .56 | 88.36 | 98.01 | 81.00/39.69/97.78 | 81.00/42.25/99.97 | 77.44/31.36/95.12 |

Note. N = 74 countries and federal states. The terms Math and Science refer to the total scales of mathematical and scientific literacy, respectively. The reading literacy subscales are as follows: Access and Retrieve (AcRe), Integrate and Interpret (InIn), and Reflect and Evaluate (ReEv). The means for standardized factor loadings of mathematics, science, and the reading subscales, as well as the means of explained variance, were computed with the final student weights per country and federal state. In the case of the standardized factor loadings, they were transformed to z-values using Fisher’s r-to-z transformation and weighted by the final student weights of the countries and federal states. Subsequently, the overall z-means for factor loadings were transformed back to λ.

Discriminant (or divergent) validity is established when different scales measure distinct constructs (Kline, 2016). The standardized factor loadings should be minimal (i.e., less than .30; Carroll, 1993; McDonald, 1999) on the other construct (i.e., factor) (Urban & Mayerl, 2014). Within the nested-factor models, science competence and mathematical competence exhibited discriminant validity with respect to RspecF. This is because their standardized factor loadings on the reading-specific factor were zero, as specified by the model design. In all countries and federal states, the reading competence subscales did not meet satisfactory discriminant validity, as their factor loadings on both factors were greater than .30 (see Table B7).

The extent to which PISA-g and RspecF explained the observed variance in PISA competences or reading sub-competences varied by country and federal state. PISA-g predicted between 32.49% and 88.36% of the variance in mathematical literacy, between 72.25% and 98.01% of the variance in scientific literacy, and between 47.61% and 81.00% of the variance in the reading literacy subscales (see Table 3). The percentage of variance accounted for by RspecF across the reading subscales ranged from 10.89% to 42.25%. When both factors were considered, the range of variance explained in the reading literacy subscales was between 70.93% and 99.97%. In the bi-factor models for the Korean and Shanghai PISA data, the residual variances of the reading subscale Integrate and Interpret were fixed to zero. This resulted in both factors explaining 100% of the variance. However, due to the squaring of the rounded factor loadings, the reported percentages are 99.73% and 99.97%, respectively (see Table B7). PISA-g was identified as the primary contributor to the observed variances. For instance, the mean standardized factor loading of the reading literacy subscales on PISA-g was .82, whereas the mean factor loading on RspecF was .47.
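Because the general and reading-specific factors are orthogonal in the bi-factor model, the share of a subscale's variance attributable to each factor is simply the squared standardized loading; for example, the mean Access and Retrieve loadings of .81 (PISA-g) and .44 (RspecF) reproduce the 65.61/19.36/84.97 entries in Table 3. A minimal check (function name is illustrative):

```python
def variance_decomposition(loading_general, loading_specific):
    """Percent of variance explained by each orthogonal factor
    (squared standardized loadings) and their total."""
    var_g = loading_general ** 2 * 100
    var_s = loading_specific ** 2 * 100
    return round(var_g, 2), round(var_s, 2), round(var_g + var_s, 2)
```

The remainder to 100% is residual variance (subscale-specific variance plus measurement error).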

6.2. Testing hypotheses and examining the reading competence specificity of student characteristics

There were significant positive correlations between PISA-g (or RspecF) and the number of books at home, reading enjoyment, and paternal or maternal education in 73 (or 50), 72 (or 74), 74 (or 25), and 74 (or 23) out of 74 countries and federal states, respectively (p ≤ .05, one-tailed test; see Table 4). In these cases, the results substantiated the corresponding hypotheses (see Section 4.2). The non-significant and significant correlations of PISA-g or RspecF with the number of books and parental education ranged from weak to strong or from negligible to weak (e.g., r(Books, PISA-g) = [.06, .56] vs. r(Books, RspecF) = [–.07, .13]; mean r = .34 vs. mean |r| = .06; see Table 4). According to Cohen (1988), |r| ≈ .10, .30, and .50 are regarded as the lower thresholds of small, moderate, and large correlations, respectively. All correlations between the factors and reading enjoyment ranged from negligible to strong for PISA-g and from weak to moderate for RspecF (i.e., r(Enjoyment, PISA-g) = [–.03, .44] vs. r(Enjoyment, RspecF) = [.08, .36]; mean |r| = .26 vs. mean r = .21). In a total of 67 (or 53) countries and federal states, respectively, the correlational links between PISA-g (or RspecF) and delayed schooling were significantly negative, thereby supporting the relevant hypotheses (p ≤ .05, one-tailed test). The correlations between this student characteristic and the factors, irrespective of their statistical significance, ranged from negligible to moderate for RspecF and up to strong for PISA-g (i.e., r(Delayed schooling, RspecF) = [–.22, .03] vs. r(Delayed schooling, PISA-g) = [–.68, .05]; mean |r| = .32 for PISA-g vs. .10 for RspecF).

Table 4.Correlations of the general PISA factor and the reading-specific factor with intelligence and reading-related student characteristics as well as their comparisons
Country The reading-specific factor (RspecF) and the general PISA factor (PISA-g) correlated with: Model fit
Number of books at home | Usefulness of strategies for: writing summary, understanding/remembering | Use of reading strategies: control, elaboration, memorization | Reading enjoyment | Years of schooling: delayed, father, mother | RMSEA/SRMR
Albania .11(>).36 .05(>).37 .05(>).41 .15(>).29 .04(>).15 .13>.07 .22(>).29 –.03<–.15 .01<.21 .04<.17 .02/.007
Argentina .05(>).41 .06(>).43 .03(>).34 .11(>).21 –.05(>).05 .08(>)–.11 .08(>).18 –.13<–.44 .01<.29 .02<.33 .03/.019
Australia .04(>).38 .20(>).44 .16(>).39 .14(>).36 –.02(>).13 .10>.07 .28(>).44 –.03<–.16 .04<.29 .02<.25 .03/.007
Austria .13(>).50 .20(>).46 .22(>).40 .04(>).18 –.12>.09 .02(>)–.12 .29(>).35 –.04<–.32 .04<.27 .06<.28 .05/.013
Azerbaijan .10(>).26 –.05(>).22 –.01(>).30 .11(>).19 .09(>).12 .10>.07 .14(>).17 .02<–.16 .10<.19 .09<.17 .02/.010
Belgium .06(>).43 .21(>).52 .17(>).49 .14(>).27 –.11>.05 .10(>)–.21 .32(>).31 –.09<–.56 .06<.27 .02<.29 .04/.011
Brazil .02(>).26 .10(>).38 .09(>).35 .15(>).23 .07>.01 .13>.06 .17(>).16 –.15<–.51 .01<.26 .02<.28 .02/.007
Bulgaria .07(>).44 .10(>).45 .06(>).41 .11(>).16 –.05(>).11 .09>.01 .11(>).28 –.04<–.20 .02<.30 –.00<.38 .03/.009
Canada .06(>).37 .18(>).37 .13(>).29 .18(>).27 –.03(>).05 .13>.03 .31(>).35 –.06<.23 .03<.22 .01<.20 .03/.009
Chile .06(>).36 .12(>).42 .10(>).43 .11(>).27 –.04(>).10 .08>.01 .19(>).23 .13<.39 .06<.35 .09<.34 .03/.009
Colombia .03(>).37 .10(>).47 .07(>).45 .03(>).10 –.03(>).03 –.06(>)–.13 .09(>).09 .17<.50 .02<.29 .04<.34 .02/.007
Costa Rica .09(>).35 .11(>).43 .07(>).30 .09(>).12 .04>–.00 .07(>)–.10 .17>.08 .16<–.55 .05<.27 .02<.32 .03/.010
Croatia .08(>).34 .23(>).43 .17(>).38 .08(>).14 .11>.04 .07(>).06 .28(>).27 .09<.18 .05<.21 .04<.24 .04/.012
Czech Rep. .10(>).43 .21(>).49 .15(>).40 .08(>).29 .07(>).18 .07(>)–.13 .32(>).35 .10<.36 –.01<.20 –.04<.20 .03/.010
Denmark .09(>).37 .25(>).39 .21(>).38 .15(>).16 .01(>).12 .07(>).15 .24(>).41 .12<.21 .05<.26 .06<.25 .03/.009
Estonia .04(>).31 .21(>).40 .15(>).37 .10(>).14 –.03(>).13 .08(>)–.13 .33(>).36 .10<–.32 .04<.10 .05<.14 .04/.040
Finland .10(>).34 .27(>).42 .21(>).37 .16(>).24 .04(>).15 .15>–.06 .30(>).43 –.02<–.20 .05<.16 .05<.21 .04/.011
France .05(>).53 .18(>).45 .15(>).40 .13(>).38 –.04(>).09 .09>.05 .25(>).39 –.09<–.47 –.00<.30 –.02<.33 .04/.010
Georgia .08(>).35 .04(>).40 .05(>).40 .12(>).21 .04(>).20 .11(>).08 .19(>).33 –.07<–.18 .02<.25 .04<.25 .03/.011
Germany .09(>).49 .19(>).50 .18(>).45 .15(>).20 –.11>.07 .09(>)–.12 .29(>).38 –.12<–.49 .03<.36 .05<.31 .04/.010
Greece .03(>).36 .06(>).38 .03(>).23 .12(>).26 –.07(>).19 .10>.00 .24(>).35 –.06<–.17 .03<.26 .02<.29 .03/.010
Himachal Pr. –.07(>).10 .10(>).19 .09(>).35 .09(>).29 –.04(>).20 .11(>).06 .12(>).12 –.22<–.43 –.01<.29 .02<.24 .02/.011
Hong Kong .11(>).30 .10(>).36 .11(>).33 .05(>).33 –.04(>).14 .04(>).05 .24(>).30 –.04<–.34 .02<.17 .03<.16 .04/.012
Hungary .13(>).56 .23(>).47 .14(>).38 .15(>).12 –.11>.05 .16>–.04 .33(>).35 –.13<–.45 .06<.43 .06<.44 .04/.009
Iceland .02(>).35 .22(>).40 .15(>).32 .13(>).24 –.03(>).15 .03(>)–.02 .26(>).40 –.01(<)–.01 .01<.21 –.00<.24 .05/.012
Indonesia .04(>).10 .04(>).40 .04(>).39 .06(>).18 .01(>).17 .08(>).09 .14>.10 –.10<–.39 .04<.28 .04<.28 .02/.010
Ireland .12(>).43 .14(>).41 .12(>).37 .10(>).29 –.05(>).11 .08>.03 .25(>).44 –.07<–.14 .07<.20 .08<.21 .06/.013
Israel .03(>).30 .14(>).44 .10(>).36 .13(>).17 –.06(>)–.07 .08(>)–.11 .25>.19 .01<–.14 –.01<.35 .01<.39 .03/.008
Italy .12(>).39 .18(>).42 .15(>).36 .20(>).21 –.02(>).08 .06(>)–.15 .32>.29 –.12<–.30 .08<.20 .07<.22 .03/.008
Japan .04(>).25 .13(>).52 .10(>).39 .05(>).35 –.05(>).23 .03(>).05 .20(>).34 .01<.05 .08<.26 .02<.19 .04/.011
Jordan .01(>).18 .07(>).28 .02(>).26 .13(>).33 .09(>).25 .13(>).23 .14(>).21 –.10<–.17 –.02<.25 –.02<.24 .02/.008
Kazakhstan .12(>).32 .13(>).40 .11(>).39 .09>.00 .01(>)–.09 .06(>)–.13 .09>.00 –.06(<).03 .02<.20 .06<.20 .02/.007
Korea .02(>).39 .12(>).52 .07(>).44 .13(>).43 .07(>).31 .17(>).25 .16(>).39 –.04(<).01 .03<.25 .03<.20 .05/.022
Kyrgyzstan .11(>).39 .00(>).39 .05(>).39 .07(>).07 .01(>).02 .05(>).11 .17>.09 –.00<–.09 –.01<.24 .03<.24 .03/.010
Latvia .08(>).33 .19(>).41 .12(>).36 .12(>).15 –.04(>).10 .07(>)–.08 .35>.27 –.10<–.35 .03<.13 –.01<.22 .03/.009
Liechtenstein .08(>).42 .29(>).44 .26(>).40 .20(>).14 –.07(>).07 .14(>)–.16 .35(>).30 –.14(<)–.24 .01<.24 .01<.31 .06/.018
Lithuania .06(>).37 .14(>).40 .14(>).35 .15(>).18 –.02(>).06 .10(>)–.15 .32(>).33 –.04<–.21 .02<.25 –.02<.29 .04/.009
Luxembourg .04(>).52 .19(>).44 .20(>).38 .17(>).20 –.11>.03 .16>.03 .28(>).33 –.08<–.49 .01<.36 .03<.35 .03/.010
Macao .03(>).17 .04(>).31 .08(>).23 –.00(>).25 –.02(>).24 .01(>).10 .16(>).30 –.10<–.55 .03<.07 .01<.06 .04/.014
Malaysia –.00(>).26 .07(>).41 .08(>).28 .13(>).28 .05(>).20 .14(>).28 .19(>).23 –.08<–.16 .01<.17 .00<.15 .03/.013
Malta .02(>).31 .08(>).40 .04(>).25 .14(>).39 –.05(>).15 .11>.03 .17(>).39 .03<–.21 –.03<.25 –.02<.19 .04/.009
Mauritius .02(>).16 .10(>).49 .06(>).35 .19(>).34 .06(>).09 .12>.01 .20(>).20 –.15<–.57 .06<.28 .03<.28 .03/.008
Mexico .03(>).30 .09(>).45 .04(>).38 .06(>).25 .02(>).09 .04(>)–.03 .10(>).17 –.11<–.44 .05<.33 .03<.34 .02/.006
Miranda –.00(>).37 .11(>).42 .08(>).34 .12(>).08 .02(>)–.05 .17>–.08 .16(>).13 –.15<–.20 .03<.41 –.03<.43 .03/.010
Moldova .05(>).30 .09(>).32 .09(>).31 .13(>).17 .01(>).11 .08(>).06 .15>.08 –.03<–.10 –.00<.22 .02<.20 .02/.008
Montenegro .07(>).33 .08(>).40 .14(>).38 .10(>).07 –.05(>).02 .03(>)–.21 .23(>).23 –.03<–.14 .02<.22 .04<.21 .03/.012
Netherlands .05(>).42 .15(>).50 .15(>).44 .11(>).25 –.08(>).06 .03(>)–.29 .36>.30 .01<–.41 –.03<.25 –.08<.25 .04/.011
New Zealand .06(>).41 .21(>).45 .15(>).37 .19(>).29 –.01(>).02 .13>–.02 .29(>).40 –.07(<)–.08 –.01<.25 .05<.23 .03/.008
Norway .04(>).42 .25(>).39 .18(>).35 .15(>).24 .02(>).21 .06>.02 .31(>).37 –.04(<).03 –.00<.20 –.03<.20 .03/.010
Panama .03(>).30 –.01(>).44 .01(>).42 .04(>).15 –.06>.01 .03(>)–.11 .08(>).11 –.17<–.49 .07<.26 .08<.28 .02/.009
Peru .08(>).39 .07(>).41 .05(>).35 .03(>).09 .02(>)–.02 –.05(>)–.19 .12>.08 –.15<.53 .06<.40 .08<.41 .03/.010
Poland .10(>).43 .15(>).46 .12(>).31 .16(>).25 .02(>).10 .14>.00 .29(>).34 .10<.22 .02<.33 .03<.36 .03/.009
Portugal .01(>).42 .21(>).48 .14(>).44 .13(>).39 .01(>).26 .07(>)–.09 .24(>).30 .13<–.61 .01<.37 .00<.38 .04/.013
Qatar –.02(>).15 .03(>).33 .00(>).37 .09(>).27 .01(>).02 .11>–.03 .09(>).25 –.03<–.28 –.04<.26 –.05<.23 .03/.009
Romania .10(>).42 .15(>).40 .11(>).39 .16(>).20 .03(>).08 .18>–.02 .16(>).17 –.06(<)–.00 .05<.19 .04<.22 .03/.008
Russia .10(>).32 .12(>).40 .14(>).38 .09(>).16 –.01(>).06 .03(>)–.10 .25(>).30 –.05<–.26 .04<.19 .03<.25 .03/.009
Serbia .03(>).37 .15(>).46 .11(>).40 .03(>).14 –.09(>).10 .06(>).24 .17(>).24 –.06<.12 .02<.21 –.02<.22 .03/.009
Shanghai .09(>).34 .07(>).40 .13(>).31 .05(>).29 .02(>).21 .01(>).04 .21(>).29 –.04<.22 .07<.28 .08<.29 .05/.023
Singapore .06(>).34 .14(>).46 .06(>).33 .04(>).25 –.08>.04 .01(>)–.15 .25(>).35 –.11<–.18 .04<.27 .02<.28 .03/.008
Slovakia .08(>).45 .22(>).43 .15(>).32 .15(>).20 –.02(>).13 .04(>)–.28 .27(>).29 .15<–.41 –.01<.30 –.02<.26 .03/.008
Slovenia .06(>).43 .21(>).43 .20(>).38 .17(>).21 –.06(>).08 .09(>)–.24 .29(>).33 –.07<–.10 .02<.25 .03<.29 .03/.008
Spain .03(>).47 .17(>).42 .13(>).31 .14(>).30 –.03(>).14 .13>–.01 .27(>).36 –.14<–.54 .04<.27 .04<.30 .03/.010
Sweden .09(>).42 .18(>).45 .20(>).40 .16(>).22 .00(>).15 .15>.03 .31(>).37 –.02<–.11 .01<.24 .03<.24 .03/.009
Switzerland .10(>).45 .23(>).47 .21(>).46 .19(>).19 –.06(>).04 .07(>)–.12 .34(>).35 –.16<–.40 .02<.29 .02<.29 .04/.012
Taiwan .07(>).38 .11(>).40 .10(>).34 .09(>).43 .03(>).33 .09(>).21 .20(>).42 –.10(<)–.05 .05<.29 .04<.27 .04/.011
Tamil Nadu –.01(>).06 –.18(>).29 –.13(>).26 .10(>).19 .05(>).11 .17>–.11 .17(>).23 –.04<–.23 –.12<.22 –.13<.20 .04/.014
Thailand .10(>).26 .07(>).27 .14(>).28 .05(>).25 .02(>).19 .10(>).23 .15(>).24 –.09<–.21 .05<.30 .05<.28 .03/.011
Trinidad and Tobago .00(>).19 .03(>).45 .05(>).41 .12(>).30 .01(>).03 .15>.04 .21(>).19 –.15<–.55 –.03<.12 –.04<.16 .03/.008
Tunisia .02(>).26 .07(>).25 –.01(>).26 .15(>).20 .07(>).13 .09(>)–.09 .13>–.03 –.15<–.68 –.04<.24 –.04<.25 .03/.013
Turkey .08(>).39 .12(>).39 .06(>).34 .14(>).19 .02(>).12 .03(>)–.21 .22>.17 –.14<–.37 .07<.36 .03<.35 .04/.014
UAE .02(>).20 .03(>).40 .04(>).40 .11(>).23 .02(>)–.00 .08(>)–.12 .14(>).26 –.10<–.42 .01<.34 .01<.34 .03/.010
UK .08(>).50 .14(>).42 .14(>).33 .13(>).25 –.04(>).07 .06>–.02 .29(>).39 –.02(<).02 –.01<.20 .03<.22 .03/.009
Uruguay .06(>).39 .13(>).46 .05(>).38 .13(>).25 –.03(>).05 .12>–.09 .16(>).22 –.12<–.63 .06<.38 .04<.45 .03/.010
USA .09(>).43 .19(>).35 .16(>).30 .17(>).21 –.02(>).01 .10(>)–.11 .27(>).34 –.12<–.27 .01<.33 –.01<.29 .03/.008
Mean (abs.)^a (.06)/.34 (.13)/.41 (.11)/.36 (.12)/.23 (.04)/(.11) (.09)/(.10) .21/(.26) (.10)/(.32) (.04)/.28 (.03)/.28 .03/.01
Median .06/.37 .13/.42 .11/.37 .13/.23 –.02/.10 .09/–.03 .23/.30 –.09/–.24 .02/.26 .025/.25 .03/.01
Min. –.07/.06 –.18/.19 –.13/.23 –.00/.00 –.12/–.09 –.06/–.29 .08/–.03 –.22/–.68 –.12/.07 –.13/.06 .02/.006
Max. .13/.56 .29/.52 .26/.49 .20/.43 .09/.33 .18/.28 .36/.44 .03/.05 .10/.43 .09/.45 .06/.04

Note. Weighted correlation coefficients were estimated from structural equation models specified separately for each country and federal state. The general PISA factor and the reading-specific factor from the bi-factor model were correlated with student characteristics within a structural equation modeling framework. To avoid bias arising from the simultaneous estimation of the correlations, previously estimated parameter values for the bi-factor model were fixed. To further improve model fit, correlations among the student characteristics were explicitly specified. Due to the presence of missing values among student characteristics, full information maximum likelihood (FIML) estimation was employed. This procedure enabled the inclusion of all students in the analysis without casewise deletion. Non-significant correlation coefficients are presented in italics, and significant coefficients are shown in normal font (p ≤ .05). One-tailed significance tests were applied to variables with directional hypotheses, including the number of books at home, reading enjoyment, delayed schooling, and maternal and paternal education. Two-tailed tests were used for all other student-related variables. The directional hypotheses underlying the comparison of correlation coefficients are indicated by the use of less-than or greater-than signs. When these signs are enclosed in parentheses, the results are non-significant (p > .05, one-tailed test); when parentheses are absent, the results are significant (p ≤ .05, one-tailed test). In instances of negative correlation coefficients, the absolute value of these coefficients was employed for the purpose of conducting a statistical comparison.
a The mean correlations between the factors and student variables were computed as follows: The correlation coefficients were transformed into z-values using Fisher’s r-to-z transformation and weighted by the final student weights of each country and federal state. Subsequently, the overall weighted z-means were transformed back to r. In instances where negative correlation coefficients were observed, the absolute values of these coefficients were utilized to calculate the weighted absolute mean correlations, which are reported in parentheses.
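The pooling procedure described in the footnote can be sketched in a few lines; the correlation values and weights below are invented for illustration (in the study, the final student weights of each country and federal state were used):

```python
import math

def weighted_mean_r(rs, weights):
    """Pool correlations: Fisher r-to-z, weighted mean of z, back-transform."""
    zs = [math.atanh(r) for r in rs]                    # r-to-z transform
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return math.tanh(z_bar)                             # z-to-r back-transform

# Example: three correlations spanning the range reported for books and PISA-g.
print(round(weighted_mean_r([0.06, 0.34, 0.56], [1.0, 2.0, 1.0]), 3))
```

Averaging in z-space avoids the downward bias that arises when correlation coefficients are averaged directly.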

PISA-g exhibited a correlation pattern with the number of books at home, reading enjoyment, paternal and maternal education, and delayed schooling that corresponds to that of intelligence (see Section 4.2) in 65 out of 74 countries and federal states. For RspecF, this pattern was observed in only ten countries and federal states. In the remaining nine countries and federal states for PISA-g and 64 for RspecF, at least one of the five hypotheses concerning the correlations between these factors and the intelligence-relevant student characteristics was not supported by the data. Specifically, in 17, 18, 24, and 5 countries and federal states, respectively, one, two, three, and four of the five hypotheses for RspecF were not substantiated. With respect to PISA-g, one of the five hypotheses was not supported in only eight countries and federal states: Iceland, Korea, Japan, Norway, Romania, Tamil Nadu, Tunisia, and the United Kingdom. In Tamil Nadu, the correlation between PISA-g and the number of books at home was not significantly positive, and in Tunisia the same held for the enjoyment of reading (p > .05; see Table 4). In Iceland, Japan, Korea, Norway, Romania, and the United Kingdom, a non-significant positive relationship emerged between PISA-g and the years of delayed schooling (p > .05). Kazakhstan was the only country in which two hypotheses, concerning the correlations of PISA-g with reading enjoyment and with the years of delayed schooling, were not supported (r(PISA-g, JoyRead) = .00, r(PISA-g, School delay) = .03, p > .05, one-tailed test).

For exploratory analyses, it was statistically tested whether delayed schooling and paternal and maternal education exhibited weaker correlations with RspecF than with PISA-g. In the majority of countries and federal states, if not all, the results were significant, as indicated by the less-than signs without parentheses (e.g., Albania: r(School delay, RspecF) = –.03 vs. r(School delay, PISA-g) = –.15, with |–.03| < |–.15|; p ≤ .05, one-tailed test; see Table 4). When the less-than or greater-than signs are enclosed in parentheses, the results are not statistically significant (e.g., Iceland: r(School delay, RspecF) = –.01 (<) r(School delay, PISA-g) = –.01; p > .05). In accordance with the hypotheses stated in Section 4.2, it was also tested whether the number of books at home and the enjoyment of reading were more strongly correlated with RspecF than with PISA-g (see Table 4). In the majority of cases, if not all, these results were non-significant. From a descriptive perspective, the correlations of these two student variables with RspecF were weaker than those with PISA-g in nearly all, if not all, countries and federal states. When tested in the opposite direction, the number of books and the enjoyment of reading showed significantly weaker correlations with RspecF than with PISA-g. Overall, across the majority of the 74 countries and federal states—if not all—the correlations between the five intelligence-relevant student characteristics and RspecF were significantly weaker than the corresponding correlations with PISA-g.

In all 74 countries and federal states, significant correlations were observed between PISA-g and students’ ability to correctly assess the usefulness of effective strategies for understanding and remembering text information and for writing a text summary (p ≤ .05, two-tailed test). Analogous results were obtained between RspecF and the correct assessment of the usefulness of effective strategies for understanding and remembering text information and for writing a text summary in 63 and 64 countries and federal states, respectively (p ≤ .05, two-tailed test). For these countries and federal states, the results supported the pertinent hypotheses. The correlations of RspecF with the two aforementioned variables ranged from negligible to moderate, whereas those of PISA-g varied from weak to strong (r(RspecF, WritingSum) = [–.18, .29] vs. r(PISA-g, WritingSum) = [.19, .52]; r(RspecF, UndRem) = [–.13, .26] vs. r(PISA-g, UndRem) = [.23, .49]; see Table 4). Significant correlations were found between PISA-g (or RspecF) and the use of control, elaboration, and memorization strategies in 73 (or 67), 60 (or 29), and 57 (or 58) out of 74 countries and federal states, respectively (p ≤ .05, two-tailed test). In these cases, the findings substantiated the hypotheses. Considering all correlations between the two factors and the use of these three strategies, correlation strengths ranged from negligible to moderate for RspecF and from negligible to nearly large for PISA-g (e.g., r(RspecF, Control) = [–.00, .20] vs. r(PISA-g, Control) = [.00, .43]; see Table 4).

No student characteristics showed a correlation with RspecF at a level of at least .50 (e.g., Kline, 2012) without also exhibiting a significant correlation with PISA-g. Such a variable would be indicative of RspecF, as illustrated in the bi-factor model (see Figure C1). Among all variables examined, reading enjoyment was the only one to demonstrate a significant positive correlation with RspecF in all countries and federal states. Contrary to expectations, in all 74 countries and federal states, the number of books at home and the correct assessment of the usefulness of strategies for writing a text summary and for understanding and remembering text information were not more strongly correlated with RspecF than with PISA-g. The same applies to the other reading-related student characteristics: In 73, 64, 47, and 62 countries and federal states, the use of control, elaboration, and memorization strategies, and reading enjoyment, respectively, did not show stronger correlations with RspecF than with PISA-g. In fact, the majority of countries and federal states, if not all, displayed the opposite pattern, with correlations between reading-related student variables and PISA-g being significantly higher than those with RspecF. This finding suggests that these student characteristics may not be reading-specific.

If a student characteristic is reading-specific, it should demonstrate a significantly stronger correlation with reading competence than with mathematical and scientific competence. In 49, 43, 61, and 73 out of 74 countries and federal states, respectively, reading competence showed significantly stronger correlations with the correct assessment of the usefulness of strategies for writing a text summary and for understanding and remembering text information, the use of control strategies, and the enjoyment of reading than mathematics and science competence did (see Table B8 in Appendix B). In these countries and federal states, the respective student characteristics may be considered reading-specific. However, reading competence did not exhibit significantly stronger correlations with the number of books at home, the use of elaboration strategies, and the use of memorization strategies than did mathematics and science competence in 47, 57, and 41 countries and federal states, respectively. In addition, in 18, 12, and 4 cases, respectively, reading competence did not correlate significantly more strongly with the number of books at home, the use of elaboration strategies, and the use of memorization strategies than either mathematics or science competence. In all of these countries and federal states, the student variables were not reading-specific. Overall, from a descriptive perspective, the differences in the strength of the correlations of reading, mathematical, and science competence with these seven student characteristics were negligible to small in most cases (e.g., mean r(Books, Math) = .32 vs. mean r(Books, Reading) = .32 vs. mean r(Books, Science) = .32; see Table 5).

Table 5.Correlations of mathematical, scientific, and reading literacy with reading related student characteristics as well as their comparisons
Statistic Mathematical / Reading^a / Science literacy correlated with:
Books at home | Usefulness of strategies for: writing summary, understanding/remembering | Use of reading strategies: control, elaboration, memorization | Reading enjoyment
Overall (abs.) mean^b .32/.32/.32 .37/.40/.39 .32/.35/.34 .20/.25/(.22) (.09)/(.08)/(.09) (.09)/(.10)/(.10) (.20)/.33/.26
Median .35/.34/.33 .37/.43/.39 .33/.37/.35 .20/.26/.22 .08/.07/.08 –.04/.02/–.03 .22/.38/.30
Min. .05/.03/.06 .13/.14/.15 .19/.13/.21 .00/.04/–.00 –.08/–.09/–.08 –.28/–.25/–.27 –.06/.04/.001
Max. .55/.55/.53 .48/.55/.52 .45/.51/.47 .40/.43/.41 .30/.32/.32 .22/.31/.28 .37/.52/.45

Note. N = 74 countries and federal states. Weighted correlation coefficients were estimated from the structural equation models specified separately for each country and federal state. The overall reading, mathematics, and science competence scales were correlated with student characteristics within a structural equation modeling framework. To improve model fit, the correlations among the student characteristics were explicitly specified. Due to the presence of missing data among student characteristics, full information maximum likelihood (FIML) estimation was employed. This approach enabled the inclusion of all students in the analysis without necessitating casewise exclusion.
a For reading competence, the overall scale with plausible values 1–5 per student was applied.
b The mean correlations between the PISA competences (reading, mathematics, and science) and reading-related student characteristics were computed as follows: The correlation coefficients were transformed into z-values using Fisher’s r-to-z transformation and weighted by the final student weights of the countries and federal states. Subsequently, the overall weighted z-means were transformed back to r. In instances where negative correlation coefficients were observed, the absolute values of these coefficients were utilized to calculate the weighted absolute average correlations, as shown in parentheses.

7. Discussion

7.1. High latent correlations as a basis for factor extraction

Significant positive and generally strong latent correlations were observed between the reading sub-competences, as well as between these sub-competences and the mathematics and science competence, in nearly all countries and federal states (.39 ≤ r_latent ≤ .96; overall mean r_latent = .83). However, in Azerbaijan, the latent correlations between the mathematics and science total scales and between the reading subscales and the mathematics total scale were lower, ranging from .39 to .48. These consistently strong to very strong latent correlations suggest limited empirical discriminability among the PISA competence domains across countries and federal states. In addition to other factors (e.g., test-taking behavior; see Section 1), these correlational relationships may, to some extent, be facilitated by overlaps in item characteristics and cognitive demands—such as reasoning, abstract thinking, and comprehension—across the domains of reading, mathematics, and science. For example, reading-related processes, such as accessing and extracting, as well as integrating and interpreting, are relevant for mathematical and science items, especially when solution-relevant information must be identified within the task material. Reflecting and evaluating also plays a role in mathematical and science items to some degree. Moreover, the ability to work with and comprehend different forms of representation (e.g., text, diagrams, and tables) is a common requirement across items in all three domains. In general, a certain level of reading literacy is required for all items. Furthermore, some mathematical literacy is necessary for specific science and reading items that involve dealing with numbers in tables, diagrams, or graphs.
Exploratory analyses indicate that the strength of the correlations between the overall scales of mathematical and science competence and/or the reading competence subscales tends to increase as GDP per capita (or its natural logarithm) rises, with the associations ranging from weak to strong (or from moderate to strong for the logarithm).

7.2. The replication of the factors and their correlations with student characteristics

The nested-factor model (i.e., the bi-factor model) showed an adequate to perfect fit in 68 countries and federal states. Regarding the remaining six countries and federal states, the model fit was considered acceptable (i.e., RMSEA ≤ .08) in five countries and poor in Liechtenstein (i.e., RMSEA = .10). In all countries and states, the bi-factor model and the adjusted one-factor model provided a superior fit compared with the one-factor model. This finding is consistent with previous research using various PISA data sets (e.g., Brunner, 2006, 2008; Pokropek, Marks, & Borgonovi, 2022; Pokropek, Marks, Borgonovi, et al., 2022). In most cases, the nested-factor model also exhibited a better fit than the adjusted one-factor model, which incorporated residual correlation between mathematics and science. However, it was observed that all aforementioned factor models exhibited inadequate model fits for Korean and Shanghai PISA data. In these two cases, a further adjusted one-factor model provided a good to perfect fit to the data, thereby indicating its superiority over the other models.

Within the nested-factor models, the high correlations between the reading competence subscales could be statistically accounted for by PISA-g and RspecF, whereas the correlations between these subscales and science or mathematical competence, as well as the correlation between mathematical and science competence, could be attributed to PISA-g. In all countries and federal states—except for Azerbaijan, in the case of mathematical competence and Reflect and Evaluate—the mathematics and science competence scales and the reading subscales measured the same latent construct, PISA-g, thereby demonstrating convergent construct validity. This was indicated by their standardized factor loadings on this factor, which were equal to or greater than .70. The overall mean of the standardized factor loadings of these competences and sub-competences on PISA-g was .87. In most cases, the reading subscales did not demonstrate adequate convergent validity with respect to RspecF. The grand mean of the standardized factor loadings for the reading literacy subscales was .82 on PISA-g and .47 on RspecF. As indicated by the magnitude of the standardized factor loadings, the degree to which PISA-g and RspecF explained the observed variance in PISA competences and reading sub-competences differed across countries and federal states. Specifically, PISA-g accounted for between 32.49% and 88.36% of the variance in mathematical literacy, between 72.25% and 98.01% of the variance in scientific literacy, and between 47.61% and 81.00% of the variance in the reading literacy subscales. The percentage of variance explained by RspecF across the reading subscales ranged from 10.89% to 42.25%. PISA-g was found to be the dominant contributor to most of the variance explained in the reading competence subscales and in mathematics and science competence. Consequently, these subscales and competences may reflect general cognitive ability (i.e., PISA-g) more strongly than domain-specific abilities.

PISA-g demonstrated significant positive correlations with maternal and paternal education in all countries and federal states, as well as with the number of books and the enjoyment of reading in nearly all cases. A significant negative correlation between this factor and delayed schooling was identified in nearly all countries and federal states. Therefore, the relevant hypotheses were substantiated in those countries and federal states. As expected, the reading-specific factor also demonstrated significant positive or negative correlations with the aforementioned variables, but, with the exception of the enjoyment of reading, in a considerably smaller number of countries and federal states. The correlations of PISA-g with the number of books and maternal and paternal education ranged from weak to strong; those with the enjoyment of reading and delayed schooling ranged from negligible to strong. In nearly all cases, the correlations between these student characteristics and RspecF were considerably weaker than those observed with PISA-g. All five hypotheses regarding the correlations between PISA-g and the aforementioned five intelligence-relevant student characteristics were supported in a total of 65 countries and federal states. Consequently, in these cases, PISA-g showed a pattern of correlations with these intelligence-relevant student characteristics analogous to that of intelligence. RspecF exhibited an intelligence-related correlational pattern with the same five student variables in only ten countries and federal states, and its correlations were significantly (and considerably) weaker. Unlike PISA-g, RspecF thus displayed a much less consistent intelligence-related correlation pattern across countries and federal states.
These findings, along with the intelligence-related PISA item demands (see Sections 3.1 to 4.1), support the assumption that PISA-g reflects a general cognitive ability (i.e., an intelligence-like ability). For instance, it encompasses the intelligent application of knowledge and given information.

PISA-g and RspecF demonstrated significant correlations with metacognitive and learning strategies in varying numbers of countries and federal states. The metacognitive strategies comprise the correct assessment of the usefulness of effective strategies for understanding and remembering text information as well as for writing a text summary. The correlations of RspecF with these two metacognitive strategies ranged from negligible to moderate, whereas their correlations with PISA-g varied from weak to strong. When all the correlations between the two factors and the use of the three learning strategies (control, elaboration, and memorization) are taken into account, their strengths ranged from negligible to moderate for RspecF and up to nearly large for PISA-g. No student characteristic was found to correlate with RspecF at a level of at least .50 while at the same time not correlating significantly with PISA-g. Such a variable would be indicative of RspecF, as illustrated in the bi-factor model. The precise nature of this factor remains an open question.

Contrary to expectations, in the majority of countries and federal states—if not all—the correlations between the reading-related student variables and RspecF were not significantly stronger than their correlations with PISA-g. A student characteristic may be regarded as reading-specific if it shows significantly stronger correlations with reading competence than with mathematical and scientific competence. According to this criterion, the correct assessment of the usefulness of effective strategies for writing text summaries and for understanding and remembering textual information, the use of control strategies, and the enjoyment of reading were reading-specific: the first three in most countries and federal states, and the enjoyment of reading in all cases except Tunisia. However, the number of books at home, the use of memorization strategies, and the use of elaboration strategies were not reading-specific in most or nearly all countries and federal states. Although certain student characteristics were found to be reading-specific, they generally exhibited a stronger correlation with PISA-g than with RspecF in most cases, or in all, depending on the specific student characteristic under consideration.

7.3. Criticism and implications

For the PISA 2009 items for which response options were provided, students might also have guessed the correct answer, thereby potentially compromising the validity of these items. It is also plausible that guessing strategies inflate the correlation between the PISA competences and intelligence.
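The size of the chance component can be illustrated with the classical correction-for-guessing formula (this formula is not part of the PISA 2009 scaling, and the item counts below are invented):

```python
# Classical correction for guessing: with k response options, subtracting
# W/(k - 1) from the number right gives a random guesser an expected score of 0.

def corrected_score(right, wrong, options):
    return right - wrong / (options - 1)

# A pure guesser on 40 four-option items expects 10 right and 30 wrong,
# so the corrected score equals the expected value of zero:
print(corrected_score(10, 30, 4))
```

The sketch merely quantifies the concern: without any correction, a pure guesser on four-option items still earns about a quarter of the points.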

Using manifest correlations may allow a more precise empirical differentiation between the reading competence subscales and the other competences (e.g., German PISA 2000 data: r(AcRe, InIn) = .74 manifest vs. .94 latent; r(AcRe, ReEv) = .64 manifest vs. .88 latent; r(InIn, ReEv) = .71 manifest vs. .91 latent; Artelt & Schlagmüller, 2004). However, the persistently high latent correlations suggest that the PISA items themselves need to be modified to achieve greater dimensional specificity. As a preliminary step, it is essential to identify and mitigate the similarities between the competences, including the reading subscales, as discussed in Sections 3.1 and 3.2.
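The gap between manifest and latent coefficients is what a correction for measurement error predicts. A sketch using Spearman's disattenuation formula, with hypothetical reliabilities (the values .80 and .78 are illustrative, not PISA estimates):

```python
import math

# Disattenuation: r_latent ≈ r_manifest / sqrt(rel_x * rel_y),
# i.e., the observed correlation corrected for unreliability of both measures.

def disattenuate(r_manifest, rel_x, rel_y):
    return r_manifest / math.sqrt(rel_x * rel_y)

# With the manifest correlation of .74 cited above and illustrative
# reliabilities near .80, the corrected value approaches the latent .94:
print(round(disattenuate(0.74, 0.80, 0.78), 2))
```

Latent-variable models apply this correction implicitly, which is why latent correlations are systematically higher than their manifest counterparts.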

The large share of variance in mathematical and science competence accounted for by PISA-g may also be partly due to the chosen specification of the nested-factor model. For instance, including an additional specific factor for each of these competences could reduce the variance explained by PISA-g. However, the PISA 2009 data set contained only overall scales for mathematics and science competence, precluding the extraction of further competence-specific factors from, for example, mathematics and science subscales. Instead of assuming a reading-specific factor, the correlation between the reading subscales could also be attributed to common method variance (Brunner, 2008), that is, to a method factor.

In previous factor analyses (e.g., Brunner, 2006, 2008; Pokropek, Marks, & Borgonovi, 2022), the two subtests (word and figure analogies) of the Cognitive Ability Test (Heller & Perleth, 2000) and Raven's Standard Progressive Matrices (Jaworowska et al., 2000) were used. However, these measures are not sufficient for a comprehensive assessment of intelligence. Consequently, future factor analyses should combine a broad intelligence test battery with the PISA competences at the overall, subscale, and item levels. This would permit a comprehensive empirical validation of PISA-g, for instance, of whether it represents an ability analogous to intelligence or something narrower. Furthermore, the PISA items should be compared with those of various intelligence tests to identify similarities in content and cognitive demands.

Given the substantial sample sizes, even negligible correlations or minor differences between correlation coefficients can attain statistical significance. Consequently, a student characteristic may appear to be reading-specific merely because its correlation with reading competence is marginally—but significantly—higher than with other competences. Therefore, it is essential to systematically assess the magnitude of these correlations in addition to their statistical significance.
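As a hedged illustration of this point (the study's own significance tests may use different methods), a simple Fisher z-test shows that a negligible correlation of r = .02 is overwhelmingly "significant" at PISA-scale sample sizes yet trivially small by Cohen's (1988) benchmarks:

```python
import math

def fisher_z_test(r, n):
    """Two-sided p-value for H0: rho = 0, using the Fisher z-transform
    with its approximate standard error 1 / sqrt(n - 3)."""
    z = math.atanh(r) * math.sqrt(n - 3)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p

# The same negligible correlation, tested at two sample sizes:
z_big, p_big = fisher_z_test(0.02, 500_000)   # pooled large-scale assessment
z_small, p_small = fisher_z_test(0.02, 100)   # classroom-sized sample
print(f"n = 500000: z = {z_big:.1f}, p = {p_big:.1e}")
print(f"n = 100:    z = {z_small:.2f}, p = {p_small:.2f}")
```

At n = 500,000 the p-value is astronomically small even though r = .02 explains only 0.04% of the variance; magnitude, not significance, must therefore carry the interpretation.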

It is recommended that subsequent PISA studies incorporate a comprehensive intelligence measure together with an approximately representative sample of items from the three competence domains (reading, mathematics, and science). This would enable a thorough evaluation of PISA-g in factor analyses. To compute manifest correlations between domains at the item or total-scale level, students must be administered a representative sample of PISA items for each domain; the divergent empirical validity of the competences can then be evaluated on the basis of such correlations. To address motivational losses resulting from the elevated demands of a single testing occasion, measurement at two distinct time points is recommended. This may help mitigate low test-taking effort, which can lead to student underperformance (e.g., Borger et al., 2025; He et al., 2025). Furthermore, the tasks of all subscale items within a given domain should be analyzed and compared with those of other domains to identify shared characteristics specific to that domain. This procedure may facilitate deriving a description of a domain-specific factor.

Accepted: January 17, 2026 CDT

References

Adams, R., & Wu, M. (2003). Proficiency scales construction. In R. Adams & M. Wu (Eds.), PISA 2000 technical report (pp. 195–216). OECD Publishing. https:/​/​doi.org/​10.1787/​9789264199521-EN
Artelt, C., Naumann, J., & Schneider, W. (2010). Lesemotivation und Lernstrategien [Reading motivation and learning strategies]. In E. Klieme, C. Artelt, J. Hartig, N. Jude, O. Köller, M. Prenzel, W. Schneider, & P. Stanat (Eds.), PISA 2009. Bilanz nach einem Jahrzehnt [PISA 2009. Taking stock after a decade] (pp. 73–112). Waxmann.
Artelt, C., & Schlagmüller, M. (2004). Der Umgang mit literarischen Texten als Teilkompetenz im Lesen? Dimensionsanalysen und Ländervergleiche [The handling of literary texts as a sub-competence in reading: dimensional analyses and cross-country comparisons]. In U. Schiefele, C. Artelt, W. Schneider, & P. Stanat (Eds.), Struktur, Entwicklung und Förderung von Lesekompetenz. Vertiefende Analysen im Rahmen von PISA 2000 [Structure, development and promotion of reading literacy. In-depth analyses within the framework of PISA 2000] (pp. 169–196). VS Verlag für Sozialwissenschaften. https:/​/​doi.org/​10.1007/​978-3-322-81031-1_8
Artelt, C., Stanat, P., Schneider, W., & Schiefele, U. (2001). Lesekompetenz: Testkonzeption und Ergebnisse [Reading literacy: test design and results]. In J. Baumert, E. Klieme, M. Neubrand, M. Prenzel, U. Schiefele, W. Schneider, P. Stanat, K.-J. Tillmann, & M. Weiß (Eds.), PISA 2000. Basiskompetenzen von Schülerinnen und Schülern im internationalen Vergleich. [PISA 2000. Basic competencies of students in international comparison.] (pp. 69–137). Leske + Budrich. https:/​/​doi.org/​10.1007/​978-3-322-83412-6
Baumert, J., Lüdtke, O., Trautwein, U., & Brunner, M. (2009). Large-scale student assessment studies measure the results of processes of knowledge acquisition: evidence in support of the distinction between intelligence and student achievement. Educational Research Review, 4(3), 165–176. https:/​/​doi.org/​10.1016/​j.edurev.2009.04.002
Borger, L., Eklöf, H., Johansson, S., & Strietholt, R. (2025). The issue of test-taking motivation in low-and high-stakes tests: are students underachieving in PISA? Learning and Individual Differences, 122, 102722. https:/​/​doi.org/​10.1016/​j.lindif.2025.102722
Brunner, M. (2006). Mathematische Schülerleistung: Struktur, Schulformunterschiede und Validität [Mathematical student performance: Structure, school form differences and validity] [Doctoral dissertation, Humboldt University of Berlin]. https:/​/​doi.org/​10.18452/​15480
Brunner, M. (2008). No g in education? Learning and Individual Differences, 18(2), 152–165. https:/​/​doi.org/​10.1016/​j.lindif.2007.08.005
Brunner, M., Gogol, K. M., Sonnleitner, P., Keller, U., Krauss, S., & Preckel, F. (2013). Gender differences in the mean level, variability, and profile shape of student achievement: results from 41 countries. Intelligence, 41(5), 378–395. https:/​/​doi.org/​10.1016/​j.intell.2013.05.009
Carroll, J. B. (1993). Human cognitive abilities. Cambridge University Press. https:/​/​doi.org/​10.1017/​CBO9780511571312
Ceci, S. J. (1991). How much does schooling influence general intelligence and its cognitive components? A reassessment of the evidence. Developmental Psychology, 27(5), 703–722. https:/​/​doi.org/​10.1037/​0012-1649.27.5.703
Cohen, J. (1988). The significance of a product moment rs. In J. Cohen (Ed.), Statistical power analysis for the behavioral sciences (2nd ed., pp. 75–107). Lawrence Erlbaum Associates.
Diedenhofen, B., & Musch, J. (2015). cocor: a comprehensive solution for the statistical comparison of correlations. PLOS ONE, 10(4), e0121945. https://doi.org/10.1371/journal.pone.0121945
Ehmke, T., Drechsel, B., & Carstensen, C. H. (2008). Klassenwiederholen in PISA-I-Plus: Was lernen Sitzenbleiber in Mathematik dazu? [Repeating a class in PISA-I-Plus: What do those who have to repeat a class learn in mathematics?]. Zeitschrift für Erziehungswissenschaft, 11(3), 368–387. https:/​/​doi.org/​10.1007/​s11618-008-0033-3
Ehmke, T., Drechsel, B., & Carstensen, C. H. (2010). Effects of grade retention on achievement and self-concept in science and mathematics. Studies in Educational Evaluation, 36(1–2), 27–35. https:/​/​doi.org/​10.1016/​j.stueduc.2010.10.003
Flores-Mendoza, C., Ardila, R., Gallegos, M., & Reategui-Colareta, N. (2021). General intelligence and socioeconomic status as strong predictors of student performance in Latin American schools: evidence from PISA items. Frontiers in Education, 6, 632289. https:/​/​doi.org/​10.3389/​feduc.2021.632289
Ganzach, Y. (2014). Adolescents’ intelligence is related to family income. Personality and Individual Differences, 59, 112–115. https:/​/​doi.org/​10.1016/​j.paid.2013.10.028
Gottfredson, L. S. (1997a). Mainstream science on intelligence: an editorial with 52 signatories, history, and bibliography. Intelligence, 24(1), 13–23. https:/​/​doi.org/​10.1016/​S0160-2896(97)90011-8
Gottfredson, L. S. (1997b). Why g matters: the complexity of everyday life. Intelligence, 24(1), 79–132. https:/​/​doi.org/​10.1016/​S0160-2896(97)90014-3
Hair, J. F., Babin, B. J., Anderson, R. E., & Black, W. C. (2018). Multivariate data analysis (8th ed.). Cengage Learning.
Haworth, C. M., Kovas, Y., Dale, P. S., & Plomin, R. (2008). Science in elementary school: Generalist genes and school environments. Intelligence, 36(6), 694–701. https:/​/​doi.org/​10.1016/​j.intell.2008.04.002
He, S., Liu, X., & Cui, Y. (2025). The role of discrepancy between self-reported and response time-based questionnaire-taking efforts on Canadian students’ PISA 2022 test achievements. Educational Psychology, 1–22. https:/​/​doi.org/​10.1080/​01443410.2024.2448563
Heller, K. A., Kratzmeier, H., & Lengfelder, A. (1998). Technische Daten [Technical data]. In K. A. Heller, H. Kratzmeier, & A. Lengfelder (Eds.), Matrizen-Test-Manual, Band 1: Ein Handbuch mit deutschen Normen zu den Standard Progressive Matrices von J.C. Raven [Matrices test manual, Volume 1: a manual of German norms for the Standard Progressive Matrices by J.C. Raven] (pp. 7–27). Beltz-Test.
Heller, K. A., & Perleth, C. (2000). Kognitiver Fähigkeitstest für 4. bis 12. Klassen, Revision. Manual [Cognitive Ability Test for 4th to 12th grades, revision. Manual] (1st ed.). Beltz-Test.
Jakubowski, M. (2013). Analysis of the predictive power of PISA test items (No. 87; OECD Education Working Papers). OECD Publishing. https:/​/​doi.org/​10.1787/​5k4bx47268g5-en
Jaworowska, A., Szustrowa, T., & Raven, J. C. (2000). Test Matryc Ravena w wersji Standard TMS: formy: Klasyczna, Równoległa, Plus: polskie standaryzacje [Raven’s matrices test in the standard TMS version, forms: regular, parallel, and plus]. Pracownia Testów Psychologicznych Polskiego Towarzystwa Psychologicznego.
Jimerson, S., Carlson, E., Rotert, M., Egeland, B., & Sroufe, L. A. (1997). A prospective, longitudinal study of the correlates and consequences of early grade retention. Journal of School Psychology, 35(1), 3–25. https:/​/​doi.org/​10.1016/​S0022-4405(96)00033-7
Kampa, N., Scherer, R., Saß, S., & Schipolowski, S. (2021). The relation between science achievement and general cognitive abilities in large-scale assessments. Intelligence, 86, 101529. https:/​/​doi.org/​10.1016/​j.intell.2021.101529
Kline, R. B. (2012). Assumptions in structural equation modeling. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (pp. 111–125). The Guilford Press.
Kline, R. B. (2016). Principles and practice of structural equation modeling (4th ed.). Guilford Press.
Knoche, N., & Lind, D. (2004). Bedingungsanalysen mathematischer Leistung: Leistungen in den anderen Domänen, Interesse, Selbstkonzept und Computernutzung [Conditional analyses of mathematical performance: performance in other domains, interest, self-concept, and computer usage]. In M. Neubrand (Ed.), Mathematische Kompetenzen von Schülerinnen und Schülern in Deutschland [Mathematical competencies of students in Germany] (pp. 205–226). VS Verlag für Sozialwissenschaften. https:/​/​doi.org/​10.1007/​978-3-322-80661-1_11
Kriegbaum, K., & Spinath, B. (2016). Explaining social disparities in mathematical achievement: the role of motivation. European Journal of Personality, 30(1), 45–63. https:/​/​doi.org/​10.1002/​per.2042
Krohne, H. W., & Hock, M. (2015). Intelligenztests [Intelligence tests]. In M. Hasselhorn, H. Heuer, & S. Schneider (Eds.), Psychologische Diagnostik: Grundlagen und Anwendungsfelder [Psychological diagnostics: fundamentals and fields of application] (pp. 358–376). Kohlhammer Verlag.
Krohne, J. A., Meier, U., & Tillmann, K.-J. (2004). Sitzenbleiben, Geschlecht und Migration —Klassenwiederholungen im Spiegel der PISA-Daten [Grade retention, gender and migration — class repetition in the mirror of PISA data]. Zeitschrift für Pädagogik, 50(3), 373–391. https:/​/​doi.org/​10.25656/​01:4816
Lemos, G. C., Almeida, L. S., & Colom, R. (2011). Intelligence of adolescents is related to their parents’ educational level but not to family income. Personality and Individual Differences, 50(7), 1062–1067. https:/​/​doi.org/​10.1016/​j.paid.2011.01.025
Leutner, D., Fleischer, J., & Wirth, J. (2006). Problemlösekompetenz als Prädiktor für zukünftige Kompetenz in Mathematik und in den Naturwissenschaften [Problem-solving competence as a predictor of future competence in mathematics and science]. In M. Prenzel, J. Baumert, W. Blum, R. Lehmann, D. Leutner, M. Neubrand, R. Pekrun, J. Rost, U. Schiefele, & P.-K. Deutschland (Eds.), PISA 2003. Untersuchungen zur Kompetenzentwicklung im Verlauf eines Schuljahres [PISA 2003. Investigations into the development of competencies over the course of a school year] (pp. 119–137). Waxmann.
Leutner, D., Wirth, J., Klieme, E., & Funke, J. (2005). Ansätze zur Operationalisierung und deren Erprobung im Feldtest zu PISA 2000 [Approaches to operationalisation and their testing in the field trial for PISA 2000]. In E. Klieme, D. Leutner, & J. Wirth (Eds.), Problemlösekompetenz von Schülerinnen und Schülern [Problem-solving competence of students]. VS Verlag für Sozialwissenschaften. https:/​/​doi.org/​10.1007/​978-3-322-85144-4
McDonald, R. P. (1999). Test theory: a unified treatment. Lawrence Erlbaum Associates Publishers.
McElvany, N., Becker, M., & Lüdtke, O. (2009). Die Bedeutung familiärer Merkmale für Lesekompetenz, Wortschatz, Lesemotivation und Leseverhalten [The importance of family characteristics for reading competence, vocabulary, reading motivation and reading behaviour]. Zeitschrift für Entwicklungspsychologie und Pädagogische Psychologie, 41(3), 121–131. https:/​/​doi.org/​10.1026/​0049-8637.41.3.121
Meng, X.-L., Rosenthal, R., & Rubin, D. B. (1992). Comparing correlated correlation coefficients. Psychological Bulletin, 111(1), 172–175. https://doi.org/10.1037/0033-2909.111.1.172
Michaelides, M. P., Ivanova, M. G., & Avraam, D. (2024). The impact of filtering out rapid-guessing examinees on PISA 2015 country rankings. Psychological Test and Assessment Modeling, 66(1), 50–62. https:/​/​doi.org/​10.2440/​001-0012
Möller, J., & Schiefele, U. (2004). Motivationale Grundlagen der Lesekompetenz [Motivational foundations of reading literacy]. In U. Schiefele, C. Artelt, W. Schneider, & P. Stanat (Eds.), Struktur, Entwicklung und Förderung von Lesekompetenz: Vertiefende Analysen im Rahmen von PISA 2000 [Structure, development and promotion of reading literacy: in-depth analyses in the context of PISA 2000] (pp. 101–124). VS Verlag für Sozialwissenschaften. https:/​/​doi.org/​10.1007/​978-3-322-81031-1_5
Muthén, L. K., & Muthén, B. O. (2015). Mplus user’s guide (7th ed.). Muthén & Muthén.
OECD. (1999a). Classifying educational programmes: Manual for ISCED-97 implementation in OECD countries. OECD Publishing.
OECD. (1999b). Reading literacy. In OECD (Ed.), Measuring student knowledge and skills (pp. 19–40). OECD Publishing. https:/​/​doi.org/​10.1787/​9789264173125-EN
OECD. (2004). A profile of student performance in mathematics. In OECD (Ed.), Learning for tomorrow’s world: first results from PISA 2003 (pp. 35–108). OECD Publishing. https:/​/​doi.org/​10.1787/​9789264006416-en
OECD. (2005). PISA 2003 technical report. OECD Publishing. https:/​/​doi.org/​10.1787/​9789264010543-en
OECD. (2009a). PISA 2006 technical report. OECD Publishing. https:/​/​doi.org/​10.1787/​9789264048096-en
OECD. (2009b). PISA data analysis manual: SPSS (2nd ed.). OECD Publishing. https:/​/​doi.org/​10.1787/​9789264056275-en
OECD. (2010a). Learning mathematics for life: a perspective from PISA. OECD Publishing. https://doi.org/10.1787/9789264075009-en
OECD. (2010b). PISA 2009 assessment framework: key competencies in reading, mathematics and science. OECD Publishing. https:/​/​doi.org/​10.1787/​9789264062658-en
OECD. (2010c). PISA 2009 results: learning to learn, student engagement, strategies and practices (Volume III) (OECD, Ed.). OECD Publishing. https:/​/​doi.org/​10.1787/​9789264083943-en
OECD. (2010d). PISA 2009 results: What students know and can do. Student performance in reading, mathematics and science (Vol. 1). OECD Publishing. https:/​/​doi.org/​10.1787/​9789264091450-EN
OECD. (2012). PISA 2009 technical report. OECD Publishing. https:/​/​doi.org/​10.1787/​9789264167872-en
OECD. (2017). Scaling outcomes. In OECD (Ed.), PISA 2015 technical report (pp. 225–250). OECD Publishing. https:/​/​www.oecd.org/​pisa/​data/​2015-technical-report/​PISA2015_TechRep_Final.pdf
OECD. (2019). PISA 2018 reading framework. In OECD (Ed.), PISA 2018 assessment and analytical framework (pp. 21–72). OECD Publishing. https:/​/​doi.org/​10.1787/​B25EFAB8-EN
Peng, P., & Kievit, R. A. (2020). The development of academic achievement and cognitive abilities: a bidirectional perspective. Child Development Perspectives, 14(1), 15–20. https://doi.org/10.1111/cdep.12352
Peng, P., Wang, T., Wang, C., & Lin, X. (2019). A meta-analysis on the relation between fluid intelligence and reading/mathematics: effects of tasks, age, and social economics status. Psychological Bulletin, 145(2), 189–236. https:/​/​doi.org/​10.1037/​bul0000182
Pokropek, A., Marks, G. N., & Borgonovi, F. (2022). How much do students’ scores in PISA reflect general intelligence and how much do they reflect specific abilities? Journal of Educational Psychology, 114(5), 1121–1135. https:/​/​doi.org/​10.1037/​edu0000687
Pokropek, A., Marks, G. N., Borgonovi, F., Koc, P., & Greiff, S. (2022). General or specific abilities? Evidence from 33 countries participating in the PISA assessments. Intelligence, 92, 101653. https:/​/​doi.org/​10.1016/​j.intell.2022.101653
Prenzel, M., Rost, J., Senkbeil, M., Häußler, P., & Klopp, A. (2001). Naturwissenschaftliche Grundbildung: Testkonzeption und Ergebnisse [Basic science education: test design and results]. In J. Baumert, E. Klieme, M. Neubrand, M. Prenzel, U. Schiefele, W. Schneider, P. Stanat, K.-J. Tillmann, & M. Weiß (Eds.), PISA 2000. Basiskompetenzen von Schülerinnen und Schülern im internationalen Vergleich [PISA 2000. Basic competencies of students in international comparison] (pp. 191–248). Leske + Budrich.
Rajchert, J. M., Żułtak, T., & Smulczyk, M. (2014). Predicting reading literacy and its improvement in the Polish national extension of the PISA study: the role of intelligence, trait-and state-anxiety, socio-economic status and school-type. Learning and Individual Differences, 33, 1–11. https:/​/​doi.org/​10.1016/​j.lindif.2014.04.003
Rindermann, H. (2007). Intelligenz, kognitive Fähigkeiten, Humankapital und Rationalität auf verschiedene Ebenen [Intelligence, cognitive abilities, human capital, and rationality at different levels]. Psychologische Rundschau, 58(2), 137–145. https:/​/​doi.org/​10.1026/​0033-3042.58.2.137
Rindermann, H. (2011). Intelligenzwachstum in Kindheit und Jugend [Intelligence growth in childhood and adolescence]. Psychologie in Erziehung und Unterricht, 58(3), 210–224. https:/​/​doi.org/​10.2378/​peu2011.art29d
Rindermann, H. (2018). Human capital, cognitive ability and intelligence. In H. Rindermann (Ed.), Cognitive capitalism: human capital and the wellbeing of nations (pp. 40–84). Cambridge University Press. https:/​/​doi.org/​10.1017/​9781107279339.004
Rindermann, H., & Baumeister, A. E. (2015). Validating the interpretations of PISA and TIMSS tasks: a rating study. International Journal of Testing, 15(1), 1–22. https:/​/​doi.org/​10.1080/​15305058.2014.966911
Rindermann, H., & Ceci, S. J. (2018). Parents’ education is more important than their wealth in shaping their children’s intelligence: results of 19 samples in seven countries at different developmental levels. Journal for the Education of the Gifted, 41(4), 298–326. https:/​/​doi.org/​10.1177/​0162353218799481
Ritchie, S. J., & Tucker-Drob, E. M. (2018). How much does education improve intelligence? A meta-analysis. Psychological Science, 29(8), 1358–1369. https:/​/​doi.org/​10.1177/​0956797618774253
Schaffner, E. (2009). Determinanten des Leseverstehens [Determinants of reading comprehension]. In W. Lenhard & W. Schneider (Eds.), Diagnostik und Förderung des Leseverständnisses [Diagnostics and promotion of reading comprehension] (pp. 19–44). Hogrefe Verlag.
Schaffner, E., Schiefele, U., & Schneider, W. (2004). Ein erweitertes Verständnis der Lesekompetenz: Die Ergebnisse des nationalen Ergänzungstests [A broader understanding of reading literacy: the results of the national supplementary test]. In U. Schiefele, C. Artelt, W. Schneider, & P. Stanat (Eds.), Struktur, Entwicklung und Förderung von Lesekompetenz: Vertiefende Analysen im Rahmen von PISA 2000 [Structure, development and promotion of reading literacy: in-depth analyses in the context of PISA 2000] (pp. 197–242). VS Verlag für Sozialwissenschaften. https:/​/​doi.org/​10.1007/​978-3-322-81031-1_9
Schnotz, W. (2014). Integrated model of text and picture comprehension. In R. E. Mayer (Ed.), The cambridge handbook of multimedia learning (2nd ed., pp. 72–103). Cambridge University Press. https:/​/​doi.org/​10.1017/​CBO9781139547369.006
Schnotz, W., & Dutke, S. (2004). Kognitionspsychologische Grundlagen der Lesekompetenz: Mehrebenenverarbeitung anhand multipler Informationsquellen [Cognitive-psychological foundations of reading literacy: multilevel processing based on multiple sources of information]. In U. Schiefele, C. Artelt, W. Schneider, & P. Stanat (Eds.), Struktur, Entwicklung und Förderung von Lesekompetenz. Vertiefende Analysen im Rahmen von PISA 2000 [Structure, development and promotion of reading literacy. In-depth analyses within the framework of PISA 2000] (pp. 61–99). VS Verlag für Sozialwissenschaften. https:/​/​doi.org/​10.1007/​978-3-322-81031-1_4
Steinmayr, R., Dinger, F. C., & Spinath, B. (2010). Parents’ education and children’s achievement: the role of personality. European Journal of Personality, 24(6), 535–550. https:/​/​doi.org/​10.1002/​per.755
Urban, D., & Mayerl, J. (2014). Strukturgleichungsmodellierung: Ein Ratgeber für die Praxis [Structural equation modeling: a guidebook for the practice]. Springer Fachmedien. https:/​/​doi.org/​10.1007/​978-3-658-01919-8
Walker, M. (2011). PISA 2009 Plus results: performance of 15-year-olds in reading, mathematics and science for 10 additional participants. ACER Press. https:/​/​research.acer.edu.au/​pisa/​1
Warm, T. A. (1989). Weighted likelihood estimation of ability in item response theory. Psychometrika, 54(3), 427–450. https:/​/​doi.org/​10.1007/​BF02294627
Wirth, J., Leutner, D., & Klieme, E. (2005). Problemlösekompetenz — Ökonomisch und zugleich differenziert erfassbar? [Problem-solving competence — Economically and at the same time differentiated measurable?]. In E. Klieme, D. Leutner, & J. Wirth (Eds.), Problemlösekompetenz von Schülerinnen und Schülern: Diagnostische Ansätze, theoretische Grundlagen und empirische Befunde der deutschen PISA-2000-Studie (pp. 73–82). VS Verlag für Sozialwissenschaften. https:/​/​doi.org/​10.1007/​978-3-322-85144-4_6

Appendices

A1: Rindermann’s (2018) definition of intelligence

Rindermann (2018, p. 43) provided a comprehensive definition of intelligence with detailed descriptions of some components mentioned by Gottfredson (1997a, 1997b):

Intelligence is the ability to think, a rather knowledge-reduced mental capacity, ideally free of specific knowledge. Intelligence comprises problem solving: to solve new problems by thinking (no simple knowledge recall); reasoning: to infer (to conclude and reason, to draw inductive and deductive-logical conclusions including finding patterns in information, to correctly generalise, to apply rules for new examples and to solve syllogisms); abstract thinking: to categorise, to form concepts, to sort out less relevant information, to process abstract information in the form of verbal and numerical symbols, in the form of abstract figures and in the form of general rules; understanding: to recognise and construct relationships, structures, contexts and meaning, to have insight. Intelligence … includes the ability to change cognitive perspectives, to make plans and use foresight.

As stated in Section 5.2 of this study, the items of the applied student characteristic scales are provided below:

Reading enjoyment was assessed by the following 11 items (OECD, 2012, p. 290):

  • “I read only if I have to”,

  • “Reading is one of my favourite hobbies”,

  • “I like talking about books with other people”,

  • “I find it hard to finish books”,

  • “I feel happy if I receive a book as a present”,

  • “For me, reading is a waste of time”,

  • “I enjoy going to a bookstore or a library”,

  • “I read only to get information that I need”,

  • “I cannot sit still and read for more than a few minutes”,

  • “I like to express my opinions about books I have read”, and

  • “I like to exchange books with my friends”.

The scale for the use of control strategies was generated based on the subsequent five items (OECD, 2012, p. 293):

  • “When I study, I start by figuring out what exactly I need to learn”,

  • “When I study, I check if I understand what I have read”,

  • “When I study, I try to figure out which concepts I still haven’t really understood”,

  • “When I study, I make sure that I remember the most important points in the text”, and

  • “When I study and I don’t understand something, I look for additional information to clarify this”.

For the scale regarding the use of elaboration strategies, the following four items were applied (OECD, 2012, p. 293):

  • “When I study, I try to relate new information to prior knowledge acquired in other subjects”,

  • “When I study, I figure out how the information might be useful outside school”,

  • “When I study, I try to understand the material better by relating it to my own experiences”, and

  • “When I study, I figure out how the text information fits in with what happens in real life”.

The four items that formed the scale for the use of memorization strategies were (OECD, 2012, p. 293):

  • “When I study, I try to memorize everything that is covered in the text”,

  • “When I study, I try to memorize as many details as possible”,

  • “When I study, I read the text so many times that I can recite it”, and

  • “When I study, I read the text over and over again”.

The scale for the correct assessment of the usefulness of effective strategies for understanding and remembering text information was constructed based on students’ judgment of the usefulness of the following strategies (OECD, 2010c, p. 113):

  • "A) I concentrate on the parts of the text that are easy to understand;

  • B) I quickly read through the text twice;

  • C) After reading the text, I discuss its content with other people;

  • D) I underline important parts of the text;

  • E) I summarise the text in my own words; and

  • F) I read the text aloud to another person."

For assessing the usefulness of these strategies, a six-point rating scale, ranging from 1 = not useful at all to 6 = very useful, was given. Students’ responses were required to correspond to the reading expert’s rank order, according to which strategies listed in C), D), and E) were considered more effective than those in A), B), and F). When this condition was met, one score point was assigned for each correct pairwise comparison (i.e., C > A; C > B; C > F; D > A; D > B; D > F; E > A; E > B; E > F); otherwise, no score point was assigned. Students’ final score points (e.g., 3 out of 9 equals .33) were standardized using the OECD mean and the corresponding standard deviation (OECD, 2012). Thus, standardized scores greater than zero indicate a better assessment of the usefulness of effective strategies for understanding and remembering text information compared to the OECD mean.

The scale for the correct assessment of the usefulness of effective strategies for writing a text summary was constructed based on students’ judgment of the usefulness of the following strategies (OECD, 2010c, p. 113):

  • "A) I write a summary. Then I check that each paragraph is covered in the summary, because the content of each paragraph should be included;

  • B) I try to copy out accurately as many sentences as possible;

  • C) before writing the summary, I read the text as many times as possible;

  • D) I carefully check whether the most important facts in the text are represented in the summary; and

  • E) I read through the text, underlining the most important sentences, then I write them in my own words as a summary."

To assess the usefulness of these strategies, the previously described six-point rating scale was provided. Students’ responses were compared with experts’ rank order of the strategies (i.e., DE > AC > B). Students obtained one score point for each correct pairwise comparison (i.e., D > A; D > C; D > B; E > A; E > C; E > B; A > B; C > B); otherwise, no score point was assigned. The final scores (e.g., 4 out of 8 equals .50) were then standardized using the OECD mean and the corresponding standard deviation (OECD, 2012).
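Both scoring rules reduce to the same generic pairwise-comparison procedure. The sketch below is a hypothetical re-implementation based solely on the descriptions above (function and variable names are ours, not the OECD's); the subsequent standardization against the OECD mean and standard deviation is omitted.

```python
def pairwise_score(ratings, expert_pairs):
    """Proportion of expert-consistent pairwise comparisons a student gets right.

    ratings: dict mapping strategy label -> student's 1-6 usefulness rating
    expert_pairs: list of (higher, lower) pairs per the experts' rank order
    """
    points = sum(1 for high, low in expert_pairs if ratings[high] > ratings[low])
    return points / len(expert_pairs)

# Understanding/remembering scale: C, D, E rated above A, B, F (9 pairs)
UR_PAIRS = [(h, l) for h in "CDE" for l in "ABF"]
# Summary-writing scale: D, E > A, C > B (8 pairs)
SUM_PAIRS = [("D", "A"), ("D", "C"), ("D", "B"),
             ("E", "A"), ("E", "C"), ("E", "B"),
             ("A", "B"), ("C", "B")]

# Example: a student whose ratings fully match the expert orderings
student = {"A": 3, "B": 2, "C": 4, "D": 6, "E": 5, "F": 1}
print(pairwise_score(student, UR_PAIRS))   # 1.0
print(pairwise_score(student, SUM_PAIRS))  # 1.0
```

A student who gets, say, 3 of the 9 comparisons right on the first scale receives a raw score of .33, matching the worked example in the text.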

Appendix B: Tables

Table B1. The mapping of selected PISA 2009 reading items to the reading literacy subscales (according to the PISA authors)
Exemplary reading items | Reading subscales (AcRe / InIn / ReEv)
The Play’s The Thing (Question 3): What were the characters in the play doing just before the curtain went up? (✓)
Brushing Your Teeth (Question 2): What do the British researchers recommend? (✓)
Brushing Your Teeth (Question 3): Why should you brush your tongue, according to Bente Hansen? (✓)
Brushing Your Teeth (Question 4): Why is a pen mentioned in the text? (✓)
Mobile Phone Safety (Question 11): “It is difficult to prove that one thing has definitely caused another. What is the relationship of this piece of information to the Point 4 Yes and No statements in the table Are mobile phones dangerous?” (✓)
Balloon (Question 4): What is the purpose of including a drawing of a jumbo jet in this text? (✓)
Balloon (Question 6): Why does the drawing show two balloons? (✓)
Blood Donation Notice (Question 9): The text says: “The instruments for taking the blood are sterile and single-use …” Why does the text include this information? (✓)

Note. A tick mark indicates the assignment of reading items to the literacy subscales Access and Retrieve (AcRe), Integrate and Interpret (InIn), and Reflect and Evaluate (ReEv). A tick without parentheses denotes assignment to a reading competence subscale, as defined by the OECD (2010d, pp. 92 et seq.). In our classification, a tick in parentheses indicates that the item partially fulfills the requirements of another subscale. This classification is based on the OECD’s descriptions of the items and their recommended solutions.

Table B2. The eight conceptualized mathematical abilities constituting mathematical competence in PISA 2009
Mathematical abilities Brief description
Mathematical thinking
and reasoning
  • Asking exploratory and probing questions (e.g., about the problem situation or the approach).
  • Distinguishing between different types of statements (e.g., hypotheses, definitions, and conditional assertions).
  • Logically analyzing the relationships between problem elements, and understanding mathematical concepts and how to work with them.
Argumentation
  • Formal or logical argumentation, justification, and proof, as well as the comprehension of justifications or chains of argumentation.
Communication
  • Understanding communicated mathematical matters and expressing one's own point of view about them.
Modeling
  • Mathematically structuring the real-world situation and monitoring the modeling process.
  • Dealing with mathematical models (e.g., interpretation, analysis, communication, and criticism).
Problem posing
and solving
  • Recognizing, formulating, and solving different types of mathematical problems, and applying problem-solving strategies.
Representation
  • Understanding, interpreting, or translating different representations of mathematical objects or situations.
  • Selecting and switching between different forms of representation (e.g., text, diagram).
Using symbolic, formal, and technical language and operations
  • Decoding and interpreting formal or technical language.
  • Translating natural language into symbolic or formal language and understanding their relationship to each other.
  • Solving or manipulating equations and performing calculations.
Use of aids and tools
  • Knowledge of different aids/tools and their situation-appropriate use.

Note. The descriptions of the eight mathematical abilities were taken from OECD (2005, 2010b, 2010a).

Table B3. The description of the science sub-competences from PISA 2006
Science subscales Key characteristics
Identifying Scientific Issues
  • Identifying keywords for searching scientific information.
  • Recognizing scientifically investigable issues.
  • Recognizing key features of a scientific investigation (e.g., the control, change, or comparison of variables).
Explaining Phenomena Scientifically
  • Applying knowledge of science in a given situation.
  • Describing or interpreting phenomena and predicting changes.
  • Identifying appropriate descriptions, explanations, and predictions.
Using Scientific Evidence
  • Interpreting scientific evidence and making and communicating conclusions.
  • Identifying assumptions, evidence, and reasoning behind conclusions.
  • Reflecting on the societal implications of science and technological developments.

Note. Adapted from “PISA 2009 Assessment Framework: Key competencies in reading, mathematics and science” by OECD (2010b), p. 137 (https://doi.org/10.1787/9789264062658-en). Copyright 2009 by the OECD.

Table B4. The latent correlations between the reading literacy subscales and the total scales of mathematical and scientific literacy
Country; NUnweighted; Math with Science; Math with reading subscales (AcRe, InIn, ReEv); Science with reading subscales (AcRe, InIn, ReEv); Intercorrelations between reading subscales (AcRe*InIn, AcRe*ReEv, InIn*ReEv)
Albania 4,596 .82 .74 .73 .71 .80 .79 .78 .92 .90 .91
Argentina 4,774 .88 .77 .79 .78 .80 .82 .81 .93 .91 .94
Australia 14,251 .90 .80 .82 .79 .85 .86 .84 .94 .94 .95
Austria 6,590 .91 .77 .78 .76 .83 .84 .83 .95 .92 .94
Azerbaijan 4,691 .48 .39 .43 .39 .61 .63 .59 .87 .85 .81
Belgium 8,501 .90 .79 .83 .81 .82 .86 .84 .92 .88 .93
Brazil 20,127 .87 .74 .77 .74 .79 .81 .78 .91 .85 .91
Bulgaria 4,507 .85 .78 .78 .77 .83 .84 .81 .94 .91 .94
Canada 23,207 .86 .73 .77 .72 .78 .82 .78 .92 .89 .92
Chile 5,669 .86 .74 .77 .73 .75 .78 .74 .91 .87 .91
Colombia 7,921 .83 .70 .73 .69 .72 .77 .74 .90 .86 .92
Costa Rica 4,578 .84 .72 .76 .71 .73 .76 .73 .90 .82 .90
Croatia 4,994 .89 .74 .74 .72 .79 .81 .79 .93 .89 .93
Czech Republic 6,064 .87 .75 .79 .76 .78 .82 .80 .94 .88 .92
Denmark 5,924 .86 .72 .75 .72 .78 .82 .79 .91 .89 .93
Estonia 4,727 .85 .73 .75 .74 .76 .80 .78 .91 .86 .92
Finland 5,810 .85 .69 .72 .71 .75 .80 .78 .90 .86 .92
France 4,298 .90 .77 .80 .79 .82 .85 .84 .92 .89 .94
Georgia 4,646 .80 .70 .67 .68 .74 .74 .75 .92 .89 .90
Germany 4,979 .91 .79 .79 .79 .82 .82 .83 .94 .91 .93
Greece 4,969 .82 .68 .71 .68 .74 .77 .75 .91 .85 .91
Himachal Pradesh 1,616 .69 .57 .59 .57 .65 .66 .60 .82 .72 .85
Hong Kong 4,837 .89 .70 .78 .74 .73 .81 .78 .87 .78 .89
Hungary 4,605 .90 .80 .81 .79 .84 .84 .82 .92 .89 .92
Iceland 3,646 .89 .76 .77 .76 .79 .82 .80 .94 .91 .93
Indonesia 5,136 .80 .63 .67 .60 .64 .69 .63 .82 .73 .83
Ireland 3,937 .92 .79 .80 .78 .81 .84 .81 .94 .90 .94
Israel 5,761 .88 .80 .81 .81 .80 .81 .82 .94 .93 .95
Italy 30,905 .87 .74 .75 .73 .79 .81 .79 .92 .89 .94
Japan 6,088 .90 .78 .79 .74 .84 .85 .79 .93 .87 .90
Jordan 6,486 .83 .72 .73 .68 .77 .80 .75 .88 .85 .91
Kazakhstan 5,412 .83 .73 .75 .72 .76 .77 .75 .91 .89 .89
Korea 4,989 .87 .73 .76 .73 .76 .81 .78 .87 .77 .90
Kyrgyzstan 4,986 .81 .72 .70 .70 .73 .73 .72 .91 .86 .87
Latvia 4,502 .84 .71 .73 .69 .76 .77 .73 .92 .87 .90
Liechtenstein 329 .85 .68 .73 .70 .76 .78 .78 .87 .86 .91
Lithuania 4,528 .87 .76 .77 .74 .79 .81 .79 .91 .89 .94
Luxembourg 4,622 .88 .76 .79 .78 .82 .85 .84 .93 .92 .94
Macao 5,952 .80 .64 .66 .67 .71 .74 .74 .89 .81 .89
Malaysia 4,999 .78 .65 .67 .66 .69 .74 .71 .86 .85 .90
Malta 3,453 .89 .79 .80 .80 .85 .87 .85 .96 .95 .96
Mauritius 4,654 .87 .79 .79 .78 .82 .83 .82 .93 .92 .94
Mexico 38,250 .83 .73 .77 .72 .72 .78 .73 .89 .81 .90
Miranda 2,901 .82 .74 .77 .75 .77 .79 .77 .89 .88 .91
Moldova 5,194 .78 .70 .68 .66 .73 .72 .70 .91 .87 .90
Montenegro 4,825 .86 .73 .74 .71 .78 .78 .75 .95 .88 .92
Netherlands 4,760 .89 .79 .82 .81 .81 .87 .85 .91 .90 .94
New Zealand 4,643 .89 .78 .82 .79 .83 .86 .83 .93 .91 .94
Norway 4,660 .87 .74 .76 .73 .78 .80 .77 .91 .89 .91
Panama 3,969 .82 .74 .75 .74 .77 .79 .76 .90 .85 .91
Peru 5,985 .82 .74 .77 .73 .76 .78 .75 .92 .87 .93
Poland 4,917 .88 .75 .77 .74 .78 .80 .78 .91 .89 .93
Portugal 6,298 .86 .72 .77 .74 .76 .80 .78 .92 .85 .91
Qatar 9,078 .87 .81 .81 .79 .86 .86 .85 .95 .93 .94
Romania 4,776 .85 .74 .74 .72 .79 .79 .77 .93 .92 .92
Russia 5,308 .84 .72 .75 .72 .74 .77 .75 .92 .89 .93
Serbia 5,523 .85 .75 .72 .70 .78 .77 .74 .92 .89 .86
Shanghai 5,115 .88 .71 .79 .70 .73 .81 .72 .87 .71 .86
Singapore 5,283 .89 .78 .83 .80 .82 .86 .84 .92 .88 .93
Slovakia 4,555 .87 .75 .75 .75 .77 .78 .79 .93 .90 .93
Slovenia 6,155 .88 .77 .80 .77 .81 .83 .80 .93 .92 .94
Spain 25,887 .82 .71 .74 .72 .74 .78 .75 .88 .84 .89
Sweden 4,567 .90 .75 .78 .78 .79 .82 .81 .93 .91 .93
Switzerland 11,812 .87 .73 .75 .73 .78 .80 .79 .94 .91 .94
Taiwan 5,831 .89 .72 .80 .78 .76 .84 .81 .89 .84 .91
Tamil Nadu 3,210 .79 .67 .68 .70 .66 .68 .67 .81 .78 .86
Thailand 6,225 .82 .68 .73 .67 .75 .77 .71 .88 .78 .85
Trinidad and Tobago 4,778 .86 .77 .78 .79 .80 .82 .82 .94 .93 .94
Tunisia 4,955 .78 .66 .69 .65 .70 .73 .68 .85 .78 .89
Turkey 4,996 .87 .73 .76 .71 .77 .80 .75 .88 .83 .91
United Arab Emirates 10,867 .86 .78 .78 .76 .83 .84 .82 .93 .89 .93
United Kingdom 12,179 .89 .79 .80 .78 .82 .85 .82 .92 .90 .93
Uruguay 5,957 .82 .72 .74 .72 .75 .77 .75 .92 .90 .93
USA 5,233 .90 .79 .82 .80 .84 .86 .84 .93 .92 .95
Overall meana .86 .74 .76 .73 .77 .80 .77 .91 .87 .92
Median .86 .74 .77 .73 .78 .80 .78 .92 .89 .92
Min./max. .48/.92 .39/.81 .43/.83 .39/.81 .61/.86 .63/.87 .59/.85 .81/.96 .71/.95 .81/.96

Note. The latent correlations between the variables were estimated for each country and federal state using the students' five weighted plausible values. The computations were performed within the framework of a structural equation model, employing the maximum likelihood estimation method. For each country and federal state, the root mean square error of approximation (RMSEA) and the standardized root mean square residual (SRMR) were both zero. Correlations were computed among the three reading literacy subscales and the overall scales for mathematics and science. All correlations were significant (p < .001, two-tailed test).
a The mean correlations between the reading subscales and mathematics or science, as well as between the last two, were computed as follows: The correlation coefficients were transformed into z-values using Fisher’s r-to-z transformation and weighted by the final student weights of the countries and federal states. Subsequently, the overall weighted z-means were transformed back to r.
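The averaging procedure in note (a) can be sketched as follows (the correlations and weights below are illustrative, not the actual country values and final student weights):

```python
import math

# Sketch of the procedure in note (a): country-level correlations are
# Fisher r-to-z transformed, averaged with (illustrative) student weights,
# and the weighted mean z is transformed back to r.
def weighted_mean_r(rs, weights):
    zs = [math.atanh(r) for r in rs]                        # Fisher r-to-z
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return math.tanh(z_bar)                                 # z back to r

# Hypothetical example with three countries and made-up weights:
print(round(weighted_mean_r([0.82, 0.88, 0.90], [4596, 4774, 14251]), 3))
```

Averaging in z-space avoids the downward bias that arises when correlation coefficients near 1 are averaged directly.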

Table B5. The correlations between gross domestic product per capita (or its natural logarithm) and Fisher's z-transformed competence correlation coefficients
Relationship between GDPpc or ln(GDPpc) and Fisher's z of … | Type of relationship and corresponding statistics (entries reported as GDPpc/ln(GDPpc)):
Linear: r; Linear: R²; Quadratic: R²; Cubic: R²; Logarithmic: R²
rMathematics*Science .40/.60 .16/.36 .35/.38 .39/.38a .36/.36
rMathematics*Access and Retrieve .22/.38 .05/.15 .19/.17 .20/.17a .15/.15
rMathematics*Integrate and Interpret .30/.51 .09/.26 .25/.30 .29/.30a .26/.27
rMathematics*Reflect and Evaluate .32/.52 .11/.27 .27/.30 .31/.30a .27/.28
rScience*Access and Retrieve .36/.51 .13/.26 .26/.27 .28/.27a .26/.26
rScience*Integrate and Interpret .43/.64 .19/.41 .40/.43 .43/.43a .41/.42
rScience*Reflect and Evaluate .48/.66 .23/.44 .41/.45 .45/.45a .44/.44
rAccess and Retrieve*Integrate and Interpret .17/.33 .03/.11 .16/.12 .16/.12a .11/.11
rAccess and Retrieve*Reflect and Evaluate .27/.36 .07/.13 .13/.13 .13/.13a .13/.13
rIntegrate and Interpret*Reflect and Evaluate .34/.51 .11/.26 .23/.29 .25/.30a .26/.27

Note. N = 70 countries and federal states. For Miranda, Shanghai, Himachal Pradesh, and Tamil Nadu, no gross domestic product per capita (GDPpc) at current prices in U.S. dollars was available for the year 2009; therefore, these regions were excluded from analyses. The correlation coefficients between the total scales of mathematical and science competence and/or the reading competence subscales across countries and federal states were Fisher’s r-to-z transformed. These z-values were then correlated with GDPpc and its natural logarithm, ln(GDPpc). Significant correlation coefficients and R² values (p ≤ .05, two-tailed test) are presented in regular font, whereas non-significant values (p > .05, two-tailed test) appear in italics. To obtain the explained variance (R²), the Fisher z-transformed correlation coefficients were regressed on GDPpc and ln(GDPpc). All statistical analyses were conducted using IBM SPSS Statistics (Version 30). GDPpc in U.S. dollars by countries for 2009 was obtained from the International Monetary Fund, retrieved on October 30, 2025, from https://www.imf.org/external/datamapper/NGDPDPC@WEO/OEMDC/ADVEC/WEOWORLD.
a Due to multicollinearity, as indicated by the tolerance limit being reached, the quadratic term was excluded from the model by IBM SPSS Statistics (Version 30). This resulted in a model comprising only the linear and cubic components. Consequently, the model represents a reduced cubic model.
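The linear and logarithmic fits described in the note can be sketched on synthetic data (all values below are hypothetical; for a simple linear regression, R² equals the squared Pearson correlation):

```python
import math

# Synthetic illustration of Table B5's analysis: Fisher-z coefficients are
# correlated with GDPpc and with ln(GDPpc); the explained variance R² of a
# simple linear regression is the squared Pearson correlation.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical country data: GDP per capita (USD) and a z-transformed r.
gdppc = [4000, 9000, 15000, 30000, 45000, 60000]
z = [1.10, 1.20, 1.35, 1.45, 1.48, 1.52]

r_lin = pearson_r(gdppc, z)                         # linear predictor: GDPpc
r_log = pearson_r([math.log(g) for g in gdppc], z)  # logarithmic: ln(GDPpc)
print(round(r_lin ** 2, 2), round(r_log ** 2, 2))
```

In this synthetic example, as in Table B5, the logarithmic predictor explains more variance than raw GDPpc, reflecting the flattening of the relationship at higher income levels.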

Table B6. Fit indices for the one-factor models (1F), the adjusted one-factor models (1F adj.), and the bi-factor models (BiF)
Country/statistics; AIC (BIC): 1F, 1F adj., BiF; Change in AIC (BIC): 1F–1F adj., 1F–BiF, 1F adj.–BiF; RMSEA [90% CIRMSEA]: 1F, 1F adj., BiF; SRMR: 1F, 1F adj., BiF
Albania 249,924 248,600 248,563 1,324 1,361 37 .18 .04 .02 .04 .004 .002
(250,020) (248,703) (248,679) (1,317) (1,341) (24) [.17, .19] [.02, .05] [.00, .04]
Argentina 259,569 257,387 257,381 2,182 2,188 6 .23 .01 .00 .04 .002 .001
(259,666) (257,490) (257,497) (2,176) (2,169) (–7) [.22, .24] [.00, .03] [.00, .02]
Australia 755,327 747,635 747,508 7,692 7,819 127 .26 .03 .01 .04 .003 .000
(755,441) (747,756) (747,644) (7,685) (7,797) (112) [.25, .26] [.03, .04] [.00, .02]
Austria 354,241 349,772 349,708 4,469 4,533 64 .34 .05 .04 .04 .003 .002
(354,342) (349,880) (349,831) (4,361) (4,511) (49) [.33, .35] [.04, .06] [.02, .05]
Azerbaijan 254,932 254,545 254,474 387 458 71 .10 .05 .04 .04 .011 .005
(255,028) (254,649) (254,590) (379) (438) (59) [.09, .11] [.04, .06] [.03, .06]
Belgium 461,635 457,919 457,878 3,716 3,757 41 .21 .03 .01 .03 .003 .001
(461,740) (458,032) (458,005) (3,708) (3,735) (27) [.20, .22] [.02, .04] [.00, .02]
Brazil 1,080,845 1,072,227 1,072,019 8,618 8,826 208 .21 .04 .02 .04 .005 .001
(1,080,964) (1,072,353) (1,072,162) (8,611) (8,802) (191) [.21, .22] [.03, .05] [.01, .03]
Bulgaria 247,616 246,271 246,248 1,345 1,368 23 .18 .03 .00 .03 .003 .001
(247,712) (246,373) (246,363) (1,339) (1,349) (10) [.17, .19] [.02, .04] [.00, .03]
Canada 1,240,790 1,230,909 1,230,794 9,881 9,996 115 .17 .02 .01 .04 .003 .001
(1,240,910) (1,231,038) (1,230,939) (9,872) (9,971) (99) [.17, .18] [.02, .03] [.00, .02]
Chile 300,486 297,964 297,955 2,522 2,531 9 .19 .01 .00 .05 .003 .001
(300,585) (298,071) (298,074) (2,514) (2,511) (–3) [.18, .20] [.00, .02] [.00, .02]
Colombia 422,950 419,687 419,680 3,263 3,270 7 .22 .03 .04 .05 .004 .003
(423,055) (419,798) (419,806) (3,257) (3,249) (–8) [.21, .23] [.02, .04] [.03, .06]
Costa Rica 241,891 239,942 239,868 1,949 2,023 74 .24 .06 .05 .05 .008 .003
(241,987) (240,045) (239,984) (1,942) (2,003) (61) [.22, .25] [.05, .07] [.03, .07]
Croatia 266,000 262,771 262,756 3,229 3,244 15 .28 .04 .04 .05 .004 .002
(266,097) (262,875) (262,873) (3,222) (3,224) (2) [.27, .29] [.03, .05] [.03, .06]
Czech Rep. 325,715 322,927 322,858 2,788 2,857 69 .22 .04 .02 .04 .005 .001
(325,815) (323,034) (322,978) (2,781) (2,837) (56) [.21, .23] [.03, .05] [.00, .03]
Denmark 315,153 312,414 312,382 2,739 2,771 32 .23 .03 .03 .04 .003 .002
(315,253) (312,521) (312,503) (2,732) (2,750) (18) [.22, .24] [.02, .04] [.01, .04]
Estonia 250,648 248,666 248,618 1,982 2,030 48 .21 .04 .00 .04 .005 .001
(250,745) (248,769) (248,734) (1,976) (2,011) (35) [.19, .22] [.02, .05] [.00, .03]
Finland 311,726 309,065 309,051 2,661 2,675 14 .26 .03 .04 .05 .003 .002
(311,826) (309,172) (309,171) (2,654) (2,655) (1) [.25, .27] [.02, .05] [.03, .06]
France 233,280 231,112 231,063 2,168 2,217 49 .24 .04 .00 .04 .004 .001
(233,375) (231,214) (231,177) (2,161) (2,198) (37) [.23, .25] [.02, .05] [.00, .03]
Georgia 254,240 252,574 252,492 1,666 1,748 82 .16 .04 .02 .05 .007 .003
(254,336) (252,677) (252,608) (1,659) (1,728) (69) [.15, .17] [.03, .06] [.00, .04]
Germany 266,752 263,613 263,545 3,139 3,207 68 .33 .05 .01 .04 .004 .001
(266,850) (263,717) (263,662) (3,133) (3,188) (55) [.32, .34] [.04, .06] [.00, .03]
Greece 271,858 269,951 269,905 1,907 1,953 46 .21 .04 .02 .05 .005 .001
(271,955) (270,055) (270,023) (1,900) (1,932) (32) [.20, .22] [.03, .05] [.00, .04]
Himachal 88,118 87,763 87,707 355 411 56 .19 .08 .05 .05 .017 .006
Pradesh (88,199) (87,850) (87,804) (349) (395) (46) [.17, .21] [.06, .11] [.02, .08]
Hong Kong 261,849 259,106 258,975 2,743 2,874 131 .25 .06 .03 .04 .009 .002
(261,946) (259,209) (259,091) (2,737) (2,855) (118) [.24, .26] [.05, .08] [.01, .05]
Hungary 244,471 242,279 242,222 2,192 2,249 57 .25 .05 .04 .03 .005 .002
(244,567) (242,381) (242,338) (2,186) (2,229) (43) [.24, .26] [.04, .07] [.03, .06]
Iceland 195,553 193,330 193,314 2,223 2,239 16 .27 .03 .03 .05 .003 .001
(195,646) (193,429) (193,426) (2,217) (2,220) (3) [.26, .29] [.02, .05] [.02, .06]
Indonesia 273,724 271,715 271,685 2,009 2,039 30 .22 .04 .02 .05 .007 .002
(273,822) (271,820) (271,803) (2,002) (2,019) (17) [.21, .23] [.02, .05] [.00, .04]
Ireland 209,005 206,096 206,077 2,909 2,928 19 .32 .04 .03 .05 .003 .001
(209,099) (206,197) (206,190) (2,902) (2,909) (7) [.31, .33] [.02, .05] [.01, .05]
Israel 312,953 310,232 310,208 2,721 2,745 24 .24 .04 .05 .04 .003 .002
(313,053) (310,339) (310,328) (2,714) (2,725) (11) [.23, .25] [.03, .06] [.03, .06]
Italy 1,672,534 1,655,969 1,655,715 16,565 16,819 254 .20 .03 .01 .05 .005 .001
(1,671,659) (1,656,103) (1,655,865) (15,556) (15,794) (238) [.20, .21] [.02, .03] [.00, .01]
Japan 331,031 327,582 327,548 3,449 3,483 34 .28 .03 .00 .04 .003 .001
(331,131) (327,690) (327,669) (3,441) (3,462) (21) [.27, .29] [.02, .04] [.00, .02]
Jordan 351,402 349,250 349,113 2,152 2,289 137 .21 .07 .06 .04 .011 .004
(351,504) (349,359) (349,235) (2,145) (2,269) (124) [.20, .22] [.06, .08] [.04, .07]
Kazakhstan 292,546 290,737 290,736 1,809 1,810 1 .17 .00 .00 .04 .001 .001
(292,645) (290,843) (290,855) (1,802) (1,790) (–12) [.16, .18] [.00, .02] [.00, .03]
Korea 266,824 264,711 264,768 2,113 2,056 –57 .24 .11 (.03)a .05b .04 .016 .05b
(266,922) (264,815) (264,781) (2,107) (2,141) (34) [.23, .25] [.10, .12]
([.02, .05])a
[.05, .06]b (.004)a
Kyrgyzstan 274,203 272,571 272,493 1,632 1,710 78 .20 .05 .04 .05 .008 .002
(274,301) (272,675) (272,610) (1,626) (1,691) (65) [.19, .21] [.04, .07] [.02, .06]
Latvia 238,133 236,141 236,136 1,992 1,997 5 .26 .01 .00 .05 .002 .001
(238,229) (236,244) (236,252) (1,985) (1,977) (–8) [.25, .27] [.00, .03] [.00, .03]
Liechtenstein 17,760 17,618 17,621 142 139 –3 .28 .06 .10 .05 .005 .006
(17,817) (17,679) (17,690) (138) (127) (–11) [.24, .32] [.00, .11] [.04, .17]
Lithuania 240,455 238,325 238,286 2,130 2,169 39 .23 .05 .06 .04 .006 .003
(240,552) (238,428) (238,401) (2,124) (2,151) (27) [.22, .24] [.04, .06] [.04, .08]
Luxembourg 249,391 247,393 247,388 1,998 2,003 5 .19 .01 .02 .04 .002 .001
(249,488) (247,496) (247,504) (1,992) (1,984) (–8) [.18, .20] [.00, .03] [.00, .04]
Macao 318,482 316,264 315,996 2,218 2,486 268 .19 .08 .01 .05 .013 .001
(318,583) (316,371) (316,116) (2,212) (2,467) (255) [.18, .20] [.07, .09] [.00, .03]
Malaysia 268,070 266,562 266,551 1,508 1,519 11 .18 .03 .03 .05 .004 .002
(268,167) (266,666) (266,669) (1,501) (1,498) (–3) [.17, .19] [.02, .04] [.01, .05]
Malta 187,024 185,374 185,364 1,650 1,660 10 .21 .04 .04 .04 .002 .002
(187,116) (185,472) (185,474) (1,644) (1,642) (–2) [.20, .22] [.02, .05] [.02, .06]
Mauritius 248,357 246,544 246,521 1,813 1,836 23 .21 .04 .03 .04 .004 .002
(248,454) (246,647) (246,637) (1,807) (1,817) (10) [.20, .22] [.02, .05] [.02, .05]
Mexico 2,045,482 2,032,735 2,032,395 12,747 13,087 340 .16 .04 .04 .04 .006 .003
(2,045,610) (2,032,872) (2,033,549) (12,738) (12,061) (–677) [.16, .17] [.03, .04] [.03, .04]
Miranda 158,676 157,942 157,938 734 738 4 .17 .01 .01 .03 .003 .001
(158,766) (158,038) (158,045) (728) (721) (–7) [.16, .19] [.00, .03] [.00, .04]
Moldova 281,976 280,404 280,299 1,572 1,677 105 .19 .05 .00 .05 .008 .001
(282,074) (280,509) (280,417) (1,565) (1,657) (92) [.18, .20] [.04, .06] [.00, .02]
Montenegro 259,619 256,903 256,763 2,716 2,856 140 .21 .05 .00 .06 .007 .000
(259,716) (257,006) (256,879) (2,710) (2,837) (127) [.20, .22] [.04, .07] [.00, .02]
Netherlands 250,581 248,759 248,755 1,822 1,826 4 .20 .03 .04 .03 .002 .002
(250,678) (248,862) (248,871) (1,816) (1,807) (–9) [.18, .21] [.02, .04] [.02, .05]
New Zealand 250,340 248,296 248,269 2,044 2,071 27 .23 .04 .03 .03 .003 .002
(250,436) (248,399) (248,385) (2,037) (2,051) (14) [.22, .24] [.03, .05] [.02, .05]
Norway 250,058 247,648 247,642 2,410 2,416 6 .25 .01 .00 .05 .002 .001
(250,154) (247,751) (247,758) (2,403) (2,396) (7) [.24, .26] [.00, .03] [.00, .03]
Panama 216,273 215,215 215,144 1,058 1,129 71 .15 .04 .02 .04 .007 .002
(216,367) (215,316) (215,257) (1,051) (1,110) (59) [.13, .16] [.03, .06] [.00, .05]
Peru 324,611 322,778 322,742 1,833 1,869 36 .21 .03 .01 .04 .005 .001
(324,712) (322,885) (322,863) (1,827) (1,849) (22) [.21, .22] [.02, .04] [.00, .03]
Poland 262,956 260,300 260,280 2,656 2,676 20 .21 .03 .03 .05 .005 .002
(263,054) (260,404) (260,397) (2,650) (2,657) (7) [.20, .22] [.02, .04] [.02, .05]
Portugal 337,023 334,246 334,142 2,777 2,881 104 .22 .05 .02 .04 .007 .001
(337,124) (334,354) (334,264) (2,770) (2,860) (90) [.21, .23] [.04, .06] [.00, .03]
Qatar 491,707 488,862 488,742 2,845 2,965 120 .18 .03 .02 .03 .003 .001
(491,813) (488,976) (488,870) (2,837) (2,943) (106) [.17, .18] [.02, .04] [.01, .04]
Romania 252,947 250,844 250,824 2,103 2,123 20 .19 .02 .01 .05 .003 .001
(253,044) (250,947) (250,941) (2,097) (2,220) (6) [.18, .20] [.01, .03] [.00, .03]
Russia 285,453 283,230 283,224 2,223 2,229 6 .23 .02 .02 .05 .002 .001
(285,551) (283,336) (283,342) (2,215) (2,209) (–6) [.22, .24] [.00, .03] [.00, .03]
Serbia 296,390 293,848 293,835 2,542 2,555 13 .23 .05 .06 .05 .004 .003
(296,389) (293,954) (293,954) (2,435) (2,435) (0) [.22, .24] [.04, .06] [.04, .08]
Shanghai 277,546 275,010 275,018 2,536 2,528 –8 .24 .07 (.00)a .04b .04 .012 .023b
(277,644) (275,115) (275,070) (2,529) (2,574) (45) [.23, .25] [.06, .08]
([.00, .01])a
[.04, .05]b (.00)a
Singapore 285,031 282,938 282,931 2,093 2,100 7 .22 .02 .02 .03 .002 .001
(285,130) (283,044) (283,049) (2,086) (2,081) (5) [.21, .23] [.01, .04] [.00, .04]
Slovakia 245,342 242,853 242,745 2,489 2,597 108 .26 .06 .05 .05 .007 .003
(245,439) (242,956) (242,860) (2,483) (2,579) (96) [.25, .27] [.05, .08] [.03, .06]
Slovenia 327,431 324,647 324,573 2,784 2,858 74 .17 .03 .03 .04 .004 .001
(327,532) (324,755) (324,694) (2,777) (2,838) (61) [.16, .18] [.02, .04] [.01, .04]
Spain 1,407,898 1,400,476 1,400,468 7,422 7,430 8 .15 .01 .02 .04 .002 .001
(1,408,020) (1,400,607) (1,400,615) (7,413) (7,405) (–8) [.14, .15] [.01, .02] [.01, .03]
Sweden 246,080 243,387 243,368 2,693 2,712 19 .30 .03 .03 .04 .004 .001
(246,176) (243,490) (243,484) (2,686) (2,692) (6) [.28, .31] [.02, .05] [.01, .05]
Switzerland 633,981 627,769 627,734 6,212 6,247 35 .23 .02 .00 .05 .002 .001
(634,091) (627,887) (627,866) (6,204) (6,225) (21) [.22, .24] [.01, .03] [.00, .02]
Taiwan 313,733 310,878 310,806 2,855 2,927 72 .23 .04 .00 .04 .007 .001
(313,833) (310,984) (310,926) (2,849) (2,907) (58) [.22, .23] [.03, .05] [.00, .03]
Tamil Nadu 172,934 171,980 171,913 954 1,021 67 .24 .07 .06 .05 .011 .004
(173,015) (172,077) (172,022) (938) (993) (55) [.23, .25] [.05, .08] [.04, .08]
Thailand 331,599 329,496 329,416 2,103 2,183 80 .23 .07 .06 .04 .007 .003
(331,700) (329,604) (329,537) (2,096) (2,163) (67) [.22, .24] [.06, .08] [.05, .07]
Trinidad/Tobago 260,473 258,665 258,611 1,808 1,862 54 .22 .05 .03 .04 .004 .001
(260,570) (258,769) (258,728) (1,801) (1,842) (41) [.21, .23] [.04, .06] [.01, .04]
Tunisia 269,183 267,753 267,601 1,430 1,582 152 .17 .06 .01 .05 .014 .002
(269,281) (267,857) (267,719) (1,424) (1,562) (138) [.16, .18] [.05, .07] [.00, .03]
Turkey 268,097 265,762 265,632 2,335 2,465 130 .27 .07 .02 .04 .011 .001
(268,194) (265,867) (265,749) (2,327) (2,445) (118) [.26, .28] [.06, .08] [.00, .04]
United Arab Emirates 584,836 581,142 580,973 3,694 3,863 169 .21 .05 .05 .03 .006 .003
(584,945) (581,259) (581,105) (3,686) (3,840) (154) [.20, .22] [.04, .06] [.04, .06]
United Kingdom 649,663 644,101 644,082 5,562 5,581 19 .15 .02 .02 .04 .003 .002
(649,774) (644,219) (644,215) (5,555) (5,559) (4) [.14, .16] [.01, .03] [.01, .03]
Uruguay 324,111 321,985 321,968 2,126 2,143 17 .19 .02 .00 .05 .003 .001
(324,211) (322,092) (322,089) (2,119) (2,122) (3) [.18, .20] [.01, .03] [.00, .03]
USA 276,782 274,001 273,963 2,781 2,819 38 .20 .02 .00 .04 .003 .000
(276,880) (274,106) (274,081) (2,774) (2,799) (25) [.19, .21] [.01, .04] [.00, .02]
Overall mean 527,361 (527,443) 523,210 (523,320) 523,134 (523,310) 4,152 (4,112) 4,227 (4,133) 79c (86)c .22 .04 .02 .04 .005 .003
Median 270,521 (270,618) 268,852 (268,956) 268,753 (268,871) 2,221 (2,214) 2,269 (2,249) 38 (26) .22 .04 .02 .04 .004 .001
Min. 17,760 (17,817) 17,618 (17,679) 17,621 (17,690) 142 (138) 139 (127) –57 (–677) .10 .00 .00 .03 .001 .000
Max. 2,045,482 (2,045,610) 2,032,735 (2,032,872) 2,032,395 (2,033,549) 16,565 (15,556) 16,819 (15,794) 340 (255) .34 .11 .10 .06 .017 .05

Note. The model fit indices of the one-factor model, the adjusted one-factor model, and the bi-factor model are presented. The model estimation was performed separately for each country and federal state using students’ weighted plausible values (ranging from 1 to 5) and the maximum likelihood estimation method. The overall mean values of the model fit measures were calculated across all countries and federal states and weighted by the final student weights.
a The model fit values (i.e., RMSEA, RMSEA90%CI, and SRMR) reported in the parentheses refer to the further adjusted single-factor model, which includes residual correlations between mathematics and science competence as well as between Access and Retrieve and Reflect and Evaluate.
b The model fit values (i.e., RMSEA, RMSEA90%CI, and SRMR) are for the bi-factor model, in which the residual variance of the reading subscale Integrate and Interpret was negative and therefore fixed to zero.
c The differences in AIC and BIC values between the adjusted one-factor model and the bi-factor model include both positive and negative values. Therefore, the mean was calculated over the absolute values of these differences.
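The per-subscale variance entries reported in Table B7 follow directly from squaring the standardized loadings; a minimal sketch (the function name is our own):

```python
# Sketch of the variance decomposition in Table B7: with standardized,
# orthogonal bi-factor loadings, the variance a subscale shares with PISA-g
# is lambda_g**2 and with RspecF is lambda_s**2 (reported in percent).
def variance_decomposition(lambda_g, lambda_s=0.0):
    by_g = round(lambda_g ** 2 * 100, 2)
    by_s = round(lambda_s ** 2 * 100, 2)
    return by_g, by_s, round(by_g + by_s, 2)

# Albania's AcRe subscale (Table B7): .85 on PISA-g and .44 on RspecF.
print(variance_decomposition(0.85, 0.44))  # (72.25, 19.36, 91.61)
```

Because PISA-g and RspecF are orthogonal in the bi-factor model, the two squared loadings add up to the total explained variance, as in the table's triple entries.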

Table B7. Factor loadings and explained variance for the reading literacy subscales, mathematical literacy, and scientific literacy in the bi-factor model
Country/state; Factor loadings (λ) on the general factor (PISA-g); Factor loadings on the reading-specific factor (RspecF); Percentage of variance explained by PISA-g (Math, Science) and by PISA-g/RspecF/in total (reading subscales)
Math Science AcRe InIn ReEv AcRe InIn ReEv Math Science AcRe InIn ReEv
Albania .87 .94 .85 .84 .83 .44 .49 .45 75.69 88.36 72.25⁠⁠/⁠19.36⁠⁠/⁠91.61 70.56⁠⁠/⁠24.01⁠⁠/⁠94.57 68.89⁠⁠/⁠20.25⁠⁠/⁠89.14
Argentina .92 .96 .84 .86 .85 .44 .47 .45 84.64 92.16 70.56⁠⁠/⁠19.36⁠⁠/⁠89.92 73.96⁠⁠/⁠22.09⁠⁠/⁠96.05 72.25⁠⁠/⁠20.25⁠⁠/⁠92.50
Australia .92 .98 .87 .88 .86 .43 .43 .46 84.64 96.04 75.69⁠⁠/⁠18.49⁠⁠/⁠94.18 77.44⁠⁠/⁠18.49⁠⁠/⁠95.93 73.96⁠⁠/⁠21.16⁠⁠/⁠95.12
Austria .92 .99 .84 .84 .84 .47 .50 .45 84.64 98.01 70.56⁠⁠/⁠22.09⁠⁠/⁠92.65 70.56⁠⁠/⁠25.00⁠⁠/⁠95.56 70.56⁠⁠/⁠20.25⁠⁠/⁠90.81
Azerbaijan .57 .85 .72 .75 .69 .63 .53 .56 32.49 72.25 51.84⁠⁠/⁠39.69⁠⁠/⁠91.53 56.25⁠⁠/⁠28.09⁠⁠/⁠84.34 47.61⁠⁠/⁠31.36⁠⁠/⁠78.97
Belgium .93 .97 .85 .89 .87 .39 .42 .37 86.49 94.09 72.25⁠⁠/⁠15.21⁠⁠/⁠87.46 79.21⁠⁠/⁠17.64⁠⁠/⁠96.85 75.69⁠⁠/⁠13.69⁠⁠/⁠89.38
Brazil .91 .96 .82 .85 .81 .42 .50 .43 82.81 92.16 67.24⁠⁠/⁠17.64⁠⁠/⁠84.88 72.25⁠⁠/⁠25.00⁠⁠/⁠97.25 65.61⁠⁠/⁠18.49⁠⁠/⁠84.10
Bulgaria .89 .95 .87 .88 .86 .39 .44 .42 79.21 90.25 75.69⁠⁠/⁠15.21⁠⁠/⁠90.90 77.44⁠⁠/⁠19.36⁠⁠/⁠96.80 73.96⁠⁠/⁠17.64⁠⁠/⁠91.60
Canada .90 .96 .82 .86 .81 .47 .47 .48 81.00 92.16 67.24⁠⁠/⁠22.09⁠⁠/⁠89.33 73.96⁠⁠/⁠22.09⁠⁠/⁠96.05 65.61⁠⁠/⁠23.04⁠⁠/⁠88.65
Chile .92 .94 .80 .84 .79 .47 .50 .49 84.64 88.36 64.00⁠⁠/⁠22.09⁠⁠/⁠86.09 70.56⁠⁠/⁠25.00⁠⁠/⁠95.56 62.41⁠⁠/⁠24.01⁠⁠/⁠86.42
Colombia .89 .93 .78 .82 .79 .49 .54 .51 79.21 86.49 60.84⁠⁠/⁠24.01⁠⁠/⁠84.85 67.24⁠⁠/⁠29.16⁠⁠/⁠96.40 62.41⁠⁠/⁠26.01⁠⁠/⁠88.42
Costa Rica .91 .92 .79 .82 .78 .44 .56 .46 82.81 84.64 62.41⁠⁠/⁠19.36⁠⁠/⁠81.77 67.24⁠⁠/⁠31.36⁠⁠/⁠98.60 60.84⁠⁠/⁠21.16⁠⁠/⁠82.00
Croatia .91 .98 .81 .83 .80 .49 .54 .50 82.81 96.04 65.61⁠⁠/⁠24.01⁠⁠/⁠89.62 68.89⁠⁠/⁠29.16⁠⁠/⁠98.05 64.00⁠⁠/⁠25.00⁠⁠/⁠89.00
Czech Rep. .91 .95 .82 .86 .83 .47 .49 .42 82.81 90.25 67.24⁠⁠/⁠22.09⁠⁠/⁠89.33 73.96⁠⁠/⁠24.01⁠⁠/⁠97.97 68.89⁠⁠/⁠17.64⁠⁠/⁠86.53
Denmark .87 .97 .80 .85 .81 .49 .48 .50 75.69 94.09 64.00⁠⁠/⁠24.01⁠⁠/⁠88.01 72.25⁠⁠/⁠23.04⁠⁠/⁠95.29 65.61⁠⁠/⁠25.00⁠⁠/⁠90.61
Estonia .90 .95 .80 .84 .82 .45 .52 .45 81.00 90.25 64.00⁠⁠/⁠20.25⁠⁠/⁠84.25 70.56⁠⁠/⁠27.04⁠⁠/⁠97.60 67.24⁠⁠/⁠20.25⁠⁠/⁠87.49
Finland .88 .97 .78 .83 .81 .49 .52 .48 77.44 94.09 60.84⁠⁠/⁠24.01⁠⁠/⁠84.85 68.89⁠⁠/⁠27.04⁠⁠/⁠95.93 65.61⁠⁠/⁠23.04⁠⁠/⁠88.65
France .92 .97 .84 .87 .86 .40 .47 .41 84.64 94.09 70.56⁠⁠/⁠16.00⁠⁠/⁠86.56 75.69⁠⁠/⁠22.09⁠⁠/⁠97.78 73.96⁠⁠/⁠16.81⁠⁠/⁠90.77
Georgia .86 .93 .80 .79 .80 .52 .57 .48 73.96 86.49 64.00⁠⁠/⁠27.04⁠⁠/⁠91.04 62.41⁠⁠/⁠32.49⁠⁠/⁠94.90 64.00⁠⁠/⁠23.04⁠⁠/⁠87.04
Germany .93 .97 .84 .85 .85 .45 .49 .44 86.49 94.09 70.56⁠⁠/⁠20.25⁠⁠/⁠90.81 72.25⁠⁠/⁠24.01⁠⁠/⁠96.26 72.25⁠⁠/⁠19.36⁠⁠/⁠91.61
Greece .87 .94 .78 .81 .79 .49 .55 .47 75.69 88.36 60.84⁠⁠/⁠24.01⁠⁠/⁠84.85 65.61⁠⁠/⁠30.25⁠⁠/⁠95.86 62.41⁠⁠/⁠22.09⁠⁠/⁠84.50
Himachal Pr. .79 .88 .73 .75 .70 .42 .65 .50 62.41 77.44 53.29⁠⁠/⁠17.64⁠⁠/⁠70.93 56.25⁠⁠/⁠42.25⁠⁠/⁠98.50 49.00⁠⁠/⁠25.00⁠⁠/⁠74.00
Hong Kong .92 .97 .75 .84 .81 .44 .54 .39 84.64 94.09 56.25⁠⁠/⁠19.36⁠⁠/⁠75.61 70.56⁠⁠/⁠29.16⁠⁠/⁠99.72 65.61⁠⁠/⁠15.21⁠⁠/⁠80.82
Hungary .93 .97 .87 .87 .85 .47 .44 .42 86.49 94.09 75.69⁠⁠/⁠22.09⁠⁠/⁠97.78 75.69⁠⁠/⁠19.36⁠⁠/⁠95.05 72.25⁠⁠/⁠17.64⁠⁠/⁠89.89
Iceland .92 .97 .82 .84 .82 .50 .49 .47 84.64 94.09 67.24⁠⁠/⁠25.00⁠⁠/⁠92.24 70.56⁠⁠/⁠24.01⁠⁠/⁠94.57 67.24⁠⁠/⁠22.09⁠⁠/⁠89.33
Indonesia .88 .91 .71 .76 .69 .47 .59 .51 77.44 82.81 50.41⁠⁠/⁠22.09⁠⁠/⁠72.50 57.76⁠⁠/⁠34.81⁠⁠/⁠92.57 47.61⁠⁠/⁠26.01⁠⁠/⁠73.62
Ireland .94 .98 .83 .85 .83 .46 .50 .46 88.36 96.04 68.89⁠⁠/⁠21.16⁠⁠/⁠90.05 72.25⁠⁠/⁠25.00⁠⁠/⁠97.25 68.89⁠⁠/⁠21.16⁠⁠/⁠90.05
Israel .94 .94 .85 .86 .86 .45 .47 .44 88.36 88.36 72.25⁠⁠/⁠20.25⁠⁠/⁠92.50 73.96⁠⁠/⁠22.09⁠⁠/⁠96.05 73.96⁠⁠/⁠19.36⁠⁠/⁠93.32
Italy .90 .97 .82 .83 .82 .46 .52 .49 81.00 94.09 67.24⁠⁠/⁠21.16⁠⁠/⁠88.40 68.89⁠⁠/⁠27.04⁠⁠/⁠95.93 67.24⁠⁠/⁠24.01⁠⁠/⁠91.25
Japan .92 .98 .85 .86 .81 .42 .47 .44 84.64 96.04 72.25⁠⁠/⁠17.64⁠⁠/⁠89.89 73.96⁠⁠/⁠22.09⁠⁠/⁠96.05 65.61⁠⁠/⁠19.36⁠⁠/⁠84.97
Jordan .87 .95 .82 .84 .79 .41 .48 .51 75.69 90.25 67.24⁠⁠/⁠16.81⁠⁠/⁠84.05 70.56⁠⁠/⁠23.04⁠⁠/⁠93.60 62.41⁠⁠/⁠26.01⁠⁠/⁠88.42
Kazakhstan .90 .93 .82 .83 .80 .49 .48 .47 81.00 86.49 67.24⁠⁠/⁠24.01⁠⁠/⁠91.25 68.89⁠⁠/⁠23.04⁠⁠/⁠91.93 64.00⁠⁠/⁠22.09⁠⁠/⁠86.09
Korea .91 .96 .80 .82 .81 .33 .57 .36 82.81 92.16 64.00⁠⁠/⁠10.89⁠⁠/⁠74.89 67.24⁠⁠/⁠32.49⁠⁠/⁠99.73 65.61⁠⁠/⁠12.96⁠⁠/⁠78.57
Kyrgyzstan .89 .91 .81 .80 .79 .49 .55 .44 79.21 82.81 65.61⁠⁠/⁠24.01⁠⁠/⁠89.62 64.00⁠⁠/⁠30.25⁠⁠/⁠94.25 62.41⁠⁠/⁠19.36⁠⁠/⁠81.77
Latvia .89 .95 .80 .82 .78 .50 .53 .51 79.21 90.25 64.00⁠⁠/⁠25.00⁠⁠/⁠89.00 67.24⁠⁠/⁠28.09⁠⁠/⁠95.33 60.84⁠⁠/⁠26.01⁠⁠/⁠86.85
Liechtenstein .88 .97 .78 .81 .81 .46 .51 .49 77.44 94.09 60.84⁠⁠/⁠21.16⁠⁠/⁠82.00 65.61⁠⁠/⁠26.01⁠⁠/⁠91.62 65.61⁠⁠/⁠24.01⁠⁠/⁠89.62
Lithuania .91 .96 .83 .85 .83 .43 .49 .48 82.81 92.16 68.89⁠⁠/⁠18.49⁠⁠/⁠87.38 72.25⁠⁠/⁠24.01⁠⁠/⁠96.26 68.89⁠⁠/⁠23.04⁠⁠/⁠91.93
Luxembourg .91 .97 .85 .87 .86 .44 .44 .43 82.81 94.09 72.25⁠⁠/⁠19.36⁠⁠/⁠91.61 75.69⁠⁠/⁠19.36⁠⁠/⁠95.05 73.96⁠⁠/⁠18.49⁠⁠/⁠92.45
Macao .85 .94 .76 .78 .78 .48 .61 .45 72.25 88.36 57.76⁠⁠/⁠23.04⁠⁠/⁠80.80 60.84⁠⁠/⁠37.21⁠⁠/⁠98.05 60.84⁠⁠/⁠20.25⁠⁠/⁠81.09
Malaysia .85 .92 .76 .80 .78 .49 .52 .54 72.25 84.64 57.76⁠⁠/⁠24.01⁠⁠/⁠81.77 64.00⁠⁠/⁠27.04⁠⁠/⁠91.04 60.84⁠⁠/⁠29.16⁠⁠/⁠90.00
Malta .91 .98 .87 .89 .87 .45 .43 .43 82.81 96.04 75.69⁠⁠/⁠20.25⁠⁠/⁠95.94 79.21⁠⁠/⁠18.49⁠⁠/⁠97.70 75.69⁠⁠/⁠18.49⁠⁠/⁠94.18
Mauritius .91 .96 .86 .87 .86 .41 .45 .45 82.81 92.16 73.96⁠⁠/⁠16.81⁠⁠/⁠90.77 75.69⁠⁠/⁠20.25⁠⁠/⁠95.94 73.96⁠⁠/⁠20.25⁠⁠/⁠94.21
Mexico .91 .92 .80 .85 .80 .41 .51 .44 82.81 84.64 64.00⁠⁠/⁠16.81⁠⁠/⁠80.81 72.25⁠⁠/⁠26.01⁠⁠/⁠98.26 64.00⁠⁠/⁠19.36⁠⁠/⁠83.36
Miranda .89 .92 .83 .86 .83 .41 .44 .45 79.21 84.64 68.89⁠⁠/⁠16.81⁠⁠/⁠85.70 73.96⁠⁠/⁠19.36⁠⁠/⁠93.32 68.89⁠⁠/⁠20.25⁠⁠/⁠89.14
Moldova .86 .91 .81 .79 .77 .48 .56 .52 73.96 82.81 65.61⁠⁠/⁠23.04⁠⁠/⁠88.65 62.41⁠⁠/⁠31.36⁠⁠/⁠93.77 59.29⁠⁠/⁠27.04⁠⁠/⁠86.33
Montenegro .90 .96 .81 .82 .78 .49 .57 .49 81.00 92.16 65.61⁠⁠/⁠24.01⁠⁠/⁠89.62 67.24⁠⁠/⁠32.49⁠⁠/⁠99.73 60.84⁠⁠/⁠24.01⁠⁠/⁠84.85
Netherlands .92 .97 .84 .90 .88 .39 .39 .40 84.64 94.09 70.56⁠⁠/⁠15.21⁠⁠/⁠85.77 81.00⁠⁠/⁠15.21⁠⁠/⁠96.21 77.44⁠⁠/⁠16.00⁠⁠/⁠93.44
New Zealand .92 .97 .86 .89 .85 .42 .41 .44 84.64 94.09 73.96⁠⁠/⁠17.64⁠⁠/⁠91.60 79.21⁠⁠/⁠16.81⁠⁠/⁠96.02 72.25⁠⁠/⁠19.36⁠⁠/⁠91.61
Norway .91 .96 .81 .84 .80 .48 .49 .50 82.81 92.16 65.61⁠⁠/⁠23.04⁠⁠/⁠88.65 70.56⁠⁠/⁠24.01⁠⁠/⁠94.57 64.00⁠⁠/⁠25.00⁠⁠/⁠89.00
Panama .89 .93 .83 .85 .83 .39 .49 .42 79.21 86.49 68.89⁠⁠/⁠15.21⁠⁠/⁠84.10 72.25⁠⁠/⁠24.01⁠⁠/⁠96.26 68.89⁠⁠/⁠17.64⁠⁠/⁠86.53
Peru .90 .92 .83 .86 .82 .42 .49 .47 81.00 84.64 68.89⁠⁠/⁠17.64⁠⁠/⁠86.53 73.96⁠⁠/⁠24.01⁠⁠/⁠97.97 67.24⁠⁠/⁠22.09⁠⁠/⁠89.33
Poland .92 .96 .81 .84 .81 .45 .50 .50 84.64 92.16 65.61⁠⁠/⁠20.25⁠⁠/⁠85.86 70.56⁠⁠/⁠25.00⁠⁠/⁠95.56 65.61⁠⁠/⁠25.00⁠⁠/⁠90.61
Portugal .91 .95 .80 .84 .82 .46 .52 .42 82.81 90.25 64.00⁠⁠/⁠21.16⁠⁠/⁠85.16 70.56⁠⁠/⁠27.04⁠⁠/⁠97.60 67.24⁠⁠/⁠17.64⁠⁠/⁠84.88
Qatar .91 .96 .90 .89 .88 .37 .41 .38 82.81 92.16 81.00⁠⁠/⁠13.69⁠⁠/⁠94.69 79.21⁠⁠/⁠16.81⁠⁠/⁠96.02 77.44⁠⁠/⁠14.44⁠⁠/⁠91.88
Romania .89 .96 .83 .83 .81 .49 .50 .51 79.21 92.16 68.89⁠⁠/⁠24.01⁠⁠/⁠92.90 68.89⁠⁠/⁠25.00⁠⁠/⁠93.89 65.61⁠⁠/⁠26.01⁠⁠/⁠91.62
Russia .90 .93 .79 .83 .81 .49 .53 .50 81.00 86.49 62.41⁠⁠/⁠24.01⁠⁠/⁠86.42 68.89⁠⁠/⁠28.09⁠⁠/⁠96.98 65.61⁠⁠/⁠25.00⁠⁠/⁠90.61
Serbia .90 .95 .83 .81 .78 .52 .47 .48 81.00 90.25 68.89⁠⁠/⁠27.04⁠⁠/⁠95.93 65.61⁠⁠/⁠22.09⁠⁠/⁠87.70 60.84⁠⁠/⁠23.04⁠⁠/⁠83.88
Shanghai .93 .95 .77 .86 .75 .36 .51 .39 86.49 90.25 59.29⁠⁠/⁠12.96⁠⁠/⁠72.25 73.96⁠⁠/⁠26.01⁠⁠/⁠99.97 56.25⁠⁠/⁠15.21⁠⁠/⁠71.46
Singapore .93 .97 .85 .89 .87 .39 .41 .38 86.49 94.09 72.25⁠⁠/⁠15.21⁠⁠/⁠87.46 79.21⁠⁠/⁠16.81⁠⁠/⁠96.02 75.69⁠⁠/⁠14.44⁠⁠/⁠90.13
Slovakia .91 .96 .81 .82 .83 .49 .55 .47 82.81 92.16 65.61⁠⁠/⁠24.01⁠⁠/⁠89.62 67.24⁠⁠/⁠30.25⁠⁠/⁠97.49 68.89⁠⁠/⁠22.09⁠⁠/⁠90.98
Slovenia .92 .96 .85 .87 .84 .44 .44 .48 84.64 92.16 72.25⁠⁠/⁠19.36⁠⁠/⁠91.61 75.69⁠⁠/⁠19.36⁠⁠/⁠95.05 70.56⁠⁠/⁠23.04⁠⁠/⁠93.60
Spain .89 .93 .79 .84 .81 .44 .47 .44 79.21 86.49 62.41⁠⁠/⁠19.36⁠⁠/⁠81.77 70.56⁠⁠/⁠22.09⁠⁠/⁠92.65 65.61⁠⁠/⁠19.36⁠⁠/⁠84.97
Sweden .93 .97 .82 .85 .84 .49 .48 .46 86.49 94.09 67.24⁠⁠/⁠24.01⁠⁠/⁠91.25 72.25⁠⁠/⁠23.04⁠⁠/⁠95.29 70.56⁠⁠/⁠21.16⁠⁠/⁠91.72
Switzerland .90 .96 .81 .83 .82 .49 .53 .49 81.00 92.16 65.61⁠⁠/⁠24.01⁠⁠/⁠89.62 68.89⁠⁠/⁠28.09⁠⁠/⁠96.98 67.24⁠⁠/⁠24.01⁠⁠/⁠91.25
Taiwan .92 .96 .79 .87 .84 .45 .47 .40 84.64 92.16 62.41⁠⁠/⁠20.25⁠⁠/⁠82.66 75.69⁠⁠/⁠22.09⁠⁠/⁠97.78 70.56⁠⁠/⁠16.00⁠⁠/⁠86.56
Tamil Nadu .90 .87 .75 .76 .77 .42 .58 .48 81.00 75.69 56.25⁠⁠/⁠17.64⁠⁠/⁠73.89 57.76⁠⁠/⁠33.64⁠⁠/⁠91.40 59.29⁠⁠/⁠23.04⁠⁠/⁠82.33
Thailand .88 .94 .79 .82 .76 .43 .54 .42 77.44 88.36 62.41⁠⁠/⁠18.49⁠⁠/⁠80.90 67.24⁠⁠/⁠29.16⁠⁠/⁠96.40 57.76⁠⁠/⁠17.64⁠⁠/⁠75.40
Trinidad/Tob. .91 .95 .85 .86 .87 .47 .46 .42 82.81 90.25 72.25⁠⁠/⁠22.09⁠⁠/⁠94.34 73.96⁠⁠/⁠21.16⁠⁠/⁠95.12 75.69⁠⁠/⁠17.64⁠⁠/⁠93.33
Tunisia .86 .91 .77 .81 .76 .49 .58 .48 73.96 82.81 59.29⁠⁠/⁠24.01⁠⁠/⁠83.30 65.61⁠⁠/⁠33.64⁠⁠/⁠99.25 57.76⁠⁠/⁠23.04⁠⁠/⁠80.80
Turkey .91 .95 .81 .84 .79 .39 .52 .49 82.81 90.25 65.61⁠⁠/⁠15.21⁠⁠/⁠80.82 70.56⁠⁠/⁠27.04⁠⁠/⁠97.60 62.41⁠⁠/⁠24.01⁠⁠/⁠86.42
UAE .90 .96 .86 .88 .85 .38 .46 .40 81.00 92.16 73.96⁠⁠/⁠14.44⁠⁠/⁠88.40 77.44⁠⁠/⁠21.16⁠⁠/⁠98.60 72.25⁠⁠/⁠16.00⁠⁠/⁠88.25
UK .92 .97 .85 .88 .85 .42 .44 .44 84.64 94.09 72.25⁠⁠/⁠17.64⁠⁠/⁠89.89 77.44⁠⁠/⁠19.36⁠⁠/⁠96.80 72.25⁠⁠/⁠19.36⁠⁠/⁠91.61
Uruguay .89 .92 .81 .84 .81 .48 .51 .52 79.21 84.64 65.61⁠⁠/⁠23.04⁠⁠/⁠88.65 70.56⁠⁠/⁠26.01⁠⁠/⁠96.57 65.61⁠⁠/⁠27.04⁠⁠/⁠92.65
USA .93 .98 .86 .88 .86 .41 .43 .45 86.49 96.04 73.96⁠⁠/⁠16.81⁠⁠/⁠90.77 77.44⁠⁠/⁠18.49⁠⁠/⁠95.93 73.96⁠⁠/⁠20.25⁠⁠/⁠94.21
Mean .91 .96 .81 .84 .81 .44 .50 .46 82.81 92.16 65.61⁠⁠/⁠19.36⁠⁠/⁠84.97 70.56⁠⁠/⁠25.00⁠⁠/⁠95.56 65.61⁠⁠/⁠21.16⁠⁠/⁠86.77
Median .91 .96 .82 .84 .81 .45 .50 .46 82.81 92.16 67.24⁠⁠/⁠20.25⁠⁠/⁠88.53 70.56⁠⁠/⁠24.51⁠⁠/⁠96.05 65.61⁠⁠/⁠21.16⁠⁠/⁠89.07
Min. .57 .85 .71 .75 .69 .33 .39 .36 32.49 72.25 50.41⁠⁠/⁠10.89⁠⁠/⁠70.93 56.25⁠⁠/⁠15.21⁠⁠/⁠84.34 47.61⁠⁠/⁠12.96⁠⁠/⁠71.46
Max. .94 .99 .90 .90 .88 .63 .65 .56 88.36 98.01 81.00⁠⁠/⁠39.69⁠⁠/⁠97.78 81.00⁠⁠/⁠42.25⁠⁠/⁠99.97 77.44⁠⁠/⁠31.36⁠⁠/⁠95.12

Note. The terms Math and Science refer to the total scales of mathematical and scientific literacy, respectively. The reading literacy subscales comprise Access and Retrieve (AcRe), Integrate and Interpret (InIn), and Reflect and Evaluate (ReEv). The means of the standardized factor loadings for mathematics, science, and the reading subscales, as well as the mean explained variances, were computed using the final student weights for each country. Standardized factor loadings were transformed into z-values using Fisher’s r-to-z transformation and weighted by the final student weights of the countries and federal states. Subsequently, the overall z-means of the factor loadings were transformed back to λ.
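The aggregation procedure described in the note (Fisher r-to-z transformation, weighting by final student weights, back-transformation to λ) can be sketched in a few lines of Python. This is an illustrative reimplementation, not the authors' code; the function name and the toy inputs are ours.

```python
import math

def weighted_mean_loading(loadings, weights):
    """Average standardized loadings across countries via Fisher's
    r-to-z transformation, weighting each country's z-value by its
    final student weight, then back-transforming the weighted z-mean
    to the lambda metric (the procedure described in the table note)."""
    zs = [math.atanh(lam) for lam in loadings]          # r-to-z
    z_mean = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return math.tanh(z_mean)                            # z back to lambda
```

Because atanh is convex on (0, 1), the back-transformed mean of unequal loadings exceeds their plain arithmetic mean, which is why the transformation matters for strong loadings.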

Table B8. Correlations of mathematical, scientific, and reading literacy with reading-related student characteristics, as well as their comparisons
Country — Mathematical/Readingᵃ/Science literacy correlated with:
(1) Books at home; (2) Usefulness of strategies for writing a summary; (3) Usefulness of strategies for understanding/remembering; (4) Use of reading strategies: control; (5) Use of reading strategies: elaboration; (6) Use of reading strategies: memorization; (7) Reading enjoyment
Albania .33<.36>.34 .32<.34(>).34 .35<.38(>).38 .24<.32>.28 .13<.15(>).15 .04<.12>.07 .21<.36>.29
Argentina .39(<).38(>).39 .38<.40(>).41 .31(<).31(>).33 .18<.24>.21 .05(<).03(>).04 –.10(<)–.05(>)–.10 .14<.20>.18
Australia .36(<).35(>).37 .40<.48>.44 .34<.41>.38 .33<.38>.35 .13(<).11(>).12 .07<.10>.06 .37<.51>.44
Austria .46<.50(>).50 .43<.49>.44 .35<.44>.38 .14<.17(>).18 .06(<).03(>).10 –.15(<)–.09(>)–.11 .27<.45>.35
Azerbaijan .14<.27>.21 .14(<).14(>).18 .19<.22(>).25 .14<.21>.15 .11<.14>.10 .04<.11>.06 .12<.22>.13
Belgium .40<.41(>).42 .48<.55>.51 .45<.51>.47 .23<.30>.27 .04(<)–.00(>).05 –.22(<)–.14(>)–.19 .25<.41>.33
Brazil .24(<).22(>).24 .34<.37>.36 .32<.34>.33 .19<.26>.22 .00<.04>.02 .04<.11>.07 .11<.22>.17
Bulgaria .43(<).42(>).41 .40<.44(>).43 .37<.39(>).38 .11<.19>.16 .09(<).08(>).11 .02<.05>.01 .24<.31>.28
Canada .33<.35(>).36 .33<.40>.36 .26<.32>.27 .25<.32>.25 .04(<).03(>).05 –.02(<).04(>)–.03 .26<.45>.35
Chile .35(<).34(>).33 .37<.41>.39 .41(<).42>.40 .23<.29>.27 .10(<).06(>).08 –.02<.06>.04 .19<.29>.24
Colombia .35(<).33(>).33 .43(<).43(>).43 .42(<).41(>).41 .09(<).10(>).09 .01(<).02(>).03 –.12<–.14(>)–.13 .07<.12>.08
Costa Rica .31(<).32(>).33 .41(<).41>.38 .28(<).29>.27 .09<.15>.13 .00<–.04>–.00 –.12(<)–.03(>)–.07 .05<.16>.10
Croatia .31(<).32(>).33 .36<.48>.43 .32<.40>.37 .09<.16(>).15 .02(<)–.02(>).04 –.12(<)–.01(>)–.05 .18<.37>.27
Czech Republic .40(<).41>.40 .43<.53>.48 .36<.42>.38 .25<.29>.27 .16(<).13(>).18 –.13(<)–.08(>)–.13 .27<.46>.35
Denmark .33<.37>.36 .33<.45>.38 .35<.43>.37 .12<.20>.16 .07<.10(>).12 –.16(<)–.10(>)–.14 .31<.46>.41
Estonia .27(<).28(>).30 .33<.43>.37 .32<.39>.35 .09<.18>.14 .11(<).10(>).13 –.13(<)–.07(>)–.13 .27<.46>.34
Finland .30<.34(>).33 .36<.49>.40 .30<.42>.37 .21<.29>.24 .13(<).14(>).14 –.05(<).03(>)–.06 .31<.52>.43
France .48(<).49(>).51 .43<.48>.43 .36<.41>.39 .34<.40>.36 .08(<).06(>).09 .02<.10>.05 .31<.46>.39
Georgia .36(<).33>.30 .34(<).34(>).37 .35(<).35(>).37 .19<.24>.20 .18(<).18(>).18 .07<.13>.07 .25<.38>.32
Germany .45<.47(>).48 .46<.52>.49 .43<.47>.43 .18<.24>.20 .06(<).01(>).07 –.12(<)–.06(>)–.11 .33<.46>.38
Greece .35(<).32(>).33 .32<.36(>).36 .20(<).21(>).22 .21<.29>.25 .20(<).13(>).17 .01<.06>.01 .26<.42>.35
Himachal Pradesh .06(<).03(>).10 .13<.20>.15 .27<.33(>).32 .23(<).26(>).26 .17(<).12(>).16 .03<.11>.07 .10<.16>.10
Hong Kong .31(<).30>.28 .34<.37>.35 .30<.34>.32 .31(<).31(>).32 .15(<).10(>).13 .02<.08>.06 .23<.38>.31
Hungary .55(<).55>.53 .40<.52>.47 .31<.40>.38 .10<.17>.12 .05(<).00(>).04 .04(<).03(>)–.04 .30<.45>.35
Iceland .32(<).31(>).33 .35<.45>.39 .31<.36>.31 .23<.27>.23 .16(<).11(>).13 .00(<)–.01(>)–.03 .33<.48>.40
Indonesia .08<.10(>).09 .37(<).33(>).36 .36(<).33(>).34 .15<.17(>).17 .15(<).13(>).15 .06<.11(>).10 .07<.16>.11
Ireland .42(<).42(>).42 .39<.43>.40 .36<.38>.36 .26<.31>.29 .11(<).07(>).11 .01<.07>.03 .34<.50>.45
Israel .29(<).28(>).27 .41<.45>.42 .32<.36(>).35 .14<.21>.18 –.08(<)–.09>–.05 –.12(<)–.06(>)–.09 .15<.29>.21
Italy .35<.39>.37 .37<.45>.41 .33<.39>.35 .17<.28>.20 .08(<).06(>).07 –.14(<)–.10(>)–.15 .22<.41>.30
Japan .26(<).23(>).24 .46<.52(>).52 .34<.38(>).39 .32<.34(>).35 .22(<).18(>).22 .02<.07>.06 .24<.40>.35
Jordan .18(<).16(>).17 .25<.27(>).26 .22(<).23(>).25 .30<.35>.32 .24(<).25>.23 .22<.26>.22 .15<.25>.21
Kazakhstan .29<.33>.29 .37<.41>.36 .36<.39>.35 .00<.04>–.00 –.08(<)–.08(>)–.08 –.13(<)–.08(>)–.11 .00<.04>.00
Korea .39(<).35(>).36 .45<.52>.51 .39<.43>.42 .39<.43>.41 .28<.30(>).30 .20<.30>.26 .31<.42>.39
Kyrgyzstan .36<.38>.33 .36(<).33(>).33 .37(<).36>.34 .04<.10(>).09 –.00<.03(>).03 .08<.12(>).11 .06<.16>.10
Latvia .31(<).31(>).30 .36<.43>.39 .32<.36>.34 .14<.18>.14 .11(<).05(>).08 –.07(<)–.03(>)–.08 .21<.41>.27
Liechtenstein .38(<).40(>).41 .41<.52>.42 .35<.47>.38 .08<.24>.14 .03(<).03(>).08 –.16(<)–.05(>)–.14 .18<.42>.30
Lithuania .37(<).35(>).35 .35<.41>.38 .32<.37>.34 .14<.22>.19 .07(<).04(>).05 –.17(<)–.08(>)–.12 .26<.43>.33
Luxembourg .47(<).48(>).51 .38<.48>.44 .34<.42>.36 .16<.25>.20 .01(<)–.02(>).04 .00<.09>.04 .26<.42>.34
Macao .17(<).15(>).15 .26<.29(>).29 .20<.24>.21 .24(<).21(>).22 .23(<).18(>).21 .10(<).10(>).09 .21<.34>.29
Malaysia .28(<).21(>).21 .35<.38(>).37 .23<.28(>).27 .21<.30>.27 .16<.19(>).19 .19<.31>.28 .16<.30>.23
Malta .30(<).29(>).30 .36<.39(>).39 .21<.24(>).25 .34<.41>.38 .14(<).11(>).14 .02<.08>.03 .29<.43>.40
Mauritius .16(<).15(>).14 .46<.48>.46 .32<.34(>).34 .28<.38>.33 .06<.11>.09 .02<.07>.02 .17<.26>.19
Mexico .27(<).27(>).28 .41<.43>.41 .35(<).34(>).34 .22<.25>.23 .09(<).07(>).08 –.03(<)–.00(>)–.02 .13<.20>.17
Miranda .34(<).33(>).33 .38<.41>.38 .28<.33(>).32 .05<.12>.09 –.06(<)–.03(>)–.03 –.09(<).02(>)–.05 .08<.19>.15
Moldova .29(<).27>.25 .27<.31>.29 .28<.30>.28 .14<.21>.16 .09(<).10(>).09 .04<.09>.06 .05<.15>.09
Montenegro .33(<).31(>).30 .35<.38(>).38 .34<.40>.37 .05<.12>.07 .03(<)–.01(>).02 –.22(<)–.16(>)–.19 .16<.33>.24
Netherlands .41(<).40(>).40 .45<.51>.49 .39<.46>.43 .22<.27>.24 .05(<).03(>).07 –.28(<)–.25(>)–.27 .22<.41>.31
New Zealand .39(<).39(>).39 .41<.49>.44 .35<.39>.35 .29<.34>.27 .03(<).02(>).02 –.03(<).03(>)–.02 .33<.48>.40
Norway .39(<).37(>).40 .35<.45>.37 .32<.38>.33 .21<.28>.23 .20(<).18(>).20 .00<.04>.02 .32<.47>.37
Panama .30(<).27>.25 .40(<).38(>).39 .37(<).37(>).39 .13<.16>.14 .04(<)–.02(>)–.01 –.10(<)–.07(>)–.10 .08<.13>.10
Peru .37(<).37>.35 .39(<).39>.36 .33(<).32(>).32 .06<.09(>).10 –.04(<)–.02>.00 –.19(<)–.18>–.16 .03<.13>.11
Poland .40<.42(>).41 .41<.47>.44 .28<.33>.31 .21<.29>.24 .08(<).08(>).11 –.03<.07>.02 .28<.44>.34
Portugal .39(<).36(>).39 .42<.52>.46 .41<.44>.42 .33<.41>.38 .23(<).22(>).25 –.09(<)–.03(>)–.09 .22<.38>.30
Qatar .15(<).13(>).13 .33(<).31(>).31 .35(<).33(>).35 .26<.28>.26 .02(<).023(>).02 –.05(<).02(>)–.03 .22<.26>.25
Romania .38<.41>.39 .33<.41>.39 .34<.39(>).38 .15<.25>.20 .07(<).08(>).07 –.02<.08>–.01 .12<.23>.18
Russia .30<.32>.29 .36<.40>.38 .35<.39>.36 .14<.18>.16 .06(<).04(>).06 –.11(<)–.07(>)–.08 .25<.38>.30
Serbia .34(<).33(>).34 .41<.47>.43 .37<.40>.38 .12<.14(>).13 .10(<).03(>).08 –.24(<)–.17(>)–.22 .18<.30>.25
Shanghai .33(<).33(>).32 .36(<).37(>).38 .29<.33>.28 .27(<).27(>).28 .19(<).15(>).20 .03<.05>.034 .24<.35>.29
Singapore .31<.33(>).33 .40<.47>.46 .29<.31(>).32 .23<.25>.24 .04(<).01(>).04 –.14(<)–.13(>)–.15 .28<.42>.36
Slovakia .40<.42(>).43 .40<.48>.41 .30<.35>.30 .20<.26>.18 .13(<).10(>).11 –.28(<)–.22(>)–.26 .26<.38>.27
Slovenia .41(<).40(>).40 .39<.47>.42 .32<.42>.38 .18<.26>.21 .08(<).05(>).08 –.24(<)–.17(>)–.22 .27<.42>.33
Spain .43(<).41(>).42 .36<.44>.39 .28<.33>.29 .27<.32>.27 .15(<).11(>).12 –.01<.05>–.00 .28<.43>.35
Sweden .40(<).41(>).40 .41<.47>.43 .37<.44>.39 .22<.26>.20 .16(<).13(>).13 .04<.10>.03 .32<.47>.36
Switzerland .41<.43(>).43 .43<.52>.45 .41<.50>.45 .15<.27>.19 .02(<).01(>).04 –.15(<)–.06(>)–.11 .26<.48>.36
Taiwan .35(<).36(>).36 .37<.40>.39 .32<.34>.32 .40<.42>.41 .30<.32(>).32 .17<.24>.21 .34<.47>.43
Tamil Nadu .05(<).05(>).06 .26(<).15(>).26 .25(<).13(>).21 .18<.19>.14 .14(<).10>.04 –.10(<).00(>)–.09 .19<.29>.22
Thailand .24<.27>.24 .22<.27>.25 .24<.32>.26 .23(<).24(>).23 .19(<).18(>).17 .22<.26>.21 .16<.28>.24
Trinidad and Tobago .19(<).17(>).18 .43(<).40(>).40 .39(<).38(>).38 .25<.31>.29 .01(<).02(>).04 .01<.11>.05 .15<.26>.20
Tunisia .27(<).22(>).21 .22<.25>.22 .22(<).21(>).23 .15<.25>.19 .09<.15>.13 –.10(<)–.02(>)–.07 –.06(<).05>.00
Turkey .38(<).37(>).36 .33<.39(>).38 .27<.32(>).34 .13<.23>.20 .10<.12(>).12 –.26(<)–.16(>)–.17 .10<.25>.19
United Arab Emirates .21(<).18(>).18 .38(<).37(>).38 .36<.38(>).39 .19<.26>.22 –.02(<).01>.00 –.15(<)–.07(>)–.10 .18<.29>.26
United Kingdom .45<.48(>).49 .37<.43>.41 .29<.36>.33 .22<.28>.24 .05(<).04(>).07 –.03(<).01(>)–.01 .31<.47>.39
Uruguay .36(<).36(>).36 .42<.46>.42 .34(<).34(>).34 .21<.28>.25 .04(<).02(>).04 –.12(<)–.02(>)–.06 .16<.27>.23
USA .41(<).42(>).42 .32<.39(>).34 .28<.34>.30 .18<.26>.21 –.00(<).00(>).02 –.11(<)–.05(>)–.10 .24<.42>.35
Overall (abs.) meanb .32/.32/.32 .37/.40/.39 .32/.35/.34 .20/.25/(.22) (.09)/(.08)/(.09) (.09)/(.10)/(.10) (.20)/.33/.26
Median .35/.34/.33 .37/.43/.39 .33/.37/.35 .20/.26/.22 .08/.07/.08 –.04/.02/–.03 .22/.38/.30
Min. .05/.03/.06 .13/.14/.15 .19/.13/.21 .00/.04/–.00 –.08/–.09/–.08 –.28/–.25/–.27 –.06/.04/.00
Max. .55/.55/.53 .48/.55/.52 .45/.51/.47 .40/.43/.41 .30/.32/.32 .22/.31/.28 .37/.52/.45

Note. Weighted correlation coefficients were estimated from the structural equation models specified separately for each country and federal state. The overall reading, mathematics, and science competence scales were correlated with student characteristics within a structural equation modeling framework. To improve model fit, the correlations among the student characteristics were explicitly specified. Because of missing data among the student characteristics, full information maximum likelihood (FIML) estimation was employed, which allowed all students to be included without casewise exclusion. Significant correlation coefficients (p ≤ .05, two-tailed) are presented in regular font; non-significant coefficients (p > .05) appear in italics. The directional hypotheses for the comparison of correlation coefficients are indicated by less-than or greater-than signs: signs enclosed in parentheses mark non-significant comparisons (p > .05, one-tailed), whereas signs without parentheses mark significant comparisons (p ≤ .05, one-tailed). For negative correlation coefficients, absolute values were used in the statistical comparisons.
a For reading competence, the overall scale with plausible values 1–5 per student was applied.
b The mean correlations between the PISA competences (reading, mathematics, or science) and reading-related student characteristics were computed as follows: The correlation coefficients were transformed into z-values using Fisher’s r-to-z transformation and weighted by the final student weights of the countries and federal states. Subsequently, the overall weighted z-means were transformed back to r. In instances where negative correlation coefficients were observed, the absolute values of these coefficients were utilized to calculate the weighted absolute average correlations, as shown in parentheses.
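The directional comparisons in Table B8 contrast two dependent correlations: the same student characteristic correlated with two PISA competences that are themselves highly correlated. The table note does not reproduce the test statistic; a standard choice for this situation is the z-test of Meng, Rosenthal, and Rubin (1992) for correlated correlation coefficients, sketched below. The sample size and input correlations here are illustrative, not values taken from the table.

```python
import math

def compare_dependent_correlations(r1, r2, r_x, n):
    """Meng, Rosenthal & Rubin (1992) z-test for comparing two
    dependent correlations r(y, x1) and r(y, x2) sharing the
    variable y, where r_x = corr(x1, x2) and n is the sample size.
    Returns a z-statistic; positive values favor r1 > r2."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    r_sq_bar = (r1 ** 2 + r2 ** 2) / 2
    f = min((1 - r_x) / (2 * (1 - r_sq_bar)), 1.0)   # f is capped at 1
    h = (1 - f * r_sq_bar) / (1 - r_sq_bar)
    return (z1 - z2) * math.sqrt((n - 3) / (2 * (1 - r_x) * h))

# Illustrative inputs: does a characteristic correlate more with
# reading (r1 = .36) than with mathematics (r2 = .33), given
# r(reading, math) = .85 and n = 5000 students?
z = compare_dependent_correlations(0.36, 0.33, 0.85, 5000)
```

With a one-tailed test, z values above 1.645 would be significant at p ≤ .05; note how the high correlation between the two competences (r_x) sharpens the test, so even small differences such as .36 vs. .33 can reach significance in large samples.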

Appendix C: Figures

Figure C1. Alternative factor models explaining the correlations among reading literacy subscales, and mathematics and science total scores

Note. The terms Math and Science refer to the total scales of mathematical and scientific literacy, respectively. The subscales of reading literacy are depicted in the factor models: Access and Retrieve (AcRe), Integrate and Interpret (InIn), and Reflect and Evaluate (ReEv). In all three factor models, the mathematics and science total scale scores (i.e., plausible values) and the reading subscale scores (i.e., plausible values) are predicted by a general PISA factor (PISA-g). In the bi-factor model, the reading subscale scores are additionally determined by a reading-specific factor (RspecF). The adjusted one-factor model incorporates an additional correlation between the residuals (ε1 and ε2) of the mathematical literacy and science literacy total scales, as indicated by the bold double-headed arrow.
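The structure of the bi-factor model in Figure C1 can be illustrated with a short simulation: because PISA-g and RspecF are orthogonal, the model-implied correlation between two indicators is the product of their PISA-g loadings, plus the product of their RspecF loadings when both indicators are reading subscales. The loadings below are the cross-country mean standardized loadings reported in Appendix B; the simulation itself is our illustration, not the authors' estimation code.

```python
import numpy as np

rng = np.random.default_rng(2009)
n = 200_000  # large n so sample correlations approximate the implied ones

# Mean standardized loadings from Appendix B; RspecF loads only on the
# reading subscales and is specified as orthogonal to PISA-g.
g_load = {"Math": .91, "Science": .96, "AcRe": .81, "InIn": .84, "ReEv": .81}
s_load = {"AcRe": .44, "InIn": .50, "ReEv": .46}

g = rng.standard_normal(n)   # PISA-g factor scores
s = rng.standard_normal(n)   # RspecF factor scores, independent of g
scores = {
    name: lg * g + s_load.get(name, 0.0) * s
          + np.sqrt(1 - lg ** 2 - s_load.get(name, 0.0) ** 2)
          * rng.standard_normal(n)                     # unique residual
    for name, lg in g_load.items()
}

# Implied correlations: lambda_g1 * lambda_g2, plus lambda_s1 * lambda_s2
# if both indicators are reading subscales.
r_math_acre = np.corrcoef(scores["Math"], scores["AcRe"])[0, 1]  # ~ .91*.81
r_acre_inin = np.corrcoef(scores["AcRe"], scores["InIn"])[0, 1]  # ~ .81*.84 + .44*.50
```

The simulation reproduces the pattern discussed in the paper: correlations among reading subscales (here about .90) exceed cross-domain correlations (here about .74) precisely because the reading subscales share RspecF variance in addition to PISA-g variance.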