European Journal of Educational Research
Review Article

Mapping and Exploring Strategies to Enhance Critical Thinking in the Artificial Intelligence Era: A Bibliometric and Systematic Review

Melyani Sari Sitepu , Lantip Diat Prasojo , Hermanto , Achmad Salido , Lukman Nurhakim , Eko Setyorini , Hermina Disnawati , Bayu Wiratsongko


  • Pub. date: January 15, 2026
  • Online Pub. date: November 13, 2025
  • Pages: 305-322

Abstract:

The emergence of artificial intelligence (AI) has transformed higher education, creating both opportunities and challenges in cultivating students’ critical thinking skills. This study integrates quantitative bibliometric analysis and qualitative systematic literature review (SLR) to map global research trends and identify how critical thinking is conceptualized, constructed, and developed in the AI era. Scopus served as the primary data source, limited to publications from 2022 to 2024, retrieved on February 8, 2025. Bibliometric analysis using Biblioshiny R and VOSviewer followed five stages—design, data collection, analysis, visualization, and interpretation—while the SLR employed a deductive thematic approach consistent with PRISMA guidelines. A total of 322 documents were analyzed bibliometrically, and 34 were included in the qualitative synthesis. Results show that Education Sciences and Cogent Education are the most productive journals, whereas Education and Information Technologies has the highest citation impact. Several influential documents and authors have shaped global discussions on AI adoption in higher education and its relationship to critical thinking. Thematic mapping identified five major research clusters: pedagogical integration, ethical and evaluative practices, technical and application-oriented AI models, institutional accountability, and socio-technical systems thinking. Conceptually, critical thinking is understood as a reflective, evaluative, and metacognitive reasoning process grounded in intellectual autonomy and ethical judgment. Across the reviewed literature, strategies for fostering critical thinking converge into three integrated approaches: ethical curriculum integration, pedagogical and assessment redesign, and reflective human–AI collaboration. Collectively, these strategies ensure that AI strengthens rather than replaces human reasoning in higher education.

Keywords: AI era, critical thinking, higher education, pedagogical strategies, personalized learning.


Introduction

The rapid advancement of artificial intelligence (AI) has profoundly transformed various sectors, including higher education, by reshaping teaching and learning paradigms. AI-driven technologies enable personalized and adaptive learning experiences, allowing students to tailor their learning strategies and progress at an individualized pace (Chadha, 2024; Tulcanaza-Prieto et al., 2023; Vishwanathaiah et al., 2023). The integration of AI into educational environments, however, necessitates the redesign of pedagogical frameworks to ensure that technology supports—not supplants—the learning process (Baskara, 2023; Imran et al., 2024). Through adaptive learning systems and intelligent tutoring applications, AI enhances educational accessibility and promotes self-directed learning (Hongli & Leong, 2024; Wei, 2023). Moreover, generative AI tools contribute to the creation of dynamic, interactive learning environments that foster exploration, problem-solving, and collaborative knowledge construction (Moulin, 2024).

Despite the significant advantages brought by artificial intelligence (AI) in higher education, increasing concern has arisen over students’ growing dependence on AI tools, which may impede the development of higher-order cognitive skills such as critical thinking and reasoning (Imran et al., 2024; Walter, 2024). The convenience and immediacy of AI-generated outputs often discourage students from engaging in deeper analytical exploration, thereby limiting opportunities for independent intellectual inquiry (Luo, 2024; Walter, 2024). This dependency presents a crucial pedagogical dilemma for universities striving to integrate technology effectively while preserving students’ capacity for autonomous reasoning and reflective judgment. Addressing this challenge requires a deliberate balance between leveraging AI for learning efficiency and ensuring that it remains a catalyst for—not a substitute for—cognitive development.

In this context, strengthening students’ critical thinking skills becomes increasingly vital to maintain intellectual autonomy in the era of pervasive AI use. Critical thinking is a foundational competency in higher education, encompassing the ability to analyze, interpret, synthesize, evaluate, infer, and self-regulate to make sound judgments and solve complex problems (Facione, 2023; Paul & Elder, 2019). Although AI enhances academic productivity by efficiently processing and generating vast amounts of information, it does not inherently cultivate the reflective, evaluative, and metacognitive processes essential to critical reasoning (Fan et al., 2025; Gerlich, 2025). Since AI systems produce responses based on data-driven patterns rather than authentic human reasoning, they may inadvertently constrain students’ ability to engage with uncertainty, evaluate multiple perspectives, and construct independent analytical judgments (Zhai et al., 2024). These cognitive limitations are further compounded by ethical concerns—such as academic integrity, algorithmic bias, and the authenticity of AI-generated content—which complicate the pedagogical role of AI in higher education (Amirjalili et al., 2024; Barua, 2024). Without purposeful instructional design and guided reflection, AI risks functioning as a cognitive shortcut that replaces genuine intellectual effort rather than fostering critical inquiry and thoughtful engagement.

Several studies have investigated the role of AI in higher education and its implications for the development of critical thinking skills. For instance, Kizilcec et al. (2024) highlighted the influence of generative AI tools on academic practices, particularly the risk of academic dishonesty, which may undermine efforts to cultivate critical thinking. Similarly, Sarwanti et al. (2024) examined students’ perceptions and experiences with ChatGPT, revealing that over-reliance on AI tools can create substantial barriers to the development of independent analytical skills. Dergaa et al. (2023) emphasized both the potential benefits and ethical risks of natural language processing (NLP) technologies such as ChatGPT in academic writing, underscoring the necessity of preserving human critical reasoning in educational contexts. In addition, Murtiningsih et al. (2024) pointed out practical challenges associated with AI use in higher education, noting a decline in students’ reflective and analytical capabilities when dependence on AI becomes excessive.

Although these studies provide valuable insights into the impact, risks, and perceptions related to AI use in higher education, a research gap persists. Most prior research has primarily focused on students’ experiences, the risks of AI dependency, and ethical considerations, leaving limited understanding of how critical thinking itself is conceptualized, structured, and strategically fostered in the AI era. To address this gap, the present study adopts a combined bibliometric and systematic literature review (SLR) approach to comprehensively map the research landscape of critical thinking in higher education amid AI integration. Specifically, this study aims to identify the core constructs of critical thinking emerging from existing literature and to propose strategic approaches for its development in the AI era. By doing so, it contributes to a more nuanced understanding of how AI can be effectively leveraged to support the cultivation of critical thinking within higher education frameworks. Guided by this objective, three primary research questions structure this investigation:

1. What is the current mapping of publications on critical thinking in higher education in the AI era?

2. Which constructs of critical thinking in the AI era have been identified by researchers?

3. What strategies are recommended for fostering critical thinking in higher education amid the growing influence of AI?

Methodology

Research Design

This study employs a quantitative bibliometric analysis approach and a qualitative systematic literature review. The bibliometric analysis maps the current publication landscape, while the systematic literature review explores strategies for developing critical thinking in higher education in the AI era. The Scopus database, recognized as one of the most reliable and widely used sources of academic information, serves as the primary source for this research. However, relying on a single database introduces selection bias, since relevant documents indexed only in other sources are excluded. This limitation is therefore acknowledged in the conclusion, and the findings are interpreted as specific to the Scopus database.

Data Collection

The bibliometric investigation followed five main stages: research design, data collection, data analysis, data visualization, and data interpretation (Salido et al., 2024; Zupic & Čater, 2014). The study design included establishing the theme of critical thinking in the era of AI in higher education as the research area, and Scopus as the primary study database. Scopus was chosen due to its reliability in providing comprehensive bibliographic metadata and as a source known for providing reputable international journals (Nasrum et al., 2025). Data collection was conducted on February 18, 2025, using the search query: “critical thinking” AND (“Artificial Intelligence” OR AI) AND (“higher education” OR university OR college OR institute OR campus). The search query was developed considering related keywords and validated by two additional researchers. The search yielded 322 documents published between 2022 and 2024, limited to articles, conference papers, and reviews. The starting year of 2022 was chosen to correspond with the introduction of AI chatbots in higher education (McGrath et al., 2025; Neumann et al., 2023). Restricting document types aligns with the study's objective of mapping publications related to research activities. All relevant bibliographic metadata were exported in Comma-Separated Values (CSV) format to ensure compatibility with Biblioshiny R and VOSviewer.
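For illustration, the sketch below shows how an exported Scopus record set could be restricted to the study's publication window and document types. It is a minimal sketch, assuming a standard Scopus CSV export; the file name and column labels are assumptions, and the study's actual analysis was carried out in Biblioshiny R and VOSviewer.

```python
# Illustrative sketch only: the study retrieved records through the Scopus
# web interface and analyzed them in Biblioshiny R / VOSviewer. The file name
# and column labels below are assumptions about a typical Scopus CSV export.
import pandas as pd

# Boolean query as reported in the Data Collection section
QUERY = (
    '"critical thinking" AND ("Artificial Intelligence" OR AI) AND '
    '("higher education" OR university OR college OR institute OR campus)'
)

records = pd.read_csv("scopus_export.csv")  # hypothetical export file

# Keep the 2022-2024 window and the three document types used in the study
records = records[
    records["Year"].between(2022, 2024)
    & records["Document Type"].isin(["Article", "Conference paper", "Review"])
]

print(f"{len(records)} documents retained for bibliometric analysis")
```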

Simultaneously, a systematic literature review was conducted following the PRISMA guidelines, which include four stages: identification, screening, eligibility, and inclusion (Moher et al., 2015), as illustrated in Figure 1. Initially, 448 publications on critical thinking in the AI era in higher education were identified in the Scopus database. During the initial screening, 282 documents were excluded because they were published before 2022, were not original research, had not reached final publication, or were not in English. This criterion ensured that the review focused on original, fully published research conducted in the period reflecting the emergence of AI chatbots such as ChatGPT and published in widely accessible languages. Further filtering excluded 125 documents due to closed access or the absence of keywords relevant to the study topic. Finally, 12 documents were removed after full-text review, as they were opinion pieces, reviews, or editorials, despite being categorized as original articles in Scopus. This multi-layered filtering process resulted in 34 documents included as the primary dataset for analysis.

Figure 1. PRISMA Flow Chart
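As a rough sketch of this screening pipeline, the snippet below applies the stated criteria to a Scopus export and records the counts at each stage. The file name and column names are assumptions about a typical Scopus CSV export, and the study's eligibility and full-text decisions were made manually by the authors rather than by code.

```python
# Sketch of the PRISMA-style sequential screening described above. Column
# names follow a typical Scopus CSV export and are assumptions; the study's
# eligibility and full-text checks were carried out manually.
import pandas as pd

df = pd.read_csv("scopus_export.csv")  # hypothetical file with the 448 identified records
flow = {"identified": len(df)}

# Screening: publication window, original research, final publication, English
screened = df[
    (df["Year"] >= 2022)
    & (df["Document Type"] == "Article")
    & (df["Publication Stage"] == "Final")
    & (df["Language of Original Document"] == "English")
]
flow["excluded_at_screening"] = flow["identified"] - len(screened)

# Eligibility: open access and topic-relevant author keywords
eligible = screened[
    screened["Open Access"].notna()
    & screened["Author Keywords"].str.contains("critical thinking", case=False, na=False)
]
flow["excluded_at_eligibility"] = len(screened) - len(eligible)

# Inclusion: full-text review (a manual step) removes remaining non-empirical papers
print(flow)
```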

Data Analysis

Bibliometric analysis was performed using Biblioshiny R and VOSviewer. Prior to analysis, metadata were cleaned using OpenRefine to remove inconsistencies and duplicate terms. Additional cleaning was performed using a thesaurus during mapping in VOSviewer. Data visualization was applied throughout the analysis process, and outputs relevant to the study objectives were selected for presentation. The visualizations were interpreted by the authors and validated by three additional researchers with expertise in the respective fields.
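A minimal sketch of this kind of term harmonization is given below; the variant-to-canonical mappings are illustrative examples only, not the thesaurus actually used with OpenRefine and VOSviewer.

```python
# Illustrative keyword harmonization, analogous to an OpenRefine cleanup or a
# VOSviewer thesaurus file. The mapping entries are examples, not the study's thesaurus.
THESAURUS = {
    "artificial intelligence (ai)": "artificial intelligence",
    "ai": "artificial intelligence",
    "chat-gpt": "chatgpt",
    "generative artificial intelligence": "generative ai",
    "critical thinking skills": "critical thinking",
}

def normalize(keyword: str) -> str:
    """Lower-case a keyword and replace it with its canonical form if one is defined."""
    key = keyword.strip().lower()
    return THESAURUS.get(key, key)

raw = ["Artificial Intelligence (AI)", "ChatGPT", "Critical thinking skills"]
print([normalize(k) for k in raw])
# ['artificial intelligence', 'chatgpt', 'critical thinking']
```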

For the systematic literature review, the 34 selected documents were examined and summarized using a deductive thematic analysis approach (Nowell et al., 2017). This process involved identifying and categorizing key themes to address the research questions effectively. The analysis focused on two main areas: (1) the constructs of critical thinking in the AI era and (2) strategies for developing critical thinking suggested by previous studies. The thematic findings were validated by three co-authors, who also participated in the bibliometric validation, and the results presented reflect a consensus reached through discussion among all validators.
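The snippet below sketches one way such deductive coding could be operationalized: each included document is tagged against the two predefined analytical areas. The indicator terms are hypothetical placeholders; the study's coding was done manually and validated through discussion among the co-authors.

```python
# Sketch of deductive thematic tagging for the 34 included documents.
# The indicator terms are hypothetical placeholders for the coding scheme.
THEMES = {
    "constructs of critical thinking": ["analysis", "evaluation", "metacognition", "reflection"],
    "strategies for development": ["curriculum", "assessment", "ai literacy", "human-ai collaboration"],
}

def tag_document(text: str) -> list[str]:
    """Return every predefined theme whose indicator terms appear in the text."""
    lowered = text.lower()
    return [theme for theme, terms in THEMES.items() if any(term in lowered for term in terms)]

abstract = "The study redesigns assessment to foster reflection and metacognitive regulation."
print(tag_document(abstract))
# ['constructs of critical thinking', 'strategies for development']
```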

Findings

The findings of this study cover two main aspects: (1) an overview of the landscape and mapping of publications on critical thinking in the era of AI in higher education, and (2) a synthesis of research on the integration of AI in the development of critical thinking in higher education. The landscape analysis and mapping of publications include the identification of key journal sources, globally impactful documents, impactful authors, current key research themes, and emerging themes with potential for future exploration. Meanwhile, the synthesis of research findings presents an overview of the conceptualization and constructs of critical thinking in the AI era, together with the strategies for developing critical thinking suggested by previous researchers. These findings provide a broad analytical perspective, offering critical insights related to the research themes.

Landscape and Mapping of Publications on Critical Thinking in Higher Education in the Era of AI

This study identified the ten most relevant journal sources that significantly contribute to the development of AI-integrated critical thinking in higher education. As illustrated in Table 1, these sources were identified based on the number of papers they published between 2022 and 2024.

Table 1. Top Ten Most Relevant Sources

Source | Documents | Citations
Cogent Education | 8 | 146
Education Sciences | 8 | 894
Journal of Applied Learning and Teaching | 8 | 67
ASEE Annual Conference and Exposition, Conference Proceedings | 7 | 10
Communications in Computer and Information Science | 7 | 73
Frontiers in Education | 7 | 101
Lecture Notes in Networks and Systems | 7 | 18
Journal of Information Technology Education: Research | 6 | 86
Education and Information Technologies | 5 | 443
IEEE Global Engineering Education Conference, EDUCON | 5 | 17

Table 1 reveals the ten most influential sources contributing to the field of artificial intelligence (AI) and critical thinking development in higher education. The analysis shows that Education Sciences and Cogent Education are the most productive outlets, each publishing eight papers on the topic, followed by The Journal of Applied Learning and Teaching with an equal number of publications. Other important contributors include the ASEE Annual Conference and Exposition Proceedings, Communications in Computer and Information Science (CCIS), Frontiers in Education, and Lecture Notes in Networks and Systems, each producing seven documents. Meanwhile, The Journal of Information Technology Education: Research, Education and Information Technologies, and IEEE EDUCON complete the top ten list with six, five, and five publications, respectively. Despite similar productivity levels, citation patterns vary significantly—Education Sciences (894 citations) and Education and Information Technologies (443 citations) stand out as high-impact journals, indicating that the influence of publications is not always proportional to the number of documents produced. This divergence between productivity and impact aligns with previous bibliometric findings showing that open-access and well-indexed journals tend to accumulate more citations due to their broader visibility and interdisciplinary readership (Huang et al., 2024).
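To make the productivity-versus-impact contrast concrete, the short calculation below derives citations per document for a few of the Table 1 sources; the values are transcribed from the table, not recomputed from the underlying data.

```python
# Citations per document for selected Table 1 sources (values transcribed
# from the table above), illustrating that impact is not proportional to output.
sources = {
    "Cogent Education": (8, 146),
    "Education Sciences": (8, 894),
    "Journal of Applied Learning and Teaching": (8, 67),
    "Education and Information Technologies": (5, 443),
}

for name, (docs, cites) in sources.items():
    print(f"{name}: {cites / docs:.1f} citations per document")
# Education Sciences (~111.8) and Education and Information Technologies (~88.6)
# far outpace Cogent Education (~18.3) despite equal or lower document counts.
```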

Furthermore, the dominance of Education Sciences and Education and Information Technologies underscores their pivotal role in integrating AI applications with critical thinking frameworks in higher education. These journals not only publish frequently but also attract substantial scholarly attention, reflecting their capacity to bridge discussions among technology, learning analytics, and higher-order thinking. In contrast, technically oriented venues or conference proceedings such as ASEE Proceedings and CCIS exhibit lower citation rates, likely due to their focus on emerging, short-cycle studies. This pattern reinforces the notion that peer-reviewed journals with established academic reputations function as long-term repositories of impactful research, whereas conference proceedings primarily serve as incubators for early-stage ideas (Franco, 2017; Kochetkov et al., 2022). For future researchers exploring AI and critical thinking, these sources may serve as strategic publication targets. Subsequently, the results of the globally cited document analysis are presented in Table 2.

Table 2. Top Ten Most Global Cited Documents

Paper | DOI | Total Citations
Dergaa et al. (2023), Biol. Sport | 10.5114/BIOLSPORT.2023.125623 | 474
Michel-Villarreal et al. (2023), Educ. Sci. | 10.3390/educsci13090856 | 472
Walter (2024), Int. J. Educ. Technol. High. Educ. | 10.1186/s41239-024-00448-3 | 331
Thornhill-Miller et al. (2023), J. Intell. | 10.3390/jintelligence11030054 | 312
Chan and Lee (2023), Smart Learn. Environ. | 10.1186/s40561-023-00269-3 | 290
Mohamed (2024), Educ. Inf. Technol. | 10.1007/s10639-023-11917-z | 232
Nikolic et al. (2023), Eur. J. Eng. Educ. | 10.1080/03043797.2023.2213169 | 215
Malik et al. (2023), Int. J. Educ. Res. Open | 10.1016/j.ijedro.2023.100296 | 203
Lo (2023), J. Acad. Librariansh. | 10.1016/j.acalib.2023.102720 | 202
van den Berg and du Plessis (2023), Educ. Sci. | 10.3390/educsci13100998 | 195

Table 2 presents the ten most globally cited documents in the field of artificial intelligence (AI) and critical thinking development in higher education, revealing clear disparities in scholarly influence among publications. The seminal work by Dergaa et al. (2023) received 474 citations, making it the most influential document within the dataset, followed closely by Michel-Villarreal et al. (2023) with 472 citations. Both publications substantially exceed the citation counts of other works, thus serving as pivotal references guiding subsequent research on AI-based educational innovation. Walter (2024) recorded 331 citations, Thornhill-Miller et al. (2023) obtained 312 citations, and Chan and Lee (2023) garnered 290 citations, indicating strong but secondary influence. Meanwhile, mid-range influential studies include Mohamed (2024) with 232 citations, Nikolic et al. (2023) with 215 citations, and Malik et al. (2023) with 203 citations, followed by Lo (2023) with 202 citations and van den Berg and du Plessis (2023) with 195 citations. This gradient of citation frequency reflects a citation concentration phenomenon, in which a small number of highly visible papers attract the majority of scholarly attention, shaping conceptual frameworks and methodological standards within the domain. Such citation asymmetry is characteristic of emerging interdisciplinary fields, where a few pioneering studies establish foundational directions for future inquiry and practice (Ke, 2020).
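The concentration point can be quantified directly from the Table 2 counts, as in the short sketch below: the two leading papers hold a disproportionate share of the citations accumulated by the top ten documents.

```python
# Citation-concentration check using the ten citation counts from Table 2.
top_ten = [474, 472, 331, 312, 290, 232, 215, 203, 202, 195]

total = sum(top_ten)
top_two_share = sum(top_ten[:2]) / total
print(f"Top-two share of top-ten citations: {top_two_share:.1%}")
# Roughly a third of the citations in Table 2 accrue to Dergaa et al. (2023)
# and Michel-Villarreal et al. (2023) alone.
```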

Several of the most-cited documents exemplify this phenomenon. Dergaa et al. (2023) explore the prospects and risks of generative AI tools such as ChatGPT in academic writing, emphasizing how these tools can enhance efficiency while simultaneously challenging the authenticity of critical reasoning. Meanwhile, Michel-Villarreal et al. (2023) adopt an ethnographic approach to examine ChatGPT’s role in higher education, highlighting the need for institutional policies, ethical frameworks, and the cultivation of reflective student engagement. The emergence of van den Berg and du Plessis’ work further signals that structural and curricular transformation, rather than mere tool adoption, lies at the core of this research domain. Collectively, these themes indicate a dominant research focus on the intersection between generative AI, academic integrity, and critical thinking development in higher education. For future researchers, engaging with these highly cited works provides both theoretical foundations and strategic guidance for defining research scope and selecting high-visibility publication venues aligned with these influential studies. Subsequently, the analysis of impactful authors is presented in Figure 2.

Figure 2. Top Ten Impactful Authors

Figure 2 identifies several influential authors in this publication network, with Chan leading the list with 360 global citations, followed by Marzuki with 343 citations. Meanwhile, Belkina, Daniel, Grundy, Haque, Hassan, Lyden, Neal, and Nikolic each received 265 citations, forming a cluster of mid-tier yet impactful contributors. Within this scope, Chan’s research focuses on pedagogical adaptation and generational differences in the adoption and use of AI in higher education (Chan & Lee, 2023). Her studies examine how factors such as technological readiness, digital confidence, and ethical perception influence the ability of educators and students to employ AI critically and responsibly. Her findings affirm that the implementation of generative AI in teaching should be accompanied by pedagogical innovation, ethical reflection, and professional development for educators to ensure that digital transformation does not erode authentic thinking and humanistic values in the learning process (Chan & Lee, 2023; Chan & Tsi, 2024).

Meanwhile, Marzuki contributes through research exploring students’ experiences and perceptions of AI use in academic activities, particularly in language-based learning and academic writing contexts (Malik et al., 2023). His studies provide empirical foundations for understanding how students’ interactions with AI affect cognitive processes, creativity, and critical reasoning, emphasizing the importance of balancing technological assistance with authenticity in learning (Darwin et al., 2024; Werdiningsih et al., 2024). In parallel, Nikolic and collaborators examine the implications of AI use for academic integrity and assessment systems in engineering and STEM education, identifying risks of misuse, policy gaps, and the need for institutional frameworks that ensure fair and ethical evaluation (Nikolic et al., 2023, 2024). Collectively, these three authors represent the three major axes of AI research in higher education: human-centered pedagogy (Chan & Lee, 2023; Chan & Tsi, 2024), authentic and reflective learning (Darwin et al., 2024; Malik et al., 2023; Werdiningsih et al., 2024), and integrity-driven assessment (Nikolic et al., 2023, 2024).

Figure 3. Keyword Network Visualization

Furthermore, the VOSviewer analysis, based on a minimum keyword occurrence threshold of five, identifies five interconnected clusters that collectively construct the intellectual structure of current research on artificial intelligence (AI) and critical thinking development in higher education, as presented in Figure 3. Cluster 1 (red), consisting of 24 items such as artificial intelligence, student, critical thinking, curriculum, teaching, pedagogy, teachers, active learning, learning systems, learning process, and problem-solving, represents the pedagogical and instructional dimensions of AI integration. Studies in this cluster emphasize how AI is integrated into curricula to promote higher-order thinking, problem-solving, and reflective learning practices. This finding aligns with recent work observing that educational AI research is moving beyond tool adoption toward pedagogical redesign centered on metacognition, inquiry-based learning, and their integration with technology (Baskara, 2023; Imran et al., 2024). However, while these studies highlight the positive role of AI in fostering student engagement and personalized learning, other scholars—such as Khan et al. (2024) and Ogunleye et al. (2024)—warn that without teacher preparedness and ethical guidance, such innovations may risk creating cognitive dependency or superficial understanding. Overall, this cluster highlights a dynamic tension between technological facilitation and the preservation of authentic critical thinking—a central theme also reflected in the preceding citation and authorship analyses.

Cluster 2 (green) and Cluster 4 (yellow) capture the ethical, evaluative, and institutional dimensions of AI in education. The green cluster, comprising 17 items including education, assessment, learning, e-learning, student learning, human, creativity, plagiarism, technology, adult, and thinking, reflects research exploring how AI reshapes academic evaluation and creative learning processes. Meanwhile, the yellow cluster, consisting of 13 items such as higher education, ChatGPT, generative ai, academic integrity, academic writing, university students, ai literacy, risk management, ethics, and challenges, captures the emerging discourse on generative AI and its implications for academic honesty. These themes correspond to studies emphasizing the necessity of ethical responsibility in the use of AI in higher education, including the development of institutional policies and adaptive evaluation guidelines suited for digital learning contexts (Atenas et al., 2023; Khan et al., 2024; Wang et al., 2024). The emphasis on integrity and authenticity aligns with the findings of Dergaa et al. (2023) and Michel-Villarreal et al. (2023), who argue that generative AI tools such as ChatGPT pose serious challenges to authorship authenticity and critical thinking assessment. Conversely, studies such as Walter (2024) and van den Berg and du Plessis (2023) highlight the positive opportunities offered by AI integration in promoting creativity and student engagement when supported by strong digital literacy and ethical policies. Together, these clusters indicate that global research directions are shifting from mere technological adoption toward approaches that balance digital innovation, academic honesty, and institutional accountability as foundational principles of educational transformation in the AI era. As Rane et al. (2024) point out, there is a growing need to design new academic policies to mitigate AI misuse while promoting ethical student practice.

Cluster 3 (blue) and Cluster 5 (purple) represent the technical and applicative dimensions of AI research in education. The blue cluster, which includes 15 items such as contrastive learning, adversarial machine learning, language model, large language model, student perspectives, federated learning, personalized learning, systems thinking, and data privacy, indicates increasing attention to the development of large language models (LLMs), federated learning, and adaptive systems that safeguard data privacy while supporting personalized learning. Meanwhile, the purple cluster, comprising 11 items such as ai tools, ai in education, ai chatbots, chatbots, deep learning, natural languages, language processing, and educational technology, focuses on the practical application of AI technologies in learning contexts—for example, using chatbots as virtual tutors, teaching assistants, and language learning aids. This pattern aligns with studies suggesting that the focus of AI-based education research has shifted from algorithmic development toward more contextual, application-driven approaches aimed at improving learning quality (Esakkiammal & Kasturi, 2024; Guettala et al., 2024). Compared with previous findings, however, the present clustering results demonstrate a closer integration between technical and pedagogical discourse, where issues such as data privacy and academic integrity increasingly appear together in the same research discussions. This finding indicates that AI research in higher education is progressing toward a phase of conceptual consolidation, in which technological development, ethical policy, and pedagogical innovation are viewed as interdependent elements in shaping responsible learning practices in the AI era (Chaparro-Banegas et al., 2024; Chia et al., 2024; Malik et al., 2023).
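For readers unfamiliar with how such a map is built, the sketch below illustrates the co-occurrence counting and minimum-occurrence filtering that underlie a VOSviewer keyword map. The keyword lists are hypothetical toy data; the study's actual map was produced in VOSviewer on the full 322-document corpus.

```python
# Minimal sketch of keyword co-occurrence counting with a minimum-occurrence
# threshold of five, as used in the study. Input keyword lists are hypothetical.
from collections import Counter
from itertools import combinations

documents = [
    ["artificial intelligence", "critical thinking", "higher education"],
    ["chatgpt", "academic integrity", "higher education"],
    ["artificial intelligence", "personalized learning", "large language model"],
    # ... one (cleaned) keyword list per document
]

occurrences = Counter(k for doc in documents for k in set(doc))
co_occurrences = Counter(
    pair for doc in documents for pair in combinations(sorted(set(doc)), 2)
)

MIN_OCCURRENCES = 5  # threshold reported in the study
kept_terms = {k for k, n in occurrences.items() if n >= MIN_OCCURRENCES}
kept_links = {p: n for p, n in co_occurrences.items() if set(p) <= kept_terms}

# With this toy sample nothing reaches the threshold; on the full corpus the
# surviving terms and links form the five clusters shown in Figure 3.
print(kept_terms, kept_links)
```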

Figure 4. Keyword Overlay Visualization

Finally, the overlay visualization provides temporal evidence of how research on artificial intelligence (AI) and critical thinking in higher education has evolved from conceptual exploration toward systemic and multidisciplinary inquiry, as shown in Figure 4. The most recent keywords highlighted in yellow—including university students, information management, economic and social effects, generative adversarial network, contrastive learning, adversarial machine learning, federated learning, systems thinking, and generative AI—illustrate the rapid expansion of focus areas in recent years. These terms indicate that academic attention is shifting from pedagogical integration to technological sophistication, data governance, and socio-institutional accountability (Chan & Lee, 2023; Tarisayi, 2024). The growing prominence of federated learning and adversarial machine learning suggests increased concern for data privacy, model resilience, and fairness in AI-enabled education, reflecting global trends in responsible AI research (Moshawrab et al., 2023). Similarly, the emphasis on systems thinking and economic and social effects demonstrates emerging recognition that AI implementation in higher education cannot be separated from broader societal systems (Walter, 2024). This observation aligns with the discussions of Almaraz-López et al. (2023) and Chadha (2024), who noted that educational AI should be understood not merely as a pedagogical innovation but as a socio-technical ecosystem that reshapes institutional governance, labor structures, and learning equity.

In addition, the emergence of generative AI, information management, and university students in the latest research timeline indicates a shift toward human-centered and ethical AI research, emphasizing digital literacy, student agency, and institutional readiness. These developments echo the findings of Dergaa et al. (2023) and Walter (2024), who identified a growing tension between technological acceleration and the preservation of authenticity and academic integrity. The increasing focus on contrastive learning and generative adversarial networks (GANs) represents a deepening of technical sophistication in educational AI, where algorithms originally designed for creative generation and self-supervised learning are being adapted to enhance personalized and adaptive education. Such innovations reveal promising new research frontiers, including (1) ethical architectures for decentralized AI learning systems that preserve privacy and fairness; (2) AI-based cognitive analytics to monitor and enhance students’ critical thinking development; (3) integrative frameworks for systems thinking that connect technological, economic, and human dimensions of AI adoption; and (4) policies to manage the socio-economic impacts of generative AI on higher education labor and equity. Collectively, these trajectories highlight that the boundaries of AI-in-education research are maturing toward interdisciplinary synthesis—combining technical, ethical, and pedagogical expertise to ensure that future AI ecosystems in higher education remain inclusive, transparent, and human-centered (Bond et al., 2024; Jacques et al., 2024).

Concepts and Constructs of AI-Integrated Critical Thinking in Higher Education

Table 3 summarizes the results of the synthesis of 34 documents that discuss the conceptualization of critical thinking by the authors reviewed. This section also includes the construct of critical thinking integrated with AI in higher education.

Table 3. Constructs of Critical Thinking in the AI Era

Author (Year) Concept of Critical Thinking Construction of Critical Thinking in the AI Era
Tsopra et al. (2023) Digital health reasoning; analytical–clinical integration. Project-based Artificial Intelligence–Clinical Decision Support System (AI-CDSS) design integrating clinical, technical, and ethical reasoning.
Malik et al. (2023) Active, analytical, and constructive cognition. Balanced human–AI interaction fostering evaluation and reflection.
Al Ka’bi (2023) Not mentioned AI as a support for analytical and creative thinking.
Michalon and Camacho-Zuñiga (2023) Responsible and context-sensitive reasoning. Verification of AI errors fostering rational skepticism.
Chia et al. (2024) Evaluative reasoning within AI literacy. Emphasis on ethical evaluation and information credibility.
Mirón-Mérida and García-García (2024) One of the 4Cs: critical and creative thinking. Reflective AI use through debates and active learning.
Atenas et al. (2023) Interdisciplinary and reflective problem-solving. Critical data literacy and ethical engagement with AI.
Bozkurt et al. (2024) Evaluative and reflective capacity. Self-regulation and ethical evaluation of AI outputs.
Crudele and Raffaghelli (2023) Reasoning, reflection, and argumentation. Argument mapping to enhance analytical reasoning.
Michel-Villarreal et al. (2023) Reflective and evidence-based reasoning. Human reflection balanced with AI assistance.
Räisä and Stocchetti (2024) Epistemic awareness of knowledge formation. Reflection on AI opacity and cognitive autonomy.
Quintero-Gámez et al. (2024) Not mentioned Not specified; AI as predictive analytical tool.
Klimova and de Campos (2024) Cognitive and ethical reasoning. Information evaluation and prompt literacy.
Costa et al. (2024) Higher-order cognitive and reflective skill. Ethical use and verification of AI-generated information.
Valova et al. (2024) Analytical, ethical, and epistemic competence. Responsible AI use with ethical guidance.
Asamoah et al. (2024) Evaluative, analytical and independent reasoning. Domain Knowledge, Ethical Acumen, and Query Capabilities (DEQ) model: domain knowledge, ethics, and questioning.
Werdiningsih et al. (2024) Evaluative and originality-preserving reasoning. Ethical AI use under educator supervision.
Chaparro-Banegas et al. (2024) Reflective and skill-based process. Ethical and transparent AI integration.
Ruiz-Rojas et al. (2024) Reflective analysis, evaluation, and autonomy. Generative AI use enhancing creativity and reflection.
Darwin et al. (2024) Skeptical, analytical, and rigorous reasoning. Human oversight of AI to sustain critical inquiry.
Banihashem et al. (2024) Contextual and reflective cognition. Human–AI collaboration through critical prompting.
Wang et al. (2024) Original, creative, and reflective authorship. Evaluation and revision of AI outputs.
Zhou et al. (2024) Analytical, inferential, and metacognitive reasoning. Active AI engagement enhancing self-regulation.
Borkovska et al. (2024) Analytical and communicative soft skill. AI-supported reflective and collaborative learning.
Ogunleye et al. (2024) Evaluative and problem-solving competence. Authentic assessments beyond AI capacity.
Sarwanti et al. (2024) Deep and rigorous cognitive process. Guided and responsible AI use.
Jayasinghe (2024) Higher-order reflective problem-solving. AI-facilitated Socratic dialogue and feedback.
Khan et al. (2024) Analytical judgment and factual evaluation. Human–AI collaboration with ethical literacy.
Avsheniuk et al. (2024) Multidimensional reasoning (analysis, evaluation, creativity). Ethical AI–human synergy promoting reflection.
Broadhead (2024) Analytical and interpretive intellectual freedom. Resistance to AI bias and preservation of autonomy.
Furze et al. (2024) Contextual and evaluative human reasoning. Human evaluation via AI Assessment Scale (AIAS).
Liu and Tu (2024) Purposeful and reflective judgment. AI-based SSI learning model encouraging debate.
Almulla and Ali (2024) Active human cognitive engagement. Pedagogical balance between human and AI reasoning.
Zang et al. (2022) Cognitive understanding (implicit). AI–5G integration enhancing reflective learning.

Table 3 summarizes how the reviewed studies conceptualize and construct critical thinking in the AI era. Critical thinking in the AI era is consistently conceptualized as a multidimensional and reflective cognitive process encompassing interpretation, analysis, evaluation, inference, and self-regulation, in line with the frameworks of Facione (2023) and Paul and Elder (2019). These studies collectively affirm that critical thinking remains a human-centered intellectual and ethical capacity—one that involves questioning assumptions, assessing evidence, and applying metacognitive regulation to reach reasoned judgments. Scholars such as Malik et al. (2023), Ruiz-Rojas et al. (2024), Zhou et al. (2024), Darwin et al. (2024), and Avsheniuk et al. (2024) emphasize that AI can catalyze deeper analysis and reflection when integrated thoughtfully into learning processes. In this sense, AI becomes a cognitive scaffold that encourages learners to compare, verify, and critique machine-generated outputs, thereby reinforcing analytical, evaluative, and inferential dimensions of critical thinking (Costa et al., 2024; Jayasinghe, 2024; Liu & Tu, 2024; Michalon & Camacho-Zuñiga, 2023; Tsopra et al., 2023).

At a broader level, several authors describe evaluative and ethical reasoning as core constructs of critical thinking in the AI era. Studies such as those by Bozkurt et al. (2024), Chia et al. (2024), Atenas et al. (2023), Asamoah et al. (2024), Klimova and de Campos (2024), Valova et al. (2024), and Werdiningsih et al. (2024) conceptualize critical thinking as the ability to use AI responsibly, incorporating ethical awareness, data literacy, and epistemic sensitivity. These scholars argue that critical thinking now extends beyond cognitive skills to include ethical discernment, transparency, and social accountability in interacting with intelligent systems. Such perspectives echo Paul and Elder's (2019) notion of intellectual virtues—fair-mindedness, humility, and integrity—highlighting that critical thinkers must not only reason effectively but also act ethically in complex technological contexts. Similarly, Broadhead (2024), Räisä and Stocchetti (2024), and Atenas et al. (2023) stress the epistemic dimension of critical thinking, warning that overreliance on AI may erode autonomy and reflexivity, leading to passive acceptance of algorithmic outputs instead of deliberate, evidence-based reasoning.

Meanwhile, a number of studies focus on metacognitive and reflective constructs of critical thinking that emphasize learning through interaction with AI. Authors such as Wang et al. (2024), Zhou et al. (2024), Crudele and Raffaghelli (2023), Borkovska et al. (2024), Banihashem et al. (2024), and Furze et al. (2024) highlight that metacognition—self-regulation, reflection, and awareness of one’s cognitive processes—becomes central to developing critical thinking in the AI-mediated learning environment. By analyzing AI-generated errors or inconsistencies, learners cultivate reflective skepticism and adaptive reasoning. Other studies, including Mirón-Mérida and García-García (2024), Ogunleye et al. (2024), and Sarwanti et al. (2024), identify authentic assessment, problem-solving, and dialogic learning as pedagogical conditions that sustain critical thinking and prevent cognitive dependency on AI. Collectively, these findings reveal that critical thinking in the AI era is constructed through reflective interaction, ethical awareness, and evaluative autonomy, requiring intentional pedagogical strategies to ensure that AI functions as an enhancer—not a substitute—of human reasoning.

Overall, the literature demonstrates a converging view that critical thinking in the AI era integrates three interrelated constructs: (1) analytical-evaluative reasoning, (2) ethical-epistemic awareness, and (3) metacognitive reflection and regulation. These constructs align with the core dimensions identified by Facione (2023) and Paul and Elder (2019), but are expanded to encompass ethical literacy and technological reflexivity unique to the digital age. While AI provides new opportunities for cognitive stimulation, its pedagogical value relies on human guidance, reflective dialogue, and critical engagement.

Strategies for Fostering Critical Thinking in Higher Education in the AI Era

Table 4 summarizes the results of the synthesis of 34 documents that discuss strategies for developing critical thinking in the AI era in higher education.

Table 4. Recommended Strategies for Fostering Critical Thinking in Higher Education in the AI Era

Author (Year) Recommended Strategies
Tsopra et al. (2023) Project-based learning as AI-CDSS designers; multidisciplinary integration (clinical, ethical, technical); interactive, innovation-oriented curriculum; cultivation of digital leadership and creativity
Malik et al. (2023) Balanced human–AI collaboration; responsible AI literacy training; active faculty guidance and mentoring; emphasis on academic ethics and integrity; promotion of creativity and self-reflection
Al Ka’bi (2023) Not mentioned
Michalon and Camacho-Zuñiga (2023) Use AI inaccuracies for reflective learning; design activities verifying AI outputs; encourage iterative human–AI dialogue to foster analytical reasoning
Chia et al. (2024) Responsible AI usage training; integration of AI literacy in curriculum; prompt-engineering instruction; treat AI as a supporting—not primary—source; strengthen verification and evaluation habits
Mirón-Mérida and García-García (2024) Conscious and critical AI integration; use of engaging, personalized, paper-based, and oral tasks; building trust and academic integrity in classrooms
Atenas et al. (2023) Embed data ethics and justice in curricula; ethical and socio-technical learning approaches; promote interdisciplinary dialogue and collaboration; empower students to challenge algorithmic inequality
Bozkurt et al. (2024) Critical evaluation of AI outputs; reinforce higher-order thinking and creativity; redesign curriculum and assessment for reflection; foster ethical awareness and AI literacy
Crudele and Raffaghelli (2023) Strengthen argumentative reasoning skills; use Argument Maps for logic visualization; apply hybrid learning environments; build digital literacy and manage cognitive load
Michel-Villarreal et al. (2023) Interactive use of AI for discussion and debate; active teacher supervision; innovative and authentic assessments (“AI-proof”); promote AI ethics and literacy education
Räisä and Stocchetti (2024) Develop epistemic and technological literacy; foster transparency and reflection on AI processes; rehumanize teaching and discussion; integrate critical philosophy of technology; train adaptive skepticism toward uncertainty
Quintero-Gámez et al. (2024) Not mentioned
Klimova and de Campos (2024) Critical evaluation of AI-generated content; develop prompt literacy skills; train ethical awareness and bias detection; integrate AI into peer feedback and authentic assessment
Costa et al. (2024) Encourage verification of AI outputs; promote ethical AI use; design interactive, collaborative learning; reform curricula and assessments for high-level thinking
Valova et al. (2024) Responsible and balanced AI integration; train evaluative and epistemic skills; enhance educator competency and AI ethics; embed verification and reflection practices
Asamoah et al. (2024) Apply the DEQ model (Domain Knowledge, Ethical Acumen, Querying); encourage reflective AI use; train effective question formulation (prompting); strengthen ethical awareness and institutional guidance
Werdiningsih et al. (2024) Establish ethical AI-use guidelines; promote originality and integrity; encourage critical evaluation of AI suggestions; provide training for ethical, balanced AI integration; maintain human oversight and reflective learning
Chaparro-Banegas et al. (2024) Active, participatory, experiential learning; ethical AI integration in dynamic classrooms; continuous digital and ethical training for educators; implement inclusive, transparent educational policies
Ruiz-Rojas et al. (2024) Integrate AI pedagogically into curricula; utilize AI tools (ChatGPT, YOU.COM, ChatPDF, Tome AI, Canva) for analysis and collaboration; provide continuous training and ethical literacy
Darwin et al. (2024) Balance AI use with human reasoning; maintain human oversight and skepticism; integrate AI in inquiry-driven, reflective pedagogy
Banihashem et al. (2024) Combine AI and human feedback loops; use AI for descriptive assessment; preserve human contextual judgment; employ rigorous prompt design for reliable output
Wang et al. (2024) Recognize AI limitations; personalize assignments to prevent automation; use demanding evaluation rubrics; train ethical and reflective AI engagement
Zhou et al. (2024) Design user-friendly, purpose-driven AI tools; embed self-regulation strategies in AI-based learning; integrate AI contextually into curricula; train critical evaluation of AI outputs
Borkovska et al. (2024) Personalize learning with AI interaction; use ChatGPT for critical reflection activities; encourage evaluation of AI results; balance AI use with social and emotional interaction
Ogunleye et al. (2024) Redesign authentic and reflective assessments; employ AI for comparative and analytical exercises; revise curricula and enhance faculty AI competency
Sarwanti et al. (2024) Provide structured guidance and training; establish institutional AI-use policies; redesign curricula to embed AI reflectively
Jayasinghe (2024) Use AI for personalized feedback; apply problem-based and Socratic learning; facilitate collaborative discussions and self-reflection; support educator–AI co-teaching models
Khan et al. (2024) Integrate EMIAS for critical information evaluation; combine AI with human judgment (human-in-the-loop); foster AI literacy and ethical policy debates
Avsheniuk et al. (2024) Encourage critical engagement with AI; promote responsible and reflective AI use; maintain balance with traditional pedagogy; emphasize human judgment and creativity
Broadhead (2024) Reinforce deep reading and argumentation; resist intellectual outsourcing to AI; challenge dominant paradigms and biases; critically evaluate technology’s purpose; preserve dialogic, human-centered education
Furze et al. (2024) Apply AI Assessment Scale (AIAS) for ethical integration; center assessments on human reflection; use AI to support—not replace—reasoning; encourage evaluation of AI outputs at multiple levels
Liu and Tu (2024) Implement AI-supported Socio-Scientific Issue (SSI) model; contextualize interdisciplinary learning; promote digital literacy, collaboration, and self-regulation
Almulla and Ali (2024) Use AI complementarily, not substitutively; ensure ethical and balanced integration; strengthen digital literacy and evaluation skills; scaffolded, instructor-guided learning
Zang et al. (2022) Integrate AI and 5G for interactive, personalized learning; promote deeper understanding through data exploration; use technology to enhance analytical reflection

Table 4 shows that strategies for fostering critical thinking in higher education amid the rise of artificial intelligence (AI) converge around five main domains: (1) responsible and ethical AI integration, (2) curriculum and assessment redesign, (3) guided human–AI collaboration, (4) enhancement of metacognitive and dialogic practices, and (5) development of AI and data literacy. These findings can be interpreted through the conceptual frameworks of critical thinking proposed by Facione (2023) and Paul and Elder (2019), both of whom emphasize analysis, evaluation, and self-regulation as the foundation of reasoned and ethical judgment.

Responsible and ethical AI integration is the most prominent strategy identified across the reviewed literature (Almulla & Ali, 2024; Chia et al., 2024; Malik et al., 2023; Mirón-Mérida & García-García, 2024; Valova et al., 2024). The authors consistently argue that AI should function as a cognitive tool rather than a substitute for human reasoning. This finding aligns with Paul and Elder's (2019) principle of intellectual autonomy, which positions learners as active, reflective agents in their own thinking processes. Ethical integration involves explicit guidance, transparency, and academic integrity, cultivating what Facione (2023) calls intellectual responsibility and truth-seeking in learners’ engagement with technology.

Furthermore, curriculum and assessment redesign is viewed as essential to ensure that AI adoption does not diminish cognitive rigor (Bozkurt et al., 2024; Crudele & Raffaghelli, 2023; Ogunleye et al., 2024; Werdiningsih et al., 2024). This strategy emphasizes process-oriented learning and the creation of AI-proof assessments that require originality, logical reasoning, and personal reflection. Such approaches correspond with Paul and Elder’s intellectual standards of depth and significance, reinforcing the idea that critical thinking must emerge from intellectually engaged and contextually grounded learning experiences (Paul & Elder, 2019).

Guided human–AI collaboration also appears as a central strategy to promote reflective skepticism and active reasoning (Banihashem et al., 2024; Jayasinghe, 2024; Michalon & Camacho-Zuñiga, 2023; Tsopra et al., 2023). Under instructor supervision, students engage in iterative interactions with AI—verifying, analyzing, and refining AI-generated outputs. This process strengthens intellectual perseverance and open-mindedness (Facione, 2023), encouraging learners to construct understanding through dialogic inquiry between human and machine rather than accepting technological authority uncritically.

Additionally, several authors highlight metacognitive and dialogic practices as foundational to sustaining critical thinking in the AI era (Broadhead, 2024; Crudele & Raffaghelli, 2023; Liu & Tu, 2024). Activities such as reflective discussion, debate, argumentative writing, and Socratic questioning promote self-assessment and cognitive regulation. These practices embody Paul and Elder’s notion of thinking about one’s thinking, fostering continuous intellectual self-correction amid technologically mediated learning environments (Paul & Elder, 2019).

Finally, the development of AI and data literacy emerges as a key dimension of epistemic awareness in higher education (Atenas et al., 2023; Khan et al., 2024; Räisä & Stocchetti, 2024). Understanding algorithmic bias, data justice, and system transparency enables students to assess the reliability, accuracy, and fairness of AI-generated information. In line with Facione's (2023) emphasis on interpretation and evaluation, AI literacy enhances learners’ ability to navigate complex digital information critically and ethically.

Overall, the reviewed studies indicate that cultivating critical thinking in the AI era requires a balanced pedagogical ecosystem that harmonizes technological advancement with human intellectual agency. The most effective strategies position AI as a reflective partner that enhances and extends human reasoning without undermining intellectual autonomy. Higher education institutions are thus encouraged to integrate ethical and AI literacy modules within curricula, design assessments that foster analytical depth and originality, and strengthen instructor guidance to facilitate critical dialogue between students and technology. Through these approaches, AI functions not merely as a tool of automation but as a catalyst for developing reflective judgment, intellectual integrity, and self-directed thinking—three essential pillars of critical thinking as envisioned by Facione (2023) and Paul and Elder (2019).

Conclusion

This study aimed to systematically map and synthesize the global research landscape on the development of critical thinking in higher education within the context of artificial intelligence (AI). The bibliometric and content analyses reveal that Education Sciences and Cogent Education are the most productive sources in this domain. At the same time, Education and Information Technologies demonstrates the highest citation impact—highlighting the growing intersection between AI integration, pedagogy, and higher-order cognitive skills. At the document level, highly cited works such as Dergaa et al. (2023) and Michel-Villarreal et al. (2023) have shaped foundational debates concerning generative AI, academic integrity, and reflective engagement. Influential authors including Chan, Marzuki, and Nikolic further exemplify three key scholarly trajectories: human-centered pedagogy, authentic and reflective learning, and integrity-driven assessment. Thematic clustering of recent publications reveals five dominant areas—(1) pedagogical and instructional integration of AI, (2) ethical and evaluative dimensions in academic integrity, (3) technical and application-oriented AI models, (4) institutional accountability and policy frameworks, and (5) socio-technical systems thinking. Emerging themes such as generative AI, federated learning, contrastive learning, and data privacy underscore a transition from tool adoption toward systemic and interdisciplinary inquiry. Future research directions may include investigating ethical architectures for decentralized AI systems, AI-based cognitive analytics for assessing critical thinking, and socio-technical frameworks that connect technological innovation with equity and institutional governance in higher education.

The findings also illuminate how critical thinking is conceptualized and constructed within AI-mediated learning environments. Conceptually, most authors align with the multidimensional frameworks proposed by Facione (2023) and Paul and Elder (2019), viewing critical thinking as a reflective, evaluative, and ethical process of reasoning. In the AI era, this construct expands to encompass digital epistemic awareness—the ability to question the credibility, bias, and opacity of algorithmic knowledge. Across the reviewed literature, strategies for fostering critical thinking converge into three integrated approaches: ethically embedding AI in the curriculum, redesigning pedagogy and assessment to prioritize analysis and originality, and developing reflective human–AI collaboration through faculty mentoring processes to facilitate critical dialogue between students and AI. Collectively, these strategies aim to ensure that AI enhances rather than replaces human reasoning, reinforcing intellectual autonomy, integrity, and self-directed inquiry as the cornerstones of higher education.

Despite its contributions, this study acknowledges limitations, as its conclusions are limited to the Scopus database. This single-database analysis, while comprehensive, does not encompass all relevant studies indexed in other databases. Consequently, the thematic patterns and emphases identified here should be interpreted as representative of Scopus-indexed research rather than the entire body of literature. Future studies could address this limitation by incorporating multi-database searches (e.g., Web of Science, Dimensions, ERIC, and others).

Overall, the study offers important implications for higher education policy, curriculum design, and teaching practice. Universities should integrate AI and data ethics modules within curricula to cultivate responsible digital citizenship; instructors should adopt dialogic, inquiry-based pedagogies that engage students in evaluating and contextualizing AI outputs; and assessment systems should emphasize process, reasoning, and originality rather than automated efficiency. At the institutional level, transparent policies and continuous professional development programs are essential to maintain academic integrity and intellectual rigor in AI-augmented learning environments. Collectively, these measures will help position AI as a catalyst for reflective judgment and ethical reasoning—supporting the enduring mission of higher education to nurture thoughtful, autonomous, and critically engaged learners in the digital age.

Acknowledgements

Gratitude is extended to the Center for Higher Education Funding and Assessment, Ministry of Higher Education, Science, and Technology (Pusat Pembiayaan dan Asesmen Pendidikan Tinggi/PPAPT, Kementerian Pendidikan Tinggi, Sains dan Teknologi/Kemdiktisaintek) and the Education Fund Management Institution (Lembaga Pengelola Dana Pendidikan/LPDP) of the Republic of Indonesia for their assistance and financial backing in facilitating the author's academic pursuits.

Conflict of Interest Declaration

All authors affirm that there are no conflicts of interest in the composition and dissemination of this study.

Funding

This study was conducted without financial support from any funding agency, whether governmental, commercial, or non-profit.

Generative AI Statement

During the preparation of this manuscript, AI-based tools such as ChatGPT-5 were utilized to enhance readability, and Grammarly was employed to ensure grammatical accuracy. All outputs generated by these tools were subsequently reviewed, revised, and validated by the authors. The authors retain full responsibility for the accuracy, integrity, and content of the final published work.

Authorship Contribution Statement

Sitepu: Conceptualization, Formal analysis, Methodology, Writing – original draft. Prasojo: Supervision, Validation, Data curation. Hermanto: Supervision, Validation, Data curation. Salido: Validation, Data curation, Writing – original draft. Nurhakim: Methodology, Resources, Investigation. Setyorini: Formal analysis, Investigation, Writing – review & editing. Disnawati: Visualization, Writing – review & editing. Wiratsongko: Software, Investigation.

References

Al Ka’bi, A. (2023). Proposed artificial intelligence algorithm and deep learning techniques for development of higher education. International Journal of Intelligent Networks, 4, 68–73. https://doi.org/10.1016/j.ijin.2023.03.002

Almaraz-López, C., Almaraz-Menéndez, F., & López-Esteban, C. (2023). Comparative study of the attitudes and perceptions of university students in business administration and management and in education toward artificial intelligence. Education Sciences, 13(6), Article 609. https://doi.org/10.3390/educsci13060609

Almulla, M., & Ali, S. I. (2024). The changing educational landscape for sustainable online experiences: Implications of ChatGPT in Arab students’ learning experience. International Journal of Learning, Teaching and Educational Research, 23(9), 285–306. https://doi.org/10.26803/ijlter.23.9.15

Amirjalili, F., Neysani, M., & Nikbakht, A. (2024). Exploring the boundaries of authorship: A comparative analysis of AI-generated text and human academic writing in English literature. Frontiers in Education, 9, Article 1347421. https://doi.org/10.3389/feduc.2024.1347421

Asamoah, P., Zokpe, D., Boateng, R., Marfo, J. S., Boateng, S. L., Asamoah, D., Muntaka, A. S., & Manso, J. F. (2024). Domain knowledge, ethical acumen, and query capabilities (DEQ): A framework for generative AI use in education and knowledge work. Cogent Education, 11(1), Article 2439651. https://doi.org/10.1080/2331186X.2024.2439651

Atenas, J., Havemann, L., Rodés, V., & Podetti, M. (2023). Critical data literacy in praxis: An open education approach for academic development. Edutec, 84(85), 49–67. https://doi.org/10.21556/edutec.2023.85.2851

Avsheniuk, N., Lutsenko, O., Svyrydiuk, T., & Seminikhyna, N. (2024). Empowering language learners’ critical thinking: Evaluating ChatGPT’s role in English course implementation. Arab World English Journal, (Special Issue on ChatGPT), 210–224. https://doi.org/10.24093/awej/chatgpt.14

Banihashem, S. K., Kerman, N. T., Noroozi, O., Moon, J., & Drachsler, H. (2024). Feedback sources in essay writing: Peer-generated or AI-generated feedback? International Journal of Educational Technology in Higher Education, 21, Article 23. https://doi.org/10.1186/s41239-024-00455-4

Barua, M. (2024). Assessing the performance of ChatGPT in answering patients’ questions regarding congenital bicuspid aortic valve. Cureus, 16(10), Article e72293. https://doi.org/10.7759/cureus.72293

Baskara, F. R. (2023). Chatbots and flipped learning: Enhancing student engagement and learning outcomes through personalised support and collaboration. IJORER: International Journal of Recent Educational Research, 4(2), 223–238. https://doi.org/10.46245/ijorer.v4i2.331

Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., Pham, P., Chong, S. W., & Siemens, G. (2024). A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. International Journal of Educational Technology in Higher Education, 21, Article 4. https://doi.org/10.1186/s41239-023-00436-z

Borkovska, I., Kolosova, H., Kozubska, I., & Antonenko, I. (2024). Integration of AI into the distance learning environment: Enhancing soft skills. Arab World English Journal, (Special Issue on ChatGPT), 56–72.

Bozkurt, A., Xiao, J., Farrow, R., Bai, J. Y. H., Nerantzi, C., Moore, S., Dron, J., Stracke, C. M., Singh, L., Crompton, H., Koutropoulos, A., Terentev, E., Pazurek, A., Nichols, M., Sidorkin, A. M., Costello, E., Watson, S., Mulligan, D., Honeychurch, S., … Asino, T. I. (2024). The manifesto for teaching and learning in a time of generative AI: A critical collective stance to better navigate the future. Open Praxis, 16(4), 487–513. https://doi.org/10.55982/openpraxis.16.4.777  

Broadhead, L.-A. (2024). Insidious chatter versus critical thinking: Resisting the Eurocentric siren song of AI in the classroom. Journal of Applied Learning and Teaching, 7(2), 28–37. https://doi.org/10.37074/jalt.2024.7.2.9

Chadha, A. (2024). Transforming higher education for the digital age: Examining emerging technologies and pedagogical innovations. Journal of Interdisciplinary Studies in Education, 13(S1), 53–70. https://doi.org/10.32674/em2qsn46  

Chan, C. K. Y., & Lee, K. K. W. (2023). The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learning Environments, 10, Article 60. https://doi.org/10.1186/s40561-023-00269-3

Chan, C. K. Y., & Tsi, L. H. Y. (2024). Will generative AI replace teachers in higher education? A study of teacher and student perceptions. Studies in Educational Evaluation, 83, Article 101395. https://doi.org/10.1016/j.stueduc.2024.101395

Chaparro-Banegas, N., Mas-Tur, A., & Roig-Tierno, N. (2024). Challenging critical thinking in education: New paradigms of artificial intelligence. Cogent Education, 11(1), Article 2437899. https://doi.org/10.1080/2331186X.2024.2437899

Chia, C. S. C., Phan, J., Harry, O., & Lee, K. M. (2024). Graduate students’ perception and use of ChatGPT as a learning tool to develop writing skills. International Journal of TESOL Studies, 6(3), 113–127. https://doi.org/10.58304/ijts.20240308

Costa, A. R., Lima, N., Viegas, C., & Caldeira, A. (2024). Critical minds: Enhancing education with ChatGPT. Cogent Education, 11(1), Article 2415286. https://doi.org/10.1080/2331186X.2024.2415286

Crudele, F., & Raffaghelli, J. E. (2023). Promoting critical thinking through argument mapping: A lab for undergraduate students. Journal of Information Technology Education: Research, 22, 497–525. https://doi.org/10.28945/5220

Darwin, Rusdin, D., Mukminatien, N., Suryati, N., Laksmi, E. D., & Marzuki. (2024). Critical thinking in the AI era: An exploration of EFL students’ perceptions, benefits, and limitations. Cogent Education, 11(1), Article 2290342. https://doi.org/10.1080/2331186X.2023.2290342

Dergaa, I., Chamari, K., Zmijewski, P., & Ben Saad, H. (2023). From human writing to artificial intelligence generated text: Examining the prospects and potential threats of ChatGPT in academic writing. Biology of Sport, 40(2), 615–622. https://doi.org/10.5114/BIOLSPORT.2023.125623

Esakkiammal, S., & Kasturi, K. (2024). Advancing educational outcomes with artificial intelligence: Challenges, opportunities, and future directions. International Journal of Computational and Experimental Science and Engineering, 10(4), 1749–1756. https://doi.org/10.22399/ijcesen.799

Facione, P. A. (2023). Critical thinking: What it is and why it counts. Insight Assessment. https://bit.ly/3XoUry9

Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., Shen, Y., Li, X., & Gašević, D. (2025). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology, 56(2), 489–530. https://doi.org/10.1111/bjet.13544

Franco, E. L. (2017). The downside of the shifting paradigm of scholarly publishing in the biomedical sciences: Predatory publishing. Journal of Obstetrics and Gynaecology Canada, 39(7), 513–515. https://doi.org/10.1016/j.jogc.2017.03.104

Furze, L., Perkins, M., Roe, J., & MacVaugh, J. (2024). The AI assessment scale (AIAS) in action: A pilot implementation of GenAI-supported assessment. Australasian Journal of Educational Technology, 40(4), 38–55. https://doi.org/10.14742/ajet.9434

Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), Article 6. https://doi.org/10.3390/soc15010006

Guettala, M., Bourekkache, S., Kazar, O., & Harous, S. (2024). Generative artificial intelligence in education: Advancing adaptive and personalized learning. Acta Informatica Pragensia, 13(3), 460–489. https://doi.org/10.18267/j.aip.235

Hongli, Z., & Leong, W. Y. (2024). AI solutions for accessible education in underserved communities. Journal of Innovation and Technology, 2024(11). https://doi.org/10.61453/joit.v2024no11

Huang, D., Huang, Y., & Cummings, J. J. (2024). Exploring the integration and utilisation of generative AI in formative e-assessments: A case study in higher education. Australasian Journal of Educational Technology, 40(4), 1–19. https://doi.org/10.14742/ajet.9467

Imran, M., Almusharraf, N., Abdellatif, M. S.,  & Abbasova, M. Y. (2024). Artificial intelligence in higher education: Enhancing learning systems and transforming educational paradigms. International Journal of Interactive Mobile Technologies, 18(18), 34–48. https://doi.org/10.3991/ijim.v18i18.49143

Jacques, P. H., Moss, H. K., & Garger, J. (2024). A synthesis of AI in higher education: Shaping the future. Journal of Behavioral and Applied Management, 24(2), 103–111. https://doi.org/10.21818/001c.122146

Jayasinghe, S. (2024). Promoting active learning with ChatGPT: A constructivist approach in Sri Lankan higher education. Journal of Applied Learning and Teaching, 7(2), 141–154. https://doi.org/10.37074/jalt.2024.7.2.26

Ke, Q. (2020). Technological impact of biomedical research: The role of basicness and novelty. Research Policy, 49(7), Article 104071. https://doi.org/10.1016/j.respol.2020.104071

Khan, U. A., Kauttonen, J., Aunimo, L., & Alamäki, A. V. (2024). A system to ensure information trustworthiness in artificial intelligence enhanced higher education. Journal of Information Technology Education: Research, 23, Article 13. https://doi.org/10.28945/5295

Kizilcec, R. F., Huber, E., Papanastasiou, E. C., Cram, A., Makridis, C. A., Smolansky, A., Zeivots, S., & Raduescu, C. (2024). Perceived impact of generative AI on assessments: Comparing educator and student perspectives in Australia, Cyprus, and the United States. Computers and Education: Artificial Intelligence, 7, Article 100269. https://doi.org/10.1016/j.caeai.2024.100269

Klimova, B., & de Campos, V. P. L. (2024). University undergraduates’ perceptions on the use of ChatGPT for academic purposes: Evidence from a university in Czech Republic. Cogent Education, 11(1), Article 2373512. https://doi.org/10.1080/2331186X.2024.2373512

Kochetkov, D., Birukou, A., & Ermolayeva, A. (2022). The importance of conference proceedings in research evaluation: A methodology for assessing conference impact. In V. M. Vishnevskiy, K. E. Samouylov, & D. V. Kozyrev (Eds.), Distributed Computer and Communication Networks (Volume 1552, pp. 359–370). Springer. https://doi.org/10.1007/978-3-030-97110-6_28

Liu, Q., & Tu, C. C. (2024). Improving critical thinking through AI-supported socio-scientific issues instruction. Journal of Logistics, Informatics and Service Science, 11(3), 52–65. https://doi.org/10.33168/JLISS.2024.0304

Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. Journal of Academic Librarianship, 49(4), Article 102720. https://doi.org/10.1016/j.acalib.2023.102720

Luo, Y. (2024). Revolutionizing education with AI: The adaptive cognitive enhancement model (ACEM) for personalized cognitive development. In Proceedings of the 2nd International Conference on Machine Learning and Automation (pp. 71-76). EWA. https://doi.org/10.54254/2755-2721/82/20240929

Malik, A. R., Pratiwi, Y., Andajani, K., Numertayasa, I. W., Suharti, S., Darwis, A., & Marzuki. (2023). Exploring artificial intelligence in academic essay: Higher education student’s perspective. International Journal of Educational Research Open, 5, Article 100296. https://doi.org/10.1016/j.ijedro.2023.100296

McGrath, C., Farazouli, A., & Cerratto-Pargman, T. (2025). Generative AI chatbots in higher education: A review of an emerging research area. Higher Education, 89, 1533–1549. https://doi.org/10.1007/s10734-024-01288-w

Michalon, B., & Camacho-Zuñiga, C. (2023). ChatGPT, a brand-new tool to strengthen timeless competencies. Frontiers in Education, 8, Article 1251163. https://doi.org/10.3389/feduc.2023.1251163

Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D. E., Thierry-Aguilera, R., & Gerardou, F. S. (2023). Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Education Sciences, 13(9), Article 856. https://doi.org/10.3390/educsci13090856

Mirón-Mérida, V. A., & García-García, R. M. (2024). Developing written communication skills in engineers in Spanish: Is ChatGPT a tool or a hindrance? Frontiers in Education, 9, Article 1416152. https://doi.org/10.3389/feduc.2024.1416152

Mohamed, A. M. (2024). Exploring the potential of an AI-based Chatbot (ChatGPT) in enhancing English as a Foreign Language (EFL) teaching: Perceptions of EFL Faculty Members. Education and Information Technologies, 29, 3195–3217. https://doi.org/10.1007/s10639-023-11917-z

Moher, D., Shamseer, L., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., Shekelle, P., Stewart, L. A., & PRISMA-P Group. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews, 4, Article 1.  https://doi.org/10.1186/2046-4053-4-1

Moshawrab, M., Adda, M., Bouzouane, A., Ibrahim, H., & Raad, A. (2023). Reviewing federated learning aggregation algorithms; strategies, contributions, limitations and future perspectives. Electronics, 12(10), Article 2287. https://doi.org/10.3390/electronics12102287

Moulin, T. C. (2024). Learning with AI language models: Guidelines for the development and scoring of medical questions for higher education. Journal of Medical Systems, 48, Article 45. https://doi.org/10.1007/s10916-024-02069-9

Murtiningsih, S., Sujito, A., & Khin Soe, K. (2024). Challenges of using ChatGPT in education: A digital pedagogy analysis. International Journal of Evaluation and Research in Education, 13(5), 3466–3473. https://doi.org/10.11591/ijere.v13i5.29467

Nasrum, A., Salido, A., & Chairuddin, C. (2025). Unveiling emerging trends and potential research themes in future ethnomathematics studies: A global bibliometric analysis (from inception to 2024). International Journal of Learning, Teaching and Educational Research, 24(2), 206–226. https://doi.org/10.26803/ijlter.24.2.11

Neumann, M., Rauschenberger, M., & Schon, E. M. (2023). “We need to talk about ChatGPT”: The future of AI and higher education. In Proceedings of the 5th International Workshop on Software Engineering Education for the Next Generation (SEENG) (pp. 29-32). IEEE. https://doi.org/10.1109/SEENG59157.2023.00010

Nikolic, S., Daniel, S., Haque, R., Belkina, M., Hassan, G. M., Grundy, S., Lyden, S., Neal, P., & Sandison, C. (2023). ChatGPT versus engineering education assessment: A multidisciplinary and multi-institutional benchmarking and analysis of this generative artificial intelligence tool to investigate assessment integrity. European Journal of Engineering Education, 48(4), 559–614. https://doi.org/10.1080/03043797.2023.2213169

Nikolic, S., Sandison, C., Haque, R., Daniel, S., Grundy, S., Belkina, M., Lyden, S., Hassan, G. M., & Neal, P. (2024). ChatGPT, Copilot, Gemini, SciSpace and Wolfram versus higher education assessments: An updated multi-institutional study of the academic integrity impacts of generative artificial intelligence (GenAI) on assessment, teaching and learning in engineering. Australasian Journal of Engineering Education, 29(2), 126–153. https://doi.org/10.1080/22054952.2024.2372154

Nowell, L. S., Norris, J. M., White, D. E., & Moules, N. J. (2017). Thematic analysis: Striving to meet the trustworthiness criteria. International Journal of Qualitative Methods, 16(1), 1–13. https://doi.org/10.1177/1609406917733847

Ogunleye, B., Zakariyyah, K. I., Ajao, O., Olayinka, O., & Sharma, H. (2024). Higher education assessment practice in the era of generative AI tools. Journal of Applied Learning and Teaching, 7(1), 25–34. https://doi.org/10.37074/jalt.2024.7.1.28

Paul, R., & Elder, L. (2019). The miniature guide to critical thinking concepts and tools. Bloomsbury Publishing.

Quintero-Gámez, L., Tariq, R., Sánchez-Escobedo, P., & Sanabria-Z, J. (2024). Data analytics and artificial neural network framework to profile academic success: Case study. Cogent Education, 11(1), Article 2433807. https://doi.org/10.1080/2331186X.2024.2433807

Räisä, T., & Stocchetti, M. (2024). Epistemic injustice and education in the digital age. Journal of Digital Social Research, 6(3), 1–9. https://doi.org/10.33621/jdsr.v6i3.33235

Rane, N., Shirke, S., Choudhary, S. P., & Rane, J. (2024). Education strategies for promoting academic integrity in the era of artificial intelligence and ChatGPT: Ethical considerations, challenges, policies, and future directions. Journal of ELT Studies, 1(1), 36–59. https://doi.org/10.48185/jes.v1i1.1314

Ruiz-Rojas, L. I., Salvador-Ullauri, L., & Acosta-Vargas, P. (2024). Collaborative working and critical thinking: Adoption of generative artificial intelligence tools in higher education. Sustainability, 16, Article 5367. https://doi.org/10.3390/su16135367

Salido, A., Sugiman, Fauziah, P. Y., Kausar, A., Haskin, S., & Azhar, M. (2024). Parental involvement in students’ mathematics activities: A bibliometric analysis. Eurasia Journal of Mathematics, Science and Technology Education, 20(10), Article em2513. https://doi.org/10.29333/ejmste/15179

Sarwanti, S., Sariasih, Y., Rahmatika, L., Islam, M. M., & Riantina, E. M. (2024). Are they literate on ChatGPT? University language students’ perceptions, benefits and challenges in higher education learning. Online Learning Journal, 28(3), 105–130. https://doi.org/10.24059/olj.v28i3.4599

Tarisayi, K. S. (2024). ChatGPT use in universities in South Africa through a socio-technical lens. Cogent Education, 11(1), Article 2295654. https://doi.org/10.1080/2331186X.2023.2295654

Thornhill-Miller, B., Camarda, A., Mercier, M., Burkhardt, J.-M., Morisseau, T., Bourgeois-Bougrine, S., Vinchon, F., El Hayek, S., Augereau-Landais, M., Mourey, F., Feybesse, C., Sundquist, D., & Lubart, T. (2023). Creativity, critical thinking, communication, and collaboration: Assessment, certification, and promotion of 21st century skills for the future of work and education. Journal of Intelligence, 11(3), Article 54. https://doi.org/10.3390/jintelligence11030054

Tsopra, R., Peiffer-Smadja, N., Charlier, C., Campeotto, F., Lemogne, C., Ruszniewski, P., Vivien, B., & Burgun, A. (2023). Putting undergraduate medical students in AI-CDSS designers’ shoes: An innovative teaching method to develop digital health critical thinking. International Journal of Medical Informatics, 171, Article 104980. https://doi.org/10.1016/j.ijmedinf.2022.104980

Tulcanaza-Prieto, A. B., Cortez-Ordoñez, A., & Lee, C. W. (2023). Influence of customer perception factors on AI-enabled customer experience in the Ecuadorian banking environment. Sustainability, 15(16), Article 12441. https://doi.org/10.3390/su151612441

Valova, I., Mladenova, T., & Kanev, G. (2024). Students’ perception of ChatGPT usage in education. International Journal of Advanced Computer Science and Applications, 15(1), 466–473. https://doi.org/10.14569/IJACSA.2024.0150143

van den Berg, G., & du Plessis, E. (2023). ChatGPT and generative AI: Possibilities for its contribution to lesson planning, critical thinking and openness in teacher education. Education Sciences, 13, Article 998. https://doi.org/10.3390/educsci13100998

Vishwanathaiah, S., Fageeh, H. N., Khanagar, S. B., & Maganur, P. C. (2023). Artificial intelligence its uses and application in pediatric dentistry: A review. Biomedicines, 11(3), Article 788. https://doi.org/10.3390/biomedicines11030788

Walter, Y. (2024). Embracing the future of artificial intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21, Article 15. https://doi.org/10.1186/s41239-024-00448-3

Wang, C., Aguilar, S. J., Bankard, J. S., Bui, E., & Nye, B. (2024). Writing with AI: What college students learned from utilizing ChatGPT for a writing assignment. Education Sciences, 14(9), Article 976. https://doi.org/10.3390/educsci14090976

Wei, L. (2023). Artificial intelligence in language instruction: Impact on English learning achievement, L2 motivation, and self-regulated learning. Frontiers in Psychology, 14, Article 1261955. https://doi.org/10.3389/fpsyg.2023.1261955

Werdiningsih, I., Marzuki, & Rusdin, D. (2024). Balancing AI and authenticity: EFL students’ experiences with ChatGPT in academic writing. Cogent Arts and Humanities, 11(1), Article 2392388. https://doi.org/10.1080/23311983.2024.2392388

Zang, G., Liu, M., & Yu, B. (2022). The application of 5G and artificial intelligence technology in the innovation and reform of college English education. Computational Intelligence and Neuroscience, 2022(1), Article 9008270. https://doi.org/10.1155/2022/9008270

Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: a systematic review. Smart Learning Environments, 11, Article 28. https://doi.org/10.1186/s40561-024-00316-7

Zhou, X., Teng, D., & Al-Samarraie, H. (2024). The mediating role of generative AI self-regulation on students’ critical thinking and problem-solving. Education Sciences, 14, Article 1302. https://doi.org/10.3390/educsci14121302

Zupic, I., & Čater, T. (2014). Bibliometric methods in management and organization. Organizational Research Methods, 18(3), 429–472. https://doi.org/10.1177/1094428114562629
