European Journal of Educational Research

EU-JER is a leading, peer-reviewed research journal that provides an online forum for studies in education by and for scholars and practitioners worldwide.


Publisher (HQ)
Eurasian Society of Educational Research
Christiaan Huygensstraat 44, 7533 XB Enschede, The Netherlands

'Gemini' Search Results



...

Developing students' competency in constructing scientific explanations is a critical aspect of science learning. However, limited research has been conducted to explore the role of Generative Artificial Intelligence (Gen AI) in fostering this competency. Moreover, the factors influencing this competency development in the Gen AI-assisted learning environment remain underexamined. This study aimed to compare students' competency in constructing scientific explanations before and after participating in phenomenon-based learning with Microsoft Copilot and to investigate the factors influencing the development of this competency. A pretest-posttest quasi-experimental design was employed with 23 eighth-grade students from an all-girls school in Thailand. The research instruments included lesson plans for phenomenon-based learning with Microsoft Copilot, a competency test for constructing scientific explanations, and a mixed-format questionnaire. The results from the Wilcoxon Signed-Ranks Test revealed a statistically significant improvement in students' competency in constructing scientific explanations after the learning intervention (Z = 4.213, p < .001). Thematic analysis identified four key factors contributing to this development: (a) the role of Microsoft Copilot in enhancing deep understanding, (b) connecting theories to real-world phenomena through learning media, (c) collaborative learning activities, and (d) enjoyable learning experiences and student engagement. These findings suggest that the integration of Gen AI technology with phenomenon-based learning can effectively enhance students’ competency in constructing scientific explanations and provide valuable insights for the development of technology-enhanced science education. 
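The pre-post comparison reported above can be sketched in plain Python. This is an illustrative sketch only: the score lists below are invented placeholders, not the study's data, and the function implements the textbook normal approximation of the Wilcoxon signed-rank statistic.

```python
import math

def wilcoxon_signed_rank(pre, post):
    """Wilcoxon signed-rank test (normal approximation) for paired scores."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]  # drop zero differences
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):  # assign average ranks to tied |differences|
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1  # 1-based average rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    n = len(diffs)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (min(w_plus, w_minus) - mu) / sigma
    return min(w_plus, w_minus), z

# Placeholder pre/post scores for 23 students (NOT the study's data).
pretest  = [12, 9, 15, 11, 8, 14, 10, 13, 9, 12, 11, 10,
            8, 13, 9, 12, 14, 10, 11, 9, 13, 12, 10]
posttest = [18, 14, 20, 17, 13, 19, 16, 18, 15, 17, 16, 15,
            14, 19, 13, 17, 20, 15, 16, 14, 18, 17, 15]
w, z = wilcoxon_signed_rank(pretest, posttest)
print(f"W = {w}, z = {z:.3f}")
```

Because every placeholder student improves, the smaller rank sum is 0 and |z| exceeds 4, the same order of magnitude as the Z = 4.213 the study reports for its real data.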

Abstract · PDF
DOI: 10.12973/eu-jer.14.4.1087
Pages: 1087-1103
Article Metrics: Downloads 455 · Views 3235 · Citations: Crossref 2, Scopus 0

...

This preliminary study examines how three generative AI tools, ChatGPT-4, Google Gemini, and Microsoft Copilot, support B+ level English as a Foreign Language (EFL) students in opinion essay writing. Conducted at a preparatory school in Türkiye, the study explored student use of the tools for brainstorming, outlining, and feedback across three essay tasks. A mixed methods design combined rubric-based evaluations, surveys, and reflections. Quantitative results showed no significant differences between tools for most criteria, indicating comparable performance in idea generation, essay structuring, and feedback. The only significant effect was in the feedback stage, where ChatGPT-4 scored higher than both Gemini and Copilot for actionability. In the brainstorming stage, a difference in argument relevance was observed across tools, but this was not statistically significant after post-hoc analysis. Qualitative findings revealed task-specific preferences: Gemini was favored for clarity and variety in brainstorming and outlining, ChatGPT-4 for detailed, clear, and actionable feedback, and Copilot for certain organizational strengths. While the tools performed similarly overall, perceptions varied by task and tool, highlighting the value of allowing flexible tool choice in EFL writing instruction.
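A design in which the same essays are rated under three tools is typically analyzed with a repeated-measures rank test. The sketch below computes the Friedman statistic in plain Python (simple form, without the tie correction); the rubric scores are invented, and the study's actual analysis may differ.

```python
def friedman_statistic(rows):
    """Friedman chi-square statistic for k related samples.

    rows: one score tuple per essay, e.g. (ChatGPT-4, Gemini, Copilot).
    """
    n, k = len(rows), len(rows[0])
    col_ranks = [0.0] * k  # rank sums per tool
    for row in rows:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:  # average ranks for tied scores within a row
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            for m in range(i, j + 1):
                ranks[order[m]] = (i + j) / 2 + 1
            i = j + 1
        for j in range(k):
            col_ranks[j] += ranks[j]
    return 12 / (n * k * (k + 1)) * sum(r * r for r in col_ranks) - 3 * n * (k + 1)

# Invented rubric scores (tool A, tool B, tool C) for four essays.
scores = [(3, 4, 4), (2, 3, 4), (4, 4, 5), (3, 3, 3)]
print(friedman_statistic(scores))  # compare against chi-square with df = k - 1
```

A small statistic relative to the chi-square critical value (df = k − 1) corresponds to the study's finding of no significant differences between tools on most criteria.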

Abstract · PDF
DOI: 10.12973/eu-jer.14.4.1291
Pages: 1291-1308
Article Metrics: Downloads 194 · Views 5249 · Citations: Crossref 0, Scopus 0

...

Text readability assessment is a fundamental component of foreign language education because it directly determines students' ability to understand their course materials, yet the ability of current tools, including ChatGPT, to measure readability precisely remains uncertain. Readability describes the ease with which readers can understand written material; its level is determined by vocabulary complexity and sentence structure, along with syllable counts and sentence length. Traditional readability formulas rely on data from native speakers and fail to address the specific requirements of language learners, so foreign language instruction needs specialized assessment approaches. This research investigates the potential of ChatGPT to evaluate text readability for foreign language students. Selected textbooks were analyzed with ChatGPT to determine their readability levels, and the results were evaluated against traditional readability assessment approaches and established formulas. The research aims to establish whether ChatGPT provides an effective method for evaluating educational texts in foreign language instruction. Beyond technical capabilities, the study examines how this technology may influence students' learning experiences and outcomes: ChatGPT's text clarity evaluation might enable innovative educational tools and generate lasting benefits for classroom practice. For example, ChatGPT's readability classifications correlated strongly with Flesch-Kincaid scores (r = .75, p < .01), and its mean readability rating (M = 2.17, SD = 1.00) confirmed its sensitivity to text complexity.
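For reference, the kind of traditional formula the study compares ChatGPT against can be computed directly. The sketch below implements the standard Flesch-Kincaid grade-level formula with a crude vowel-group syllable heuristic; real readability tools use more careful syllable counting, so treat this as an approximation.

```python
import re

def count_syllables(word):
    """Rough heuristic: one syllable per run of vowels (minimum one)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

sample = "The cat sat on the mat. It was a sunny day."
print(round(flesch_kincaid_grade(sample), 2))
```

Note that very simple text can yield a grade level below zero; the formula is calibrated for U.S. school grades of native readers, which is exactly the limitation for language learners the abstract points out.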

Abstract · PDF
DOI: 10.12973/eu-jer.15.1.101
Pages: 101-119
Article Metrics: Downloads 268 · Views 1765 · Citations: Crossref 0

Comparing ChatGPT and Gemini on a Two-Tier Static Fluid Test: Capability and Scientific Consistency

Keywords: ChatGPT, comparative study, Gemini, static fluid, two-tier test

Sarintan N. Kaharu, I Komang Werdhiana, Jusman Mansyur


...

This study examined the capability and scientific consistency of ChatGPT and Gemini using a two-tier test and compared them with those of students. The study used 60 new chats with ChatGPT and Gemini, 120 students in 8th and 9th grade, 129 students in 11th and 12th grade, 260 undergraduate elementary teacher education students (across four cohorts), and 51 students from the professional education program for elementary school teachers. Data were collected through online testing for student participants and prompting processes for ChatGPT and Gemini using a 25-item two-tier test. Quantitative data analysis was employed to compare capability and consistency scores across all subjects, and qualitative-descriptive analysis examined the capability and scientific consistency behavior of ChatGPT and Gemini. The analysis showed that the capability and scientific consistency of ChatGPT-4 and Gemini in responding to this test type were categorized as low and below the entry threshold, yet still higher than those of the students. Both generative AI systems performed better at providing theoretical justifications or reasoning than at answering factual questions about static fluids. ChatGPT outperformed Gemini only in the combined scores for Tier-1 and Tier-2 items. Both systems demonstrated conceptual insights and understanding of static fluids, though these insights sometimes contained biases and contradictions. As AI systems built on large language models, ChatGPT and Gemini rely heavily on the availability of relevant training data and would require a more extensive and diverse database of static fluid cases.
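The two-tier format pairs a content answer (Tier 1) with a reasoning choice (Tier 2). A minimal scoring sketch, assuming a simple rubric in which a response counts as scientifically consistent only when both tiers are correct; the study's actual rubric is not reproduced here, and the item content below is invented.

```python
# Hypothetical two-tier item scoring: Tier 1 is the answer, Tier 2 the reason.
# The rule "consistent only if both tiers correct" is an illustration,
# not the study's published rubric.
def score_two_tier(response, key):
    tier1_ok = response["answer"] == key["answer"]
    tier2_ok = response["reason"] == key["reason"]
    return {
        "tier1": int(tier1_ok),
        "tier2": int(tier2_ok),
        "consistent": tier1_ok and tier2_ok,  # correct answer AND correct reason
    }

key = {"answer": "B", "reason": "pressure depends on depth"}
resp = {"answer": "B", "reason": "pressure depends on volume"}
print(score_two_tier(resp, key))
```

Separating the tiers this way is what lets the study report that both AI systems scored better on reasoning (Tier 2) than on factual answers (Tier 1), and compare combined Tier-1 plus Tier-2 scores across respondents.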

Abstract · PDF
DOI: 10.12973/eu-jer.15.1.223
Pages: 223-250
Article Metrics: Downloads 39 · Views 311 · Citations: Crossref 0

Mapping and Exploring Strategies to Enhance Critical Thinking in the Artificial Intelligence Era: A Bibliometric and Systematic Review

Keywords: AI era, critical thinking, higher education, pedagogical strategies, personalized learning

Melyani Sari Sitepu, Lantip Diat Prasojo, Hermanto, Achmad Salido, Lukman Nurhakim, Eko Setyorini, Hermina Disnawati, Bayu Wiratsongko


...

The emergence of artificial intelligence (AI) has transformed higher education, creating both opportunities and challenges in cultivating students’ critical thinking skills. This study integrates quantitative bibliometric analysis and qualitative systematic literature review (SLR) to map global research trends and identify how critical thinking is conceptualized, constructed, and developed in the AI era. Scopus served as the primary data source, limited to publications from 2022 to 2024, retrieved on February 8, 2025. Bibliometric analysis using Biblioshiny R and VOSviewer followed five stages—design, data collection, analysis, visualization, and interpretation—while the SLR employed a deductive thematic approach consistent with PRISMA guidelines. A total of 322 documents were analyzed bibliometrically, and 34 were included in the qualitative synthesis. Results show that Education Sciences and Cogent Education are the most productive journals, whereas Education and Information Technologies has the highest citation impact. Several influential documents and authors have shaped global discussions on AI adoption in higher education and its relationship to critical thinking. Thematic mapping identified five major research clusters: pedagogical integration, ethical and evaluative practices, technical and application-oriented AI models, institutional accountability, and socio-technical systems thinking. Conceptually, critical thinking is understood as a reflective, evaluative, and metacognitive reasoning process grounded in intellectual autonomy and ethical judgment. Across the reviewed literature, strategies for fostering critical thinking converge into three integrated approaches: ethical curriculum integration, pedagogical and assessment redesign, and reflective human–AI collaboration. Collectively, these strategies ensure that AI strengthens rather than replaces human reasoning in higher education.

Abstract · PDF
DOI: 10.12973/eu-jer.15.1.305
Pages: 305-322
Article Metrics: Downloads 57 · Views 331 · Citations: Crossref 0

...