'generative ai' Search Results
Intermediality in Student Writing: A Preliminary Study on the Supportive Potential of Generative Artificial Intelligence
artificial intelligence automated writing evaluation chatgpt intermedia transmedia...
The proliferating field of writing education increasingly intersects with technological innovations, particularly generative artificial intelligence (GenAI) resources. Despite extensive research on automated writing evaluation systems, no empirical investigation of GenAI’s potential for cultivating intermedial writing skills in first-language contexts has yet been reported. The present study explored the impact of ChatGPT as a writing assistant on university literature students’ intermedial writing proficiency. Employing a quasi-experimental design with a non-equivalent control group, the researchers examined 52 undergraduate students’ essays over a 12-week intervention. Participants in the treatment group used the conversational agent for iterative essay refinement, while the control group followed traditional writing processes. Using a comprehensive four-dimensional assessment rubric, the researchers analyzed essays in terms of relevance, integration, specificity, and balance of intermedial references. Quantitative analyses revealed significant improvements in the AI-assisted group, particularly in the relevance and insight facets. The findings add to the research on technology-supported writing instruction.
Cultural Integration in English Teaching for Art Majors in Vietnam: Learners’ Voices
art majors cultural integration culturally responsive curriculum english language instruction...
This study investigates how undergraduate art majors at the National University of Art Education in Vietnam perceive the integration of culture into their English curriculum. A quantitative design was employed, using a researcher-developed questionnaire administered to 214 students. Data were analysed using descriptive statistics, independent-samples t-tests, and multiple regression. Findings indicated that students valued culturally relevant content, particularly materials connected to both Vietnamese and international art, as well as experiential and student-centered instructional strategies. Reported challenges included limited cultural background knowledge, cognitive overload, and reduced confidence when discussing culture in English. Crucially, the multiple regression results revealed that how culture is taught may have a greater impact on students’ experiences than the content itself. These findings underscore the importance of aligning instructional approaches with learners’ disciplinary identities and offer implications for culturally responsive curriculum design, professional development, and the implementation of context-specific teaching strategies in English language instruction for art students.
Generative AI-Assisted Phenomenon-Based Learning: Exploring Factors Influencing Competency in Constructing Scientific Explanations
constructing scientific explanations factors generative ai microsoft copilot phenomenon-based learning...
Developing students' competency in constructing scientific explanations is a critical aspect of science learning. However, limited research has been conducted to explore the role of Generative Artificial Intelligence (Gen AI) in fostering this competency. Moreover, the factors influencing this competency development in the Gen AI-assisted learning environment remain underexamined. This study aimed to compare students' competency in constructing scientific explanations before and after participating in phenomenon-based learning with Microsoft Copilot and to investigate the factors influencing the development of this competency. A pretest-posttest quasi-experimental design was employed with 23 eighth-grade students from an all-girls school in Thailand. The research instruments included lesson plans for phenomenon-based learning with Microsoft Copilot, a competency test for constructing scientific explanations, and a mixed-format questionnaire. The results from the Wilcoxon Signed-Ranks Test revealed a statistically significant improvement in students' competency in constructing scientific explanations after the learning intervention (Z = 4.213, p < .001). Thematic analysis identified four key factors contributing to this development: (a) the role of Microsoft Copilot in enhancing deep understanding, (b) connecting theories to real-world phenomena through learning media, (c) collaborative learning activities, and (d) enjoyable learning experiences and student engagement. These findings suggest that the integration of Gen AI technology with phenomenon-based learning can effectively enhance students’ competency in constructing scientific explanations and provide valuable insights for the development of technology-enhanced science education.
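The pretest-posttest comparison described in this abstract can be sketched with a standard Wilcoxon signed-rank test. A minimal sketch using SciPy follows; the scores below are illustrative placeholders, not the study’s data (the abstract reports Z = 4.213, p < .001 for its 23 students).

```python
# Hedged sketch: a paired, nonparametric Wilcoxon signed-rank test comparing
# pretest and posttest scores, as named in the abstract. All values here are
# invented for illustration only.
from scipy.stats import wilcoxon

# Illustrative pretest scores for 23 students (placeholder data).
pretest = [10, 12, 9, 11, 10, 13, 8, 12, 11, 10, 9, 12,
           10, 13, 9, 11, 10, 8, 12, 13, 11, 9, 10]

# Illustrative posttest scores: every student improves by a distinct margin,
# which keeps SciPy's exact p-value method applicable (no tied differences).
posttest = [pre + gain for pre, gain in zip(pretest, range(1, 24))]

# Two-sided test on the paired differences; a consistent improvement across
# all 23 pairs yields a very small p-value.
stat, p = wilcoxon(posttest, pretest)
print(f"W = {stat}, p = {p:.2e}")
```

In a pretest-posttest design like this one, the signed-rank test is preferred over a paired t-test when scores cannot be assumed normally distributed, which is common with small samples such as n = 23.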
Evaluating Generative AI Tools for Improving English Writing Skills: A Preliminary Comparison of ChatGPT-4, Google Gemini, and Microsoft Copilot
ai tools english writing skills generative ai...
This preliminary study examines how three generative AI tools, ChatGPT-4, Google Gemini, and Microsoft Copilot, support B+ level English as a Foreign Language (EFL) students in opinion essay writing. Conducted at a preparatory school in Türkiye, the study explored student use of the tools for brainstorming, outlining, and feedback across three essay tasks. A mixed methods design combined rubric-based evaluations, surveys, and reflections. Quantitative results showed no significant differences between tools for most criteria, indicating comparable performance in idea generation, essay structuring, and feedback. The only significant effect was in the feedback stage, where ChatGPT-4 scored higher than both Gemini and Copilot for actionability. In the brainstorming stage, a difference in argument relevance was observed across tools, but this was not statistically significant after post-hoc analysis. Qualitative findings revealed task-specific preferences: Gemini was favored for clarity and variety in brainstorming and outlining, ChatGPT-4 for detailed, clear, and actionable feedback, and Copilot for certain organizational strengths. While the tools performed similarly overall, perceptions varied by task and tool, highlighting the value of allowing flexible tool choice in EFL writing instruction.
Factors Contributing to Higher Education Students' Acceptance of Artificial Intelligence: A Systematic Review
ai acceptance artificial intelligence higher education systematic review...
The rapid integration of artificial intelligence (AI) technologies into higher education has prompted widespread public discourse. However, existing research is fragmented and lacks systematic synthesis, which limits understanding of how college and university students adopt AI technologies. To address this gap, we conducted a systematic review following the PRISMA guidelines, drawing on the ScienceDirect, Web of Science, Scopus, PsycARTICLES, SocINDEX, and Embase databases. A total of 5,594 articles were identified in the database search; 112 articles were included in the review. The inclusion criteria were: (i) publication date; (ii) language; (iii) participants; (iv) object of research. The results showed that: (a) the Technology Acceptance Model and the Unified Theory of Acceptance and Use of Technology are most often used to explain AI acceptance; (b) quantitative research methods prevail; (c) AI is mainly used by students to search for and process information; (d) technological factors are the most significant predictors of AI acceptance; and (e) gender, specialty, and country of residence influence AI acceptance. Finally, several problems and opportunities for future research are highlighted, including psychological well-being, students’ personal and academic development, and the importance of financial, educational, and social support for students amid the widespread adoption of artificial intelligence.
Comparing ChatGPT and Gemini on a Two-Tier Static Fluid Test: Capability and Scientific Consistency
chatgpt comparative study gemini static fluid two-tier test...
This study examined the capability and scientific consistency of ChatGPT and Gemini on a two-tier static fluid test, comparing them with those of students. The study drew on 60 new chats with ChatGPT and Gemini, 120 students in 8th and 9th grade, 129 students in 11th and 12th grade, 260 undergraduate elementary teacher education students (across four cohorts), and 51 students from the professional education program for elementary school teachers. Data were collected through online testing for student participants and through prompting for ChatGPT and Gemini, using a 25-item two-tier test. Quantitative analysis compared capability and consistency scores across all subjects, and qualitative-descriptive analysis examined the capability and scientific-consistency behavior of ChatGPT and Gemini. The analysis showed that the capability and scientific consistency of ChatGPT-4 and Gemini on this test type were categorized as low, below the entry threshold, though still higher than those of the students. Both generative AI systems performed better at providing theoretical justifications or reasoning than at answering factual questions about static fluids. ChatGPT outperformed Gemini only in the combined scores for Tier-1 and Tier-2 items. Both systems demonstrated conceptual insight into and understanding of static fluids, though their responses sometimes contained biases and contradictions. As systems built on large language models, ChatGPT and Gemini rely heavily on data availability and would require a more extensive and diverse database of static fluid cases.
Mapping and Exploring Strategies to Enhance Critical Thinking in the Artificial Intelligence Era: A Bibliometric and Systematic Review
ai era critical thinking higher education pedagogical strategies personalized learning...
The emergence of artificial intelligence (AI) has transformed higher education, creating both opportunities and challenges in cultivating students’ critical thinking skills. This study integrates quantitative bibliometric analysis and a qualitative systematic literature review (SLR) to map global research trends and identify how critical thinking is conceptualized, constructed, and developed in the AI era. Scopus served as the primary data source, limited to publications from 2022 to 2024, retrieved on February 8, 2025. Bibliometric analysis using Biblioshiny R and VOSviewer followed five stages (design, data collection, analysis, visualization, and interpretation), while the SLR employed a deductive thematic approach consistent with PRISMA guidelines. A total of 322 documents were analyzed bibliometrically, and 34 were included in the qualitative synthesis. Results show that Education Sciences and Cogent Education are the most productive journals, whereas Education and Information Technologies has the highest citation impact. Several influential documents and authors have shaped global discussions on AI adoption in higher education and its relationship to critical thinking. Thematic mapping identified five major research clusters: pedagogical integration, ethical and evaluative practices, technical and application-oriented AI models, institutional accountability, and socio-technical systems thinking. Conceptually, critical thinking is understood as a reflective, evaluative, and metacognitive reasoning process grounded in intellectual autonomy and ethical judgment. Across the reviewed literature, strategies for fostering critical thinking converge into three integrated approaches: ethical curriculum integration, pedagogical and assessment redesign, and reflective human-AI collaboration. Collectively, these strategies help ensure that AI strengthens rather than replaces human reasoning in higher education.