Factors Contributing to Higher Education Students' Acceptance of Artificial Intelligence: A Systematic Review
ai acceptance artificial intelligence higher education systematic review...
The rapid integration of artificial intelligence (AI) technologies into higher education has sparked widespread public discourse. However, existing research is fragmented and lacks systematic synthesis, which limits understanding of how college and university students adopt AI technologies. To address this gap, we conducted a systematic review following the PRISMA guidelines, including studies from the ScienceDirect, Web of Science, Scopus, PsycARTICLES, SocINDEX, and Embase databases. A total of 5594 articles were identified in the database search; 112 articles were included in the review. The criteria for inclusion were: (i) publication date; (ii) language; (iii) participants; (iv) object of research. The results showed: (a) the Technology Acceptance Model and the Unified Theory of Acceptance and Use of Technology are most often used to explain AI acceptance; (b) quantitative research methods prevail; (c) AI is mainly used by students to search for and process information; (d) technological factors are the most significant factors in AI acceptance; (e) gender, specialty, and country of residence influence AI acceptance. Finally, several problems and opportunities for future research are highlighted, including psychological well-being, students' personal and academic development, and the importance of financial, educational, and social support for students in the context of widespread artificial intelligence.
Enhancing Readability Assessment for Language Learners: A Comparative Study of AI and Traditional Metrics in German Textbooks
educational technology foreign language education learning materials readability assessment text analysis...
Text readability assessment is fundamental to foreign language education because it directly affects students' ability to understand their course materials. Whether current tools, including ChatGPT, can measure text readability precisely remains uncertain. Readability describes the ease with which readers can understand written material; its level is determined by vocabulary complexity and sentence structure, along with syllable counts and sentence length. Traditional readability formulas rely on data from native speakers and therefore fail to address the specific requirements of language learners. The absence of appropriate readability assessment methods for foreign language instruction underscores the need for specialized approaches in this field. This research investigates the potential use of ChatGPT to evaluate text readability for foreign language students. Selected German textbooks were analyzed with ChatGPT to determine their readability levels, and the results were evaluated against traditional readability assessment approaches and established formulas. The research aims to establish whether ChatGPT provides an effective method for evaluating educational texts in foreign language instruction, and it assesses ChatGPT's capabilities beyond technical aspects: the study examines how this technology may influence students' learning experiences and outcomes. ChatGPT's text clarity evaluation capabilities might lead to innovative approaches for developing educational tools, and implementing this approach could generate lasting benefits for educational practice in schools. For example, ChatGPT's readability classifications correlated strongly with Flesch-Kincaid scores (r = .75, p < .01), and its mean readability rating (M = 2.17, SD = 1.00) confirmed its sensitivity to text complexity.
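For readers unfamiliar with the traditional formulas the abstract compares against, the Flesch-Kincaid Grade Level is computed from words per sentence and syllables per word. A minimal sketch follows; the syllable counter here is a rough vowel-group heuristic of my own (real implementations use dictionary-based counts), so the numbers are approximate illustrations, not the tooling used in the study.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (at least 1 per word).
    This is a simplification; dictionary-based counters are more accurate."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """FK Grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Very simple text scores near (or below) grade 0; denser text scores higher.
sample = "The cat sat on the mat. It was a sunny day."
print(round(flesch_kincaid_grade(sample), 2))
```

Note that the formula's coefficients were calibrated on native English readers, which is exactly the limitation for language learners that the abstract highlights.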
The Effects of Improvised Molecular Kit on Student Academic Performance in Organic Chemistry
academic performance attitude chemistry education molecular kit organic chemistry...
The complexity of naming and writing the structures of functional groups presents a challenge for many students, often leading to difficulties in mastering fundamental concepts in organic chemistry. This underscores the need for an innovative teaching tool to improve students' understanding of, and attitudes toward, the abstract concepts of organic chemistry. This study examined the effect of an improvised molecular kit on students' academic performance and attitudes in organic chemistry, focusing on the concepts of alcohols, aldehydes, carboxylic acids, esters, ethers, and ketones. A quasi-experimental pretest-posttest design was employed, comparing a control group (traditional teaching methods) with an experimental group (using the improvised molecular kit). Pretest results indicated that both groups initially "did not meet the expectation" in all topics. However, posttest scores showed significant improvement, with the experimental group achieving higher mean scores, while the control group remained at a level of "fairly satisfactory" to "satisfactory." ANCOVA confirmed statistically significant differences (p < .001) in learning gains, demonstrating the effectiveness of the molecular kit. Furthermore, students' attitudes toward the kit were positive, with strong agreement on its ability to enhance engagement, understanding, and visualization of molecular structures. These findings suggest that the improvised molecular kit is an effective instructional tool that improves conceptual retention and fosters a more interactive learning experience. Integrating hands-on learning strategies into organic chemistry could significantly enhance students' comprehension and overall academic performance.