'microsoft copilot' Search Results
Generative AI-Assisted Phenomenon-Based Learning: Exploring Factors Influencing Competency in Constructing Scientific Explanations
Keywords: constructing scientific explanations; factors; generative AI; Microsoft Copilot; phenomenon-based learning...
Developing students' competency in constructing scientific explanations is a critical aspect of science learning. However, limited research has explored the role of generative artificial intelligence (Gen AI) in fostering this competency, and the factors influencing its development in Gen AI-assisted learning environments remain underexamined. This study compared students' competency in constructing scientific explanations before and after phenomenon-based learning with Microsoft Copilot and investigated the factors influencing the development of this competency. A pretest-posttest quasi-experimental design was employed with 23 eighth-grade students from an all-girls school in Thailand. The research instruments included lesson plans for phenomenon-based learning with Microsoft Copilot, a competency test for constructing scientific explanations, and a mixed-format questionnaire. A Wilcoxon signed-rank test revealed a statistically significant improvement in students' competency in constructing scientific explanations after the learning intervention (Z = 4.213, p < .001). Thematic analysis identified four key factors contributing to this development: (a) the role of Microsoft Copilot in enhancing deep understanding, (b) connecting theories to real-world phenomena through learning media, (c) collaborative learning activities, and (d) enjoyable learning experiences and student engagement. These findings suggest that integrating Gen AI technology with phenomenon-based learning can effectively enhance students' competency in constructing scientific explanations, offering valuable insights for the development of technology-enhanced science education.
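For readers unfamiliar with the test reported above, the following is a minimal sketch of how a pretest-posttest comparison of this kind can be run in Python with scipy. The paired scores are invented placeholders, not the study's data; only the choice of test (Wilcoxon signed-rank, appropriate for small paired samples) matches the abstract.

```python
# Minimal sketch of a pretest-posttest Wilcoxon signed-rank comparison.
# The scores below are hypothetical, NOT the study's data.
from scipy.stats import wilcoxon

# Hypothetical paired scores for n = 23 students.
pretest  = [12, 9, 14, 10, 8, 11, 13, 7, 10, 12, 9, 15,
            11, 8, 13, 10, 12, 9, 14, 11, 10, 13, 8]
posttest = [17, 15, 20, 14, 13, 18, 19, 12, 17, 16, 14, 22,
            18, 13, 19, 17, 18, 14, 21, 16, 15, 20, 12]

# Two-sided test on the paired differences; a small p-value indicates
# a significant change from pretest to posttest. (The abstract's Z value
# comes from the normal approximation to this statistic.)
stat, p = wilcoxon(pretest, posttest)
print(f"W = {stat:.1f}, p = {p:.4g}")
```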
Evaluating Generative AI Tools for Improving English Writing Skills: A Preliminary Comparison of ChatGPT-4, Google Gemini, and Microsoft Copilot
Keywords: AI tools; English writing skills; generative AI...
This preliminary study examines how three generative AI tools, ChatGPT-4, Google Gemini, and Microsoft Copilot, support B+ level English as a Foreign Language (EFL) students in opinion essay writing. Conducted at a preparatory school in Türkiye, the study explored students' use of the tools for brainstorming, outlining, and feedback across three essay tasks. A mixed-methods design combined rubric-based evaluations, surveys, and reflections. Quantitative results showed no significant differences between the tools on most criteria, indicating comparable performance in idea generation, essay structuring, and feedback. The only significant effect was at the feedback stage, where ChatGPT-4 scored higher than both Gemini and Copilot on actionability. At the brainstorming stage, a difference in argument relevance was observed across tools, but it did not remain statistically significant after post-hoc analysis. Qualitative findings revealed task-specific preferences: Gemini was favored for clarity and variety in brainstorming and outlining, ChatGPT-4 for detailed, clear, and actionable feedback, and Copilot for certain organizational strengths. While the tools performed similarly overall, perceptions varied by task and tool, highlighting the value of allowing flexible tool choice in EFL writing instruction.
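The abstract does not name the statistical procedures behind its comparisons, so the sketch below is purely illustrative: a Friedman test with Bonferroni-corrected pairwise Wilcoxon tests is one common choice when the same students rate three tools on a rubric criterion, but it is not necessarily the analysis the authors used. All scores are invented.

```python
# Hypothetical comparison of rubric scores for one criterion across three
# tools rated by the same students (repeated measures). Illustrative only.
from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon

scores = {  # invented rubric scores per student
    "ChatGPT-4": [4, 5, 4, 3, 5, 4, 4, 5, 3, 4],
    "Gemini":    [4, 4, 3, 3, 4, 4, 3, 4, 3, 4],
    "Copilot":   [3, 4, 3, 3, 4, 3, 3, 4, 3, 3],
}

# Omnibus test across the three related samples.
stat, p = friedmanchisquare(*scores.values())
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4g}")

# Post-hoc: pairwise Wilcoxon signed-rank tests, Bonferroni-corrected.
pairs = list(combinations(scores, 2))
for a, b in pairs:
    _, p_pair = wilcoxon(scores[a], scores[b])
    print(f"{a} vs {b}: p = {min(p_pair * len(pairs), 1.0):.4g} (Bonferroni)")
```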