Search results for "automated writing evaluation"
The Effect of Project Based Assessment with Value Clarification Technique in Improving Students’ Civics Learning Outcomes by Controlling the Family Environment
Keywords: family environment, project-based assessment, learning outcomes, VCT learning...
The decline in student character is reflected in low student learning outcomes. Student learning outcomes are influenced by several factors, one of which is a teacher-centered, monotonous learning model. For this reason, research was deemed necessary to determine the effect of project-based assessment in value clarification technique (VCT) learning on improving students' learning outcomes while controlling for the family environment. This study used a 2x2 factorial experimental design. A sample of 120 students was selected through multistage random sampling, and the data were analyzed with a two-way ANCOVA. After controlling for the family environment, the findings were: 1) civics learning outcomes of students who used value clarification techniques were higher than those of students using conventional learning models, and 2) civics learning outcomes of students who were given project-based assessments were higher than those of students given conventional assessments. It is therefore recommended that civics education teachers use appropriate VCT and project-based assessments to improve learning outcomes.
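The two-way ANCOVA described in this abstract compares two binary factors (learning model, assessment type) on the outcome while adjusting for the family-environment covariate. As a minimal sketch of that adjustment, using synthetic data and hypothetical effect sizes (none of these numbers come from the study), the model can be fit by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # sample size matching the abstract

# Synthetic data (illustrative only): two binary factors and a covariate
vct = rng.integers(0, 2, n)        # 1 = VCT learning model, 0 = conventional
project = rng.integers(0, 2, n)    # 1 = project-based assessment, 0 = conventional
family_env = rng.normal(0, 1, n)   # family-environment covariate (the control)

# Assumed true effects for the simulation (hypothetical values)
score = 70 + 5 * vct + 4 * project + 3 * family_env + rng.normal(0, 2, n)

# Design matrix: intercept, both factors, their interaction, and the covariate
X = np.column_stack([np.ones(n), vct, project, vct * project, family_env])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# Covariate-adjusted effect estimates
print(f"VCT effect:      {beta[1]:.2f}")
print(f"Project effect:  {beta[2]:.2f}")
print(f"Interaction:     {beta[3]:.2f}")
print(f"Covariate slope: {beta[4]:.2f}")
```

Including `family_env` in the design matrix is what "controlling the family environment" means operationally: the factor effects are estimated net of the covariate's contribution to the outcome.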
Intermediality in Student Writing: A Preliminary Study on The Supportive Potential of Generative Artificial Intelligence
Keywords: artificial intelligence, automated writing evaluation, ChatGPT, intermedia, transmedia...
The growing field of writing education increasingly intersects with technological innovations, particularly generative artificial intelligence (GenAI) resources. Despite extensive research on automated writing evaluation systems, no empirical investigation has yet been reported on GenAI's potential for cultivating intermedial writing skills in first-language contexts. The present study explored the impact of ChatGPT as a writing assistant on university literature students' intermedial writing proficiency. Employing a quasi-experimental design with a non-equivalent control group, the researchers examined 52 undergraduate students' essays over a 12-week intervention. Participants in the treatment group used the conversational agent for iterative essay refinement, while the control group followed traditional writing processes. Using a four-dimensional assessment rubric, the researchers analyzed essays for the relevance, integration, specificity, and balance of intermedial references. Quantitative analyses revealed significant improvements in the AI-assisted group, particularly in the relevance and insight facets. The findings add to the research on technology-empowered writing instruction.
Evaluating Generative AI Tools for Improving English Writing Skills: A Preliminary Comparison of ChatGPT-4, Google Gemini, and Microsoft Copilot
Keywords: AI tools, English writing skills, generative AI...
This preliminary study examines how three generative AI tools, ChatGPT-4, Google Gemini, and Microsoft Copilot, support B+ level English as a Foreign Language (EFL) students in opinion essay writing. Conducted at a preparatory school in Türkiye, the study explored students' use of the tools for brainstorming, outlining, and feedback across three essay tasks. A mixed-methods design combined rubric-based evaluations, surveys, and reflections. Quantitative results showed no significant differences between tools on most criteria, indicating comparable performance in idea generation, essay structuring, and feedback. The only significant effect was in the feedback stage, where ChatGPT-4 scored higher than both Gemini and Copilot for actionability. In the brainstorming stage, a difference in argument relevance was observed across tools, but it was not statistically significant after post-hoc analysis. Qualitative findings revealed task-specific preferences: Gemini was favored for clarity and variety in brainstorming and outlining, ChatGPT-4 for detailed, clear, and actionable feedback, and Copilot for certain organizational strengths. While the tools performed similarly overall, perceptions varied by task and tool, highlighting the value of allowing flexible tool choice in EFL writing instruction.
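The analysis pattern this abstract describes (an omnibus test across the three tools per rubric criterion, followed by post-hoc pairwise comparisons) can be sketched as follows. This is an assumption-laden illustration: the scores are synthetic, the group means are invented, and Bonferroni-corrected t-tests stand in for whatever post-hoc procedure the study actually used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic 1-5 rubric scores for one criterion (illustrative only;
# the per-tool means here are assumptions, not the study's data)
scores = {
    "ChatGPT-4": rng.normal(4.2, 0.5, 30).clip(1, 5),
    "Gemini":    rng.normal(4.0, 0.5, 30).clip(1, 5),
    "Copilot":   rng.normal(4.0, 0.5, 30).clip(1, 5),
}

# Omnibus one-way ANOVA across the three tools
f_stat, p_omni = stats.f_oneway(*scores.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_omni:.3f}")

# Bonferroni-corrected pairwise t-tests (one common post-hoc choice)
pairs = [("ChatGPT-4", "Gemini"), ("ChatGPT-4", "Copilot"), ("Gemini", "Copilot")]
alpha = 0.05 / len(pairs)  # corrected significance threshold
for a, b in pairs:
    t, p = stats.ttest_ind(scores[a], scores[b])
    verdict = "significant" if p < alpha else "not significant"
    print(f"{a} vs {b}: p = {p:.3f} ({verdict})")
```

The correction step is why an apparent difference (such as the brainstorming-stage relevance gap in the abstract) can fail to reach significance post hoc: each pairwise comparison must clear a stricter threshold than the omnibus test.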