'readability assessment' Search Results
Development of Web-based Application for Teacher Candidate Competence Instruments: Preparing Professional Teachers in the IR 4.0 Era
instrument application IR 4.0 pedagogy professional social personality...
This research aimed to develop a web-based application for teacher candidate competence instruments to prepare professional teachers in the Industrial Revolution 4.0 (IR 4.0) era. Teacher candidate competences comprise pedagogical, professional, social, and personality competences. This is a research and development study with 8 stages, involving development of instrument grids/constructs, focus group discussions, instrument item development, instrument validation, manual instrument testing, application development, application assessment by experts, an application trial, and final revision of the application. The initial focus group discussions involved 9 experts, while instrument validation involved 35 experts (21 for pedagogical and professional competences, 7 for social competences, and 7 for personality competences), plus 4 media experts. The trial involved a total of 107 Mathematics, Indonesian, and English student teacher candidates. Expert validation was analyzed using the Aiken formula; application effectiveness and readability were described based on expert judgment; and differences in social and personality competence test results between the study programs were tested with Multivariate Analysis of Variance (MANOVA). The results showed no differences in social and personality competences among prospective Mathematics, Indonesian, and English teachers. The developed instruments for pedagogical, professional, personality, and social competences were deemed valid. The application met the readability criterion and was rated well by experts, with an average assessment rating of .78. These results suggest that the government can use the application to assess teacher candidate competences in the IR 4.0 era.
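The Aiken formula cited above aggregates expert ratings into a content-validity index, V = Σ(r_i − l) / (n(c − 1)), where r_i is each rater's score, l is the lowest rating category, c is the number of categories, and n is the number of raters. A minimal sketch of that computation, with illustrative ratings rather than the study's data:

```python
# Minimal sketch of Aiken's V content-validity index; the ratings and
# rater count below are illustrative, not the study's data.

def aikens_v(ratings, lo=1, hi=5):
    """Aiken's V = sum(r_i - lo) / (n * (hi - lo)) for one item."""
    n = len(ratings)
    steps = hi - lo                     # c - 1, the number of rating steps
    s = sum(r - lo for r in ratings)    # total distance above the floor
    return s / (n * steps)

# Example: 7 hypothetical experts rating one item on a 1-5 relevance scale.
print(round(aikens_v([4, 5, 4, 4, 5, 3, 4], lo=1, hi=5), 2))  # -> 0.79
```

Values near or above the study's reported .78 average would indicate strong expert agreement that an item is relevant.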
Development of a Self-Evaluation Instrument with Programmatic Assessment Components for Undergraduate Medical Students
instrument development medical education programmatic assessment...
This study aimed to develop and test a student self-assessment instrument based on the components of programmatic assessment (PA). We applied a series of psychometric research methods: (a) a literature study to identify PA constructs, (b) development of the student self-assessment questionnaire, (c) content validation, (d) face-validity testing, and (e) reliability testing involving medical students, medical teachers, medical educationalists, and an international PA expert. Face validity (a readability test) was assessed with 30 medical students from an Indonesian university who were in their last year of pre-clinical education and had average scores at or above those of their classmates. Confirmatory factor analysis (CFA) was used to report the instrument's validity and reliability. The final instrument was tested on 121 medical students with excellent GPAs from another medical school with middle-level accreditation. The PA comprises five components: 'learning activities', 'assessment activities', 'supporting activities', 'intermediate evaluations', and 'final evaluations'. These components are covered by 41 statements rated on a four-point Likert scale plus three yes/no statements. According to the respondents, the 'supporting activities' and 'intermediate evaluations' components were lacking in their universities' PA implementation. This study developed and tested a five-component evaluation instrument based on medical students' perceptions of PA implementation.
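The abstract does not name the reliability statistic used in step (e); a common choice for Likert-scale questionnaires of this kind is Cronbach's alpha, so the following is an assumption-labeled sketch with synthetic responses, not the study's method or data:

```python
# Illustrative Cronbach's alpha for an internal-consistency check on a
# Likert questionnaire; the response matrix below is synthetic.
import numpy as np

def cronbach_alpha(items):
    """items: (respondents x items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                      # number of items
    item_vars = items.var(axis=0, ddof=1)   # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 5, size=(30, 1))     # shared trait per respondent
responses = np.clip(base + rng.integers(0, 2, size=(30, 5)), 1, 4)
print(round(cronbach_alpha(responses), 2))
```

Alpha values of roughly .70 or higher are conventionally read as acceptable internal consistency.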
Text Comprehension as a Mediator in Solving Mathematical Reality-Based Tasks: The Impact of Linguistic Complexity, Cognitive Factors, and Social Background
experimental design language in mathematics linguistic complexity mediation analysis reality-based tasks...
Successfully solving reality-based tasks requires both mathematical and text comprehension skills. Previous research has shown that mathematical tasks requiring language proficiency have lower solution rates than those that do not, indicating that textual input increases difficulty. It is therefore plausible that a lack of text comprehension skills leads to performance problems. Given that different sociodemographic characteristics and cognitive factors can influence task performance, this study aims to determine whether text comprehension mediates the relationship between these factors and competence in solving reality-based tasks. It additionally examines the impact of systematic linguistic variation in the task texts. In an experimental design, 428 students completed three reality-based tasks (word count: M = 212.4, SD = 19.7) of differing linguistic complexity as part of a paper-pencil test. Students first answered questions on the situation-related text comprehension of each text, followed by a mathematical question measuring their competence in solving reality-based tasks. The results indicate that: (a) tasks with texts of lower linguistic complexity have significantly higher solution rates for both text comprehension (d = 0.189) and the mathematical tasks (d = 0.119); (b) cognitive factors are significant predictors of mathematical solutions; and (c) text comprehension mediates the relationship between students' cultural resources and cognitive factors, on the one hand, and their competence in solving reality-based tasks, on the other. These findings highlight the importance of linguistic complexity for mathematical outcomes and underscore the need to reinforce text comprehension practice in mathematics education, given its mediating role.
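A mediation claim of this shape is often checked with the product-of-coefficients approach: regress the mediator on the predictor (path a), then the outcome on both (paths b and c'), and take a·b as the indirect effect. A minimal sketch with synthetic data standing in for a cognitive factor, text comprehension, and task performance (the abstract does not specify the authors' estimation procedure):

```python
# Sketch of a product-of-coefficients mediation check (X -> M -> Y);
# all data below are synthetic, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 428
x = rng.normal(size=n)                       # e.g. a cognitive factor
m = 0.5 * x + rng.normal(size=n)             # text comprehension (mediator)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # task performance (outcome)

# Path a: predictor -> mediator
a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
# Paths c' (direct) and b (mediator -> outcome), estimated jointly
fit = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
c_prime, b = fit.params[1], fit.params[2]

print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
```

A nonzero a·b alongside a shrunken direct effect c' is the pattern consistent with (partial) mediation.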
Enhancing Readability Assessment for Language Learners: A Comparative Study of AI and Traditional Metrics in German Textbooks
educational technology foreign language education learning materials readability assessment text analysis...
Text readability assessment is a fundamental component of foreign language education because it directly determines students' ability to understand their course materials. Whether current tools, including ChatGPT, can measure text readability precisely remains uncertain. Readability describes the ease with which readers can understand written material; vocabulary complexity and sentence structure, along with syllable counts and sentence length, determine its level. Traditional readability formulas rely on data from native speakers and therefore fail to address the specific requirements of language learners; the absence of readability assessment methods suited to foreign language instruction demonstrates the need for specialized approaches in this field. This research investigates whether ChatGPT can evaluate text readability for foreign language students. Selected German textbooks were analyzed with ChatGPT to determine their readability levels, and the results were evaluated against traditional readability assessment approaches and established formulas. The research aims to establish whether ChatGPT provides an effective method for evaluating educational texts for foreign language instruction, looking beyond technical capabilities to how the technology may influence students' learning experiences and outcomes. ChatGPT's text clarity evaluation capabilities might lead to innovative approaches for developing educational tools and generate lasting benefits for educational practice in schools. For example, ChatGPT's readability classifications correlated strongly with Flesch-Kincaid scores (r = .75, p < .01), and its mean readability rating (M = 2.17, SD = 1.00) confirmed its sensitivity to text complexity.
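The comparison the abstract describes — a surface formula against model ratings, summarized by Pearson's r — can be sketched as below. The sketch uses the classic English Flesch Reading Ease constants for simplicity; a German adaptation (e.g. Amstad's) would swap in different coefficients, and all counts and ratings here are illustrative:

```python
# Sketch comparing a surface readability formula against hypothetical
# ChatGPT difficulty ratings via Pearson's r, in the spirit of the
# reported r = .75. English Flesch constants; not the study's data.
from scipy.stats import pearsonr

def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher scores mean easier text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# (words, sentences, syllables) per text, all illustrative.
texts = [(180, 12, 260), (210, 10, 340), (195, 14, 250), (220, 9, 380)]
fre = [flesch_reading_ease(w, s, sy) for w, s, sy in texts]
gpt_rating = [2.0, 3.5, 1.5, 4.0]  # hypothetical 1-5 difficulty scores

# r comes out negative here: FRE scores ease, the rating scores difficulty.
r, p = pearsonr(fre, gpt_rating)
print(f"r = {r:.2f}, p = {p:.3f}")
```

In practice the sign convention just depends on whether the model is asked to rate ease or difficulty; the magnitude of r is what supports a claim like the abstract's.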