European Journal of Educational Research

EU-JER is a leading, peer-reviewed research journal that provides an online forum for studies in education by and for scholars and practitioners worldwide.

Publisher (HQ)
Eurasian Society of Educational Research
Christiaan Huygensstraat 44, 7533 XB Enschede, The Netherlands

Search Results for 'readability assessment'



Development of Web-based Application for Teacher Candidate Competence Instruments: Preparing Professional Teachers in the IR 4.0 Era

Keywords: instrument, application, IR 4.0, pedagogy, professional, social, personality

Badrun Kartowagiran, Suyanta Suyanta, Syukrul Hamdi, Amat Jaedun, Ahman, Rusijono Rusijono, Lukman A.R. Laliyo


...

This research aimed to develop a web-based application for teacher candidate competence instruments to prepare professional teachers in the Industrial Revolution 4.0 (IR 4.0) era. Teacher candidate competencies consist of pedagogical, professional, social, and personality competences. This is a research and development study with 8 stages, involving the development of instrument grids/constructs, focus group discussions, instrument item development, instrument validation, manual instrument testing, application development, application assessment by experts, an application trial, and final revision of the application. The initial focus group discussions involved 9 experts, while the instrument validation involved 35 experts consisting of 21 experts for pedagogical and professional competences, 7 experts for social competences, 7 experts for personality competences, and 4 media experts. The trial involved a total of 107 Mathematics, Indonesian, and English student teacher candidates. Expert validation was analyzed using the Aiken formula; application effectiveness and readability were described based on expert judgment; and differences in the social and personality competence test results between study programs were tested using Multivariate Analysis of Variance. The results showed that there were no differences in social and personality competences between Mathematics, Indonesian, and English prospective teachers. The developed instruments for pedagogical, professional, personality, and social competences were deemed valid. The application met the readability requirement and was rated well by experts, with an average assessment rating of .78. These results suggest that the application can be used by the government as a solution to assess teacher candidate competences in the IR 4.0 era.
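For readers unfamiliar with the Aiken formula mentioned above, the sketch below shows how Aiken's V is typically computed for a single instrument item from a set of expert ratings. The ratings, the 1-5 scale, and the function name are illustrative assumptions; the abstract does not report the raw ratings or the exact scale the authors used.

```python
def aikens_v(ratings, lo=1, hi=5):
    """Aiken's V for one item: V = sum(r_i - lo) / (n * (hi - lo)).

    ratings: expert ratings of the item on a scale from `lo` to `hi`.
    Values close to 1 indicate strong expert agreement that the item is valid.
    """
    n = len(ratings)
    steps = hi - lo  # number of scale steps above the minimum (c - 1)
    return sum(r - lo for r in ratings) / (n * steps)

# Hypothetical ratings from 21 experts on a 5-point scale
item_ratings = [4, 5, 4, 4, 5, 3, 4, 4, 5, 4, 4, 5, 4, 3, 4, 5, 4, 4, 4, 5, 4]
print(f"Aiken's V = {aikens_v(item_ratings):.2f}")  # ~0.80 for this made-up item
```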

DOI: 10.12973/eu-jer.9.4.1749
Pages: 1749-1763
Article Metrics: 3168 views, 1045 downloads, 5 Crossref citations, 5 Scopus citations

Development of a Self-Evaluation Instrument with Programmatic Assessment Components for Undergraduate Medical Students

Keywords: instrument development, medical education, programmatic assessment

Dina Qurratu Ainin, Yoyo Suhoyo, Artha Budi Susila Duarsa, Mora Claramita


...

This study aimed to develop and test a student self-assessment instrument based on the programmatic assessment (PA) components. We applied a series of psychometric research methods by (a) conducting a literature study to find PA constructs, (b) developing the students' self-questionnaires, (c) ensuring content validity, (d) testing face validity, and (e) conducting reliability tests that involved medical students, medical teachers, medical educationalists, and an international PA expert. Face validity (a readability test) was conducted with 30 medical students from an Indonesian university who were in their last year of pre-clinical education and had average scores equal to or above those of their classmates. Confirmatory factor analysis (CFA) was used to report the instrument's validity and reliability. The final instrument was tested on 121 medical students with excellent GPAs from another medical school with a middle-level accreditation. The PA consists of five components: 'learning activities', 'assessment activities', 'supporting activities', 'intermediate evaluations', and 'final evaluations'. These components are conveyed through 41 relevant statements with a four-point Likert scale and three yes/no statements. According to the respondents, the 'supporting activities' and 'intermediate evaluations' components were lacking in the PA at their universities. This study has developed and tested a five-component evaluation instrument based on medical students' perceptions regarding PA implementation.
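The abstract reports reliability testing alongside CFA but does not name the reliability statistic; as a hedged illustration, the sketch below computes Cronbach's alpha, a common internal-consistency measure for Likert-type questionnaires. The response matrix and item count are invented for demonstration and are not the study's data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / variance of total score).

    item_scores: 2-D array, rows = respondents, columns = questionnaire items.
    """
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of each respondent's total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 4-point Likert responses: 6 students x 5 items
responses = [
    [3, 4, 3, 4, 3],
    [2, 2, 3, 2, 2],
    [4, 4, 4, 3, 4],
    [1, 2, 1, 2, 1],
    [3, 3, 4, 3, 3],
    [2, 3, 2, 2, 3],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```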

DOI: 10.12973/eu-jer.12.2.649
Pages: 649-662
Article Metrics: 2345 views, 585 downloads, 2 Crossref citations, 1 Scopus citation

...

Successfully solving reality-based tasks requires both mathematical and text comprehension skills. Previous research has shown that mathematical tasks requiring language proficiency have lower solution rates than those that do not, indicating increased difficulty through textual input. Therefore, it is plausible to assume that a lack of text comprehension skills leads to performance problems. Given that different sociodemographic characteristics and cognitive factors can influence task performance, this study aims to determine whether text comprehension mediates the relationship between these factors and competence in solving reality-based tasks. Additionally, it examines the impact of systematic linguistic variation in the texts. Using an experimental design, 428 students completed three reality-based tasks (word count: M = 212.4, SD = 19.7) with different linguistic complexities as part of a paper-pencil test. First, students answered questions about the situation-related text comprehension of each text, followed by a mathematical question to measure their competence in solving reality-based tasks. The results indicate that (a) tasks with texts of lower linguistic complexity have a significantly higher solution rate for both text comprehension (d = 0.189) and mathematical tasks (d = 0.119); (b) cognitive factors are significant predictors of mathematical solutions; and (c) text comprehension mediates the relationship between students' cultural resources and cognitive factors and their competence in solving reality-based tasks. These findings highlight the importance of linguistic complexity for mathematical outcomes and underscore the need to reinforce text comprehension practice in mathematics education, owing to its mediating role.
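As a small, self-contained illustration of the effect sizes reported above, the sketch below computes Cohen's d with a pooled standard deviation for two hypothetical groups of solution scores (tasks with lower vs. higher linguistic complexity). The scores are invented; the study's raw data are not reproduced in the abstract.

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d with pooled SD:
    d = (mean_a - mean_b) / sqrt(((n_a-1)*s_a^2 + (n_b-1)*s_b^2) / (n_a + n_b - 2))
    """
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    n_a, n_b = len(a), len(b)
    pooled_var = ((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical per-student solution rates for the two task versions
lower_complexity  = [0.8, 0.6, 0.7, 0.9, 0.5, 0.7, 0.8, 0.6]
higher_complexity = [0.6, 0.5, 0.7, 0.7, 0.4, 0.6, 0.7, 0.5]
print(f"d = {cohens_d(lower_complexity, higher_complexity):.3f}")
```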

DOI: 10.12973/eu-jer.14.1.23
Pages: 23-39
Article Metrics: 2644 views, 599 downloads, 2 Crossref citations, 1 Scopus citation

...

Text readability assessment is a fundamental component of foreign language education because it directly affects students' ability to understand their course materials. The ability of current tools, including ChatGPT, to precisely measure text readability remains uncertain. Readability describes the ease with which readers can understand written material; its level is determined by vocabulary complexity, sentence structure, syllable counts, and sentence length. Traditional readability formulas rely on data from native speakers and thus fail to address the specific requirements of language learners. The absence of appropriate readability assessment methods for foreign language instruction demonstrates the need for specialized approaches in this field. This research investigates the potential use of ChatGPT to evaluate text readability for foreign language students. Selected textbook texts were analyzed with ChatGPT to determine their readability levels, and the results were evaluated against traditional readability assessment approaches and established formulas. The research aims to establish whether ChatGPT provides an effective method for evaluating educational texts for foreign language instruction. Beyond these technical aspects, the study also examines how the technology may influence students' learning experiences and outcomes. ChatGPT's text clarity evaluation capabilities might lead to innovative approaches for developing educational tools, and implementing this approach could generate lasting benefits for educational practices in schools. For example, ChatGPT's readability classifications correlated strongly with Flesch-Kincaid scores (r = .75, p < .01), and its mean readability rating (M = 2.17, SD = 1.00) confirmed its sensitivity to text complexity.
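To make the comparison with established formulas concrete, the sketch below computes the Flesch-Kincaid Grade Level with its standard formula (using a deliberately naive syllable counter) and correlates it with a handful of model-assigned readability ratings of the kind the study describes. The sample passages and the ratings are hypothetical; the study's actual texts, prompts, and scores are not reproduced here.

```python
import re
from scipy.stats import pearsonr

def count_syllables(word):
    """Very rough syllable estimate: count runs of vowels, at least one per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Hypothetical passages and hypothetical model-assigned difficulty ratings (1 = easy ... 4 = hard)
texts = [
    "The cat sat on the mat. It was warm.",
    "Students read short stories and answer simple questions about them.",
    "Comprehending authentic academic prose requires integrating lexical, syntactic, and discourse-level cues.",
]
model_ratings = [1, 2, 4]

grades = [fk_grade(t) for t in texts]
r, p = pearsonr(grades, model_ratings)
print(f"FK grades: {[round(g, 1) for g in grades]}, r = {r:.2f} (p = {p:.2f})")
```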

DOI: 10.12973/eu-jer.15.1.101
Pages: 101-119
Article Metrics: 215 views, 28 downloads, 0 Crossref citations

...