European Journal of Educational Research
Research Article

Intermediality in Student Writing: A Preliminary Study on The Supportive Potential of Generative Artificial Intelligence

Zhadyra Smailova, Saule Abisheva, Karlygash Zhapparkulova, Ainura Junissova, Khorlan Kaskabassova



  • Pub. date: July 15, 2025
  • Online Pub. date: May 29, 2025
  • Pages: 847-857

Abstract:


The proliferating field of writing education increasingly intersects with technological innovations, particularly generative artificial intelligence (GenAI) resources. Despite extensive research on automated writing evaluation systems, no empirical investigation has so far examined GenAI’s potential for cultivating intermedial writing skills within first language contexts. The present study explored the impact of ChatGPT as a writing assistant on university literature students’ intermedial writing proficiency. Employing a quasi-experimental design with a non-equivalent control group, researchers examined 52 undergraduate students’ essay writing over a 12-week intervention. Participants in the treatment group harnessed the conversational agent for iterative essay refinement, while the reference group followed traditional writing processes. Utilizing a comprehensive four-dimensional assessment rubric, researchers analyzed essays in terms of relevance, integration, specificity, and balance of intermedial references. Quantitative analyses revealed significant improvements in the AI-assisted group, particularly in the relevance and insight facets. The findings add to the research on technology-empowered writing learning.

Keywords: Artificial intelligence, automated writing evaluation, ChatGPT, intermedia, transmedia.


Introduction

Developing writing skills is a core and extensively researched area within language learning (Chen et al., 2023; Cong, 2025). These days, automated writing evaluation (AWE) systems are applied in pedagogical contexts “en masse” (Shi & Aryadoust, 2024) and are manifested in writing assistants ranging from countless grammar checkers to more sophisticated tools such as Cambridge’s Write & Improve. However, these solutions are limited to marking and scoring user-submitted texts. Moreover, these tools are aimed mainly at English as a second language learners (Fu et al., 2024), rather than supporting the mastery of one’s mother tongue. The recent release of freely available generative agents, such as ChatGPT, brings the potential for AWE to extend towards providing comprehensive and detailed content-based writing feedback not limited to English. This makes it possible for learners to advance not only in foreign language acquisition but also in L1 writing proficiency, which is highly relevant for those seeking to develop more advanced or specialized writing skills. One such area of specialized writing is intermedial writing, which requires not only linguistic proficiency but also the ability to navigate and integrate multiple forms of media and modes of expression (Hankin, 2021; Riu-Comut, 2025).

Intermedia

The concept of intermediality has undergone significant evolution over the past few decades, reflecting broader cultural and technological shifts. Originating from the Latin term ‘medium,’ which denotes the middle or an intermediate course of action, intermediality now encompasses the interactions between diverse media types and contexts (Jensen & Schirrmacher, 2024). As contemporary writing increasingly intersects with visual, auditory, and digital media, advanced feedback systems – especially those powered by neural networks – are uniquely positioned to support learners in mastering these complex, intermedial forms of writing.

The advent of hypertext and hypermedia theories in the 1990s, alongside the widespread digitalization of the era, enriched the discourse on intermediality, emphasizing the genuine integration of two or more material manifestations of different media. Scholars such as Lars Elleström and Irina Rajewsky have since expanded this field by distinguishing various types of intermedial relations, including media combination, intermedial references, and transmediation (He & Bruhn, 2023). One frequently mentioned instance of intermediality is Toni Morrison’s novel Jazz, where the author mimics the aesthetic and structural features of jazz music in its narrative technique, e.g., the interplay of ten sections of the book akin to instrumental solos (Caldwell, 2023).

Intermediality is a cornerstone of contemporary literary and artistic practices. Fostering the capacity to harness intermediality is essential in L1 writing education for several reasons. First, intermedial writing is likely to encourage students to think beyond the boundaries of a single medium, prompting creativity in their work. By integrating diverse media forms, students can create more layered narratives that resonate with current audiences (Aulia & Oktaviani, 2024). Tsang et al. (2022) advocate that intermediality can make learning experiences more tangible by helping students create something new and expanding the range of experiences available to them. Second, in today’s multimedia landscape, intermediality reflects the way people consume and interact with information across various platforms. Hence, developing intermedial writing skills prepares students for the realities of modern communication and artistic expression.

Intermedial writing transcends traditional academic boundaries, flowing seamlessly into the practical realm where content creators craft narratives that dance between platforms and media formats. By weaving together textual, visual, and auditory elements, communicators build more accessible, engaging, and memorable experiences. The creative industries – from publishing and film to gaming and advertising – increasingly value professionals who can think and compose across media boundaries (Sharkey et al., 2023). Content creation roles now routinely demand the seamless integration of text, visuals, audio, and interactive elements, reflecting our increasingly interconnected media ecosystem (Fusillo & Lino, 2024).

Generative Artificial Intelligence

Given these considerations, it is crucial to explore novel approaches to attaining the corresponding skills. In this context, artificial intelligence (AI) systems known as Generative AI (GenAI) have the capability to output diverse forms of media based on the data they were trained on (Sengar et al., 2024). A subset of GenAI, called large language models (LLMs), powers tools like ChatGPT and can, among other things, follow human instructions, handle code, and generate contextually relevant texts in response to input from users (Yang et al., 2024). The application of GenAI is expanding across various industries, where it is utilized to produce content (Formosa et al., 2024; Hidayat, 2024). By leveraging AI support, learners can receive immediate feedback on their writing, enabling swift revision. This real-time guidance also aids in grasping the basic principles of effective writing and highlighting areas for growth (Jackaria et al., 2024; Marzuki et al., 2023).

When evaluating the role of GenAI in writing education, research up to this point has mainly adopted a theoretical lens or focused on domains such as grammar and sentence structure (Kim et al., 2025; Li et al., 2024). The extant studies suggest that generative instruments can contribute to learners’ writing development, particularly by proposing sentence rewording options (Law, 2024). To our knowledge, however, there have been no previous empirical studies on GenAI-assisted intermedial writing. This is likely because the topic of transmediality is primarily addressed in literary and linguistic studies rather than in educational research.

This Study

To address the evidence gap delineated above, this preliminary study introduces a learning approach using a conversational agent as a writing assistant to support the cultivation of university students’ intermedial writing skills. This investigation aims to evaluate the impact of GenAI feedback on literature students’ intermedial writing skills. Specifically, the research question posed is: Does the use of ChatGPT as a writing assistant improve the quality of intermediality in literature essays compared to non-AI-facilitated writing? By addressing this question, the study seeks to pioneer the practical application of GenAI tools in intermedial writing education and lay the foundation for further research in this direction. In higher education L2 writing courses, incorporating feedback as an instructional tool has attracted growing attention given its acknowledged beneficial effects on writing proficiency (Lu et al., 2024). The present exploration is an attempt to extrapolate the concept to L1 contexts.

Methodology

Research Design

A pre/post-test quasi-experimental research design with a non-equivalent control group was employed. This design enabled the assessment of the intervention effect of GenAI while preventing intervention contamination by involving a historical control group.

Participants

Prior to the commencement of the study, the research proposal was reviewed and approved by the ethics committee of the third author’s institution of affiliation. The researchers then reached out to literature undergraduates through university administrations and social media platforms. The study’s purpose, along with the assurance of data confidentiality and the right to withdraw at any time, was clearly communicated to the potential participants. For control group subjects, two eligibility criteria were preset: (a) being an undergraduate literature student and (b) agreeing to write and submit six literature essays. In the case of the intervention group, potential participants were additionally required to report prior experience with GenAI conversational agents. A total of 58 students, who were enrolled in Russian language and literature undergraduate programs at various study years across three public universities in the authors’ country of affiliation, expressed interest and provided their informed consent to participate. To prevent the GenAI prompt used by active participants (explained in the following subsection) from being shared with controls, the latter were recruited and assessed a year ahead of the intervention group. In 2023 and 2024, a comparison group (n = 29) and an experimental group (n = 29) were created, respectively. Eventually, two individuals from the experimental group and four from the historical comparison group failed to complete all six assigned writing tasks and were subsequently excluded from the analysis, resulting in a final sample size of 52 participants (experimental group, n = 27; comparison group, n = 25).

Experimental Procedures

During the experiment, 27 literature students used ChatGPT or a publicly available Telegram bot backed by ChatGPT (participants could thus select a convenient way of communicating with the LLM) as a feedback assistant in creating appropriate intermedial references in literature essay writing (language: Russian). Researchers provided the experimental group participants with a self-crafted prompt, which the students entered into the bot’s conversation panel along with the essay text, thus enabling the chatbot to take on the role of an intermedial writing instructor. As a result, the participants iteratively refined their essays in response to GenAI feedback. Participants were instructed to use intermediality as a tool to augment their analysis, rather than as the primary focus of the essay. This intermedial practice lasted 12 weeks in the spring semester of 2024. Both groups adhered to identical submission deadlines, with each essay due approximately every two weeks. No strict word count was imposed. During this writing intervention, six different essays were therefore collected from each participant. Using a researcher-designed rubric, the texts were assessed by human raters in terms of how successfully intermediality was utilized. The results were contrasted with those from a comparison group of 25 literature students (who also wrote six essays over this period, but without the chatbot’s help) to see whether the AI-facilitated changes differed from natural development in student writing. Participants in neither group received teacher feedback on the essays. Before the full-scale study, a pilot study was conducted with a small focus group to refine the prompt, instructions, data collection instruments, and procedures. Below is the prompt for intermedial writing assistance (translated from Russian into English):

I am writing a literature essay in Russian. I am specifically looking to improve my use of intermedial references within this essay. Please review the essay below and provide targeted feedback focused on the following areas: Suggest specific points within my essay where incorporating references to other art forms (like film, music, visual arts, etc.) would strengthen my analysis or argument. Point out connections between my literary topic and other media that I might have missed. Guide me in making meaningful and insightful connections between my literary subject and the other art form(s) I reference. Help me explore the why and how of these connections, rather than just mentioning them superficially. Check the accuracy of my intermedial references and suggest clearer ways to express them. However, do not suggest any ready-to-use text. Ensure that my references are appropriate within the context of my essay and my expression. Do not rewrite or revise entire sentences or paragraphs for me. Instead, provide suggestions and guidance that I can leverage to improve my own writing. Focus on helping me refine my existing ideas and expression, rather than changing the overall style or argument of my essay. Please refrain from addressing general grammar, spelling, or punctuation issues, unless they directly impact the clarity of my intermedial references. Topic: [Inserted here]. Essay: [Inserted here]
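For readers who wish to script this interaction rather than paste the prompt manually, the sketch below shows one plausible way to send such a prompt to a ChatGPT-family model through the OpenAI Python SDK. It is purely illustrative: the study’s participants used the ChatGPT interface or a Telegram bot, and the model name, helper function, and placeholder text here are our assumptions, not details reported by the authors.

```python
# Illustrative sketch only: a programmatic version of the feedback loop the
# participants performed manually via ChatGPT or a Telegram bot.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Abbreviated stand-in for the full translated prompt quoted above.
FEEDBACK_PROMPT = (
    "I am writing a literature essay in Russian. ... "  # paste the full prompt here
    "Topic: {topic}. Essay: {essay}"
)

def get_intermedial_feedback(topic: str, essay: str,
                             model: str = "gpt-4o") -> str:
    """Request feedback-only guidance on a draft's intermedial references."""
    response = client.chat.completions.create(
        model=model,  # hypothetical choice; the study only says "ChatGPT"
        messages=[{"role": "user",
                   "content": FEEDBACK_PROMPT.format(topic=topic, essay=essay)}],
    )
    return response.choices[0].message.content

# Iterative refinement as described above: the student revises the draft
# after each round of feedback and resubmits the new version.
# feedback = get_intermedial_feedback("Toni Morrison's Jazz", draft_text)
```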

Data Source

At pre-test, participants completed a brief survey recording students’ gender (male/female), age (integer), and year of study (from first to fourth year). At post-test, students’ essays were scored using a researcher-designed scoring scheme (Table 1), which was piloted by two trained human raters on 40 essays from the web. The pilot study confirmed adequate inter-rater agreement, sufficient discriminatory power of the instrument, and the absence of floor and ceiling effects. The rubric comprises four criteria, each scored from 1 to 5, and focuses on how effectively and appropriately the student integrates references to other media into their essay about literature.

Table 1. Intermediality Assessment Rubric

Score Criteria
  Relevance & Insight
1 References are irrelevant or forced, distracting from the analysis of the literature.
2 References are tangential or superficial, offering limited insight into the literature.
3 References are generally related to the literature, but the connection could be clearer or more insightful.
4 References are clearly relevant to the analysis, providing new perspectives on the literary text.
5 Intermedial references are deeply insightful, adding significant layers of meaning to the analysis of the literature. The connections are original and thought-provoking.
  Integration
1 References are simply listed or mentioned with no attempt to integrate them into the analysis.
2 Integration is clumsy, and the reference feels detached from the main argument.
3 Integration is somewhat awkward or abrupt, requiring more explanation to connect the reference to the literary text.
4 References are smoothly integrated and contribute to the overall argument.
5 References are seamlessly woven into the essay’s argument, flowing naturally from the analysis of the literature. They are not merely added but are integral to understanding the essay.
  Specificity & Depth
1 References are inaccurate or misrepresent the referenced medium.
2 References are vague or general, offering little concrete information about the referenced medium.
3 References lack specific details, making it difficult to fully grasp the connection.
4 References are specific enough to be meaningful, but could be explored in greater depth.
5 References are specific and detailed, demonstrating a deep understanding of both the literature and the referenced medium.
  Balance & Focus
1 The essay becomes primarily about the referenced media, with minimal focus on the literary text.
2 References frequently overshadow the analysis of the literature, shifting the focus away from the primary text.
3 References occasionally distract from the focus on the literature.
4 References are balanced with the analysis of literature, contributing without dominating.
5 Intermedial references enhance the analysis of literature without overshadowing it. The essay maintains a clear focus on the literary text.
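To make the rubric’s mechanics concrete, here is a minimal sketch of how the four criteria could be encoded and a rater’s scores validated and totaled. All names are hypothetical; the study’s rating was done by human raters on paper, and the four domains were analyzed separately rather than summed.

```python
# Hypothetical encoding of the Table 1 rubric: four criteria, each scored 1-5.
RUBRIC_CRITERIA = (
    "Relevance & Insight",
    "Integration",
    "Specificity & Depth",
    "Balance & Focus",
)

def validate_and_total(scores: dict[str, int]) -> int:
    """Check that each criterion received a 1-5 score and return the total.

    Totals range from 4 to 20; note the study analyzed each domain
    separately rather than using a composite score.
    """
    for criterion in RUBRIC_CRITERIA:
        score = scores[criterion]
        if not 1 <= score <= 5:
            raise ValueError(f"{criterion!r} must be scored 1-5, got {score}")
    return sum(scores[c] for c in RUBRIC_CRITERIA)

# Example:
# validate_and_total({"Relevance & Insight": 4, "Integration": 3,
#                     "Specificity & Depth": 4, "Balance & Focus": 5})  # -> 16
```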

Data Collection and Analysis

A total of 312 paper-and-pencil essays were handed in by the participants to research assistants blinded to group assignment. The assistants transcribed the papers into Microsoft Word files and forwarded them to the raters, who independently scored the essays using the five-point rubric. Inter-rater reliability was appraised using Cohen’s Kappa coefficient, indicating substantial agreement (κ = .82). Throughout the research process, participant confidentiality was maintained, with texts anonymized and stored in password-protected files on encrypted devices, and access to the data restricted to the research team.
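As an illustration of the reliability check, Cohen’s kappa for two raters can be computed in a few lines with scikit-learn; the score vectors below are invented toy data, not the study’s ratings.

```python
# Toy illustration of the inter-rater reliability computation; the study
# itself reports kappa = .82 on its own rating data.
from sklearn.metrics import cohen_kappa_score

rater_1 = [3, 4, 4, 2, 5, 3, 4, 1, 5, 4]  # invented rubric scores
rater_2 = [3, 4, 5, 2, 5, 3, 4, 2, 5, 4]

print(f"Cohen's kappa: {cohen_kappa_score(rater_1, rater_2):.2f}")
```

Because rubric scores are ordinal, a weighted variant (e.g., cohen_kappa_score(..., weights="quadratic")) is sometimes preferred; the article does not specify whether weighting was used.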

A repeated measures analysis of covariance (RM ANCOVA), implemented as a linear mixed-effects model, was conducted to investigate the effect of the 12-week intervention on each of the four domains of students’ essay writing performance over time, while controlling for the influence of the year of study. The six essay submission time points comprised the within-subject factor. The between-subject factor was membership in either the bot-assisted or non-AI group. Sidak-corrected pairwise comparisons were performed between the control and intervention groups at each time point, as well as between time 1 and time 6 for each outcome. The assumptions of normality (Shapiro-Wilk test and Kolmogorov-Smirnov test) and homogeneity of variance (Levene’s test) were met. Linearity and homogeneity of regression slopes were confirmed through inspection of residual plots. Effect sizes were evaluated using partial eta squared (η²p). The significance cut-off was p < .05. Statistical analyses and plots were computed in R.
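The authors report running the analysis in R; the sketch below is a rough Python analogue using statsmodels, shown only to clarify the model structure. The column names and CSV file are hypothetical, and a mixed-effects fit of this form approximates, rather than exactly reproduces, the reported RM ANCOVA.

```python
# Approximate analogue of the reported analysis: a linear mixed-effects
# model with a group-by-time interaction, year of study as a covariate,
# and a random intercept per participant. Hypothetical data layout:
# columns participant, group (0/1), time (1-6), year (1-4), score (1-5).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("essay_scores.csv")  # hypothetical file

model = smf.mixedlm(
    "score ~ C(group) * C(time) + year",  # fixed effects incl. covariate
    data=df,
    groups=df["participant"],             # random intercept per student
)
print(model.fit().summary())

# Sidak correction for m pairwise comparisons: alpha_adj = 1 - (1 - alpha)^(1/m)
m = 6  # e.g., one between-group comparison per essay time point
print(f"Sidak-adjusted alpha: {1 - (1 - 0.05) ** (1 / m):.4f}")
```

For reference, the reported effect size, partial eta squared, is computed as η²p = SS_effect / (SS_effect + SS_error) from the corresponding sums of squares.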

Results

Regarding the Relevance and Insight criterion, RM ANCOVA revealed a statistically significant interaction effect between group and time (F(5,250) = 3.53, p = .004), despite a small effect size (η²p = .014). While both groups showed similar performance in the first four essays, significant intergroup differences emerged in Essay 5 (p = .002) and Essay 6 (p = .018), with the ChatGPT-assisted group demonstrating superior performance (Figure 1). The intervention group showed a trend toward improvement from the first essay to the sixth one (p = .076), while the control group’s performance remained relatively stable (p = .585). According to the rubric, this suggests that GenAI-supported writers developed a greater ability to create insightful intermedial references that added meaningful layers to their literary analysis.

Figure 1. Mean Scores for Relevance and Insight Intermediality Domain. Error Bars Represent Standard Errors. P-values are from Sidak-adjusted Between-group Comparisons.

The analysis for the Integration domain showed no significant interaction between group and time (F(5,250) = 1.15, p = .334), with a negligible effect magnitude (η²p = .005). This suggests that both groups progressed similarly over time in integrating intermedial references within their essays. Pairwise comparisons did not reveal any significant differences between the groups at any essay submission time point (Figure 2). However, the intragroup post-hoc test revealed significant improvement in the intervention group from submission 1 to submission 6 (p = .005), whereas the control group showed no significant change (p = .622). This indicates that the use of ChatGPT may have specifically aided students in more seamlessly integrating their intermedial references into their essays over the course of the investigation.

Figure 2. Mean Scores for Integration Intermediality Domain. Error Bars Represent Standard Errors. P-values are from Sidak-adjusted Between-group Comparisons.

Concerning Specificity and Depth, there was a significant effect of the covariate (year of study) (F(1,49) = 6.88, p = .012), yielding a large effect size (η²p = .123); moreover, a marginally significant group-by-time interaction (F(5,250) = 2.31, p = .045) was observed, with a small effect magnitude (η²p = .009). This supports the idea that, relative to their writing-as-usual counterparts, the LLM feedback played a role in the development of more sophisticated and nuanced intermedial writing in literature students, particularly those in higher years of study. Within-group comparisons between time points 1 and 6 did not reach statistical significance for either group (Figure 3). A significant between-group difference was observed in Essay 5 (p = .003), with the chatbot-empowered group outperforming the controls. This suggests that the chatbot’s guidance was effective in helping students construct slightly more precise and well-elaborated intermedial references by this stage of the intervention.

Figure 3. Mean Scores for Specificity and Depth Intermediality Domain. Error Bars Represent Standard Errors. P-values are from Sidak-adjusted Between-group Comparisons.

The analysis of Balance and Focus scores did not yield a statistically significant interaction effect between group and time (F(5, 250) = 1.05, p = .391), consistent with a near-zero effect size (η²p = .004). Nor were there any significant differences between groups at any of the individual time points (Figure 4). Intragroup comparisons between assignments 1 and 6 were also non-significant. These outcomes suggest that both groups exhibited consistent performance across the study duration and that the use of the virtual mentor did not significantly affect the students’ ability to balance intermedial references with the analysis of the literary text.

Figure 4. Mean Scores for Balance and Focus Intermediality Domain. Error Bars Represent Standard Errors. P-values are from Sidak-adjusted Between-group Comparisons.

Notably, across all four quality domains, the bot-assisted group achieved higher mean scores on the final essay compared to the non-GenAI students. This may indicate a potential cumulative benefit of the GenAI intervention on the quality of student writing over time.

Discussion

The objective of this investigation was to evaluate whether the incorporation of ChatGPT as a pedagogical aid could add to the ability of literature students to weave intermedial elements into their essays, compared to their counterparts who did not utilize this technology. The results suggest that the GenAI feedback fostered a discernible improvement in the students’ capacity to traverse and concatenate mediums in their writing, particularly in the latter stages of the intervention span.

Nonetheless, juxtaposing these findings with the existing body of literature on GenAI in writing pedagogy presents a unique challenge. A comprehensive literature review reveals a noticeable dearth of research explicitly focusing on AI-supported intermedial writing development within educational contexts. This scarcity is likely attributable to the somewhat niche status of intermediality within educational research, which traditionally leans towards more conventional linguistic and literary analyses. Furthermore, the user-friendly generative platforms capable of supporting such endeavors have only recently emerged, leaving scant opportunity for extensive scholarly investigation. Consequently, direct comparisons with previous studies are constrained, since their foci lie on general writing improvement and L2 contexts, diverging from our specific exploration of intermedial skills in L1 settings. For instance, Meyer et al. (2024) found that chatbot-generated feedback boosted revision performance in English as a foreign language students, offering a parallel in demonstrating AI’s potential to augment writing quality. However, the nuances of intermedial integration differ from the argumentative essay construction examined in their study. Similarly, Escalante et al. (2023) detected no difference in learning outcomes between students receiving feedback from ChatGPT and human tutors in an English as a second language setting. This sheds light on the viability of AI as a feedback provider but does not address the specific challenges of incorporating intermedial elements. Thus, while these studies provide useful reference points, the unique focus on AI-facilitated intermedial writing in the present research carves a new path, necessitating further exploration in this nascent area.

To further understand the potential mechanisms underlying these improvements, it is beneficial to draw parallels with adjacent fields. In particular, Richard Mayer’s cognitive theory of multimedia learning, articulated in his principles of multimedia learning (Mayer, 2024), details how people learn more effectively from words and pictures. This theory offers a conceptual framework for understanding how the AI’s targeted feedback might optimize students’ cognitive processing of intermedial concepts, even when they are the ones creating the intermedial output.

The observed improvements in the relevance and insight criterion within the GenAI-assisted group could be attributed to several intertwined factors. Firstly, the cognitive apprenticeship framework suggests that learning is facilitated through guided participation and observation (Ostovar-Namaghi et al., 2024). ChatGPT, in this context, acted as a more knowledgeable other, providing tailored feedback that scaffolded students’ understanding of intermedial connections. This mentorship-like interaction likely prompted students to internalize the process of critical analysis and creative juxtaposition (Cutillas et al., 2023; Tise et al., 2023; Wang & Shibayama, 2022), leading to more insightful references. This also resonates with Mayer’s multimedia principle, where learning is enhanced when information is presented through multiple modalities; here, the AI encouraged students to think across modalities (literary text and other art forms) to generate deeper insights. The AI’s ability to suggest specific points for intermedial incorporation, as per the prompt, effectively acted as a “signaling principle,” guiding students’ attention to critical areas for boosting their analysis. Furthermore, the dialogic nature of the interaction with the virtual assistant likely encouraged a deeper reflection on the literary text and the referenced media (Cao & Liu, 2024). The conversational format prompted students to articulate their thoughts, receive feedback, and iterate, creating a recursive process of meaning-making that fostered richer insights. However, it is also plausible that the AI simply provided a wider range of potential intermedial connections or ideas than students would generate on their own, increasing the likelihood of identifying more relevant and insightful options. The frequent, iterative feedback process itself, regardless of its theoretical underpinnings, might have compelled students to spend more time revising and refining their ideas, naturally leading to improved insight.

For the integration criterion, where the bot-driven group showed significant within-group improvement, the principles of constructivism offer a compelling explanation. By actively engaging with the AI, students constructed their own understanding of how intermedial elements could be seamlessly woven into their literary analysis, with the dialog system serving as a facilitative tool that allowed them to experiment and build upon their prior knowledge (Ng et al., 2024). This process aligns with Mayer’s coherence principle, which suggests that learners benefit when extraneous material is excluded and relevant material is well-integrated. The AI’s feedback likely helped students to refine their intermedial references, ensuring they were not merely added superficially but contributed meaningfully to the argument, thereby fostering better integration. Perhaps this process was incited by the essential components of social constructivism, including feedback and reflection (Mulhim & Ismaeel, 2024), which allegedly enabled students to refine their understanding and integrate new knowledge in a meaningful way. Alternatively, the significant within-group improvement could simply be a result of focused practice and increased awareness. The AI's feedback, even if just flagging awkward transitions or weak links, repeatedly drew students' attention to the act of integration across six essays, providing more opportunities for iterative refinement and skill development than the control group received.

In the specificity and depth domain, the impact of the generative instrument can be understood through the lens of information processing theory. ChatGPT’s ability to propose avenues for deeper exploration might amend students’ cognitive schemas, leading to greater depth in their analysis (Kitt & Sanders, 2024). The AI’s guidance towards exploring the “why and how” of connections, rather than just superficial mentions, can be likened to applying aspects of Mayer’s redundancy principle by encouraging students to focus on non-redundant, meaningful details, thereby deepening their understanding. The significant effect of the year of study suggests that this advancement was particularly beneficial for students with a more developed foundational understanding of literature, aligning with the schema theory which posits that learning is contingent upon prior knowledge structures (Xia et al., 2024). Beyond these theoretical perspectives, a simpler explanation is that the AI’s prompts effectively acted as a checklist or reminder system for students. By asking them to delve into the “why and how” and “check the accuracy,” the conversational agent nudged students to seek out and include the specific details required for higher scores on this criterion, which they might have otherwise overlooked, especially benefiting those with more developed research skills (higher years).

The lack of significant group differences in balance and focus implies that this dimension of intermedial writing, which hinges on maintaining a coherent thematic center, may rely more on macro-level organizational skills and rhetorical awareness. While generative tools can provide granular feedback, the holistic task of balancing various elements within an essay may require more extensive experience with writing and literary analysis.

The findings from this exploration illuminate the promising potential of integrating artificial conversation agents into literary studies curriculum, specifically to bolster students’ understanding and application of intermedial concepts. It suggests that generative technology can serve as a dynamic intermediary, encouraging a more nuanced and sophisticated engagement with literary texts through the lens of other media. This study contributes to the expanding compendium on the use of artificial intelligence in education, particularly within the humanities. It underscores the potential of GenAI products not only as grammatical or stylistic aids but as instruments for fostering conceptual learning and creative engagement. For educators, this study provides empirical evidence supporting the integration of such tools into instructional practices, potentially transforming how writing and analytical skills are cultivated. Furthermore, it broadens the dialogue on intermedial literacy, providing a tangible method for developing this competency. Beyond the confines of academia, this research offers insights for creatives and professionals in fields where intermedial understanding and application are valued, suggesting that AI tools can be effectively leveraged for skill enhancement and innovative expression. It offers a glimpse into the future of education, where AI-powered tools can personalize and enrich the learning experience, empowering students to develop sophisticated skills for a rapidly evolving world.

Conclusion

In summary, the investigation described here offers pioneering empirical evidence for the efficacy of GenAI in honing literature students’ intermedial writing skills. It demonstrates how generative technology can serve as a dynamic pedagogical aid, fostering conceptual learning and creative engagement with literary texts. While these findings highlight a promising pedagogical strategy, their interpretation is subject to limitations including the use of a historical control group and reliance on a single AI model. Nevertheless, this exploration not only offers a concrete pedagogical strategy but also stimulates further scholarly conversation on the potential and parameters of AI-assisted creative and analytical skills development. This fusion of technology and tradition, of machine learning and human interpretation, holds the promise of reshaping the terrain of intermedial education, equipping students with tools that not only aid in understanding existing narratives but also empower them to construct new intermedial realities.

Recommendations

Based on the findings of this study, students are advised to approach human-chatbot interactions with an experimental mindset, viewing the conversational agent as a collaborator rather than a sole authority. This can facilitate deeper exploration of connections between literature and other media. For teachers, integrating intelligent dialog systems into the curriculum should be considered as a potential method for fostering writing proficiency. Instructors can design activities that encourage students to experiment with their texts, using GenAI for immediate feedback and iterative refinement. Ultimately, the pedagogical approach should aim to balance the efficiency and accessibility of AI tools with the nuanced, human-guided process of learning and creative expression.

The present study opens up several avenues for future research. First, corroboration through research harnessing a contemporaneous, randomized controlled design would be helpful for strengthening these findings. This would allow for clearer causal inferences regarding the impact of GenAI feedback on intermedial writing development. Second, comparative studies analyzing the efficacy of various AI platforms, such as Gemini or Claude, in supporting intermedial writing could provide educators with a broader range of pedagogical options. For example, some models might excel in providing contextually rich feedback, leveraging advanced natural language understanding to identify nuanced literary themes, while others might offer more innovative suggestions for intermedial references, encouraging creative connections across media. These differences could affect how students engage with and integrate intermedial elements into their essays, helping educators select the most suitable tool for their specific teaching objectives. Third, interdisciplinary collaborations involving computer scientists, linguists, literary scholars, and educators could lead to the development of more sophisticated AI-powered solutions geared specifically toward amplifying intermedial literacy (or any other writing mastery domains) in various educational settings. Additionally, to gain a deeper understanding of students’ experiences and the effectiveness of GenAI feedback, it could be fruitful to incorporate qualitative methods such as student interviews or chat logs. These data sources can potentially uncover how students perceive the model’s role in their learning, what aspects of the feedback they find most beneficial or challenging, and how they integrate chatbot suggestions into their writing. Lastly, conducting writing interventions that combine LLM mentorship with teacher feedback presents an intriguing area for further inquiry. This hybrid approach could potentially offer the best of both worlds: the immediacy and scalability of AI feedback alongside the nuanced guidance and personal touch of human instructors.

Limitations

While the present study offers valuable insights, several limitations merit recognition. A primary limitation of the current study is the use of a historical control group, with data gathered in the year preceding the experimental intervention. This approach means that participants could not be randomly assigned to the chatbot-assisted and no-treatment conditions. Such a non-equivalent control group design tempers the ability to draw firm causal conclusions about the effectiveness of the GenAI feedback. Although efforts were made to control for year of study as a covariate, other latent factors such as prior exposure to intermedial concepts or general motivation for engaging with technology may have influenced the outcomes. Second, the study’s reliance on a single language model restricts the generalizability of findings. Other AI models with different architectures and training datasets could yield varied results. Furthermore, the specific nature of the prompt provided to the experimental group could have shaped the type and quality of feedback received, potentially skewing results. The decision to utilize a self-crafted prompt, although necessary for standardization, may not represent the full spectrum of interaction patterns possible with GenAI tools. Lastly, the linguistic and cultural specificity of the study, which focused on local language and literature students, may limit its applicability across diverse educational contexts. Future research should aim to replicate the manipulations with different language groups and cultural backgrounds to elevate external validity.

Ethics Statements

Prior to the study commencement, researchers received ethical approval from the [BLINDED]. Informed consent was obtained from students. Throughout the study, confidentiality, participant well-being, and respect for participants’ rights were diligently upheld.

Acknowledgements

Not applicable.

Conflict of Interest

The authors report no actual or potential conflicts of interest.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Generative AI Statement

As the authors of this work, we used the AI tool ChatGPT for the purpose of proofreading. After using this AI tool, we reviewed and verified the final version of our work. We, as the authors, take full responsibility for the content of our published work.

Authorship Contribution Statement

Smailova: Conceptualization, supervision, technical support, writing, final approval. Abisheva: Design, data acquisition, data interpretation, writing. Zhapparkulova: Data acquisition, drafting manuscript. Junissova: Data acquisition, drafting manuscript. Kaskabassova: Data analysis, critical revision of manuscript.

References

Aulia, U. Y., & Oktaviani, L. (2024). Enhancing civic engagement through podcasting: A modern approach for higher education. Media Practice and Education. Advance online publication. https://doi.org/10.1080/25741136.2024.2426074

Caldwell, T. G. (2023). Jazzthetic technique: Oralizing fiction and jazz strategies in Toni Morrison’s Jazz. Humanities, 12(4), Article 79. https://doi.org/10.3390/h12040079

Cao, J., & Liu, X. (2024). The melody of language learning at intermediate and upper levels: An emphasis on free discussion panels as an indispensable part of language classes and the effects on willingness to communicate, growth mindfulness, and autonomy. BMC Psychology, 12, Article 159. https://doi.org/10.1186/s40359-024-01645-5

Chen, B., Bao, L., Zhang, R., Zhang, J., Liu, F., Wang, S., & Li, M. (2023). A multi-strategy computer-assisted EFL writing learning system with deep learning incorporated and its effects on learning: A writing feedback perspective. Journal of Educational Computing Research, 61(8), 1596-1638. https://doi.org/10.1177/07356331231189294

Cong, Y. (2025). Demystifying large language models in second language development research. Computer Speech and Language, 89, Article 101700. https://doi.org/10.1016/j.csl.2024.101700

Cutillas, A., Benolirao, E., Camasura, J., Golbin, R., Jr., Yamagishi, K., & Ocampo, L. (2023). Does mentoring directly improve students’ research skills? Examining the role of information literacy and competency development. Education Sciences, 13(7), Article 694. https://doi.org/10.3390/educsci13070694

Escalante, J., Pack, A., & Barrett, A. (2023). AI-generated feedback on writing: Insights into efficacy and ENL student preference. International Journal of Educational Technology in Higher Education, 20, Article 57. https://doi.org/10.1186/s41239-023-00425-2

Formosa, P., Bankins, S., Matulionyte, R., & Ghasemi, O. (2024). Can ChatGPT be an author? Generative AI creative writing assistance and perceptions of authorship, creatorship, responsibility, and disclosure. AI and Society. Advance online publication. https://doi.org/10.1007/s00146-024-02081-0

Fu, Q.-K., Zou, D., Xie, H., & Cheng, G. (2024). A review of AWE feedback: Types, learning outcomes, and implications. Computer Assisted Language Learning, 37(1-2), 179-221. https://doi.org/10.1080/09588221.2022.2033787

Fusillo, M., & Lino, M. (2024). Inter-mediality in digital media environment. New Techno-Humanities, 4(1), 58-64. https://doi.org/10.1016/j.techum.2024.11.002

Hankin, C. D. (2021). “Enraizados da letra”: Lyrics and the letter in Brazilian, Cuban, and Haitian rap. Journal of Latin American Cultural Studies, 30(4), 619-640. https://doi.org/10.1080/13569325.2021.2017270

He, C., & Bruhn, J. (2023). Introduction to re-considering intermediality across disciplines: New directions. European Review, 31(S1), S1-S6. https://doi.org/10.1017/S1062798723000467

Hidayat, M. T. (2024). Effectiveness of AI-based personalised reading platforms in enhancing reading comprehension. Journal of Learning for Development, 11(1), 115-125. https://doi.org/10.56059/jl4d.v11i1.955

Jackaria, P. M., Hajan, B. H., & Mastul, A.-R. H. (2024). A comparative analysis of the rating of college students’ essays by ChatGPT versus human raters. International Journal of Learning Teaching and Educational Research, 23(2), 478-492. https://doi.org/10.26803/ijlter.23.2.23

Jensen, S. K., & Schirrmacher, B. (2024). Stronger together: Moving towards a combined multimodal and intermedial model. Multimodality and Society, 4(4), 445-467. https://doi.org/10.1177/26349795241259606

Kim, J., Yu, S., Detrick, R., & Li, N. (2025). Exploring students’ perspectives on generative AI-assisted academic writing. Education and Information Technologies, 30, 1265-1300. https://doi.org/10.1007/s10639-024-12878-7

Kitt, A., & Sanders, K. (2024). Imprinting in HR process research: A systematic review and integrative conceptual model. International Journal of Human Resource Management, 35(12), 2057-2100. https://doi.org/10.1080/09585192.2022.2131457

Law, L. (2024). Application of generative artificial intelligence (GenAI) in language teaching and learning: A scoping literature review. Computers and Education Open, 6, Article 100174. https://doi.org/10.1016/j.caeo.2024.100174

Li, H., Wang, Y., Luo, S., & Huang, C. (2024). The influence of GenAI on the effectiveness of argumentative writing in higher education: Evidence from a quasi-experimental study in China. Journal of Asian Public Policy. Advance online publication. https://doi.org/10.1080/17516234.2024.2363128

Lu, Q., Zhu, X., Zhu, S., & Yao, Y. (2024). Effects of writing feedback literacies on feedback engagement and writing performance: A cross-linguistic perspective. Assessing Writing, 62, Article 100889. https://doi.org/10.1016/j.asw.2024.100889

Marzuki, Widiati, U., Rusdin, D., Darwin, & Indrawati, I. (2023). The impact of AI writing tools on the content and organization of students’ writing: EFL teachers’ perspective. Cogent Education, 10(2), Article 2236469. https://doi.org/10.1080/2331186x.2023.2236469

Mayer, R. E. (2024). The past, present, and future of the Cognitive Theory of Multimedia Learning. Educational Psychology Review, 36, Article 8. https://doi.org/10.1007/s10648-023-09842-1

Meyer, J., Jansen, T., Schiller, R., Liebenow, L. W., Steinbach, M., Horbach, A., & Fleckenstein, J. (2024). Using LLMs to bring evidence-based feedback into the classroom: AI-generated feedback increases secondary students’ text revision, motivation, and positive emotions. Computers and Education Artificial Intelligence, 6, Article 100199. https://doi.org/10.1016/j.caeai.2023.100199

Mulhim, E. N. A., & Ismaeel, D. A. (2024). Learning sustainability: Post-graduate students’ perceptions on the use of social media platforms to enhance academic writing. Sustainability, 16(13), Article 5587. https://doi.org/10.3390/su16135587

Ng, D. T. K., Tan, C. W., & Leung, J. K. L. (2024). Empowering student self‐regulated learning and science education through ChatGPT: A pioneering pilot study. British Journal of Educational Technology, 55(4), 1328-1353. https://doi.org/10.1111/bjet.13454

Ostovar-Namaghi, S. A., Morady Moghaddam, M., & Veysmorady, K. (2024). Empowering EFL learners through cognitive apprenticeship: A pathway to success in IELTS speaking proficiency. Language Teaching Research. Advance online publication. https://doi.org/10.1177/13621688241227896

Riu-Comut, L. (2025). Beyond the déjà-vu: Intermedial transpositions of the film noir motif of the automobile in contemporary Anglo-American and French novel. Journal of Transport History. Advance online publication. https://doi.org/10.1177/00225266251327366

Sengar, S. S., Hasan, A. B., Kumar, S., & Carroll, F. (2024). Generative artificial intelligence: A systematic review and applications. Multimedia Tools and Applications. Advance online publication. https://doi.org/10.1007/s11042-024-20016-1

Sharkey, A., Kovács, B., & Hsu, G. (2023). Expert critics, rankings, and review aggregators: The changing nature of intermediation and the rise of markets with multiple intermediaries. Academy of Management Annals, 17(1), 1-36. https://doi.org/10.5465/annals.2021.0025

Shi, H., & Aryadoust, V. (2024). A systematic review of AI-based automated written feedback research. ReCALL, 36(2), 187-209. https://doi.org/10.1017/S0958344023000265

Tise, J. C., Hernandez, P. R., & Schultz, P. W. (2023). Mentoring underrepresented students for success: Self-regulated learning strategies as a critical link between mentor support and educational attainment. Contemporary Educational Psychology, 75, Article 102233. https://doi.org/10.1016/j.cedpsych.2023.102233

Tsang, S. C. S., Lam, C. Y., & Cheng, L. (2022). Intermedia and interculturalism: practitioners’ perspectives on an interactive theatre for young ethnic minority students in Hong Kong. Language and Intercultural Communication, 22(2), 141-154. https://doi.org/10.1080/14708477.2021.2016789

Wang, J., & Shibayama, S. (2022). Mentorship and creativity: Effects of mentor creativity and mentoring style. Research Policy, 51(3), Article 104451. https://doi.org/10.1016/j.respol.2021.104451

Xia, L., Shen, W., Fan, W., & Wang, G. A. (2024). Knowledge-aware learning framework based on schema theory to complement large learning models. Journal of Management Information Systems, 41(2), 453-486. https://doi.org/10.1080/07421222.2024.2340827

Yang, J., Jin, H., Tang, R., Han, X., Feng, Q., Jiang, H., Zhong, S., Yin, B., & Hu, X. (2024). Harnessing the power of LLMs in practice: A survey on ChatGPT and beyond. ACM Transactions on Knowledge Discovery From Data, 18(6), 1-32. https://doi.org/10.1145/3649506

...