European Journal of Educational Research

EU-JER is a peer-reviewed, online academic research journal.


Publisher (HQ)

Eurasian Society of Educational Research
Christiaan Huygensstraat 44, 7533 XB Enschede, The Netherlands
Research Article

Synergy of Voluntary GenAI Adoption in Flexible Learning Environments: Exploring Facets of Student-Teacher Interaction Through Structural Equation Modeling

Alfeo B. Tulang



  • Pub. date: July 15, 2025
  • Online Pub. date: May 29, 2025
  • Pages: 829-845


Abstract:


Integrating generative artificial intelligence (GenAI) in education has gained significant attention, particularly in flexible learning environments (FLE). This study investigates how students’ voluntary adoption of GenAI influences their perceived usefulness (PU), perceived ease of use (PEU), learning engagement (LE), and student-teacher interaction (STI). The study employed a structural equation modeling (SEM) approach, using data from 480 students across multiple academic levels. The findings confirm that voluntary GenAI adoption significantly enhances PU and PEU, reinforcing established technology acceptance models (TAM). However, PU did not directly impact LE at the latent level—an unexpected finding that underscores the complex, multidimensional nature of student engagement in AI-enriched settings. Conversely, PEU positively influenced LE, which in turn significantly predicted STI. These findings suggest that usability, rather than perceived utility alone, drives deeper engagement and interaction in autonomous learning contexts. This research advances existing knowledge of GenAI adoption by proposing a structural model that integrates voluntary use, learner engagement, and teacher presence. Future research should incorporate variables such as digital literacy, self-regulation, and trust, and apply longitudinal approaches to better understand the evolving role of GenAI in equitable, human-centered education.

Keywords: Flexible learning environments, generative artificial intelligence adoption, structural equation modeling, student-teacher interaction, technology acceptance.


Introduction

Picture a flexible learning environment (FLE) where every student can access a personalized generative artificial intelligence (GenAI) tutor, an always-available assistant providing continuous academic support. In this environment, the learning process is easy and intuitive, as is using a digital platform with learning materials adapted to each student's specific requirements. This scenario is no longer science fiction. It reflects a fast-growing shift in education fueled by advances in artificial intelligence (AI).

AI’s conceptual roots go back to Alan Turing in the 1950s and the Dartmouth conference in 1956, which formally introduced the field. Since then, AI has matured into a powerful force transforming industries — including education (Gumbs et al., 2021; McCarthy et al., 2006). Studies show that students today often display higher proficiency and more positive attitudes toward learning when supported by AI (Chai et al., 2021). Within flexible learning settings, GenAI tools show promise in fostering adaptability and enhancing the overall student experience (Bhatt & Muduli, 2023; Holstein & Aleven, 2022; Lim et al., 2023; Müller & Mildenberger, 2021).

However, ongoing ethical concerns accompany these opportunities. Questions around academic integrity, data privacy, and the appropriate role of technology in learning remain front and center (e.g., Eager & Brunton, 2023; Nguyen et al., 2023; Okulich-Kazarin et al., 2024; Wang et al., 2024; Wong, 2024). As GenAI tools become more integrated into classrooms, researchers and teachers continue to explore how these technologies affect pedagogy, especially in terms of teaching roles, human interaction, and learning engagement (LE) (Akgun & Greenhow, 2022; Batista et al., 2024; Pesovski et al., 2024; Raj & Renumol, 2022).

A critical question is emerging: How do students adopt GenAI in proactive, intentional, and personal ways? Students increasingly use these tools independently—outside formal instruction—to meet learning needs. This signals a shift toward voluntary adoption (VA) of AI, where the learner, not the institution, initiates the use of technology. While this trend empowers students, it raises concerns: Could GenAI replace teachers? What happens to the student-teacher connection in classrooms where AI is heavily used (Chan & Tsi, 2024)?

The role of the teacher remains central in FLEs. Although GenAI can assist with tasks and automate certain forms of feedback, it lacks the human capacities for emotional intelligence, creative guidance, and contextual understanding—qualities that define effective teaching (Chan & Tsi, 2024). Some studies have flagged tensions between the rise of automation and the value students place on human interaction, especially in environments that depend on collaboration and support (Sun et al., 2022).

Several well-established frameworks guide instruction in online and flexible learning spaces. The Community of Inquiry (COI) model of Garrison (2009) emphasizes three core dimensions: teacher presence, cognitive presence, and social presence. Building on this, scholars such as Kourkouli (2024) and Kreijns et al. (2022) highlight the importance of sustained interaction, purposeful communication, and authentic student-teacher relationships. The review “How flexible is flexible learning” (2017) emphasized that flexible learning is not just about online delivery or distance education—it is anchored in a student-centered paradigm that must be implemented contextually and thoughtfully. It further argued that flexible learning is a value-oriented principle grounded in learner autonomy, equity, and responsiveness to context, and that its effectiveness depends on how well it is adapted across dimensions such as engagement, assessment, and institutional support, particularly in diverse and resource-limited educational settings.

These models suggest that learning is most effective when STI remains strong—even in tech-mediated environments. Yet, we still know little about how STI evolves when students adopt GenAI tools independently. While researchers have explored teacher adoption of AI, few have examined the reverse: how student-initiated use of GenAI affects the dynamics of communication, support, and engagement in learning (Li & Yang, 2021). As Breukelman et al. (2023) emphasize, STI is not merely transactional; it involves deeper pedagogical relationships that shape students’ learning.

The present study seeks to advance understanding of how student-initiated use of GenAI informs students’ perceptions of instructional interaction and LE within FLEs. Informed by the framework of Seo et al. (2021), it investigates how GenAI tools may influence key dimensions of learning, specifically communication, social presence, and the quality of instructional support (Lee et al., 2024). At the same time, it builds on studies showing how content quality and system adaptability shape students’ expectations of AI tools (Neuhüttler et al., 2020; Noor et al., 2021).

This study builds upon the foundations of the Technology Acceptance Model (TAM) and recent research on GenAI in education (Kong et al., 2024). It examines how VA, perceived usefulness (PU), perceived ease of use (PEU), LE, and STI are connected, developing a structural model to describe their interrelationships.

This research equips teachers and policymakers with a practical and theoretical framework for designing adaptive, student-centered learning experiences that preserve meaningful human connections. It contributes to the current debate about ethics, agency, and interaction in AI-supported learning. More broadly, the study responds to the UN Sustainable Development Goals on quality education and innovation and aligns with UNESCO’s call for a human-centered approach to AI integration in schools.

To guide this investigation, the study poses the following research questions: (1) To what extent do students voluntarily adopt GenAI in flexible learning classes? (2) To what extent do they accept GenAI technology? (3) What factors contribute to the synergy of voluntary adoption, technology acceptance, LE, and multidimensional STI? (4) What structural model best describes these synergies?

Literature Review

The rapid integration of GenAI tools into educational contexts has introduced new dimensions of student engagement, autonomy, and instructional support (e.g., Lee et al., 2024). As students increasingly experiment with these tools on their own initiative, understanding the voluntary adoption of GenAI has become a critical area of inquiry, particularly in FLE, where students exercise greater control over their pace, pathway, and learning mode. Although GenAI is widely celebrated for boosting productivity, sparking creativity, and making learning more personalized, it also brings uncertainty about how it might affect the traditional student-teacher relationship—something long considered essential to effective teaching. This study investigates the synergistic relationships among students’ voluntary use of GenAI, their perceptions of its usefulness and ease of use, and the role of human instructional presence in flexible learning settings. Grounded in the TAM (Davis, 1989), this research also explores how STI influences the behavioral intention to adopt GenAI tools and how this dynamic unfolds within the broader flexible learning framework (e.g., Kelly et al., 2023). The following review is organized into six subsections to position this investigation within the current body of knowledge. First, it examines the nature and motivations behind the voluntary adoption of GenAI tools, incorporating relevant insights from the diffusion of innovations theory (DIT). Second, it provides a critical analysis of TAM and its application to student-initiated technology use. Third, the review discusses LE in AI-supported environments, emphasizing its pedagogical significance. Fourth, it defines and contextualizes FLEs as the context of innovation, highlighting their role in shaping student autonomy (Katona & Gyonyoru, 2025) and digital tool experimentation. Fifth, it examines STI in AI-supported learning. Finally, the synthesis subsection identifies the remaining conceptual gaps and establishes the current study’s rationale.

Voluntary Adoption of GenAI Tools

The growing use of GenAI in education has drawn attention to how students choose to use these tools on their own to support their learning (Diwan et al., 2023). Unlike technologies that schools or teachers assign, students often choose GenAI tools driven by their goals, challenges (e.g., Pang et al., 2021), and sense of usefulness (e.g., Moravec et al., 2024). These tools offer various forms of support, like helping them with writing, problem-solving, coding, brainstorming ideas, or simulating conversation. What makes them especially appealing is how fast, flexible, and seemingly intelligent they are, which is particularly useful in FLEs, where students often navigate their learning with limited direct guidance. The decision to voluntarily adopt GenAI tools is influenced by personal factors, such as “AI receptivity” (e.g., Watson et al., 2024), digital literacy (e.g., Lyu & Salam, 2025), and the perceived value of GenAI (e.g., Jose et al., 2024). Students may adopt these tools to enhance productivity or overcome learning challenges. However, the informal nature of GenAI use creates a research gap: most existing models of educational technology adoption focus on structured implementation rather than student-initiated use. The DIT offers valuable insights into how to frame this voluntary behavior. As initially proposed by Rogers (2003), five key attributes influence the adoption of innovations: relative advantage, compatibility, complexity, trialability, and observability. These attributes have been reaffirmed in recent educational technology studies (e.g., Abdalla et al., 2024; Menzli et al., 2022; Patnaik & Bakkar, 2024). GenAI tools—being easily accessible, user-friendly, and yielding immediate outputs—often fulfill these criteria, particularly for early adopters and innovators within the student population.
Nevertheless, while DIT explains general adoption tendencies, it does not fully account for the behavioral intentions or perception dynamics required for a predictive model of GenAI adoption. For this reason, the current study utilizes the TAM as its primary theoretical foundation, integrating DIT as a complementary background.

TAM and Its GenAI Application

The TAM is one of the most widely used frameworks for examining how users accept and use technology. It posits that two cognitive factors—perceived usefulness (PU) and perceived ease of use (PEU)—predict a user’s behavioral intention to use a technology, which in turn influences actual use. TAM has been extensively validated in diverse educational settings (e.g., Akyürek, 2019; Fearnley & Amora, 2020), including online learning, mobile platforms, and intelligent tutoring systems (e.g., Alhumaid et al., 2023; Hanham et al., 2021; Huang & Mizumoto, 2024; Ortiz-López et al., 2025). However, the model’s traditional application assumes formal or institutionalized use of technology. When students choose to use GenAI tools independently, they decide whether the tool is useful or easy to use—often without any direction from their teachers or school policies. This changes how we understand technology adoption and raises the question of whether usefulness and ease of use alone can explain their decisions. Research suggests that additional factors such as trust, task-technology fit, or social influence may also be relevant in AI-rich environments (Alhumaid et al., 2023; Herlambang & Rachmadi, 2024). Yet, introducing too many variables complicates parsimony. Therefore, this study retains TAM’s core constructs but situates them within the unique context of student-driven GenAI use. PU reflects whether students believe GenAI enhances their academic performance, while PEU captures their experience with interface simplicity and output reliability. Importantly, TAM in this context is examined alongside the moderating influence of LE and STI as relational outcomes.

Learning Engagement in AI-supported Contexts

LE involves more than just participation—it reflects how students think, feel, and act as they engage with academic tasks. Personal motivations, instructional approaches, and the broader learning environment shape it (Yang, 2025). Kahu and Nelson (2018) introduced a framework that situates student engagement within a wider educational interface—where institutional contexts and psychosocial development interact. Henrie et al. (2015) demonstrated that engagement varies across course levels and is affected more by the relevance and clarity of instruction than the delivery format alone. In the context of technology acceptance, LE is often influenced by students’ perceptions of a GenAI tool’s usefulness and ease of use (Hanham et al., 2021; Huang & Mizumoto, 2024). When students perceive GenAI tools as practical and accessible, they are more likely to participate actively, remain motivated, and sustain cognitive development in academic tasks. Thus, this study hypothesizes that PU and PEU will positively impact LE. Furthermore, given the role of engagement in fostering meaningful educational experiences, higher levels of engagement with GenAI tools are expected to correspond with stronger perceptions of instructional interaction, reflecting student-teacher relationships.

Flexible Learning Environments as the Context of Innovation

FLEs have emerged as transformative spaces that allow students to access content anytime, anywhere, and often at their own pace and preference. Rooted in student-centered pedagogy, flexible learning emphasizes autonomy, choice, and responsiveness to individual learner needs (see Collis & Moonen, 2002). These learning environments usually mix real-time and self-paced activities, use various digital tools, and offer students different ways to learn—all of which create space for trying new approaches and encouraging innovation in how students engage with their studies. The shift toward flexibility has been accelerated by the growing accessibility of digital resources, cloud-based learning platforms, and AI (Rangel-de Lázaro & Duart, 2023; Simms, 2025; Tawil & Miao, 2024). In these contexts, students are no longer passive recipients of instruction but active participants who make strategic decisions about how and when to engage with content and tools. As such, FLEs serve as fertile ground for the voluntary adoption of emerging technologies, including GenAI tools such as ChatGPT, Gemini, and other AI-driven writing tools or problem-solving assistants. Research has shown that FLEs enhance self-regulated learning, critical thinking, and digital fluency (Boelens et al., 2017; Chang & Sun, 2024). At the same time, these environments place new demands on students in how they choose to use GenAI—not because they are told to, but because they discover the tools themselves and decide how the tools fit into their learning. In this study, flexible learning is a contextual condition that enables the expression of voluntary technology behaviors and potentially shapes traditional forms of instructional interaction. Understanding how this environment supports or complicates GenAI adoption—and the evolving role of STI within it—is essential for evaluating technology acceptance and pedagogical outcomes in contemporary education.

STI in AI-supported Learning

STI remains a foundational element of effective learning, particularly within constructivist and socio-cultural traditions. The concept of the zone of proximal development (ZPD) by Vygotsky (1978) emphasizes the importance of guided support, while Moore’s (1989) typology of interaction (learner-instructor, learner-content, learner-learner) underscores the instructional value of meaningful dialogue. Depending on design, frequency, and modality, STI can be either reduced or enhanced in FLEs. The growing use of GenAI tools has made STI more complex. While these technologies can reduce the need for students to turn to their teachers for help with understanding content, brainstorming, or getting feedback, they also introduce new expectations and challenges that still require thoughtful instructional support. In particular, students may need more instructional support to critically assess AI-generated content—especially when evaluating accuracy, ethical considerations, and critical thinking. Thus, rather than replacing the teacher, GenAI technologies such as those examined by Holstein et al. (2018) may support a shift in the teacher’s role toward learning orchestration and responsive, data-informed decision-making. Empirical studies indicate that STI is critical in enhancing student engagement, learning satisfaction, and perceived instructional value (see Bolliger & Martin, 2018; Miao et al., 2021). However, little research has studied how STI interacts with AI usage patterns, particularly in self-directed contexts. This study explores how STI could be a moderating factor—shaping how students’ perceptions of GenAI (via TAM) translate into actual behavioral intentions. It assumes that students may use GenAI more strategically and responsibly in settings with strong teacher presence, while low-STI contexts may lead to over-dependence or misuse (e.g., Akanzire et al., 2025).

Synthesis

Across the reviewed literature, several key themes emerge. First, FLEs enable students to explore and adopt technologies like GenAI at their own initiative. Second, while DIT explains early adoption, it lacks the predictive specificity to model behavioral intention. Third, the TAM provides a parsimonious framework to predict intention through PU and PEU. Fourth, STI—often underexplored in TAM-based research—may significantly shape how students perceive and adopt GenAI tools. Despite these insights, existing research has yet to integrate these elements into a single model. This study addresses that gap by proposing a structural model that links TAM constructs, STI, and voluntary GenAI use within FLEs. Drawing on the preceding review, the model investigates the relationships between students’ voluntary adoption of GenAI, their perceptions of its usefulness and ease of use, their LE, and their perceived quality of STI. The hypotheses are formally stated in the next section.

Methodology

Research Design

The study employed a quantitative, non-experimental, correlational research design, using structural equation modeling (SEM) to analyze the relationships among key variables. The analysis of moment structure (AMOS) was applied to evaluate the model fit and relationships across factors (Byrne, 2013; Shah et al., 2023). A variance-based modeling technique was adopted to enhance the robustness of results (Chatterjee et al., 2021; Hair et al., 2021).

Sample and Data Collection

This study included a diverse sample of students selected based on inclusion, exclusion, and withdrawal criteria. Ethical considerations rigorously followed the policies and guidelines of the Research Ethics Committee (REC) of Bukidnon State University (BukSU). Participants were recruited from college, high school, and graduate programs across multiple State Universities and Colleges (SUCs). College and high school students were drawn from three SUCs, while graduate students were selected from one SUC. Upon receiving the invitation, participants reviewed the informed consent statement and voluntarily completed the survey questionnaire online via Google Forms. A total of 504 students were invited to participate, of whom 492 responded, a high response rate of 97.62%. Twelve of these students declined participation, resulting in a final sample of 480. These participants were explicitly asked whether they had voluntarily integrated GenAI into their learning activities. Their demographic characteristics are shown in Table 1.
Table 1. Socio-Demographic Characteristics of the Respondents

No  Demographic Characteristics                      Total      %
1.  Gender
      Male/Man                                         155  32.29
      Female/Woman                                     309  64.38
      Other                                             16   3.33
2.  Educational Level
      Graduate                                          72  15.00
      College                                          239  49.79
      High School                                      169  35.21
3.  Computer Skill Proficiency
      Digital Literacy (Moderate to High)              427  88.95
      Software Application (Moderate to High)          425  88.54
      Creating Multimedia (Moderate to High)           386  80.41
      Coding and Programming (Moderate to High)        273  56.87
4.  GenAI Utilized in Learning
      Adaptive Learning (ChatGPT)                      220  45.83
      Intelligent Content (Grammarly)                   55  11.45
      Intelligent Tutoring (Khan Academy,              245  51.04
        Duolingo, Math Assist)

Inclusion and Exclusion Criteria

Students who participated in this study actively engaged in FLE through synchronous or asynchronous learning. Eligible participants included senior high school, college, and graduate students who demonstrated computing proficiency and voluntarily used at least one GenAI tool, without teacher-specific requirements, in their learning activities. Participants who provided incomplete data were removed from the dataset, as their responses could not support generating the desired structural model; likewise, those who chose not to participate or withdrew at any stage were excluded.

Development of the Survey Instrument

The development of the instrument drew upon several theoretical frameworks and empirically grounded concepts. The study hypothesized that students who voluntarily adopt GenAI in flexible learning classrooms are more likely to demonstrate higher levels of LE and experience enriched STI (Neo et al., 2022; Seo et al., 2021; Sharma & Harkishan, 2022). Bandura’s (1977) social learning theory supports this idea by highlighting the importance of observation, imitation, and modeling in learning, which often occurs within social contexts (Shirkhani & Ghaemi, 2011). Similarly, constructivist learning theory posits that learners construct meaning through active participation; in this context, students who willingly engage with GenAI tools benefit from self-guided exploration, iterative experimentation, and interactive learning environments.

Following an extensive literature review, the researcher identified five latent constructs relevant to the proposed structural model: VA, PU, PEU, LE, and STI. The initial items were developed based on validated scales in existing studies (e.g., Bolliger & Martin, 2018; Davis, 1989; Hanham et al., 2021), and items were added to align with the specific context of GenAI integration in FLEs.

The study underwent expert peer review, which included an oral examination of research proposals and a detailed technical evaluation for research funding. Subsequent revisions to the manuscript were made for refinements and enhancements, guided by the panel’s feedback. It was then reviewed and approved by the university research ethics committee (BukSU-REC) under the protocol document code 2023-0410-TULANG-TSV. Appendix A provides the final list of survey items categorized by construct to ensure transparency and address validity concerns.

Procedure

The researcher conducted a pilot survey with 184 college students in one SUC through face-to-face administration to ensure reliability and validity. The final version of the questionnaire employed a 5-point Likert scale ranging from 5 (strongly agree) to 1 (strongly disagree), with 3 as the neutral midpoint. The online survey was administered from January to March 2024. The instrument underwent construct validity testing to assess its suitability for factor analysis.

The study is grounded in social learning theory and constructivism, emphasizing the role of observation, interaction, and self-guided exploration in learning. Additionally, the TAM provides a framework for understanding students’ perceived usefulness (PU) and perceived ease of use (PEU) of GenAI in FLE. The study examines five key latent variables hypothesized within the SEM model: (1) voluntary GenAI adoption measured by adaptive quality, personalized recommendations, AI-based assessment accuracy, intention to adopt, and trust in AI tools, (2) PU, (3) PEU, (4) LE as measured in terms of intrinsic/extrinsic motivation, emotional, cognitive, and behavioral engagement, and (5) STI as measured in terms of communication, instructional support, and social presence. The relationships between these variables underwent testing through SEM, presented in Figure 1 as the hypothesized model.


Figure 1. Hypothesized Model

Hypotheses

This study proposes the following hypotheses:

H1. Students’ voluntary adoption of GenAI positively impacts their perceived usefulness of the technology.

H2. Students' voluntary adoption of GenAI positively impacts their perceived ease of use.

H3. Students' perceived usefulness of GenAI positively affects their learning engagement.

H4. Students' perceived ease of use of GenAI positively affects their learning engagement.

H5. Students' learning engagement with GenAI positively correlates with their perceived quality of student-teacher interaction.

Data Analysis

The dataset, consisting of 480 completed questionnaires, was analyzed using IBM SPSS Statistics version 25 and IBM Amos version 26, the latter of which was used to generate the best-fit structural model (Arbuckle, 2019). This study is guided by Byrne’s (2013) methods to ensure the appropriateness of the statistical techniques for addressing the research questions. Data analyses included descriptive statistics, correlation analysis, and regression tests to explore trends and preliminary relationships among the variables.

Before the primary analysis, the dataset was screened for missing values and outliers using the SPSS software. The SPSS output was examined; frequency distributions and descriptive statistics indicated that missing data were minimal and randomly distributed. Given the low missingness, mean imputation was deemed unnecessary, and a complete-case analysis was applied. We examined Mahalanobis distances against critical chi-squared values to detect univariate and multivariate outliers via IBM Amos output (e.g., Byrne, 2013). No extreme cases violated assumptions or significantly distorted the data’s distribution. As a result, all valid responses were retained for analysis.
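The chi-square cutoff screen described above can be sketched outside of Amos. The following illustrative Python reimplementation (not the study's actual AMOS output) flags cases whose squared Mahalanobis distance exceeds the critical chi-square value; the alpha level of .001 is an assumed, conventional choice for this test.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(X: np.ndarray, alpha: float = 0.001) -> np.ndarray:
    """Flag rows whose squared Mahalanobis distance exceeds the
    critical chi-square value (df = number of variables) at alpha."""
    mu = X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    # d2[i] = diff[i] @ inv_cov @ diff[i] for every case i
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
    cutoff = chi2.ppf(1 - alpha, df=X.shape[1])
    return d2 > cutoff
```

Cases flagged by this screen would be inspected (and possibly removed) before model estimation; in the present study no such extreme cases were found.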

Structural equation modeling (SEM) in Amos allowed us to test the hypothesized pathways within the model to examine the more complex interactions. We also performed factor analysis to refine the measurement structure and validate the underlying constructs. After running these analyses, we carefully interpreted the results to identify meaningful patterns, significant correlations, and relationships that supported the study’s hypotheses.
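AMOS is proprietary software. As a rough, simplified illustration of the hypothesized pathways (H1–H5), a path analysis on composite scores can be run with ordinary least squares; this is not a full latent-variable SEM, and the simulated data and coefficients below are hypothetical stand-ins for the study's constructs, not its results.

```python
import numpy as np

def ols_slopes(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """OLS regression slopes of y on the columns of X (intercept included)."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]  # drop the intercept

# Hypothetical composite scores standing in for the constructs
rng = np.random.default_rng(42)
n = 480  # matches the study's sample size
va = rng.normal(size=n)                                    # voluntary adoption
pu = 0.6 * va + rng.normal(scale=0.8, size=n)              # H1: VA -> PU
peu = 0.5 * va + rng.normal(scale=0.8, size=n)             # H2: VA -> PEU
le = 0.1 * pu + 0.4 * peu + rng.normal(scale=0.8, size=n)  # H3, H4: PU, PEU -> LE
sti = 0.5 * le + rng.normal(scale=0.8, size=n)             # H5: LE -> STI

b_le = ols_slopes(np.column_stack([pu, peu]), le)  # paths PU->LE, PEU->LE
b_sti = ols_slopes(le.reshape(-1, 1), sti)         # path LE->STI
```

In this toy setup the weaker simulated PU-to-LE path echoes the study's finding that PEU, not PU, drove engagement; a proper latent-variable test would require SEM software such as Amos or an open-source equivalent.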

Findings/Results

Table 2 presents each construct’s descriptive statistics and Cronbach’s alpha coefficients. Internal consistency values ranged from .791 to .883, exceeding the acceptable threshold of .70 (Byrne, 2013; Hair et al., 2021; Kennedy, 2022) and indicating reliable measurement. To ensure suitability for SEM, we also evaluated skewness and kurtosis, which fell within recommended ranges (skewness: -.958 to -.452; kurtosis: -.268 to 2.446), supporting the normality assumption (Kong et al., 2024; Mardia, 1970; Mertler et al., 2021; Savalei & Bentler, 2005). These findings confirm that the data meet key psychometric and statistical assumptions necessary for structural modeling.
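As a sketch of how the reliability check works, Cronbach's alpha can be computed directly from an items matrix using its standard formula; the data below are simulated for illustration and are not the study's responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)
```

Constructs with alpha at or above the .70 threshold cited above would be treated as reliably measured.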

Table 2. The Variables’ Descriptive Statistics & Cronbach’s Alpha

No Variables Mean SD Kurtosis Skewness Cronbach’s Alpha
1. AI-based assessment Accuracy         .827
  AAA1 3.59 0.7858 0.533 -0.650  
  AAA2 3.64 0.7930 0.784 -0.741  
  AAA3 3.63 0.7483 0.873 -0.860  
2. Personalized recommendation         .834
  PRE1 4.01 0.6113 2.446 -0.719  
  PRE2 3.83 0.7405 0.969 -0.704  
  PRE3 3.85 0.7643 1.610 -0.958  
3. Adaptive Content Quality         .845
  ACQ1 3.87 0.6777 1.551 -0.806  
  ACQ2 3.88 0.7274 1.922 -0.896  
  ACQ3 3.97 0.7291 1.368 -0.770  
4. Intention to Adopt         .882
  ITA1 3.87 0.7338 1.007 -0.647  
  ITA2 3.79 0.7290 0.643 -0.602  
  ITA3 3.76 0.7331 1.026 -0.678  
  ITA4 3.63 0.8696 0.444 -0.634  
5. Trust & Reliance on AI         .848
  TIA1 3.55 0.8182 0.305 -0.570  
  TIA2 3.73 0.7149 1.139 -0.721  
  TIA3 3.46 0.8659 0.150 -0.452  
6. Perceived Usefulness         .874
  PU1 4.04 0.6545 1.139 -0.581  
  PU2 3.97 0.6832 0.794 -0.555  
  PU3 3.84 0.7377 1.189 -0.708  
7. Perceived Ease of Use         .877
  PEU1 3.93 0.6601 1.637 -0.713  
  PEU2 3.89 0.7066 0.641 -0.482  
  PEU3 3.86 0.7081 1.247 -0.686  
  PEU4 3.66 0.8122 0.450 -0.567  
8. Learning Engagement:         .791
  Communication          
  LEC1 3.84 0.7367 1.447 -0.748  
  LEC2 4.04 0.6657 1.335 -0.599  
  LEC3 4.05 0.6700 1.548 -0.681  
  LEC4 3.94 0.7300 0.555 -0.521  
9. Learning Engagement:         .825
  Motivation          
  LEM1 3.54 0.8658 0.353 -0.562  
  LEM2 3.68 0.7943 0.353 -0.562  
  LEM3 3.59 0.8357 0.163 -0.563  
  LEM4 4.12 0.7649 0.717 -0.767  
10. Learning Engagement:          
  Self-Efficacy          
  LESE1 3.89 0.7107 1.622 -0.851  
  LESE2 3.84 0.7127 1.042 -0.726  
  LESE3 3.76 0.7631 0.722 -0.646  
11. Social Presence         .859
  SP1 3.70 0.7541 1.813 -0.591  
  SP2 3.72 0.7550 1.202 -0.769  
  SP3 3.80 0.7483 1.241 -0.831  
  SP4 3.79 0.7815 0.736 -0.675  
12. Institutional Support         .883
  IS1 3.92 0.6560 2.417 -0.899  
  IS2 3.85 0.7319 1.498 -0.746  
  IS3 3.80 0.7013 1.802 -0.802  
  IS4 3.82 0.7328 1.689 -0.790  
13. STI AI-Adoption Outcomes         .871
  STIAO1 3.45 0.9615 -0.096 -0.706  
  STIAO2 3.46 0.9598 -0.268 -0.577  
  STIAO3 3.62 0.8702 0.610 -0.792  
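The Cronbach’s alpha values in Table 2 follow the standard formula α = k/(k−1) × (1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ the item variances, and σ²ₜ the variance of the summed score. A minimal sketch of the computation on simulated 5-point Likert responses (hypothetical data, not the study’s dataset):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 5-point Likert responses for a hypothetical 3-item construct (n = 480)
rng = np.random.default_rng(0)
latent = rng.normal(3.8, 0.6, size=(480, 1))
scores = np.clip(np.round(latent + rng.normal(0, 0.4, size=(480, 3))), 1, 5)
print(round(cronbach_alpha(scores), 3))
```

Because the three simulated items share a common latent score, the resulting alpha lands in the high range typical of the constructs in Table 2.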

Sampling Adequacy

The survey questionnaire underwent construct validity testing to assess its suitability for factor analysis. The Kaiser-Meyer-Olkin (KMO) value was 0.961, exceeding the 0.8 threshold and indicating excellent sampling adequacy (Kaiser, 1970). Additionally, Bartlett’s test of sphericity yielded a Chi-square value of 16,449.108 (df = 990, p < .001), confirming that the dataset contained significant correlations among variables and justifying the use of factor analysis. Furthermore, all factor loadings exceeded 0.5, further supporting the instrument’s construct validity.
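Both adequacy statistics can be derived from the item correlation matrix: the KMO index compares observed correlations with partial (anti-image) correlations, and Bartlett’s statistic is −(n − 1 − (2k + 5)/6)·ln|R| with df = k(k − 1)/2. A sketch on simulated single-factor data (illustrative values only, not the study’s 45-item dataset):

```python
import numpy as np

def kmo(data):
    """Kaiser-Meyer-Olkin measure: ratio of summed squared observed correlations
    to observed plus partial (anti-image) correlations, off-diagonals only."""
    r = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(r)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                     # partial correlation matrix
    np.fill_diagonal(r, 0.0)
    np.fill_diagonal(partial, 0.0)
    return (r ** 2).sum() / ((r ** 2).sum() + (partial ** 2).sum())

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: chi2 = -(n - 1 - (2k + 5)/6) * ln|R|,
    with df = k(k - 1)/2."""
    n, k = data.shape
    r = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * k + 5) / 6) * np.log(np.linalg.det(r))
    return chi2, k * (k - 1) // 2

# Simulated single-factor data: n = 480 respondents, six items
rng = np.random.default_rng(1)
data = rng.normal(size=(480, 1)) + rng.normal(scale=0.5, size=(480, 6))
chi2, df = bartlett_sphericity(data)
print(round(kmo(data), 3), round(chi2, 1), df)
```

Strongly factor-structured data like this yields a KMO well above the 0.8 benchmark and a large, significant Bartlett statistic, mirroring the pattern reported for the study’s instrument.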

Measurement Model

The measurement model of this study demonstrates strong psychometric properties, confirming the reliability and validity of the constructs. Factor loadings across all constructs exceeded the recommended threshold of .70 (Hair et al., 2021), ensuring robust indicator validity. Table 3 shows that loadings for the VA construct ranged from .708 to .836, indicating reliable measurement of this construct. Similarly, PU loadings ranged from .702 to .808, while PEU showed consistently high loadings between .788 and .826. The LE construct exhibited loadings between .727 and .840, reinforcing its alignment with the theoretical framework. The STI factor loadings ranged from .740 to .850, further supporting the reliability of its indicators.

Further, in Table 3, the composite reliability (CR) and the average variance extracted (AVE) values were computed to evaluate the internal validity of the constructs. The variable VA demonstrated excellent internal consistency with CR = .883. Similarly, PU had CR = .880 (α = .874), and PEU had CR = .881 (α = .877). The LE construct maintained a CR of .834, and the STI construct also confirmed its high reliability with CR = .843.

Convergent validity was assessed using AVE, ensuring that each construct explained more variance than measurement error (Fornell & Larcker, 1981). As shown in Table 3, all AVE values exceeded the recommended 0.50 threshold, ensuring theoretical coherence through convergent validity. The variables VA (AVE = .603), PU (AVE = .711), PEU (AVE = .649), LE (AVE = .627), and STI (AVE = .642) demonstrated that their respective indicators explained a substantial portion of the variance. Hence, the measurement model meets reliability and validity requirements, supporting its appropriateness for structural analysis. These findings confirm that the model is a valid tool for investigating the adoption of GenAI in STI within FLEs.
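The CR and AVE values in Table 3 follow the standard formulas CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = Σλ²/k, computed from the standardized loadings λ. Applied to the four PEU loadings from Table 3, they reproduce the tabled values:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each error variance is 1 - loading^2 for standardized loadings."""
    s = sum(loadings) ** 2
    e = sum(1 - l ** 2 for l in loadings)
    return s / (s + e)

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

peu = [0.799, 0.826, 0.809, 0.788]           # PEU loadings from Table 3
print(round(composite_reliability(peu), 3))  # → 0.881
print(round(ave(peu), 3))                    # → 0.649
```

Both outputs match the PEU row of Table 3 (CR = .881, AVE = .649), confirming the formulas behind the reported figures.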

Table 3. The Values for Factor Loadings, Composite Reliability, and Average Variance Extracted

No Variable Items Factor Loadings Composite Reliability Average Variance Extracted
1. Voluntary Adoption      
  AAA .751 .883 .603
  PRE .836    
  ACQ .791    
  ITA .791    
  TIA .708    
2. Perceived Usefulness      
  PU1 .745 .880 .711
  PU2 .808    
  PU3 .702    
3. Perceived Ease of Use      
  PEU1 .799 .881 .649
  PEU2 .826    
  PEU3 .809    
  PEU4 .788    
4. Learning Engagement      
  LEC .727 .834 .627
  LEM .840    
  LESE .804    
5. STI      
  SP .810 .843 .642
  IS .850    
  STIAO .740    

Discriminant Validity

The discriminant validity of the constructs was assessed using the Fornell-Larcker criterion, which requires that the square root of the AVE for each construct be greater than its largest correlation coefficient with any other construct in the model. This ensures that each construct is statistically distinct and captures unique variance rather than overlapping with other constructs. As depicted in Table 4, the diagonal entries represent the square root of the AVE for each construct, while the off-diagonal entries are the inter-construct correlations. Specifically, the square root of the AVE for VA is .777, greater than its highest inter-construct correlation (.767, with PEU), indicating that VA is distinct from the other constructs. Similarly, the square root of the AVE for PEU is .806, exceeding its correlation with LE (.791), confirming that these constructs measure separate dimensions. As shown, the STI construct has a square root of AVE of .801, which is greater than its correlation with PU (.797), reinforcing the discriminant validity of these constructs. Thus, the findings offer empirical evidence supporting discriminant validity, demonstrating that each construct uniquely represents a specific aspect of GenAI adoption and STI. Establishing discriminant validity is crucial to verify that the constructs capture distinct concepts, minimizing overlap and ensuring their theoretical and statistical independence within the model. This distinction strengthens the theoretical integrity of the measurement model (Hair et al., 2021).

Table 4. Discriminant Validity Matrix

  VA PU PEU Engage STI
VA .777        
PU .705 .843      
PEU .767 .835 .806    
Engage .689 .815 .791 .792  
STI .675 .797 .788 .711 .801
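The Fornell-Larcker check can be expressed programmatically: compare each construct’s √AVE (the matrix diagonal) with its largest off-diagonal correlation. A generic sketch with hypothetical values (not the study’s matrix):

```python
import numpy as np

def fornell_larcker_ok(sqrt_ave, corr):
    """True if every construct's sqrt(AVE) exceeds its largest correlation
    with any other construct (the diagonal of corr is ignored)."""
    c = corr.astype(float).copy()
    np.fill_diagonal(c, 0.0)
    return all(sqrt_ave[i] > c[i].max() for i in range(len(sqrt_ave)))

# Hypothetical two-construct example: sqrt(AVE) = .85 and .80, correlation .60
sqrt_ave = np.array([0.85, 0.80])
corr = np.array([[1.00, 0.60],
                 [0.60, 1.00]])
print(fornell_larcker_ok(sqrt_ave, corr))  # → True
```

The same function flags a violation whenever an inter-construct correlation exceeds a construct’s √AVE, which is the condition the criterion is designed to detect.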

Structural Model Evaluation

This study assessed the hypothesized structural model (i.e., Fig. 1) using SEM to examine the theoretical relationships between VA, PU, PEU, LE, and STI variables. Model fit was evaluated using multiple indices and aligned with established SEM benchmarks. Following the initial model estimation, the hypothesized path PU → LE (i.e., H3) was excluded during model respecification. According to Byrne (2013), model refinement is warranted when a path lacks empirical support, fails to contribute to model fit, and can be theoretically justified. In this case, the path PU → LE was statistically non-significant and exhibited low standardized estimates, suggesting it did not meaningfully explain variance in LE. Theoretically, this is consistent with findings in the literature suggesting that usefulness perceptions alone may not drive student engagement, particularly in autonomous learning contexts like voluntary GenAI adoption. After removing the path, the model was re-estimated, yielding improved fit indices as presented in Table 5 (e.g., CFI = 0.925; RMSEA = 0.045), validating the refined structure and supporting its theoretical parsimony. This finding is notable, as it challenges a standard TAM expectation that PU predicts LE. Instead, results indicate that students’ engagement with GenAI may depend more strongly on other factors, such as usability and relational support, than perceived utility alone. Conversely, the model retained and confirmed several significant paths. The relationship between PEU and LE (i.e., H4) was statistically significant, indicating that students who find GenAI tools accessible and manageable are more likely to demonstrate active learning involvement. Likewise, LE significantly predicted STI (i.e., H5), reinforcing that student engagement may enhance perceptions of instructional presence and communication in flexible learning settings.
The results underscore the strength of the revised model and emphasize the unique behavioral dynamics that emerge in student-initiated GenAI use within FLEs.

Table 5. Model Respecification Fit Summary

No. Fit Indices Threshold Obtained Value Source
1. CMIN/df 3 – 5 4.815 Byrne (2013)
2. GFI > 0.90 0.962 Byrne (2013); Hu and Bentler (1999)
3. CFI > 0.90 0.925 Byrne (2013)
4. TLI > 0.90 0.908 Byrne (2013)
5. SRMR < 0.08 0.041 Byrne (2013); Hu and Bentler (1999)
6. RMSEA < 0.08 0.045 Byrne (2013); Hu and Bentler (1999)

After confirming the model fit and refining the theory, this section reports the statistical outcomes of the retained hypotheses and explores notable patterns observed at the indicator level.

Path Analysis and Hypothesis Testing

This study assessed the direct theoretical relationships among the constructs using SEM-based path analysis. Based on the respecified structural model, it examined the effects of the variables VA, PU, PEU, LE, and STI. The study evaluated standardized path coefficients (β) and corresponding significance values (p-values) to determine each retained hypothesis’s strength and statistical relevance. A β closer to 1.0 indicates a more substantial predictive influence, while a p-value below .05 suggests statistical significance. Figure 2 presents the path diagram, and Table 6 summarizes the path coefficients, p-values, and hypothesis outcomes. The model supports all retained hypotheses, with one notable exception: the hypothesized path PU → LE (i.e., H3) was excluded during model respecification. Interestingly, despite the exclusion of H3 at the latent level, further examination revealed that one specific observed indicator of PU (“AI technology helps me do my school tasks faster and more efficiently.”) significantly correlated with an observed LE component (“AI tools enable personalized learning for enhanced understanding”). While these item-level interactions were not modeled directly, they highlight the possibility of targeted effects between specific aspects of PU and LE. Future studies may benefit from modeling indicator-level interactions, exploring cross-loading structures, or estimating indirect effects to deepen our understanding of how GenAI features influence student engagement within FLEs.
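For readers wishing to replicate the analysis, the respecified model can be written in the lavaan-style syntax accepted by SEM tools such as R’s lavaan or Python’s semopy. The construct and indicator labels below come from Tables 2-3; this is an illustrative sketch, not the author’s actual analysis script:

```python
# Lavaan-style specification of the respecified structural model.
# "=~" defines a latent construct by its indicators; "~" defines a
# structural (regression) path.
MODEL_DESC = """
# measurement model
VA =~ AAA + PRE + ACQ + ITA + TIA
PU =~ PU1 + PU2 + PU3
PEU =~ PEU1 + PEU2 + PEU3 + PEU4
LE =~ LEC + LEM + LESE
STI =~ SP + IS + STIAO
# structural model; the PU -> LE path (H3) is omitted per respecification
PU ~ VA
PEU ~ VA
LE ~ PEU
STI ~ LE
"""

# Typical usage (requires the semopy package; df holds respondent-level scores):
#   model = semopy.Model(MODEL_DESC)
#   model.fit(df)
#   print(model.inspect())  # path estimates, standard errors, p-values
```

Spelling the model out this way makes the respecification explicit: the measurement block mirrors Table 3, and the structural block contains exactly the four retained paths of Table 6.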

Table 6. Path Coefficients, Significance Level

No. Hypothesis Path β p-value Result
1. H1 VA→PU 0.831 .000 Supported
2. H2 VA→PEU 0.985 .000 Supported
3. H3 PU→Engage — — Not supported (path removed during respecification)
4. H4 PEU→Engage 0.953 .000 Supported
5. H5 Engage→STI 0.951 .000 Supported

Figure 2. Path Diagram and the Structural Model

Discussion

This research examined the synergistic relationships among VA of GenAI, PU, PEU, LE, and STI within FLEs. The findings generally aligned with TAM but also revealed theoretically significant deviations that offer new directions for understanding GenAI integration in autonomous learning settings (e.g. Al-Momani & Ramayah, 2024). One of the most theoretically significant findings was the absence of a direct relationship between PU and LE (i.e., H3) in the final structural model. This unexpected result diverges from prior TAM-based studies that position usefulness as a central predictor of engagement or behavioral intention (e.g., Bancoro, 2024; Gunness et al., 2023; Kelly et al., 2023). In this study, students’ perceptions that GenAI is useful did not necessarily translate into increased learning engagement when other contextual and relational factors were considered. This suggests that utility alone may be insufficient to sustain engagement in learner-initiated contexts, especially when instructional structure or intention is limited. By contrast, the expected relationships were well-supported. The positive association between PEU and LE (i.e., H4) reaffirms the importance of interface simplicity and tool accessibility in sustaining student motivation and focus. Students who can easily navigate GenAI tools are more likely to persist, self-regulate, and experiment meaningfully. Similarly, the strong relationship between LE and STI (i.e., H5) (e.g. Kustova et al., 2025; Seo et al., 2021) aligns with theories of social constructivism and the Community of Inquiry framework, suggesting that students who are more engaged also perceive richer instructional interactions—even when AI tools are involved.

An insightful nuance emerged at the indicator level: one observed PU item (“AI helps me do school tasks faster”) was significantly related to an observed LE indicator (“AI enables personalized learning”). Although the latent path was not supported, this micro-level relationship suggests that students may value GenAI for specific, targeted benefits, even if these do not shape holistic engagement across all domains (e.g., emotional, behavioral, cognitive) (Bognár & Khine, 2025). This supports Bond and Bergdahl’s (2022) claim that engagement is a multidimensional construct requiring differentiated support across domains.

The results also provide perspective on prior claims about self-regulation in AI-enhanced learning. While self-directed learning is undoubtedly part of FLEs, this study avoids overstating its role. Instead of generalizing AI as a blanket enhancer of self-regulation, the findings point to usability (PEU) and instructional interaction (STI) as stronger drivers of engagement. Though GenAI tools offer personalized feedback and scaffolding, their impact may be constrained by students’ prior experience, digital literacy, or instructional context—potentially acting as moderators that warrant future investigation.

The implications for practice are notable. Rather than assuming that all GenAI tools promote deep engagement, educators should emphasize ease of integration, guided use, and relational support. Designing learning experiences with intuitive and socially embedded GenAI improves engagement and maintains a human instructional presence. At the same time, instructional designers and policymakers should focus on supporting adaptive learning without replacing interaction, especially in FLEs.

Finally, the findings encourage new avenues of research. Future work should explore indicator-level interactions, indirect effects, or moderated mediation models, given the nuanced item-level patterns observed. Longitudinal studies may also clarify how students’ perceptions of usefulness evolve over time and whether these perceptions eventually contribute to sustained engagement.

Conclusion

This study examined how students’ voluntary adoption of GenAI tools influences their perception of usefulness and ease of use and how these perceptions shape their learning engagement and student-teacher interaction within flexible learning environments. The findings confirmed key TAM assumptions, particularly the role of PEU in supporting LE. However, the findings indicate no discernible relationship between PU and LE. This result offers a novel insight that challenges conventional TAM interpretations. The divergence underscores the complexity of engagement in AI-enhanced learning and suggests that utility alone may not suffice in autonomous learning environments. In contrast, the robust relationship between LE and STI highlights the enduring importance of teacher presence, even as students independently engage with GenAI.

These findings point to a shared responsibility among educators, researchers, and policymakers to cultivate learning environments where GenAI enhances, rather than replaces, meaningful human interaction. Teachers play a crucial role in modeling intentional, ethical GenAI use; researchers must continue to investigate its evolving pedagogical dynamics, and policymakers should frame guidelines that balance innovation with educational equity.

This research adds to the expanding literature on GenAI tools in education by presenting a structural model that captures the synergy between voluntary adoption of GenAI, core TAM constructs, learning engagement, and relational outcomes. Unlike prior studies that focus on top-down, institution-driven adoption, this work foregrounds student agency and autonomy as central to effective GenAI integration. By empirically validating the importance of teacher presence in GenAI-enhanced learning, this study advances both theoretical understanding and practical design.

Future initiatives in AI and education should prioritize human-centered design, context-sensitive pedagogies, and inclusive policy frameworks. Doing so will ensure that GenAI becomes not merely a powerful tool but a meaningful ally in cultivating responsive, engaging, and equitable learning experiences.

Recommendations

Building on the findings of this study, future research should further investigate the nuanced dynamics of GenAI adoption, learning engagement, and student-teacher interaction in flexible learning environments. One critical area of exploration involves the observed indicator-level effect between perceived usefulness (i.e., “AI helps me do tasks faster”) and institutional support. To capture these micro-level effects more effectively, researchers are encouraged to apply refined SEM techniques or alternative analytical methods capable of testing indicator-to-indicator pathways. This may reveal whether similar patterns emerge across varied learning contexts and student groups.

Future models should also consider incorporating additional constructs that may influence GenAI adoption and engagement outcomes. In particular, digital literacy could moderate or mediate how PEU translates into meaningful learning behaviors. Similarly, trust in AI systems may shape whether students perceive GenAI tools as reliable and safe for academic use. Self-regulation—a core tenet of learner autonomy—may also function as a critical driver or mediator of LE in AI-enhanced settings. Explicitly modeling these variables can enrich theoretical frameworks like TAM and provide a more context-sensitive understanding of how GenAI functions in real-world classrooms.

Finally, longitudinal research must confirm and track changes in students’ perceptions and behaviors over time and establish clearer causal pathways between technology adoption, engagement, and instructional interaction. Employing such research designs can inform the development of more sustainable, personalized, and human-centered GenAI learning platforms.

Limitations

This study offers valuable insights into how students choose to use GenAI in flexible learning contexts, but several limitations must be acknowledged. First, the cross-sectional design captures only a snapshot in time, limiting the ability to draw causal conclusions; future longitudinal studies could better illuminate how students’ perceptions and behaviors around GenAI evolve. Second, the study relied on self-reported responses, which may introduce response bias; other data collection methods, such as mixed-methods designs, could be used to strengthen data validity. Third, although the sample size was statistically adequate, generalizability remains limited, as participants were drawn from only three SUCs. Including a more diverse sample across regions, institutions, and academic programs would strengthen the applicability of the findings. Finally, the study revealed an unexpected indicator-level relationship between a perceived usefulness item and an institutional support item (PU → IS). However, this interaction was not part of the original hypothesized model. Hence, refined structural modeling techniques are needed to examine indicator-level dynamics and potential moderating effects.

While limitations exist, this research provides a springboard for future studies and teaching innovations that can harness GenAI’s potential in learner-driven contexts.

Ethics Statements

This study underwent rigorous review and approval by the Bukidnon State University Research Ethics Committee (REC) under the protocol document code 2023-041-TULANG-TSV. The research adhered to the highest ethical standards, guided by the principles of respect, beneficence, non-maleficence, and autonomy.

This study complied with privacy laws, including the Data Privacy Act (2012), and international ethical standards. The researcher provided the participants with the consent form and fully informed them about the study’s objectives, data collection procedures, and their right to withdraw at any stage without consequences. Strict anonymity and confidentiality were upheld, and no identifiable information was collected.

The REC review process provided additional oversight, reinforcing adherence to ethical standards and research practices that uphold the trust and integrity of its participants and contribute to the broader academic community’s ethical research standards.

Acknowledgements

The author extends deep gratitude and appreciation to the administration of Bukidnon State University for funding this research. Special thanks go to the students who participated in the study, whose valuable insights contribute to advancing knowledge on integrating GenAI in education.

The author also acknowledges the Center for Educational Analytics as the study’s funding source and expresses appreciation to the University Statistical Center for thoroughly reviewing the data analysis procedures.

Finally, I would like to express my sincere appreciation to family, friends, colleagues, and research mentors for their unwavering support, guidance, and encouragement throughout this study.

Conflict of Interest

The author declares no conflict of interest related to this study.

Funding

This study was funded by Bukidnon State University through the Center for Educational Analytics. It was approved on December 5, 2023, based on the Joint Memorandum Circular 2013 of the Department of Budget and Management, Department of Science and Technology, and institutional guidelines and policies for research funds.

Generative AI Statement

As the author of this work, I used the AI tool Grammarly for writing assistance, especially for basic grammar, paraphrasing, and spelling checks. The manuscript underwent thorough review, verification, and refinement to ensure accuracy and integrity. The author solely developed all ideas, concepts, and arguments. I take full responsibility for all the content of this published work.

References

Abdalla, A. A., Bhat, M. A., Tiwari, C. K., Khan, S. T., & Wedajo, A. D. (2024). Exploring ChatGPT adoption among business and management students through the lens of diffusion of innovation theory. Computers and Education: Artificial Intelligence, 7, Article 100257. https://doi.org/10.1016/j.caeai.2024.100257

Akanzire, B. N., Nyaaba, M., & Nabang, M. (2025). Generative AI in teacher education: Teacher educators’ perception and preparedness. Journal of Digital Educational Technology, 5(1), Article ep2508. https://doi.org/10.30935/jdet/15887

Akgun, S., & Greenhow, C. (2022). Artificial intelligence in education: Addressing ethical challenges in K-12 settings. AI and Ethics, 2, 431-440. https://doi.org/10.1007/s43681-021-00096-7

Akyürek, E. (2019). Impacts of using technology on teacher-student communication/interaction: Improve students learning. World Journal of Education, 9(4), 30-40. https://doi.org/10.5430/wje.v9n4p30

Alhumaid, K., Al Naqbi, S., Elsori, D., & Al Mansoori, M. (2023). The adoption of artificial intelligence applications in education. International Journal of Data and Network Science, 7(1), 457-466. https://doi.org/10.5267/j.ijdns.2022.8.013

Al-Momani, A. M., & Ramayah, T. (2024). Adoption of artificial intelligence in education: A systematic literature review. In M. A. Al-Sahafi, M. Al-Emran, G. W.-H., & K.-B. Ooi (Eds.), Current and Future Trends on Intelligent Technology Adoption (Vol. 2, pp. 117-135). Springer. https://doi.org/10.1007/978-3-031-61463-7_7

Arbuckle, J. L. (2019). IBM® SPSS® Amos™26 user’s guide. IBM. https://bit.ly/44TAWmx

Bancoro, J. C. (2024). Exploring the influence of perceived usefulness and perceived ease of use on technology engagement of business administration instructors. International Journal of Asian Business and Management, 3(2), 149-168. https://doi.org/10.55927/ijabm.v3i2.8714

Bandura, A. (1977). Social learning theory. Prentice-Hall.  

Batista, J., Mesquita, A., & Carnaz, G. (2024). Generative AI and higher education: Trends, challenges, and future directions from a systematic literature review. Information, 15(11), Article 676. https://doi.org/10.3390/info15110676

Bhatt, P., & Muduli, A. (2023). Artificial intelligence in learning and development: A systematic literature review. European Journal of Training and Development, 47(7/8), 677-694. https://doi.org/10.1108/EJTD-09-2021-0143

Boelens, R., De Wever, B., & Voet, M. (2017). Four key challenges to the design of blended learning: A systematic literature review. Educational Research Review, 22, 1-18. https://doi.org/10.1016/j.edurev.2017.06.001

Bognár, L., & Khine, M. S. (2025). The shifting landscape of student engagement: A pre-post semester analysis in AI-enhanced classrooms. Computers and Education: Artificial Intelligence, 8, Article 100395. https://doi.org/10.1016/j.caeai.2025.100395

Bolliger, D. U., & Martin, F. (2018). Instructor and student perceptions of online student engagement strategies. Distance Education, 39(4), 568-583. https://doi.org/10.1080/01587919.2018.1520041

Bond, M., & Bergdahl, N. (2022). Student engagement in open, distance, and digital education. In O. Zawacki-Richter, & I. Jung (Eds.), Handbook of open, distance and digital education (pp. 1-16). Springer. https://doi.org/10.1007/978-981-19-0351-9_79-1

Breukelman, M., Gosen, M. N., Koole, T., & van de Pol, J. (2023). The workings of multiple principles in student-teacher interactions: Orientations to both mundane interaction and pedagogical context. Linguistics and Education, 76, Article 101188. https://doi.org/10.1016/j.linged.2023.101188

Byrne, B. M. (2013). Structural equation modeling with AMOS. Psychology Press. https://doi.org/10.4324/9781410600219

Chai, C. S., Lin, P.-Y., Jong, M. S.-Y., Dai, Y., Chiu, T. K. F., & Qin, J. (2021). Perceptions and behavioral intentions towards learning artificial intelligence in primary school students. Educational Technology and Society, 24(3), 89-101. https://www.jstor.org/stable/27032858

Chan, C. K. Y., & Tsi, L. H. Y. (2024). Will generative AI replace teachers in higher education? A study of teacher and student perceptions. Studies in Educational Evaluation, 83, Article 101395. https://doi.org/10.1016/j.stueduc.2024.101395

Chang, W.-L., & Sun, J. C.-Y. (2024). Evaluating AI's impact on self-regulated language learning: A systematic review. System, 126, Article 103484. https://doi.org/10.1016/j.system.2024.103484

Chatterjee, S., Rana, N. P., Dwivedi, Y. K., & Baabdullah, A. M. (2021). Understanding AI adoption in manufacturing and production firms using an integrated TAM-TOE model. Technological Forecasting and Social Change, 170, Article 120880. https://doi.org/10.1016/j.techfore.2021.120880

Collis, B., & Moonen, J. (2002). Flexible learning in a digital world. Open Learning: The Journal of Open, Distance and e-Learning, 17(3), 217-230. https://doi.org/10.1080/0268051022000048228

Data Privacy Act, 10173. (2012). https://privacy.gov.ph/data-privacy-act/

Davis, F. D. (1989). Perceived usefulness, perceived ease of use and user acceptance of information technology. Management Information System Quarterly, 13(3), 319-340. https://doi.org/10.2307/249008  

Diwan, C., Srinivasa, S., Suri, G., Agarwal, S., & Ram, P. (2023). AI-based learning content generation and learning pathway augmentation to increase learner engagement. Computers and Education: Artificial Intelligence, 4, Article 100110. https://doi.org/10.1016/j.caeai.2022.100110

Eager, B., & Brunton, R. (2023). Prompting higher education towards AI-augmented teaching and learning practice. Journal of University Teaching and Learning Practice, 20(5), Article 02. https://doi.org/10.53761/1.20.5.02

Fearnley, M. R., & Amora, J. T. (2020). Learning management system adoption in higher education using the extended technology acceptance model. IAFOR Journal of Education: Technology in Education, 8(2), 89-106. https://doi.org/10.22492/ije.8.2.05

Fornell, C., & Larcker, D. F. (1981). Structural equation models with unobservable variables and measurement error: Algebra and statistics. Journal of Marketing Research, 18(3), 382-388. https://doi.org/10.1177/002224378101800313

Garrison, D. R. (2009). Communities of inquiry in online learning. In P. Rogers, G. Berg, J. Boettcher, C. Howard, L. Justice, & K. Scheck (Eds.), Encyclopedia of distance learning (2nd ed., pp. 352-355). IGI Global. https://doi.org/10.4018/978-1-60566-198-8.ch052

Gumbs, A. A., Perretta, S., d’Allemagne, B., & Chouillard, E. (2021). What is artificial intelligence surgery? Artificial Intelligence Surgery, 1, 1-10. https://doi.org/10.20517/ais.2021.01

Gunness, A., Matanda, M. J., & Rajaguru, R. (2023). Effect of student responsiveness to instructional innovation on student engagement in semi-synchronous online learning environments: The mediating role of personal technological innovativeness and perceived usefulness. Computers and Education, 205, Article 104884. https://doi.org/10.1016/j.compedu.2023.104884

Hair, J. F., Jr., Hult, G. T. M., Ringle, C. M., Sarstedt, M., Danks, N. P., & Ray, S. (2021). Partial least squares structural equation modeling (PLS-SEM) using R. Springer. https://doi.org/10.1007/978-3-030-80519-7

Hanham, J., Lee, C. B., & Teo, T. (2021). The influence of technology acceptance, academic self-efficacy, and gender on academic achievement through online tutoring. Computers and Education, 172, Article 104252. https://doi.org/10.1016/j.compedu.2021.104252

Henrie, C. R., Bodily, R., Manwaring, K. C., & Graham, C. R. (2015). Exploring intensive longitudinal measures of student engagement in blended learning. The International Review of Research in Open and Distributed Learning, 16(3), 131-151. https://doi.org/10.19173/irrodl.v16i3.2015

Herlambang, A. D., & Rachmadi, A. (2024). Student's perception of technology-rich classrooms usage to support conceptual and procedural knowledge delivery in higher education computer science course. Procedia Computer Science, 234, 1500-1509. https://doi.org/10.1016/j.procs.2024.03.151

Holstein, K., & Aleven, V. (2022). Designing for human-AI complementarity in K-12 education. AI Magazine, 43(2), 239-248. https://doi.org/10.1002/aaai.12058

Holstein, K., Hong, G., Tegene, M., McLaren, B. M., & Aleven, V. (2018). The classroom as a dashboard: Co-designing wearable cognitive augmentation for K-12 teachers. In Proceedings of the 8th International Conference on Learning Analytics and Knowledge (LAK ’18) (pp. 79-88). Association for Computing Machinery. https://doi.org/10.1145/3170358.3170377

How flexible is flexible learning, who is to decide and what are its implications? [Editorial]. (2017). Distance Education, 38(3), 269-272. https://doi.org/10.1080/01587919.2017.1371831

Hu, L.-T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1-55. https://doi.org/10.1080/10705519909540118

Huang, J., & Mizumoto, A. (2024). Examining the relationship between the L2 motivational self-system and technology acceptance model post ChatGPT introduction and utilization. Computers and Education: Artificial Intelligence, 7, Article 100302. https://doi.org/10.1016/j.caeai.2024.100302

Jose, E. M. K., Prasanna, A., Kushwaha, B. P., & Das, M. (2024). Can generative AI motivate management students? The role of perceived value and information literacy. The International Journal of Management Education, 22(3), Article 101082. https://doi.org/10.1016/j.ijme.2024.101082

Kahu, E. R., & Nelson, K. (2018). Student engagement in the educational interface: Understanding the mechanisms of student success. Higher Education Research and Development, 37(1), 58-71. https://doi.org/10.1080/07294360.2017.1344197

Kaiser, H. F. (1970). A second generation little jiffy. Psychometrika, 35(4), 401-415. https://doi.org/10.1007/BF02291817

Katona, J., & Gyonyoru, K. I. K. (2025). Integrating AI-based adaptive learning into flipped classroom model to enhance engagement and learning outcomes. Computers and Education: Artificial Intelligence, 8, Article 100392. https://doi.org/10.1016/j.caeai.2025.100392

Kelly, S., Kaye, S.-A., & Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics, 77, Article 101925. https://doi.org/10.1016/j.tele.2022.101925

Kennedy, I. (2022). Sample size determination in test-retest and Cronbach alpha reliability estimates. British Journal of Contemporary Education, 2(1), 17-29. https://doi.org/10.52589/BJCE-FY266HK9

Kong, S. C., Yang, Y., & Hou, C. (2024). Examining teachers’ behavioral intention of using generative artificial intelligence tools for teaching and learning based on the extended technology acceptance model. Computers and Education: Artificial Intelligence, 7, Article 100328. https://doi.org/10.1016/j.caeai.2024.100328

Kourkouli, K. (2024). Unlocking in-depth forum discussion and perceived effectiveness: Teaching and social presence categories in online teacher communities. Teaching and Teacher Education, 146, Article 104630. https://doi.org/10.1016/j.tate.2024.104630

Kreijns, K., Xu, K., & Weidlich, J. (2022). Social presence: Conceptualization and measurement. Educational Psychology Review, 34, 139-170. https://doi.org/10.1007/s10648-021-09623-8

Kustova, T., Vodneva, A., Tcepelevich, M., Tkachenko, I., Oreshina, G., Zhukova, M. A., Golovanova, I., & Grigorenko, E. L. (2025). Psychophysiological correlates of learner-instructor interaction: A scoping review. International Journal of Psychophysiology, 211, Article 112556. https://doi.org/10.1016/j.ijpsycho.2025.112556

Lee, D., Arnold, M., Srivastava, A., Plastow, K., Strelan, P., Ploeckl, F., Lekkas, D., & Palmer, E. (2024). The impact of generative AI on higher education learning and teaching: A study of educators’ perspectives. Computers and Education: Artificial Intelligence, 6, Article 100221. https://doi.org/10.1016/j.caeai.2024.100221

Li, L., & Yang, S. (2021). Exploring the influence of teacher-student interaction on university students’ self-efficacy in the flipped classroom. Journal of Education and Learning, 10(2), 84-90. https://doi.org/10.5539/jel.v10n2p84

Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. International Journal of Management Education, 21(2), Article 100790. https://doi.org/10.1016/j.ijme.2023.100790

Lyu, W., & Salam, Z. A. (2025). AI-powered personalized learning: Enhancing self-efficacy, motivation, and digital literacy in adult education through expectancy-value theory. Learning and Motivation, 90, Article 102129. https://doi.org/10.1016/j.lmot.2025.102129

Mardia, K. V. (1970). Measures of multivariate skewness and kurtosis with applications. Biometrika, 57(3), 519-530. https://doi.org/10.1093/biomet/57.3.519

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence. AI Magazine, 27(4), 12-14. https://doi.org/10.1609/aimag.v27i4.1904

Menzli, L. J., Smirani, L. K., Boulahia, J. A., & Hadjouni, M. (2022). Investigation of open educational resources adoption in higher education using Rogers’ diffusion of innovation theory. Heliyon, 8(7), Article e09885. https://doi.org/10.1016/j.heliyon.2022.e09885

Mertler, C. A., Vannatta, R. A., & LaVenia, K. N. (2021). Advanced and multivariate statistical methods: Practical application and interpretation (7th ed.). Routledge. https://doi.org/10.4324/9781003047223

Miao, F., Holmes, W., Huang, R., & Zhang, H. (2021). AI and education: Guidance for policy-makers. United Nations Educational, Scientific and Cultural Organization. https://doi.org/10.54675/PCSP7350

Moore, M. G. (1989). Editorial: Three types of interaction. American Journal of Distance Education, 3(2), 1-7. https://doi.org/10.1080/08923648909526659

Moravec, V., Hynek, V., Gavurova, B., & Rigelsky, M. (2024). Who uses it and for what purpose? The role of digital literacy in ChatGPT adoption and utilization. Journal of Innovation and Knowledge, 9(4), Article 100602. https://doi.org/10.1016/j.jik.2024.100602

Müller, C., & Mildenberger, T. (2021). Facilitating flexible learning by replacing classroom time with an online learning environment: A systematic review of blended learning in higher education. Educational Research Review, 34, Article 100394. https://doi.org/10.1016/j.edurev.2021.100394

Neo, M., Lee, C. P., Tan, H. Y.-J., Neo, T. K., Tan, Y. X., Mahendru, N., & Ismat, Z. (2022). Enhancing students’ online learning experiences with artificial intelligence (AI): The MERLIN project. International Journal of Technology, 13(5), 1023-1034. https://doi.org/10.14716/ijtech.v13i5.5843

Neuhüttler, J., Fischer, R., Ganz, W., & Urmetzer, F. (2020). Perceived quality of artificial intelligence in smart service systems: A structured approach. In M. Shepperd, F. Brito e Abreu, A. R. da Silva, & R. Pérez-Castillo (Eds.), Quality of information and communications technology (pp. 3-16). Springer. https://doi.org/10.1007/978-3-030-58793-2_1

Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., & Nguyen, B.-P. T. (2023). Ethical principles for artificial intelligence in education. Education and Information Technologies28, 4221-4241. https://doi.org/10.1007/s10639-022-11316-w

Noor, N., Rao Hill, S. R., & Troshani, I. (2021). Recasting service quality for AI-based service. Australasian Marketing Journal, 30(4), 297-312. https://doi.org/10.1177/18393349211005056

Okulich-Kazarin, V., Artyukhov, A., Skowron, Ł., Artyukhova, N., Dluhopolskyi, O., & Cwynar, W. (2024). Sustainability of higher education: Study of student opinions about the possibility of replacing teachers with AI technologies. Sustainability, 16(1), Article 55. https://doi.org/10.3390/su16010055

Ortiz-López, A., Olmos-Migueláñez, S., & Sánchez-Prieto, J. C. (2025). Mobile-based assessment acceptance: A systematic literature review in the educational context. International Journal of Educational Research, 130, Article 102551. https://doi.org/10.1016/j.ijer.2025.102551

Pang, C., Wang, Z. C., McGrenere, J., Leung, R., Dai, J., & Moffatt, K. (2021). Technology adoption and learning preferences for older adults: Evolving perceptions, ongoing challenges, and emerging design opportunities. In CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Article 490). Association for Computing Machinery. https://doi.org/10.1145/3411764.3445702

Patnaik, P., & Bakkar, M. (2024). Exploring determinants influencing artificial intelligence adoption, reference to diffusion of innovation theory. Technology in Society, 79, Article 102750. https://doi.org/10.1016/j.techsoc.2024.102750

Pesovski, I., Santos, R., Henriques, R., & Trajkovik, V. (2024). Generative AI for customizable learning experiences. Sustainability, 16(7), Article 3034. https://doi.org/10.3390/su16073034

Raj, N. S., & Renumol, V. G. (2022). A systematic literature review on adaptive content recommenders in personalized learning environments from 2015 to 2020. Journal of Computers in Education, 9, 113-148. https://doi.org/10.1007/s40692-021-00199-4

Rangel-de Lázaro, G., & Duart, J. M. (2023). You can handle, you can teach it: Systematic review on the use of extended reality and artificial intelligence technologies for online higher education. Sustainability, 15(4), Article 3507. https://doi.org/10.3390/su15043507

Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.

Savalei, V., & Bentler, P. M. (2005). A statistically justified pairwise ML method for incomplete nonnormal data: A comparison with direct ML and pairwise ADF. Structural Equation Modeling, 12(2), 183-214. https://doi.org/10.1207/s15328007sem1202_1

Seo, K., Tang, J., Roll, I., Fels, S., & Yoon, D. (2021). The impact of artificial intelligence on learner-instructor interaction in online learning. International Journal of Educational Technology in Higher Education, 18, Article 54. https://doi.org/10.1186/s41239-021-00292-9

Shah, B. A., Zala, L. B., & Desai, N. A. (2023). Structural equation modelling for segmentation analysis of latent variables responsible for environment-friendly feeder mode choice. International Journal of Transportation Science and Technology, 12(1), 173-186. https://doi.org/10.1016/j.ijtst.2022.01.003

Sharma, P., & Harkishan, M. (2022). Designing an intelligent tutoring system for computer programing in the Pacific. Education and Information Technologies, 27, 6197-6209. https://doi.org/10.1007/s10639-021-10882-9

Shirkhani, S., & Ghaemi, F. (2011). Barriers to self-regulation of language learning: Drawing on Bandura's ideas. Procedia - Social and Behavioral Sciences, 29, 107-110. https://doi.org/10.1016/j.sbspro.2011.11.213

Simms, R. C. (2025). Generative artificial intelligence (AI) literacy in nursing education: A crucial call to action. Nurse Education Today, 146, Article 106544. https://doi.org/10.1016/j.nedt.2024.106544

Sun, H.-L., Sun, T., Sha, F.-Y., Gu, X.-Y., Hou, X.-R., Zhu, F.-Y., & Fang, P.-T. (2022). The influence of teacher-student interaction on the effects of online learning: Based on a serial mediating model. Frontiers in Psychology, 13, Article 779217. https://doi.org/10.3389/fpsyg.2022.779217

Tawil, S., & Miao, F. (2024). Steering the digital transformation of education: UNESCO’s human-centered approach. Frontiers in Digital Education, 1, 51-58. https://doi.org/10.1007/s44366-024-0020-0

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press. https://doi.org/10.2307/j.ctvjf9vz4

Wang, H., Dang, A., Wu, Z., & Mac, S. (2024). Generative AI in higher education: Seeing ChatGPT through universities' policies, resources, and guidelines. Computers and Education: Artificial Intelligence, 7, Article 100326. https://doi.org/10.1016/j.caeai.2024.100326

Watson, J., Valsesia, F., & Segal, S. (2024). Assessing AI receptivity through a persuasion knowledge lens. Current Opinion in Psychology, 58, Article 101834. https://doi.org/10.1016/j.copsyc.2024.101834

Wong, W. K. O. (2024). The sudden disruptive rise of generative artificial intelligence? An evaluation of their impact on higher education and the global workplace. Journal of Open Innovation: Technology, Market, and Complexity, 10(2), Article 100278. https://doi.org/10.1016/j.joitmc.2024.100278

Yang, H. (2025). Harnessing generative AI: Exploring its impact on cognitive engagement, emotional engagement, learning retention, reward sensitivity, and motivation through reinforcement theory. Learning and Motivation, 90, Article 102136. https://doi.org/10.1016/j.lmot.2025.102136

...