Introduction
Picture a flexible learning environment (FLE) where every student can access a personalized generative artificial intelligence (GenAI) tutor, an always-available assistant providing continuous academic support. In this environment, learning is easy and intuitive, supported by a digital platform whose materials adapt to each student's specific requirements. This scenario is no longer science fiction. It reflects a fast-growing shift in education fueled by advances in artificial intelligence (AI).
AI's conceptual roots go back to Alan Turing in the 1950s and the Dartmouth Conference of 1956, which formally introduced the field. Since then, AI has matured into a powerful force transforming industries, including education (Gumbs et al., 2021; McCarthy et al., 2006). Studies show that students today often display higher proficiency and more positive attitudes toward learning when supported by AI (Chai et al., 2021). Within flexible learning settings, GenAI tools show promise in fostering adaptability and enhancing the overall student experience (Bhatt & Muduli, 2023; Holstein & Aleven, 2022; Lim et al., 2023; Müller & Mildenberger, 2021).
However, ongoing ethical concerns accompany these opportunities. Questions around academic integrity, data privacy, and the appropriate role of technology in learning remain front and center (e.g., Eager & Brunton, 2023; Nguyen et al., 2023; Okulich-Kazarin et al., 2024; Wang et al., 2024; Wong, 2024). As GenAI tools become more integrated into classrooms, researchers and teachers continue to explore how these technologies affect pedagogy, especially in terms of teaching roles, human interaction, and learning engagement (LE) (Akgun & Greenhow, 2022; Batista et al., 2024; Pesovski et al., 2024; Raj & Renumol, 2022).
A critical question is emerging: How do students adopt GenAI in proactive, intentional, and personal ways? Students increasingly use these tools independently—outside formal instruction—to meet learning needs. This signals a shift toward voluntary adoption (VA) of AI, where the learner, not the institution, initiates the use of technology. While this trend empowers students, it raises concerns: Could GenAI replace teachers? What happens to the student-teacher connection in classrooms where AI is heavily used (Chan & Tsi, 2024)?
The role of the teacher remains central in FLEs. Although GenAI can assist with tasks and automate certain forms of feedback, it lacks the human capacities for emotional intelligence, creative guidance, and contextual understanding—qualities that define effective teaching (Chan & Tsi, 2024). Some studies have flagged tensions between the rise of automation and the value students place on human interaction, especially in environments that depend on collaboration and support (Sun et al., 2022).
Several well-established frameworks guide instruction in online and flexible learning spaces. The Community of Inquiry (CoI) model of Garrison (2009) emphasizes three core dimensions: teaching presence, cognitive presence, and social presence. Building on this, scholars such as Kourkouli (2024) and Kreijns et al. (2022) highlight the importance of sustained interaction, purposeful communication, and authentic student-teacher relationships. The author of "How flexible is flexible learning" (2017) emphasized that flexible learning is not just about online delivery or distance education; it is anchored in a student-centered paradigm that must be implemented contextually and thoughtfully. He further argued that flexible learning is a value-oriented principle grounded in learner autonomy, equity, and responsiveness to context, and that its effectiveness depends on how well it is adapted across dimensions such as engagement, assessment, and institutional support, particularly in diverse and resource-limited educational settings.
These models suggest that learning is most effective when student-teacher interaction (STI) remains strong, even in tech-mediated environments. Yet, we still know little about how STI evolves when students adopt GenAI tools independently. While researchers have explored teacher adoption of AI, few have examined the reverse: how student-initiated use of GenAI affects the dynamics of communication, support, and engagement in learning (Li & Yang, 2021). As Breukelman et al. (2023) emphasize, STI is not merely transactional; it involves deeper pedagogical relationships that shape students' learning.
The present study seeks to advance understanding of how students' self-initiated use of GenAI informs their perceptions of instructional interaction and LE within FLEs. Informed by the framework of Seo et al. (2021), it investigates how GenAI tools may influence key dimensions of learning, specifically communication, social presence, and the quality of instructional support (Lee et al., 2024). At the same time, it builds on studies showing how content quality and system adaptability shape students' expectations of AI tools (Neuhüttler et al., 2020; Noor et al., 2021).
This study builds upon the foundations of the Technology Acceptance Model (TAM) and recent research on GenAI in education (Kong et al., 2024). It examines how VA, perceived usefulness (PU), perceived ease of use (PEU), LE, and STI are connected, developing a structural model to describe their interrelationships.
This research proposes a framework that equips teachers and policymakers with practical and theoretical guidance for designing adaptive, student-centered learning experiences that preserve meaningful human connections. It contributes to the current debate about ethics, agency, and interaction in AI-supported learning. More broadly, the study responds to the UN Sustainable Development Goals on quality education and innovation and aligns with UNESCO's call for a human-centered approach to AI integration in schools.
To guide this investigation, the study poses the following research questions: (1) To what extent do students voluntarily adopt GenAI in flexible learning classes? (2) To what extent do they accept GenAI technology? (3) What factors contribute to the synergy of voluntary adoption, technology acceptance, LE, and multidimensional STI? (4) What structural model best describes these synergies?
Literature Review
The rapid integration of GenAI tools into educational contexts has introduced new dimensions of student engagement, autonomy, and instructional support (e.g., Lee et al., 2024). As students increasingly experiment with these tools on their own initiative, understanding the voluntary adoption of GenAI has become a critical area of inquiry, particularly in FLEs, where students exercise greater control over their pace, pathway, and learning mode. Although GenAI is widely celebrated for boosting productivity, sparking creativity, and making learning more personalized, it also brings uncertainty about how it might affect the traditional student-teacher relationship, something long considered essential to effective teaching. This study investigates the synergistic relationships among students' voluntary use of GenAI, their perceptions of its usefulness and ease of use, and the role of human instructional presence in flexible learning settings. Grounded in the TAM (Davis, 1989), this research also explores how STI influences the behavioral intention to adopt GenAI tools and how this dynamic unfolds within the broader flexible learning framework (e.g., Kelly et al., 2023). The following review is organized into six subsections to position this investigation within the current body of knowledge. First, it defines and contextualizes FLEs, highlighting their role in shaping student autonomy (Katona & Gyonyoru, 2025) and digital tool experimentation, and examines the nature and motivations behind the voluntary adoption of GenAI tools, incorporating relevant insights from the diffusion of innovations theory (DIT). Second, it provides a critical analysis of TAM and its application to student-initiated technology use. Third, the review discusses LE in AI-supported environments, emphasizing its pedagogical significance. Fourth, it discusses FLEs as the context of innovation. Fifth, it examines STI in AI-supported learning. Finally, the synthesis subsection identifies the conceptual gaps and establishes the current study's rationale.
Voluntary Adoption of GenAI Tools
The growing use of GenAI in education has drawn attention to how students choose to use these tools on their own to support their learning (Diwan et al., 2023). Unlike technologies that schools or teachers assign, students often choose GenAI tools driven by their goals, challenges (e.g., Pang et al., 2021), and sense of usefulness (e.g., Moravec et al., 2024). These tools offer various forms of support, such as help with writing, problem-solving, coding, brainstorming ideas, or simulating conversation. What makes them especially appealing is how fast, flexible, and seemingly intelligent they are, which is particularly useful in FLEs, where students often navigate their learning with limited direct guidance. The decision to voluntarily adopt GenAI tools is influenced by personal factors, such as "AI receptivity" (e.g., Watson et al., 2024), digital literacy (e.g., Lyu & Salam, 2025), and the perceived value of GenAI (e.g., Jose et al., 2024). Students may adopt these tools to enhance productivity or overcome learning challenges. However, the informal nature of GenAI use creates a research gap: most existing models of educational technology adoption focus on structured implementation rather than student-initiated use. The DIT offers valuable insights into how this voluntary behavior can be framed. As proposed by Rogers (2003), five key attributes influence the adoption of innovations: relative advantage, compatibility, complexity, trialability, and observability. These attributes have been reaffirmed in recent educational technology studies (e.g., Abdalla et al., 2024; Menzli et al., 2022; Patnaik & Bakkar, 2024). GenAI tools, being easily accessible, user-friendly, and capable of yielding immediate outputs, often fulfill these criteria, particularly for early adopters and innovators within the student population. Nevertheless, while DIT explains general adoption tendencies, it does not fully account for the behavioral intentions or perception dynamics required for a predictive model of GenAI adoption. For this reason, the current study utilizes the TAM as its primary theoretical foundation, integrating DIT as a complementary background.
TAM and Its GenAI Application
The TAM is one of the most widely used frameworks for examining how users accept and use technology. It posits that two cognitive factors, perceived usefulness (PU) and perceived ease of use (PEU), predict a user's behavioral intention to use a technology, which in turn influences actual use. TAM has been extensively validated in diverse educational settings (e.g., Akyürek, 2019; Fearnley & Amora, 2020), including online learning, mobile platforms, and intelligent tutoring systems (e.g., Alhumaid et al., 2023; Hanham et al., 2021; Huang & Mizumoto, 2024; Ortiz-López et al., 2025). However, the model's traditional application assumes formal or institutionalized use of technology. When students choose to use GenAI tools independently, they decide whether the tool is useful or easy to use, often without any direction from their teachers or school policies. This changes how we understand technology adoption and raises the question of whether usefulness and ease of use alone can explain their decisions. Research suggests that additional factors such as trust, task-technology fit, or social influence may also be relevant in AI-rich environments (Alhumaid et al., 2023; Herlambang & Rachmadi, 2024). Yet, introducing too many variables undermines parsimony. Therefore, this study retains TAM's core constructs but situates them within the unique context of student-driven GenAI use. PU reflects whether students believe GenAI enhances their academic performance, while PEU captures their experience with interface simplicity and output reliability. Importantly, TAM in this context is examined alongside LE and STI as relational outcomes.
Learning Engagement in AI-supported Contexts
LE involves more than just participation; it reflects how students think, feel, and act as they engage with academic tasks. Personal motivations, instructional approaches, and the broader learning environment shape it (Yang, 2025). Kahu and Nelson (2018) introduced a framework that situates student engagement within a wider educational interface in which institutional contexts and psychosocial development interact. Henrie et al. (2015) demonstrated that engagement varies across course levels and is affected more by the relevance and clarity of instruction than the delivery format alone. In the context of technology acceptance, LE is often influenced by students' perceptions of a GenAI tool's usefulness and ease of use (Hanham et al., 2021; Huang & Mizumoto, 2024). When students perceive GenAI tools as practical and accessible, they are more likely to participate actively, remain motivated, and sustain cognitive development in academic tasks. Thus, this study hypothesizes that PU and PEU will positively impact LE. Furthermore, given the role of engagement in fostering meaningful educational experiences, higher levels of engagement with GenAI tools are expected to correspond with stronger perceptions of instructional interaction, reflecting student-teacher relationships.
Flexible Learning Environments as the Context of Innovation
FLEs have emerged as transformative spaces that allow students to access content anytime, anywhere, and often at their own pace and preference. Rooted in student-centered pedagogy, flexible learning emphasizes autonomy, choice, and responsiveness to individual learner needs (see Collis & Moonen, 2002). These learning environments usually mix real-time and self-paced activities, use various digital tools, and offer students different ways to learn, all of which create space for trying new approaches and encouraging innovation in how students engage with their studies. The shift toward flexibility has been accelerated by the growing accessibility of digital resources, cloud-based learning platforms, and AI (Rangel-de Lázaro & Duart, 2023; Simms, 2025; Tawil & Miao, 2024). In these contexts, students are no longer passive recipients of instruction but active participants who make strategic decisions about how and when to engage with content and tools. As such, FLEs serve as fertile ground for the voluntary adoption of emerging technologies, including GenAI tools such as ChatGPT, Gemini, and other AI-driven writing tools or problem-solving assistants. Research has shown that FLEs enhance self-regulated learning, critical thinking, and digital fluency (Boelens et al., 2017; Chang & Sun, 2024). At the same time, these environments place greater responsibility on students for how they choose to use GenAI: not because they are told to, but because they discover the tools themselves and decide how to make them part of their learning. In this study, flexible learning is a contextual condition that enables the expression of voluntary technology behaviors and potentially shapes traditional forms of instructional interaction. Understanding how this environment supports or complicates GenAI adoption—and the evolving role of STI within it—is essential for evaluating technology acceptance and pedagogical outcomes in contemporary education.
STI in AI-supported Learning
STI remains a foundational element of effective learning, particularly within constructivist and socio-cultural perspectives. The concept of the zone of proximal development (ZPD) by Vygotsky (1978) emphasizes the importance of guided support, while Moore's (1989) typology of interaction (learner-instructor, learner-content, learner-learner) underscores the instructional value of meaningful dialogue. Depending on design, frequency, and modality, STI can be reduced or enhanced in FLEs. The growing use of GenAI tools has made STI more complex. While these technologies can reduce the need for students to turn to their teachers for help with understanding content, brainstorming, or getting feedback, they also introduce new expectations and challenges that still require thoughtful instructional support. In particular, students may need more instructional support to critically assess AI-generated content, especially with respect to accuracy, ethical considerations, and critical appraisal. Thus, rather than replacing the teacher, GenAI technologies such as those examined by Holstein et al. (2018) may support a shift in the teacher's role toward learning orchestration and responsive, data-informed decision-making. Empirical studies indicate that STI is critical in enhancing student engagement, learning satisfaction, and perceived instructional value (see Bolliger & Martin, 2018; Miao et al., 2021). However, little research has studied how STI interacts with AI usage patterns, particularly in self-directed contexts. This study explores how STI could be a moderating factor, shaping how students' perceptions of GenAI (via TAM) translate into actual behavioral intentions. It assumes that students may use GenAI more strategically and responsibly in settings with strong teacher presence, while low-STI contexts may lead to over-dependence or misuse (e.g., Akanzire et al., 2025).
Synthesis
Across the reviewed literature, several key themes emerge. First, FLEs enable students to explore and adopt technologies like GenAI at their own initiative. Second, while DIT explains early adoption, it lacks the predictive specificity to model behavioral intention. Third, the TAM provides a parsimonious framework to predict intention through PU and PEU. Fourth, STI—often underexplored in TAM-based research—may significantly shape how students perceive and adopt GenAI tools. Despite these insights, existing research has yet to integrate these elements into a single model. This study addresses that gap by proposing a structural model that links TAM constructs, STI, and voluntary GenAI use within FLEs. Drawing on the preceding review, this study proposes a structural model that investigates the relationships between students' voluntary adoption of GenAI, their perceptions of its usefulness and ease of use, their LE, and their perceived quality of STI. The hypotheses are formally stated in the next section.
Methodology
Research Design
The study employed a quantitative, non-experimental, correlational research design, using structural equation modeling (SEM) to analyze the relationships among key variables. The analysis of moment structure (AMOS) was applied to evaluate the model fit and relationships across factors (Byrne, 2013; Shah et al., 2023). A variance-based modeling technique was adopted to enhance the robustness of results (Chatterjee et al., 2021; Hair et al., 2021).
Sample and Data Collection
This study included a diverse sample of students selected based on inclusion, exclusion, and withdrawal criteria. Ethical considerations were rigorously followed in accordance with the policies and guidelines of the Research Ethics Committee (REC) of Bukidnon State University (BukSU). Participants were recruited from college, high school, and graduate programs across multiple State Universities and Colleges (SUCs). College and high school students were drawn from three SUCs, while graduate students were selected from one SUC. Upon receiving the invitation, participants reviewed the informed consent statement and voluntarily completed the survey questionnaire online via Google Forms. A total of 504 students were invited to participate, yielding a high response rate of 97.62% (492 out of 504). Of these respondents, 12 declined to take part, resulting in a final sample of 480 students. These participants were explicitly asked whether they had voluntarily integrated GenAI into their learning activities. Their demographic characteristics are shown in Table 1.
Table 1. Socio-Demographic Characteristics of the Respondents
No | Demographic Characteristics | Total | % |
1. | Gender | | |
Male/Man | 155 | 32.29 |
Female/Woman | 309 | 64.38 |
Other | 16 | 3.33 |
2. | Educational Level | | |
Graduate | 72 | 15.00 |
College | 239 | 49.79 |
High School | 169 | 35.21 |
3. | Computer Skill Proficiency | | |
Digital Literacy (Moderate to High) | 427 | 88.95 |
Software Application (Moderate to High) | 425 | 88.54 |
Creating Multimedia (Moderate to High) | 386 | 80.41 |
Coding and Programming (Moderate to High) | 273 | 56.87 |
4. | GenAI Utilized in Learning | | |
Adaptive Learning (ChatGPT) | 220 | 45.83 |
Intelligent Content (Grammarly) | 55 | 11.45 |
Intelligent Tutoring (Khan Academy, Duolingo, Math Assist) | 245 | 51.04 |
Inclusion and Exclusion Criteria
Students who participated in this study actively engaged in FLEs through synchronous or asynchronous learning. Eligible participants included senior high school, college, and graduate students who demonstrated computing proficiency and voluntarily used at least one GenAI tool without teacher-specific requirements in their learning activities. Responses with incomplete data were excluded because they could not support estimation of the desired structural model; likewise, students who chose not to participate or who withdrew at any stage were removed from the dataset.
Development of the Survey Instrument
The development of the instrument drew upon several theoretical frameworks and empirically grounded concepts. The study hypothesized that students who voluntarily adopt GenAI in flexible learning classrooms are more likely to demonstrate higher levels of LE and experience enriched STI (Neo et al., 2022; Seo et al., 2021; Sharma & Harkishan, 2022). Bandura’s (1977) social learning theory supports this idea by highlighting the importance of observation, imitation, and modeling in learning, which often occurs within social contexts (Shirkhani & Ghaemi, 2011). Similarly, constructivist learning theory posits that learners construct meaning through active participation; in this context, students who willingly engage with GenAI tools benefit from self-guided exploration, iterative experimentation, and interactive learning environments.
Following an extensive literature review, the researcher identified five latent constructs relevant to the proposed structural model: VA, PU, PEU, LE, and STI. The initial items were developed based on validated scales in existing studies (e.g., Bolliger & Martin, 2018; Davis, 1989; Hanham et al., 2021), and items were added to align with the specific context of GenAI integration in FLEs.
The study underwent expert peer review, which included an oral examination of the research proposal and a detailed technical evaluation for research funding. The manuscript was subsequently revised and refined, guided by the panel's feedback. It was then reviewed and approved by the university research ethics committee (BukSU-REC) under the protocol document code 2023-0410-TULANG-TSV. Appendix A provides the final list of survey items categorized by construct to ensure transparency and address validity concerns.
Procedure
The researcher conducted a pilot survey with 184 college students in one SUC through face-to-face administration to ensure reliability and validity. The final version of the questionnaire employed a 5-point Likert scale ranging from 5 (strongly agree) to 1 (strongly disagree), with 3 as the neutral midpoint. The online survey was administered from January to March 2024. The instrument underwent construct validity testing to assess its suitability for factor analysis.
The study is grounded in social learning theory and constructivism, emphasizing the role of observation, interaction, and self-guided exploration in learning. Additionally, the TAM provides a framework for understanding students' perceived usefulness (PU) and perceived ease of use (PEU) of GenAI in FLEs. The study examines five key latent variables hypothesized within the SEM model: (1) voluntary GenAI adoption, measured by adaptive content quality, personalized recommendations, AI-based assessment accuracy, intention to adopt, and trust in AI tools, (2) PU, (3) PEU, (4) LE, measured in terms of intrinsic/extrinsic motivation and emotional, cognitive, and behavioral engagement, and (5) STI, measured in terms of communication, instructional support, and social presence. The relationships among these variables were tested through SEM, with the hypothesized model presented in Figure 1.

Figure 1. Hypothesized Model
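For readers who wish to inspect the hypothesized structure outside IBM AMOS, it can be written in lavaan-style model syntax. The following sketch is illustrative only, assuming the open-source semopy package and hypothetical column names that mirror the indicator and subscale codes used later in the results tables (raw items for PU and PEU; subscale scores for the remaining indicators).

```python
# Illustrative sketch only; the study itself used IBM AMOS. Assumes the semopy
# package and a prepared data file with hypothetical column names.
import pandas as pd
from semopy import Model, calc_stats

MODEL_DESC = """
VA  =~ AAA + PRE + ACQ + ITA + TIA
PU  =~ PU1 + PU2 + PU3
PEU =~ PEU1 + PEU2 + PEU3 + PEU4
LE  =~ LEC + LEM + LESE
STI =~ SP + IS + STIAO
PU  ~ VA
PEU ~ VA
LE  ~ PU + PEU
STI ~ LE
"""
# The regressions correspond to H1-H5 (PU ~ VA, PEU ~ VA, LE ~ PU, LE ~ PEU, STI ~ LE).

df = pd.read_csv("survey_responses.csv")  # hypothetical file of item/subscale scores
model = Model(MODEL_DESC)
model.fit(df)
print(model.inspect())    # parameter estimates
print(calc_stats(model))  # fit statistics (chi-square, CFI, TLI, RMSEA, ...)
```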
Hypotheses
This study proposes the following hypotheses:
H1. Students’ voluntary adoption of GenAI positively impacts their perceived usefulness of the technology.
H2. Students' voluntary adoption of GenAI positively impacts their perceived ease of use.
H3. Students' perceived usefulness of GenAI positively affects their learning engagement.
H4. Students' perceived ease of use of GenAI positively affects their learning engagement.
H5. Students' learning engagement with GenAI positively correlates with their perceived quality of student-teacher interaction.
Data Analysis
The dataset, consisting of 480 completed questionnaires, was analyzed using IBM SPSS Statistics version 25 and IBM Amos version 26, which were used to generate the best-fit structural model (Arbuckle, 2019). The study was guided by Byrne's (2013) methods to ensure the appropriateness of the statistical techniques for the research questions. Data analyses included descriptive statistics, correlation analysis, and regression tests to explore trends and preliminary relationships among the considered variables.
Before the primary analysis, the dataset was screened for missing values and outliers using the SPSS software. The SPSS output was examined; frequency distributions and descriptive statistics indicated that missing data were minimal and randomly distributed. Given the low missingness, mean imputation was deemed unnecessary, and a complete-case analysis was applied. We examined Mahalanobis distances against critical chi-square values to detect univariate and multivariate outliers via IBM Amos output (e.g., Byrne, 2013). No extreme cases violated assumptions or significantly distorted the data's distribution. As a result, all valid responses were retained for analysis.
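The multivariate screening step described above can also be reproduced outside IBM AMOS. The sketch below is a minimal illustration, assuming a pandas DataFrame of item scores and the conventional p < .001 chi-square cutoff for Mahalanobis distances.

```python
# Hedged sketch of Mahalanobis-distance screening; file and column names are assumptions.
import numpy as np
import pandas as pd
from scipy.stats import chi2

def flag_multivariate_outliers(items: pd.DataFrame, alpha: float = 0.001) -> pd.Series:
    """Flag cases whose squared Mahalanobis distance exceeds the chi-square
    critical value with df equal to the number of variables."""
    x = items.to_numpy(dtype=float)
    diff = x - x.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(x, rowvar=False))   # pseudo-inverse for stability
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared distances per case
    cutoff = chi2.ppf(1 - alpha, df=x.shape[1])
    return pd.Series(d2 > cutoff, index=items.index, name="outlier")

# Example usage (hypothetical file of the survey items):
# items = pd.read_csv("survey_responses.csv")
# print(flag_multivariate_outliers(items).sum(), "cases exceed the cutoff")
```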
Structural equation modeling (SEM) in Amos allowed us to test the hypothesized pathways within the model to examine the more complex interactions. We also performed factor analysis to refine the measurement structure and validate the underlying constructs. After running these analyses, we carefully interpreted the results to identify meaningful patterns, significant correlations, and relationships that supported the study’s hypotheses.
Findings/Results
Table 2 presents each construct's descriptive statistics and Cronbach's alpha coefficients. Internal consistency values ranged from .791 to .883, exceeding the acceptable threshold of .70 (Byrne, 2013; Hair et al., 2021; Kennedy, 2022) and indicating reliable measurement. To ensure suitability for SEM, we also evaluated skewness and kurtosis, which fell within recommended ranges (skewness: -.958 to -.452; kurtosis: -.268 to 2.446), supporting the normality assumption (Kong et al., 2024; Mardia, 1970; Mertler et al., 2021; Savalei & Bentler, 2005). These findings confirm that the data meet key psychometric and statistical assumptions necessary for structural modeling.
Table 2. The Variables’ Descriptive Statistics & Cronbach’s Alpha
No | Variables | Mean | SD | Kurtosis | Skewness | Cronbach’s Alpha |
1. | AI-based assessment Accuracy | .827 | ||||
AAA1 | 3.59 | 0.7858 | 0.533 | -0.650 | ||
AAA2 | 3.64 | 0.7930 | 0.784 | -0.741 | ||
AAA3 | 3.63 | 0.7483 | 0.873 | -0.860 | ||
2. | Personalized recommendation | .834 | ||||
PRE1 | 4.01 | 0.6113 | 2.446 | -0.719 | ||
PRE2 | 3.83 | 0.7405 | 0.969 | -0.704 | ||
PRE3 | 3.85 | 0.7643 | 1.610 | -0.958 | ||
3. | Adaptive Content Quality | .845 | ||||
ACQ1 | 3.87 | 0.6777 | 1.551 | -0.806 | ||
ACQ2 | 3.88 | 0.7274 | 1.922 | -0.896 | ||
ACQ3 | 3.97 | 0.7291 | 1.368 | -0.770 | ||
4. | Intention to Adopt | .882 | ||||
ITA1 | 3.87 | 0.7338 | 1.007 | -0.647 | ||
ITA2 | 3.79 | 0.7290 | 0.643 | -0.602 | ||
ITA3 | 3.76 | 0.7331 | 1.026 | -0.678 | ||
ITA4 | 3.63 | 0.8696 | 0.444 | -0.634 | ||
5. | Trust & Reliance on AI | .848 | ||||
TIA1 | 3.55 | 0.8182 | 0.305 | -0.570 | ||
TIA2 | 3.73 | 0.7149 | 1.139 | -0.721 | ||
TIA3 | 3.46 | 0.8659 | 0.150 | -0.452 | ||
6. | Perceived Usefulness | .874 | ||||
PU1 | 4.04 | 0.6545 | 1.139 | -0.581 | ||
PU2 | 3.97 | 0.6832 | 0.794 | -0.555 | ||
PU3 | 3.84 | 0.7377 | 1.189 | -0.708 | ||
7. | Perceived Ease of Use | .877 | ||||
PEU1 | 3.93 | 0.6601 | 1.637 | -0.713 | ||
PEU2 | 3.89 | 0.7066 | 0.641 | -0.482 | ||
PEU3 | 3.86 | 0.7081 | 1.247 | -0.686 | ||
PEU4 | 3.66 | 0.8122 | 0.450 | -0.567 | ||
8. | Learning Engagement: Communication | .791 |
LEC1 | 3.84 | 0.7367 | 1.447 | -0.748 | ||
LEC2 | 4.04 | 0.6657 | 1.335 | -0.599 | ||
LEC3 | 4.05 | 0.6700 | 1.548 | -0.681 | ||
LEC4 | 3.94 | 0.7300 | 0.555 | -0.521 | ||
9. | Learning Engagement: Motivation | .825 |
LEM1 | 3.54 | 0.8658 | 0.353 | -0.562 | ||
LEM2 | 3.68 | 0.7943 | 0.353 | -0.562 | ||
LEM3 | 3.59 | 0.8357 | 0.163 | -0.563 | ||
LEM4 | 4.12 | 0.7649 | 0.717 | -0.767 | ||
10. | Learning Engagement: Self-Efficacy | |
LESE1 | 3.89 | 0.7107 | 1.622 | -0.851 | ||
LESE2 | 3.84 | 0.7127 | 1.042 | -0.726 | ||
LESE3 | 3.76 | 0.7631 | 0.722 | -0.646 | ||
11. | Social Presence | .859 | ||||
SP1 | 3.70 | 0.7541 | 1.813 | -0.591 | ||
SP2 | 3.72 | 0.7550 | 1.202 | -0.769 | ||
SP3 | 3.80 | 0.7483 | 1.241 | -0.831 | ||
SP4 | 3.79 | 0.7815 | 0.736 | -0.675 | ||
12. | Institutional Support | .883 | ||||
IS1 | 3.92 | 0.6560 | 2.417 | -0.899 | ||
IS2 | 3.85 | 0.7319 | 1.498 | -0.746 | ||
IS3 | 3.80 | 0.7013 | 1.802 | -0.802 | ||
IS4 | 3.82 | 0.7328 | 1.689 | -0.790 | ||
13. | STI AI-Adoption Outcomes | .871 | ||||
STIAO1 | 3.45 | 0.9615 | -0.096 | -0.706 | ||
STIAO2 | 3.46 | 0.9598 | -0.268 | -0.577 | ||
STIAO3 | 3.62 | 0.8702 | 0.610 | -0.792 |
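The internal-consistency and normality figures in Table 2 can be recomputed from raw item scores with standard tooling. The sketch below is illustrative, assuming a pandas DataFrame whose columns follow the item codes in the table.

```python
# Minimal sketch of the reliability and normality checks; column names are assumptions.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# df = pd.read_csv("survey_responses.csv")
# print(cronbach_alpha(df[["PEU1", "PEU2", "PEU3", "PEU4"]]))  # compare with .877 in Table 2
# print(df.skew().round(3))   # per-item skewness
# print(df.kurt().round(3))   # per-item (excess) kurtosis
```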
Sampling Adequacy
The survey questionnaire underwent construct validity testing to assess its suitability for factor analysis. The Kaiser-Meyer-Olkin (KMO) value was 0.961, exceeding the 0.80 threshold and indicating excellent sampling adequacy (Kaiser, 1970). Additionally, Bartlett's test of sphericity yielded a chi-square value of 16,449.108, df = 990, p < .001, confirming that the dataset contained significant correlations among variables and justifying the use of factor analysis. Furthermore, all factor loadings exceeded 0.5, further supporting the instrument's construct validity.
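Both adequacy checks can be reproduced programmatically; the sketch below is illustrative, assuming the third-party factor_analyzer package and a DataFrame containing the raw item responses.

```python
# Hedged sketch of the KMO and Bartlett checks; assumes the factor_analyzer package.
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("survey_responses.csv")       # hypothetical item-level file

kmo_per_item, kmo_overall = calculate_kmo(items)  # overall KMO reported above as 0.961
chi_square, p_value = calculate_bartlett_sphericity(items)

print(f"KMO = {kmo_overall:.3f}")
print(f"Bartlett's chi-square = {chi_square:.3f}, p = {p_value:.4f}")
```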
Measurement Model
The measurement model of this study demonstrates strong psychometric properties, which confirms the reliability and validity of the constructs. Factor loadings across all constructs exceeded the recommended threshold of .70 (Hair et al., 2021), ensuring robust indicator validity. Table 3 shows that loadings for the variable VA ranged from .708 to .836, indicating reliable measurement of this construct. Similarly, PU loadings ranged from .702 to .808, while PEU showed consistently high loadings between .788 and .826. The LE construct exhibited loadings between .727 and .840, reinforcing its alignment with the theoretical framework. The STI factor loadings ranged from .740 to .850, further supporting the reliability of its indicators.
Further, Table 3 reports the composite reliability (CR) and average variance extracted (AVE) values computed to evaluate the internal validity of the constructs. The CR values were .883 for VA, .880 for PU, .881 for PEU, .834 for LE, and .843 for STI, all exceeding the recommended .70 benchmark; together with the Cronbach's alpha coefficients reported in Table 2, these results confirm the internal consistency of the constructs.
Convergent validity was assessed using AVE, ensuring that each construct explained more variance than measurement error (Fornell & Larcker, 1981). As shown in Table 3, all AVE values exceeded the recommended 0.50 threshold, ensuring theoretical coherence through convergent validity. The variables VA (AVE = .603), PU (AVE = .711), PEU (AVE = .649), LE (AVE = .627), and STI (AVE = .642) demonstrated that their respective indicators explained a substantial portion of the variance. Hence, the measurement model meets reliability and validity requirements, supporting its appropriateness for structural analysis. These findings confirm that the model is a valid tool for investigating the adoption of GenAI and STI within FLEs.
Table 3. The Values for Factor Loadings, Composite Reliability, and Average Variance Extracted
No | Variable Items | Factor Loadings | Composite Reliability | Average Variance Extracted |
1. | Voluntary Adoption | |||
AAA | .751 | .883 | .603 | |
PRE | .836 | |||
ACQ | .791 | |||
ITA | .791 | |||
TIA | .708 | |||
2. | Perceived Usefulness | |||
PU1 | .745 | .880 | .711 | |
PU2 | .808 | |||
PU3 | .702 | |||
3. | Perceived Ease of Use | |||
PEU1 | .799 | .881 | .649 | |
PEU2 | .826 | |||
PEU3 | .809 | |||
PEU4 | .788 | |||
4. | Learning Engagement | |||
LEC | .727 | .834 | .627 | |
LEM | .840 | |||
LESE | .804 | |||
5. | STI | |||
SP | .810 | .843 | .642 | |
IS | .850 | |||
STIAO | .740 |
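Because CR and AVE follow directly from the standardized loadings, the Table 3 values can be checked with a few lines of code; the sketch below uses the PEU loadings from the table as an example.

```python
# CR and AVE from standardized loadings; values taken from Table 3 (PEU indicators).
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    error_variances = 1 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + error_variances.sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return float(np.mean(loadings ** 2))

peu_loadings = np.array([0.799, 0.826, 0.809, 0.788])
print(round(composite_reliability(peu_loadings), 3))        # approx. .881
print(round(average_variance_extracted(peu_loadings), 3))   # approx. .649
```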
Discriminant Validity
The discriminant validity of the study was assessed using the Fornell-Larcker criterion, which requires that the square root of the AVE for each construct be greater than its largest correlation coefficient with any other construct in the model. This ensures that each construct is statistically distinct and captures unique variance rather than overlapping with different constructs. As depicted in Table 4, the diagonal entries represent the square root of the AVE for each construct, while the off-diagonal entries are the inter-construct correlations. All square root AVE values exceeded their inter-construct correlations, confirming that the constructs maintain sufficient discriminant validity. Specifically, the square root of the AVE for the variable VA is .777, greater than its highest inter-construct correlation (.767 with PEU), indicating that VA is a distinct construct. Similarly, the square root of the AVE for PEU is .806, exceeding its correlation with LE (.791), confirming that these constructs measure separate dimensions. As shown, the STI construct has a square root of AVE of .801, which is greater than its correlation with PU at .797, reinforcing the discriminant validity of these constructs. Thus, the findings offer compelling empirical evidence supporting discriminant validity, demonstrating that each construct uniquely represents a specific aspect of GenAI adoption and STI. Establishing discriminant validity is crucial to verify that the constructs capture distinct concepts, minimizing overlap and ensuring their theoretical and statistical independence within the model. This distinction strengthens the theoretical integrity of the measurement insights (Hair et al., 2021).
Table 4. Discriminant Validity Matrix
VA | PU | PEU | Engage | STI | |
VA | .777 | ||||
PU | .705 | .843 | |||
PEU | .767 | .835 | .806 | ||
Engage | .689 | .815 | .791 | .792 | |
STI | .675 | .797 | .788 | .711 | .801 |
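The diagonal of Table 4 can be reproduced directly from the AVE values in Table 3; the short sketch below prints each construct's square root of AVE, which the Fornell-Larcker criterion then compares against that construct's off-diagonal correlations.

```python
# Square-root-of-AVE values for the Table 4 diagonal, computed from the AVEs in Table 3.
import math

ave = {"VA": .603, "PU": .711, "PEU": .649, "Engage": .627, "STI": .642}
for construct, value in ave.items():
    print(f"{construct}: sqrt(AVE) = {math.sqrt(value):.3f}")
# Fornell-Larcker criterion: each printed value should exceed that construct's
# correlations with the other constructs (the off-diagonal entries in Table 4).
```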
Structural Model Evaluation
This study assessed the hypothesized structural model (i.e., Fig. 1) using SEM to examine the theoretical relationships between VA, PU, PEU, LE, and STI variables. Model fit was evaluated using multiple indices and aligned with established SEM benchmarks. Following the initial model estimation, the hypothesized path PU→LE (i.e., H3) was excluded during model respecification. According to Byrne (2013), model refinement is warranted when a path lacks empirical support, fails to contribute to model fit, and can be theoretically justified. In this case, the path PU→ LE was statistically non-significant and exhibited low standardized estimates, suggesting it did not meaningfully explain variance in LE. Theoretically, this is consistent with findings in the literature suggesting that usefulness perceptions alone may not drive student engagement, particularly in autonomous learning contexts like voluntary GenAI adoption. After removing the path, the model was re-estimated, yielding improved fit indices as presented in Table 5 (e.g., CFI = 0.925; RMSEA = 0.045), validating the refined structure and supporting its theoretical parsimony. This finding is notable, as it challenges a standard TAM expectation that PU predicts LE. Instead, results indicate that students’ engagement with GenAI may depend more strongly on other factors, such as usability and relational support, than perceived utility alone. Conversely, the model retained and confirmed several significant paths. The relationship between PEU and LE (i.e., H4) was statistically significant, indicating that students who find GenAI tools accessible and manageable are more likely to demonstrate active learning involvement. Likewise, LE significantly predicted STI (i.e., H5), reinforcing that student engagement may enhance perceptions of instructional presence and communication in flexible learning settings. The results underscore the strength of the revised model and emphasize the unique behavioral dynamics that emerge in student-initiated GenAI use within FLEs.
Table 5. Model Respecification Fit Summary
No. | Fit Indices | Threshold | Obtained Value | Source |
1. | CMIN/df | 3 – 5 | 4.815 | Byrne (2013) |
2. | GFI | > 0.90 | 0.962 | Byrne (2013); Hu and Bentler (1999) |
3. | CFI | > 0.90 | 0.925 | Byrne (2013) |
4. | TLI | > 0.90 | 0.908 | Byrne (2013) |
5. | SRMR | < 0.08 | 0.041 | Byrne (2013); Hu and Bentler (1999) |
6. | RMSEA | < 0.08 | 0.045 | Byrne (2013); Hu and Bentler (1999) |
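To make the respecification step concrete, the sketch below re-estimates the refined model with the PU→LE path removed, under the same assumptions as the earlier specification sketch (semopy package, hypothetical column names).

```python
# Re-estimation of the respecified model (PU -> LE removed); illustrative only.
import pandas as pd
from semopy import Model, calc_stats

RESPECIFIED_DESC = """
VA  =~ AAA + PRE + ACQ + ITA + TIA
PU  =~ PU1 + PU2 + PU3
PEU =~ PEU1 + PEU2 + PEU3 + PEU4
LE  =~ LEC + LEM + LESE
STI =~ SP + IS + STIAO
PU  ~ VA
PEU ~ VA
LE  ~ PEU
STI ~ LE
"""
# Compared with the hypothesized model, the regression LE ~ PU (H3) is omitted.

df = pd.read_csv("survey_responses.csv")  # hypothetical file of item/subscale scores
model = Model(RESPECIFIED_DESC)
model.fit(df)
print(calc_stats(model))  # compare the resulting indices against the thresholds in Table 5
```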
After confirming the model fit and refining the theory, this section reports the statistical outcomes of the retained hypotheses and explores notable patterns observed at the indicator level.
Path Analysis and Test of Hypothesis
This study assessed the direct theoretical relationships among the constructs using SEM-based path analysis. Based on the respecified structural model, it examined the effects of the variables VA, PU, PEU, LE, and STI. The study evaluated standardized path coefficients (β) and corresponding significance values (p-values) to determine each retained hypothesis’s strength and statistical relevance. A β closer to 1.0 indicates a more substantial predictive influence, while a p-value below .05 suggests statistical significance. Figure 2 presents the path diagram, and Table 6 summarizes the path coefficients, p-values, and hypothesis outcomes. The model supports all retained hypotheses, with one notable exception: the hypothesized path PU→ LE (i.e., H3) was excluded during model respecification. Interestingly, despite the exclusion of H3 at the latent level, further examination revealed that one specific observed indicator of PU (“AI technology helps me do my school tasks faster and more efficiently.”) significantly correlated with an observed LE component (“AI tools enable personalized learning for enhanced understanding”). While these item-level interactions were not modeled directly, they highlight the possibility of targeted effects between specific aspects of PU and LE. Future studies may benefit from modeling indicator-level interactions, exploring cross-loading structures, or estimating indirect effects to deepen our understanding of how GenAI features influence student engagement within FLEs.
Table 6. Path Coefficients, Significance Level
No. | Hypothesis | Path Relationship | β | p-value | Result |
1. | H1 | VA→PU | 0.831 | .000 | Supported |
2. | H2 | VA→PEU | 0.985 | .000 | Supported |
3. | H3 | PU→Engage | – | – | Not supported (path excluded during respecification) |
4. | H4 | PEU→Engage | 0.953 | .000 | Supported |
5. | H5 | Engage→STI | 0.951 | .000 | Supported |

Figure 2. Path Diagram and the Structural Model
Discussion
This research examined the synergistic relationships among VA of GenAI, PU, PEU, LE, and STI within FLEs. The findings generally aligned with TAM but also revealed theoretically significant deviations that offer new directions for understanding GenAI integration in autonomous learning settings (e.g., Al-Momani & Ramayah, 2024). One of the most theoretically significant findings was the absence of a direct relationship between PU and LE (i.e., H3) in the final structural model. This unexpected result diverges from prior TAM-based studies that position usefulness as a central predictor of engagement or behavioral intention (e.g., Bancoro, 2024; Gunness et al., 2023; Kelly et al., 2023). In this study, students' perceptions that GenAI is useful did not necessarily translate into increased learning engagement when other contextual and relational factors were considered. This suggests that utility alone may be insufficient to sustain engagement in learner-initiated contexts, especially when instructional structure or intention is limited. By contrast, the expected relationships were well-supported. The positive association between PEU and LE (i.e., H4) reaffirms the importance of interface simplicity and tool accessibility in sustaining student motivation and focus. Students who can easily navigate GenAI tools are more likely to persist, self-regulate, and experiment meaningfully. Similarly, the strong relationship between LE and STI (i.e., H5) (e.g., Kustova et al., 2025; Seo et al., 2021) aligns with theories of social constructivism and the Community of Inquiry framework, suggesting that students who are more engaged also perceive richer instructional interactions, even when AI tools are involved.
An insightful nuance emerged at the indicator level: one observed PU item (“AI helps me do school tasks faster”) was significantly related to an observed LE indicator (“AI enables personalized learning”). Although the latent path was not supported, this micro-level relationship suggests that students may value GenAI for specific, targeted benefits, even if these do not shape holistic engagement across all domains (e.g., emotional, behavioral, cognitive) (Bognár & Khine, 2025). This supports Bond and Bergdahl’s (2022) claim that engagement is a multidimensional construct requiring differentiated support across domains.
The results also provide perspective on prior claims about self-regulation in AI-enhanced learning. While self-directed learning is undoubtedly part of FLEs, this study avoids overstating its role. Instead of generalizing AI as a blanket enhancer of self-regulation, the findings point to usability (PEU) and instructional interaction (STI) as stronger drivers of engagement. Though GenAI tools offer personalized feedback and scaffolding, their impact may be constrained by students’ prior experience, digital literacy, or instructional context—potentially acting as moderators that warrant future investigation.
The implications for practice are notable. Rather than assuming that all GenAI tools promote deep engagement, educators should emphasize ease of integration, guided use, and relational support. Designing learning experiences with intuitive and socially embedded GenAI improves engagement and maintains a human instructional presence. At the same time, instructional designers and policymakers should focus on supporting adaptive learning without replacing interaction, especially in FLEs.
Finally, the findings encourage new avenues of research. Future work should explore indicator-level interactions, indirect effects, or moderated mediation models, given the nuanced item-level patterns observed. Longitudinal studies may also clarify how students’ perceptions of usefulness evolve over time and whether these perceptions eventually contribute to sustained engagement.
Conclusion
This study examined how students' voluntary adoption of GenAI tools influences their perceptions of usefulness and ease of use and how these perceptions shape their learning engagement and student-teacher interaction within flexible learning environments. The findings confirmed key TAM assumptions, particularly the role of PEU in supporting LE. However, the findings indicate no discernible correlation between PU and LE. This result offers a novel insight that challenges conventional TAM interpretations. The divergence underscores the complexity of engagement in AI-enhanced learning and suggests that utility alone may not suffice in autonomous learning environments. In contrast, the robust relationship between LE and STI highlights the enduring importance of teacher presence, even as students independently engage with GenAI.
These findings point to a shared responsibility among educators, researchers, and policymakers to cultivate learning environments where GenAI enhances, rather than replaces, meaningful human interaction. Teachers play a crucial role in modeling intentional, ethical GenAI use; researchers must continue to investigate its evolving pedagogical dynamics, and policymakers should frame guidelines that balance innovation with educational equity.
This research adds to the expanding literature on GenAI tools in education by presenting a structural model that captures the synergy between voluntary adoption of GenAI, core TAM constructs, learning engagement, and relational outcomes. Unlike prior studies that focus on top-down implementation, this work foregrounds student agency and autonomy as central to effective GenAI integration. By empirically validating the importance of teacher presence in GenAI-enhanced learning, this study advances both theoretical understanding and practical design.
Future initiatives in AI and education should prioritize human-centered design, context-sensitive pedagogies, and inclusive policy frameworks. Doing so will ensure that GenAI becomes a powerful tool and a meaningful ally in cultivating responsive, engaging, and equitable learning experiences.
Recommendations
Building on the findings of this study, future research should further investigate the nuanced dynamics of GenAI adoption, learning engagement, and student-teacher interaction in flexible learning environments. One critical area of exploration involves the observed indicator-level effect between perceived usefulness (i.e., "AI helps me do tasks faster") and instructional support. To capture these micro-level effects more effectively, researchers are encouraged to apply refined SEM techniques or alternative analytical methods capable of testing indicator-to-indicator pathways. This may reveal whether similar patterns emerge across varied learning contexts and student groups.
Future models should also consider incorporating additional constructs that may influence GenAI adoption and engagement outcomes. In particular, digital literacy could moderate or mediate how PEU translates into meaningful learning behaviors. Similarly, trust in AI systems may shape whether students perceive GenAI tools as reliable and safe for academic use. Self-regulation—a core tenet of learner autonomy—may also function as a critical driver or mediator of LE in AI-enhanced settings. Explicitly modeling these variables can enrich theoretical frameworks like TAM and provide a more context-sensitive understanding of how GenAI functions in real-world classrooms.
Finally, longitudinal research must confirm and track changes in students’ perceptions and behaviors over time and establish clearer causal pathways between technology adoption, engagement, and instructional interaction. Employing such research designs can inform the development of more sustainable, personalized, and human-centered GenAI learning platforms.
Limitations
This study offers valuable insights into how students choose to use GenAI in flexible learning contexts, but it also has limitations. First, the cross-sectional design captures only a snapshot in time, limiting the ability to draw causal conclusions. Future longitudinal studies could better illuminate how students' perceptions and behaviors around GenAI evolve. Second, the study relied on self-reported responses, which may introduce response bias; other data collection methods, such as mixed-methods designs, could be used to strengthen data validity. Third, although the sample size was statistically adequate, generalizability remains limited, as participants were drawn from only three SUCs. Including a more diverse sample across regions, institutions, and academic programs would strengthen the applicability of the findings. Finally, the study revealed an unexpected indicator-level relationship between a perceived usefulness item and an instructional support item (PU→IS). However, this interaction was not part of the original hypothesized model. Hence, refined structural modeling techniques are needed to capture indicator-level dynamics and potential moderating effects.
While limitations exist, this research provides a springboard for future studies and teaching innovations that can harness GenAI’s potential in learner-driven contexts.
Ethics Statements
This study underwent rigorous review and approval by the Bukidnon State University Research Ethics Committee (REC) under the protocol document code 2023-041-TULANG-TSV. The research adhered to the highest ethical standards, guided by the principles of respect, beneficence, non-maleficence, and autonomy.
This study complied with privacy laws, including the Data Privacy Act (2012), and international ethical standards. The researcher provided the participants with the consent form and fully informed them about the study’s objectives, data collection procedures, and their right to withdraw at any stage without consequences. Strict anonymity and confidentiality were upheld, and no identifiable information was collected.
The REC review process provided additional oversight, reinforcing adherence to ethical standards and research practices that uphold the trust and integrity of its participants and contribute to the broader academic community’s ethical research standards.
Acknowledgements
The author extends deep gratitude and appreciation to the administration of Bukidnon State University for funding this research. Special thanks go to the students who participated in the study, whose valuable insights contribute to advancing knowledge on integrating GenAI in education.
The author also acknowledges the Center for Educational Analytics as the study’s funding source and expresses appreciation to the University Statistical Center for thoroughly reviewing the data analysis procedures.
Finally, I would like to express my sincere appreciation to family, friends, colleagues, and research mentors for their unwavering support, guidance, and encouragement throughout this study.
Conflict of Interest
The author declares no conflict of interest related to this study.
Funding
This study was funded by Bukidnon State University through the Center for Educational Analytics. It was approved on December 5, 2023, based on the Joint Memorandum Circular 2013 of the Department of Budget and Management, Department of Science and Technology, and institutional guidelines and policies for research funds.
Generative AI Statement
As the author of this work, I used the AI tool Grammarly for writing assistance, especially for basic grammar, paraphrasing, and spelling checks. The manuscript underwent thorough review, verification, and refinement to ensure accuracy and integrity. The author solely developed all ideas, concepts, and arguments. I take full responsibility for all the content of this published work.