Examining the effects of engagement with an app-based mental health intervention: a secondary analysis of a randomized control trial with treatment non-compliance
International Journal of Mental Health Systems volume 19, Article number: 30 (2025)
Abstract
Background
Minder is a mental health and substance use mobile application found to have small but significant effects in a recent randomized trial. Poor engagement has been identified as a common threat to the effectiveness of digital mental health tools that is not accounted for in intention-to-treat analyses. The objective of this study was to conduct prespecified secondary analyses to identify factors associated with engagement and examine the impact of engagement on trial outcomes.
Methods
1489 students were randomized to either the intervention (n = 743) or waitlist control (n = 746) group. Primary outcomes were changes in anxiety (General Anxiety Disorder 7 (GAD-7)), depression (Patient Health Questionnaire 9 (PHQ-9)), and alcohol consumption (US Alcohol Use Disorders Identification Test-Consumption Scale (USAUDIT-C)) at 30 days. Secondary outcomes included frequency of substance use and mental wellbeing (Short Warwick-Edinburgh Mental Wellbeing Scale (SWEMWS)). A Complier Average Causal Effect (CACE) analysis was conducted using 3 separate criteria reflecting differing engagement levels: (1) a binary measure: use of any app component, (2) a continuous measure: number of unique days of app use, and (3) an ordinal measure: number of components accessed within the app.
Results
80.4% of participants used at least one app feature. Statistically significant differences in app utilization were observed across gender, ethnicity, history of depression or anxiety, higher baseline PHQ-9 scores, baseline SWEMWS scores, and poor/fair overall self-assessed mental and physical health. Any use of Minder was associated with significantly lower scores on the GAD-7 (adjusted group mean difference = − 1.09, 95% CI − 1.60 to − 0.57; P < .01) and PHQ-9 (adjusted group mean difference = − 0.84, 95% CI − 1.41 to − 0.27; P < .01), with a greater number of unique utilization days or components accessed associated with larger reductions. Any use of Minder was associated with significantly higher scores on the SWEMWS (adjusted group mean difference = 0.93, 95% CI 0.46 to 1.39; P < .01) and lower frequency of cannabis use (adjusted group mean difference = − 0.15, 95% CI − 0.23 to − 0.06; P < .01), with increased app utilization associated with larger improvements.
Conclusions
The CACE analysis identified significant dose-response relationships indicating that increased use of the Minder app leads to larger effects that can reach levels of clinical significance.
Trial registration
ClinicalTrials.gov NCT05606601 (November 3, 2022); https://clinicaltrials.gov/ct2/show/NCT05606601.
Background
The beginning of post-secondary education coincides with an important developmental period between adolescence and early adulthood which often includes experiencing novel stressors at a time when many mental disorders have their onset [1, 2]. Although post-secondary students have been found to have high rates of mental health problems, many do not receive treatment [3, 4]. This missed opportunity for early intervention has led to calls for the reform of campus mental health services to better meet the needs of students [5]. One way to help address these mental health needs is through the implementation of digital tools which have been shown to be effective in post-secondary populations [6, 7]. Despite promising findings on the potential benefit of digital tools, maintaining user engagement remains a major challenge that limits their impact [8].
Co-development methods that involve end-users throughout the development process have been proposed as a way to improve user engagement with digital tools [9, 10]. In response, the Minder mobile application was developed through extensive student and stakeholder engagement to address the mental health and substance use concerns of a general university student population in a self-directed manner [11]. It includes the following 4 main components: (1) an automated chatbot delivering evidence-based content related to emotions, general wellbeing, student life, and substance use; (2) a services component that includes a validated screening tool which links students to needs-based resources on campus and in the community [12]; (3) a community component which consists of a list of campus student groups sorted by interest; and (4) a peer coaching component where students are provided with non-clinician guidance, which has been shown to improve user engagement [13], in the form of empathetic listening and recommendations on app use.
The Minder app was found to be effective in reducing symptoms of anxiety and depression in a randomized controlled trial (RCT) in a general student population using an intention-to-treat (ITT) analysis; however, the overall average effects were relatively small [14]. One reason contributing to the small average effects may be variation in intervention utilization within this non-clinical sample. ITT analyses are conducted based on the assigned randomization group and do not factor in the level of compliance or engagement with an intervention [15, 16]. This type of analysis may be useful for gaining a sense of the overall average effectiveness of being assigned to an intervention, but can differ significantly from the intervention’s impact on participants who engage with the intervention. This is because the ITT analysis compares outcomes between treatment and control groups regardless of the level of intervention engagement (i.e., the results of an ITT analysis typically represent a lower bound of the true effect under ideal conditions because the inclusion in the analysis of individuals who did not receive or fully adhere to the intervention may dilute the observed treatment effects). In an effort to address the impact of compliance in randomized trials, some researchers evaluate the effectiveness of interventions by using comparison groups defined by the amount of treatment received in the trial or by conducting per-protocol analyses that include only those participants who fully adhered to the treatment protocol as originally planned. However, these approaches risk compromising the internal validity of the trial because the comparison groups are no longer randomized and thus may not be balanced on known and unknown confounders [15].
In order to account for compliance without compromising internal validity, researchers have advocated for the inclusion of a Complier Average Causal Effect (CACE) analysis as a complement to traditional ITT analyses of randomized trials [15]. Contrary to analyses based on treatment received or per-protocol approaches, CACE can provide an unbiased estimate of the effect of assignment to the intervention for participants who receive the desired treatment or “comply” with the intervention [15, 17, 18]. Conceptually, CACE analysis classifies the study participants into groups based on the intervention usage under both treatment conditions (intervention and control). As such, compliance type and levels are unaffected by treatment assignment. Consequently, group differences evaluated by conditioning on compliance type/level can be interpreted as being causal estimates. It has been suggested that CACE analyses may be particularly useful for digital interventions given the common issue of nonadherence and the structured way utilization data is collected that enables researchers to determine whether participants have been adequately exposed to the intervention [15]. With respect to the Minder intervention, a CACE analysis was included in our a priori published trial protocol to complement findings of the ITT analysis and to provide further information on the effectiveness of the intervention by allowing for a focus on participants who engaged with the intervention [19]. For the purposes of this study, we are considering engagement with the app as an indicator of compliance, and we will be using these terms interchangeably [20]. Thus, this study seeks to utilize the CACE analysis to better understand the effect of the Minder intervention for those who engaged with the app and to leverage the accumulated data to highlight areas of focus for continuous improvement of Minder. 
More specifically, this study’s objectives are to examine the impact of engagement with the Minder intervention on mental health and substance use outcomes using a CACE analysis and identify factors associated with compliance.
Methods
Procedure
The Minder study was a two-arm, 30-day, single-blinded randomized controlled trial with one intervention group that had full access to the Minder app and one waitlist control group that had access to a limited version of the app, which only allowed participants to complete baseline and follow-up surveys and view a short introductory video. The statistician was blinded while completing the ITT analysis. Participants were recruited from the University of British Columbia Point Grey campus, where interested students were directed to an online Qualtrics survey consisting of an eligibility screening questionnaire and the informed consent form. The eligibility criteria for this study were: (1) being a student currently enrolled at the University of British Columbia Point Grey campus, (2) being at least 17 years old, (3) having access to a smartphone with Wi-Fi or cellular data, and (4) speaking English. Students who indicated they had a current suicidal plan were excluded from the study and provided with a list of crisis resources. Participants who completed the eligibility screening and consent process were contacted by email with a link to download the app and provided with personalized login information. As the Minder app was part of a study for students, it was not available to the general public and could only be accessed via an invitation from the research team.
Participants in both study groups downloaded the app and completed the baseline survey. They were then randomized in blocks of 10 directly through the app using a pre-determined list which was stratified based on any prior drug (stimulants or opioids) use. Participants in the intervention group then gained full access to the Minder intervention, which is described in detail in a previous publication [14]. Participants saw a pop-up tutorial video that explained the different components of the app and were instructed to use the app in a self-directed manner. There were no requirements on which components or how frequently the app should be used. Participants in the control group received a pop-up message that they would be notified when it was time to complete the next survey and were blocked from accessing any other components of the app. After 14 days, participants in the intervention group were sent push notifications and automated emails to complete an interim follow-up survey. After 30 days, both groups were reminded via push notifications and automated emails to complete the follow-up survey in the Minder app. Participants received a CAD$10 gift card for completion of the baseline survey and an additional CAD$10 gift card for completing the 30-day follow-up survey. Ethics approval for this study was obtained from the University of British Columbia Behavioural Research Ethics Board on January 6, 2022 (ethics ID: H21-03248) and was performed in accordance with the Declaration of Helsinki. The study was registered at ClinicalTrials.gov (NCT05606601) on November 3, 2022, and the full study protocol has been also published [19].
Measures
Data on the following socio-demographics and outcomes were collected via an in-app survey at baseline and 30-day follow-up. Data on compliance were obtained from the app backend (i.e., the computer server hosting the app), which recorded app utilization information for each participant.
Compliance
We defined compliance based on three criteria reflecting different levels of engagement with the app: (1) a binary indicator variable for compliers who accessed or used any component of the app (Yes/No), (2) a continuous measure of compliance (engagement) levels: the number of unique days participants used the app (0–12 observed range), and (3) an ordinal measure of compliance (engagement) levels: the number of main components accessed within the app (0–4). As previously mentioned, the Minder app includes 4 main components: (1) evidence-based chatbot content; (2) needs-based service recommendations; (3) campus community groups; (4) peer coaching support. The automated chatbot component contains multiple activities related to emotions, general wellbeing, student life, and substance use across over 30 different chats. The chatbot activities, as well as the services, community, and peer coaching can be used multiple times throughout the study. Additional details on the co-development of the app and its components can be found in our co-development paper and publications related to the RCT [11, 14, 19]. As participants used the app in a self-directed manner, compliance was defined in terms of varying degrees of engagement because of participants’ ability to use any of the four components as needed as well as any of the content within the four components to any degree of frequency or intensity. The most general form of engagement is defined as accessing or using any component of the app, such that participants who engaged were captured regardless of the intensity of engagement. The number of unique days of app use and the number of core components accessed within the app were added as more refined definitions of engagement in order to capture individuals who engaged more frequently with app content over a length of time or across a greater number of unique components.
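To make the three criteria concrete, the sketch below derives them from a hypothetical event log in Python; the log format, component names, and the compliance_measures helper are illustrative assumptions rather than the study's actual backend schema.

```python
from collections import defaultdict

# Hypothetical event-log format (not the study's actual backend schema):
# each record is (participant_id, date, component), where component is one
# of the four main Minder components.
events = [
    ("p1", "2022-11-05", "chatbot"),
    ("p1", "2022-11-05", "services"),
    ("p1", "2022-11-08", "chatbot"),
    ("p2", "2022-11-06", "community"),
]

def compliance_measures(events):
    """Derive the three engagement criteria per participant:
    (1) any use (binary), (2) unique days of use (continuous),
    (3) number of distinct main components accessed (ordinal, 0-4)."""
    days = defaultdict(set)
    components = defaultdict(set)
    for pid, date, comp in events:
        days[pid].add(date)
        components[pid].add(comp)
    return {
        pid: {
            "any_use": True,
            "unique_days": len(days[pid]),
            "n_components": len(components[pid]),
        }
        for pid in days
    }

measures = compliance_measures(events)
# p1: any_use=True, unique_days=2 (Nov 5 and Nov 8), n_components=2
```

Participants absent from the log entirely (no events) would be the non-compliers under the binary criterion.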
Socio-demographics
Socio-demographics captured included age, year of study, gender, and ethnicity. Participants were also asked whether they had a history of diagnosis or treatment (yes/no) of anxiety, depression, substance (alcohol/drug) use, or other mental health disorders.
Outcomes
Three primary outcome measures were assessed—anxiety symptoms, depression symptoms, and alcohol consumption risk. Anxiety symptoms were assessed using the 7-item General Anxiety Disorder (GAD-7) scale, with each question scored from 0 to 3 and total scores ranging from 0 to 21, with higher scores indicating greater frequency of symptoms [21]. Depressive symptoms were assessed using the 9-item Patient Health Questionnaire (PHQ-9), with each question scored from 0 to 3 and total scores ranging from 0 to 27, with higher scores indicating greater frequency of depressive symptoms [22]. Alcohol consumption risk was assessed using the 3-item US Alcohol Use Disorders Identification Test–Consumption Scale (USAUDIT-C) with total scores ranging from 0 to 18, with higher scores indicating greater alcohol consumption and related risk [23].
There were also a range of secondary outcomes included in the survey assessments. Mental wellbeing was assessed using the 7-item Short Warwick-Edinburgh Mental Wellbeing Scale (SWEMWS) with scores ranging from 7 to 35 and higher scores indicating a better outcome [24]. The number of alcoholic drinks consumed during a typical drinking session was assessed using the second question of the USAUDIT-C, which asks participants how many drinks containing alcohol they have on a typical day when drinking. Scores ranged from 0 (1 drink or Never drink) to 6 (10 or more drinks). The frequency of cannabis use was assessed using a single self-report question on frequency of cannabis consumption in the previous 30 days, with responses ranging from 0 (not in the past 30 days) to 7 (everyday: 3 or more times a day). The frequency of opioid use in the previous 30 days was assessed using self-reported questions that asked about the following types of use: any pharmaceutical opioid with a physician’s prescription and taken as prescribed; any pharmaceutical opioid either taken without a physician’s prescription or in larger doses than prescribed to get high, buzzed, or numbed out; or any street opioid. The opioid use outcome used in this investigation was the highest frequency reported for any of the types of prescribed, nonprescribed, and street opioids. More information on the socio-demographic questions and the outcomes can be found in the published protocol [19].
Statistical analyses
Descriptive statistics were used to characterize the sample, and unadjusted logistic regression (with fixed effects for randomization block) was used to examine the relationships between demographic and psychosocial characteristics assessed at baseline and compliance (any use of the Minder app: yes/no) during the 30-day study period. To estimate the causal effect of the intervention for those participants who used the app (i.e., compliers), we utilized a CACE analysis. This approach allowed us to account for non-compliance and estimate the effect of treatment among those who used the app. As described by Hesser [15], several core assumptions of a CACE analysis need to be met:
1. Stable Unit Treatment Value Assumption (SUTVA): We assumed that the intervention received by one participant did not affect the outcomes of another participant, and that there were no different versions of the treatment (all participants used the same version of the app).
2. Random assignment: Participants were randomly assigned to either the intervention or control group, ensuring that assignment was independent of potential outcomes.
3. Exclusion restriction: For non-compliers, the treatment had no effect. This assumption is justified because the intervention’s effect is mediated through engagement with the app, which non-compliers did not experience. Thus, the outcome for non-compliers is the same regardless of treatment group assignment [15].
4. Monotonicity: We assumed that there were no defiers; that is, no participants who would do the opposite of whatever group they were randomly assigned to (e.g., use the app components if assigned to the control group, or avoid accessing the intervention if assigned to the intervention group). This assumption is justified in this study because participants assigned to the control group had no access to the app components during the 30-day study period.
5. Independence of the instrument: The randomization process served as an instrument for treatment compliance and was independent of potential outcomes, conditional on random assignment.
We utilized instrumental variable techniques, using random assignment as an instrument for actual treatment received (engagement with the Minder app). We estimated the effects using two-stage least squares (2SLS) regression [25], where the first stage predicted compliance with the treatment based on the assigned group, and the second stage predicted the outcome based on the predicted compliance from the first stage. We analyzed each primary and secondary outcome with 2SLS using the ivregress 2sls command in Stata [26], adjusting for randomization block and the baseline score for the outcome. The 2SLS estimation method has the advantages of imposing no distributional assumptions on the outcomes (i.e., robustness) and of being applicable to binary, ordinal, and continuous measures of compliance [25]. This application of CACE provides an unbiased estimate of the treatment effect among compliers, as well as of dose-response relationships, provided the assumptions above hold. It also aligns with the CACE analysis methods used by Angrist, Imbens, and Rubin [27,28,29] and extends the analysis techniques demonstrated in Hesser’s [15] examination of internet-delivered interventions.
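To illustrate the estimation strategy, the following Python sketch implements a basic 2SLS estimator by hand on simulated data (the trial itself used Stata's ivregress 2sls; the simulated sample, the roughly 80% compliance rate, the confounded compliance mechanism, and the two_sls helper are all illustrative assumptions). The simulation builds in a complier effect of −1.0, which the instrumental-variable estimate should approximately recover even though compliance is correlated with an unobserved confounder.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated trial (illustrative only, not the study's data):
z = rng.integers(0, 2, n).astype(float)   # random assignment (the instrument)
u = rng.normal(size=n)                    # unobserved confounder
# Compliance is only possible in the intervention arm and, to make the
# example interesting, is correlated with the confounder (~80% comply).
d = z * (u > -0.84).astype(float)
baseline = rng.normal(10.0, 4.0, n)       # baseline symptom score (covariate)
# Outcome with a homogeneous complier effect of -1.0 built in:
y = 0.5 * baseline - 1.0 * d + u + rng.normal(size=n)

def two_sls(y, d, z, covar):
    """Two-stage least squares with one endogenous regressor d,
    instrumented by z, adjusting for an exogenous covariate."""
    ones = np.ones(len(y))
    # Stage 1: regress compliance on the instrument + exogenous terms.
    Z = np.column_stack([z, ones, covar])
    d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
    # Stage 2: regress the outcome on fitted compliance + exogenous terms.
    X = np.column_stack([d_hat, ones, covar])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[0]   # coefficient on compliance = CACE estimate

cace_hat = two_sls(y, d, z, baseline)
# cace_hat should land near the simulated complier effect of -1.0, whereas a
# naive users-vs-non-users comparison would be biased by the confounder u.
```

Because randomization is the instrument, the correlation between compliance and the confounder does not bias the second-stage coefficient; this is the property that distinguishes CACE from per-protocol or as-treated comparisons.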
Results
Sample description
Of the 2293 individuals who met the eligibility criteria and consented to participate in the RCT, 1489 completed the baseline survey and were subsequently randomized—743 to the intervention group and 746 to the control group. At the 30-day follow-up survey, conducted within a 44-day window from baseline completion, completion rates were 80% (591 participants) for the intervention group and 83% (619 participants) for the control group. A total of 279 participants (19%) did not complete the follow-up within the specified window and were considered lost to follow-up. As described in the publication of the ITT analyses [14], the median age of participants was 20 years, 70% self-identified as women, 34% reported a history of anxiety, and 39% reported moderate or greater levels of recent anxiety (i.e., total score of 10 or higher) on the GAD-7 at baseline. A history of depression was reported by 28% of participants, with 44% reporting moderate or greater levels of depressive symptoms (i.e., total score of 10 or higher) on the PHQ-9 at baseline. Additionally, as noted in the ITT analyses, the intervention group had higher baseline scores on both the GAD-7 (P = .02) and PHQ-9 (P = .02), but no other statistically significant differences in baseline characteristics were found between the intervention and control groups. Additional details on the trial flow and participant characteristics can be found in the published ITT analysis [14].
Rates of compliance
Of the 591 participants who completed the follow-up assessment in the intervention group, 475 (80.4%) were classified as compliers based on the definition of compliance as having used at least one feature of the app. The median number of unique days on which the app was used was 2 (interquartile range [IQR]: 1–3), with 19.6% using no app components, 33.8% using only one app component, 30.3% using 2 app components, 12.9% using 3 app components, and 3.4% using all 4 app components. A breakdown of the rates and associated odds of compliance (any use of the Minder app: yes/no) by demographic and psychosocial characteristics can be found in Table 1.
Statistically significant differences in app utilization (compliance) were observed across gender, ethnicity, history of depression or anxiety, baseline PHQ-9 score, SWEMWS score, and poor/fair overall self-assessed mental and physical health.
CACE analyses
The results of the CACE analysis of primary outcomes are reported in Table 2, where it can be seen that any use of the Minder app was associated with significantly lower scores on the GAD-7 (adjusted group mean difference = − 1.09, 95% CI − 1.60 to − 0.57; P < .01) and PHQ-9 (adjusted group mean difference = − 0.84, 95% CI − 1.41 to − 0.27; P < .01). A similar pattern of results was found when examining the impact of the intervention using the number of unique days of intervention use and the number of unique app components used as the exposure variable. More specifically, the results indicated that an increase in the number of unique days of utilization or in the number of unique components accessed was associated with greater reductions in GAD-7 and PHQ-9 scores at follow-up in a dose-response manner. For example, using the app on 12 days would be associated with a decrease of 4.92 points on the GAD-7 and 3.84 points on the PHQ-9.
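As a simple check on the quoted projection, the 12-day figures are consistent with a linear dose-response model in which the implied per-day coefficients (not reported directly in the text, and derived here purely by arithmetic from the 12-day figures) are roughly −0.41 for the GAD-7 and −0.32 for the PHQ-9:

```python
# Implied per-day CACE coefficients, assuming the quoted 12-day projections
# come from a linear dose-response model (per-day coefficient x days of use).
gad7_change_at_12_days = -4.92
phq9_change_at_12_days = -3.84

per_day_gad7 = gad7_change_at_12_days / 12   # points per unique day of app use
per_day_phq9 = phq9_change_at_12_days / 12
```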
The results of the CACE analysis of secondary outcomes are reported in Table 3. Any use of the Minder app (yes/no) was associated with significantly higher scores on the SWEMWS (adjusted group mean difference = 0.93, 95% CI 0.46 to 1.39; P <.01) and significantly lower frequency of cannabis use (adjusted group mean difference = − 0.15, 95% CI − 0.23 to − 0.06; P <.01). A similar pattern of results was found when examining the impact of the intervention using the number of unique days of intervention use and the number of unique app components used as the exposure variable. As reported for the primary outcomes, increased app utilization was also associated with larger improvements in SWEMWS scores and decreased frequency of cannabis use.
Discussion
The goal of this study was to utilize a series of CACE analyses to quantify the effect of the Minder intervention for those who engaged with the app and to leverage the accumulated data to highlight areas of focus for continuous improvement of Minder. These results complement the previously reported ITT analysis and show that participants who used any part of the Minder intervention had significant reductions in symptoms of anxiety and depression, though the average effect was modest. This is likely because the complier group includes a substantial number of low-intensity users for whom the intervention effect would be small. When considering the degree of engagement with the intervention, participants who used either more components of the intervention or used the intervention for a greater number of days had increasingly greater reductions in symptoms of anxiety and depression and frequency of cannabis use, as well as greater improvements in wellbeing. At the high end of observed levels of compliance (e.g., using the intervention for 12 days), participants could achieve a clinically significant change in symptoms of anxiety (i.e., 4 points on the GAD-7 [30]). These findings indicate that greater compliance with or utilization of the Minder app leads to greater improvements in mental health outcomes in students.
The Minder app was designed to improve the mental health of university students and help them manage substance use through a multi-faceted intervention that addresses a range of topics relevant to student life. It was designed for a general population of university students, some of whom may be experiencing clinical-level mental health symptoms and some of whom may have no symptoms of mental distress. The Minder intervention is also self-directed and relies on users to identify the topics they think would be helpful, as opposed to having a prescribed set of topics or modules that they are directed to complete. Because there were no explicit recommendations for participants to engage with any specific parts of the app, the measures of compliance used to evaluate this study were generic: any use of the app, the number of days the app was used, or the number of unique components accessed. Participants who chose to use the intervention differed from non-users on several factors. For example, those who used any part of the Minder app were more likely to be women or non-binary. They were also more likely to have a history of depression and to currently have at least moderate symptoms of depression. They were also more likely to report having fair or poor mental or physical health and to have lower SWEMWS scores than participants who did not use the app at all. These findings indicate there may be different typologies of people who may be inclined to use this type of intervention; for example, it may appeal to students who have more serious mental health symptoms they want support with, as well as students who have poor overall mental or physical health and wish to improve their overall wellbeing. The findings also highlight that a significant proportion of people may not engage with the intervention when it is offered in a self-directed manner.
Similar to our findings, other studies of digital mental health interventions have found differences in patterns of engagement across participants based on age, gender, race, education, insurance status and previous mental health status [31, 32]. Notably, one study of a similar chatbot-based intervention found that a type of user characterized as “efficient engagers” had lower levels of behavioural engagement with the intervention compared to other groups but were able to more effectively apply what they learned to their everyday life, resulting in greater improvements on certain mental health outcomes [32]. This suggests that future research on digital interventions should not only focus on increasing overall engagement, but also try to understand how to optimize this engagement to improve outcomes. Given the heterogeneity in participant characteristics related to engagement and improvements from digital mental health interventions, determining which individuals are most likely to benefit from a certain type of intervention could be used to appropriately match needs to specific services while also increasing scalability of supports. For example, a review of internet-based cognitive behavioural therapy found that while those with greater depression severity at baseline had greater improvements when offered guided versus unguided interventions, those with sub-threshold depression did not have greater benefits with guidance [33]. Precision medicine approaches have also been proposed as a way of identifying which individuals are most likely to benefit from an intervention based on a comprehensive profile of characteristics and which individuals may be better suited for another treatment option, such as treatment involving traditional in-person supports [34].
Overall, both the previously reported ITT analysis and the current CACE analysis have shown promising results in terms of the effectiveness of the Minder app in improving mental health outcomes in a general population of university students. Although the overall average effects have been modest, our analysis identified significant heterogeneity in levels of app engagement and strong dose-response relationships. Through the CACE analysis, we observed significant improvements in mental health among participants who used the app, with benefits increasing as their engagement with the content increased. It has been suggested that the Persuasive Systems Design (PSD) framework be used to guide the development of tools intended to change behaviour or attitudes [35, 36]. A recent review of smartphone apps for depression and anxiety found that apps which included more PSD engagement features produced greater reductions in mental health symptoms [37]. Similarly, features of PSD have been found to increase adherence to web-based interventions, particularly greater use of dialogue support (e.g., Reminders, Social role, Liking, Suggestion) [38]. In considering ways to improve both the effects of and compliance with the Minder intervention, elements of PSD may prove helpful.
Based on the co-development process with students, many PSD framework elements outlined by Oinas-Kukkonen and Harjumaa [36] have already been included in the Minder intervention, such as “Reminders” to use the app, “Personalization” of avatars and usernames, a “Social role” for the Minder chatbot, “Rewards” through the unlocking of app content after completing activities, and a focus on the “Liking” of the app through a visually appealing design with animated characters. Several additional PSD features may be helpful to incorporate into future versions of the app, specifically “Tailoring” and “Suggestion.” Many digital mental health interventions for young people use modular formats that require users to work through content in a certain order [39]; however, the Minder app is self-directed and does not currently provide specific content recommendations to users. Suggestion and Tailoring may be particularly relevant because the Minder app has over 30 different chatbot activities to choose from, which may make it difficult for some users to identify which elements of the app they might benefit most from. Although the app has already been tailored to university students, further tailoring of the intervention through specific content suggestions based on participant needs may make engaging with the app feel more personalized. Researchers working on another digital mental health intervention found that providing content recommendations to users based on their onboarding assessment increased their engagement with the recommended material [40]. A recent meta-analysis also found that internet studies with more frequent intended usage (i.e., how often participants were instructed to use the intervention) had better adherence [38].
In the current study, participants were not given specific instructions on how often to use the app; providing users with more tailored guidance on app usage therefore represents a promising means of increasing engagement and achieving better outcomes.
There are several limitations that should be considered in relation to the current study. Firstly, the CACE analysis requires a measure of compliance with which to evaluate the impact of using the intervention. In this study, participants viewed a pop-up tutorial video that explained the different components of the app and then proceeded to use the app in a self-directed manner, so defining compliance was a challenge. There was not necessarily a “right” or “wrong” way for participants to use the app, since they may have had varying levels of need. The intervention also contained many different components addressing a range of student life challenges, but not all participants may have needed all of these supports. For example, the Services component provided participants with recommendations for traditional health services; however, if a student had no current need for support, this component was unlikely to be beneficial. Similarly, the Community component provided a searchable directory of student groups, which could be useful for those without many social connections but less useful for students already engaged in many clubs. To accommodate this complexity, we used a generic measure of compliance based on any use of the app, along with measures of days of app use and number of components used. This method of defining compliance post-study has been used in several other intervention evaluations employing CACE analysis [17, 41, 42]. Researchers planning randomized trials of digital mental health interventions should consider how they will measure compliance before beginning the trial, so that a CACE analysis can be conducted to supplement traditional ITT analyses and improve understanding of compliance and the factors that influence it.
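The logic of the CACE estimator used here can be illustrated with a small simulation. The following is a minimal Python sketch on simulated data (the trial analyses themselves were conducted in Stata, and all numbers below are hypothetical): with randomized assignment as the instrument for self-selected app use, two-stage least squares recovers the effect among compliers, and with a single binary instrument it reduces to the Wald ratio of the ITT effect on the outcome to the ITT effect on uptake.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical data: randomized assignment (instrument), self-selected
# app use (compliance), and a symptom-change outcome.
z = rng.integers(0, 2, n)            # 1 = intervention arm, 0 = waitlist
complier = rng.random(n) < 0.8       # ~80% would use the app if offered it
d = z * complier                     # observed app use (intervention arm only)
# Simulated outcome: app use lowers symptom change by 1.0 point among users.
y = 0.5 * rng.standard_normal(n) - 1.0 * d

# Stage 1: regress observed app use on random assignment.
Z = np.column_stack([np.ones(n), z])
d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]

# Stage 2: regress the outcome on predicted app use (2SLS slope = CACE).
X = np.column_stack([np.ones(n), d_hat])
cace = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Equivalent Wald ratio: ITT effect on outcome / ITT effect on uptake.
wald = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())

print(round(cace, 2), round(wald, 2))  # both estimates are close to -1.0
```

Note that the ITT estimate (the numerator alone) is diluted toward zero by the roughly 20% of assigned participants who never use the app, which is exactly why the CACE can exceed the ITT effect when engagement is incomplete.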
Conclusions
In this study we have demonstrated that the Minder app can improve mental health outcomes in participants who use the app in its current self-directed format. However, a subgroup of participants did not engage with the intervention content at all, highlighting the need to consider compliance when developing and assessing the effectiveness of digital interventions. The CACE analysis demonstrated strong dose-response relationships, such that increased use of the intervention led to larger effects that can reach clinically significant levels. Future research is needed to explore how to maximize the impact of the Minder intervention by increasing engagement, for example through personalization tactics such as tailored recommendations for app use.
Data availability
The datasets used and/or analyzed for the current study are available from the corresponding author on reasonable request.
Abbreviations
- ITT: Intention-to-treat
- CACE: Complier Average Causal Effect
- GAD-7: General Anxiety Disorder 7-Item Scale
- PHQ-9: Patient Health Questionnaire 9-Item Scale
- USAUDIT-C: US Alcohol Use Disorders Identification Test-Consumption Scale
- SWEMWS: Short Warwick-Edinburgh Mental Wellbeing Scale
- CI: Confidence Interval
- RCT: Randomized Control Trial
- 2SLS: Two-Stage Least Squares
- IQR: Interquartile Range
- SD: Standard Deviation
- PSD: Persuasive Systems Design
References
Cunningham S, Duffy A. Investing in our future: importance of postsecondary student mental health research. Can J Psychiatry. 2019;64(2):79–81.
Chan V, Moore J, Derenne J, Fuchs DC. Transitional age youth and college mental health. Child Adolesc Psychiatr Clin N Am. 2019;28(3):363–75.
Auerbach RP, Mortier P, Bruffaerts R, Alonso J, Benjet C, Cuijpers P, et al. WHO World Mental Health Surveys International College Student Project: prevalence and distribution of mental disorders. J Abnorm Psychol. 2018;127(7):623–38.
McLafferty M, Lapsley CR, Ennis E, Armour C, Murphy S, Bunting BP, et al. Mental health, behavioural problems and treatment seeking among students commencing university in Northern Ireland. PLoS One. 2017;12(12):e0188785.
Duffy A, Saunders KEA, Malhi GS, Patten S, Cipriani A, McNevin SH, et al. Mental health care for university students: a way forward? Lancet Psychiatry. 2019;6(11):885–7.
Lattie EG, Adkins EC, Winquist N, Stiles-Shields C, Wafford QE, Graham AK. Digital mental health interventions for depression, anxiety, and enhancement of psychological well-being among college students: systematic review. J Med Internet Res. 2019;21(7):e12869.
Lehtimaki S, Martic J, Wahl B, Foster KT, Schwalbe N. Evidence on digital mental health interventions for adolescents and young people: systematic overview. JMIR Ment Health. 2021;8(4):e25847.
Melcher J, Camacho E, Lagan S, Torous J. College student engagement with mental health apps: analysis of barriers to sustained use. J Am Coll Health. 2022;70(6):1819–25.
Torous J, Nicholas J, Larsen ME, Firth J, Christensen H. Clinical review of user engagement with mental health smartphone apps: evidence, theory and improvements. Evid Based Ment Health. 2018;21(3):116–9.
Oti O, Pitt I. Online mental health interventions designed for students in higher education: a user-centered perspective. Internet Interv. 2021;26:100468.
Vereschagin M, Wang AY, Leung C, Richardson CG, Hudec KL, Doan Q, et al. Co-developing tools to support student mental health and substance use: minder app development from conceptualization to realization. J Behav Cogn Ther. 2023;33(1):35–49.
Virk P, Arora R, Burt H, Gadermann A, Barbic S, Nelson M, et al. HEARTSMAP-U: adapting a psychosocial self-screening and resource navigation support tool for use by post-secondary students. Front Psychiatry. 2022;13:812965.
Leung C, Pei J, Hudec K, Shams F, Munthali R, Vigo D. The effects of nonclinician guidance on effectiveness and process outcomes in digital mental health interventions: systematic review and meta-analysis. J Med Internet Res. 2022;24(6):e36004.
Vereschagin M, Wang AY, Richardson CG, Xie H, Munthali RJ, Hudec KL, et al. Effectiveness of the Minder mobile mental health and substance use intervention for university students: randomized controlled trial. J Med Internet Res. 2024;26(2):e54287.
Hesser H. Estimating causal effects of internet interventions in the context of nonadherence. Internet Interv. 2020;21:100346.
Sheiner LB, Rubin DB. Intention-to-treat analysis and the goals of clinical trials. Clin Pharmacol Ther. 1995;57(1):6–15.
Guo L, Qian Y, Xie H. Assessing complier average causal effects from longitudinal trials with multiple endpoints and treatment noncompliance: an application to a study of arthritis health journal. Stat Med. 2022;41(13):2448–65.
Imbens GW, Rubin DB. Bayesian inference for causal effects in randomized experiments with noncompliance. Ann Stat. 1997;25(1):305–27.
Wang AY, Vereschagin M, Richardson CG, Xie H, Hudec KL, Munthali RJ et al. Evaluating the effectiveness of a codeveloped e-Mental health intervention for university students: protocol for a randomized controlled trial. JMIR Res Protoc. 2023;12:e49364. https://doi.org/10.2196/49364
Connell AM. Employing complier average causal effect analytic methods to examine effects of randomized encouragement trials. Am J Drug Alcohol Abuse. 2009;35(4):253–9.
Spitzer RL, Kroenke K, Williams JBW, Löwe B. A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch Intern Med. 2006;166(10):1092–7.
Kroenke K, Spitzer RL, Williams JBW. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med. 2001;16(9):606–13.
Higgins-Biddle JC, Babor TF. A review of the alcohol use disorders identification test (AUDIT), AUDIT-C, and USAUDIT for screening in the United States: past issues and future directions. Am J Drug Alcohol Abuse. 2018;44(6):578–86.
Stewart-Brown S, Tennant A, Tennant R, Platt S, Parkinson J, Weich S. Internal construct validity of the Warwick-Edinburgh mental well-being scale (WEMWBS): a Rasch analysis using data from the Scottish health education population survey. Health Qual Life Outcomes. 2009;7(1):15.
Angrist JD, Imbens GW. Two-stage least squares estimation of average causal effects in models with variable treatment intensity. J Am Stat Assoc. 1995;90(430):431–42.
Stata Corp. Stata statistical software: release 15.1. StataCorp LLC. 2017. https://www.stata.com/stata15/
Angrist JD, Imbens GW, Rubin DB. Identification of causal effects using instrumental variables. J Am Stat Assoc. 1996;91(434):444–55.
Imbens GW, Rubin DB. Estimating outcome distributions for compliers in instrumental variables models. Rev Econ Stud. 1997;64(4):555–74.
Imbens G, Rubin DB. Causal inference for statistics, social, and biomedical sciences: an introduction. New York: Cambridge University Press; 2015.
Toussaint A, Hüsing P, Gumz A, Wingenfeld K, Härter M, Schramm E, et al. Sensitivity to change and minimal clinically important difference of the 7-item generalized anxiety disorder questionnaire (GAD-7). J Affect Disord. 2020;265:395–401.
Aschbacher K, Rivera LM, Hornstein S, Nelson BW, Forman-Hoffman VL, Peiper NC. Longitudinal patterns of engagement and clinical outcomes: results from a therapist-supported digital mental health intervention. Psychosom Med. 2023;85(7):651–8.
Hoffman V, Flom M, Mariano TY, Chiauzzi E, Williams A, Kirvin-Quamme A, et al. User engagement clusters of an 8-week digital mental health intervention guided by a relational agent (Woebot): exploratory study. J Med Internet Res. 2023;25:e47198.
Karyotaki E, Efthimiou O, Miguel C, Maas genannt Bermpohl F, Furukawa TA, Cuijpers P, et al. Internet-based cognitive behavioral therapy for depression: a systematic review and individual patient data network meta-analysis. JAMA Psychiatry. 2021;78(4):361–71.
Benjet C, Zainal NH, Albor Y, Alvis-Barranco L, Carrasco-Tapias N, Contreras-Ibáñez CC, et al. A precision treatment model for internet-delivered cognitive behavioral therapy for anxiety and depression among university students: a secondary analysis of a randomized clinical trial. JAMA Psychiatry. 2023;80(8):768–77.
Alqahtani F, Orji R, Riper H, McCleary N, Witteman H, McGrath P. Motivation-based approach for tailoring persuasive mental health applications. Behav Inf Technol. 2023;42(5):569–95.
Oinas-Kukkonen H, Harjumaa M. Persuasive systems design: key issues, process model, and system features. Commun Assoc Inf Syst. 2009;24(28). https://doi.org/10.17705/1CAIS.02428
Wu A, Scult MA, Barnes ED, Betancourt JA, Falk A, Gunning FM. Smartphone apps for depression and anxiety: a systematic review and meta-analysis of techniques to increase engagement. NPJ Digit Med. 2021;4(1):20.
Kelders SM, Kok RN, Ossebaard HC, Van Gemert-Pijnen JEWC. Persuasive system design does matter: a systematic review of adherence to web-based interventions. J Med Internet Res. 2012;14(6):e152.
Garrido S, Millington C, Cheers D, Boydell K, Schubert E, Meade T, et al. What works and what doesn’t work? A systematic review of digital mental health interventions for depression and anxiety in young people. Front Psychiatry. 2019;10:759.
Chaturvedi A, Aylward B, Shah S, Graziani G, Zhang J, Manuel B, et al. Content recommendation systems in web-based mental health care: real-world application and formative evaluation. JMIR Form Res. 2023;7:e38831.
Rooke S, Copeland J, Norberg M, Hine D, McCambridge J. Effectiveness of a self-guided web-based cannabis treatment program: randomized controlled trial. J Med Internet Res. 2013;15(2):e26.
Huang S, Cordova D, Estrada Y, Brincks AM, Asfour LS, Prado G. An application of the complier average causal effect analysis to examine the effects of a family intervention in reducing illicit drug use among high-risk Hispanic adolescents. Fam Process. 2014;53(2):336–47.
Acknowledgements
Not applicable.
Funding
This work was supported by Health Canada’s Substance Use and Addictions Program (arrangement:1920-HQ-000069; University of British Columbia ID: F19-02914).
Author information
Authors and Affiliations
Contributions
DVV and HX conceived the study, provided overall guidance, and supervised the statistical analyses. AYW, MV, CGR, and RJM prepared the first draft. AYW, MV, LM, and TM aided in the collection of the data. RM and KH conducted the data analyses. All authors collaborated in interpreting the results and contributed to the preparation of the submitted version of the manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
This study was performed in accordance with the Declaration of Helsinki and ethics review was approved by UBC Behavioural Research Ethics Board (H21-03248).
Consent for publication
N/A.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Wang AY, Vereschagin M, Richardson CG, et al. Examining the effects of engagement with an app-based mental health intervention: a secondary analysis of a randomized control trial with treatment non-compliance. Int J Ment Health Syst. 2025;19:30. https://doi.org/10.1186/s13033-025-00688-4