TY - JOUR
T1 - Validation of the Mobile Application Rating Scale (MARS)
AU - Terhorst, Yannik
AU - Philippi, Paula
AU - Sander, Lasse B.
AU - Schultchen, Dana
AU - Paganini, Sarah
AU - Bardus, Marco
AU - Santo, Karla
AU - Knitza, Johannes
AU - Machado, Gustavo C.
AU - Schoeppe, Stephanie
AU - Bauereiß, Natalie
AU - Portenhauser, Alexandra
AU - Domhardt, Matthias
AU - Walter, Benjamin
AU - Krusche, Martin
AU - Baumeister, Harald
AU - Messner, Eva Maria
N1 - Publisher Copyright:
© 2020 Terhorst et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
PY - 2020/11/2
Y1 - 2020/11/2
N2 - Background: Mobile health apps (MHA) have the potential to improve health care. The commercial MHA market is growing rapidly, but the content and quality of available MHA are largely unknown. Instruments for assessing the quality and content of MHA are therefore much needed. The Mobile Application Rating Scale (MARS) is one of the most widely used tools for evaluating the quality of MHA, yet only a few validation studies have investigated its metric quality, and no study has evaluated its construct validity and concurrent validity. Objective: This study evaluates the construct validity, concurrent validity, reliability, and objectivity of the MARS. Methods: Data were pooled from 15 international app quality reviews to evaluate the metric properties of the MARS. The MARS measures app quality across four dimensions: engagement, functionality, aesthetics, and information quality. Construct validity was evaluated by comparing competing models using confirmatory factor analysis (CFA). Noncentrality (RMSEA), incremental (CFI, TLI), and residual (SRMR) fit indices were used to evaluate goodness of fit. As a measure of concurrent validity, correlations with another quality assessment tool (ENLIGHT) were investigated. Reliability was determined using omega, and objectivity was assessed by intra-class correlation (ICC). Results: In total, MARS ratings from 1,299 MHA covering 15 different health domains were included. CFA confirmed a bifactor model with a general factor and a factor for each dimension (RMSEA = 0.074, TLI = 0.922, CFI = 0.940, SRMR = 0.059). Reliability was good to excellent (omega = 0.79 to 0.93). Objectivity was high (ICC = 0.82). The MARS correlated with ENLIGHT (ps < .05). Conclusion: The metric evaluation of the MARS demonstrated its suitability for quality assessment. As such, the MARS could be used to make the quality of MHA transparent to health care stakeholders and patients. Future studies could extend the present findings by investigating the re-test reliability and predictive validity of the MARS.
AB - Background: Mobile health apps (MHA) have the potential to improve health care. The commercial MHA market is growing rapidly, but the content and quality of available MHA are largely unknown. Instruments for assessing the quality and content of MHA are therefore much needed. The Mobile Application Rating Scale (MARS) is one of the most widely used tools for evaluating the quality of MHA, yet only a few validation studies have investigated its metric quality, and no study has evaluated its construct validity and concurrent validity. Objective: This study evaluates the construct validity, concurrent validity, reliability, and objectivity of the MARS. Methods: Data were pooled from 15 international app quality reviews to evaluate the metric properties of the MARS. The MARS measures app quality across four dimensions: engagement, functionality, aesthetics, and information quality. Construct validity was evaluated by comparing competing models using confirmatory factor analysis (CFA). Noncentrality (RMSEA), incremental (CFI, TLI), and residual (SRMR) fit indices were used to evaluate goodness of fit. As a measure of concurrent validity, correlations with another quality assessment tool (ENLIGHT) were investigated. Reliability was determined using omega, and objectivity was assessed by intra-class correlation (ICC). Results: In total, MARS ratings from 1,299 MHA covering 15 different health domains were included. CFA confirmed a bifactor model with a general factor and a factor for each dimension (RMSEA = 0.074, TLI = 0.922, CFI = 0.940, SRMR = 0.059). Reliability was good to excellent (omega = 0.79 to 0.93). Objectivity was high (ICC = 0.82). The MARS correlated with ENLIGHT (ps < .05). Conclusion: The metric evaluation of the MARS demonstrated its suitability for quality assessment. As such, the MARS could be used to make the quality of MHA transparent to health care stakeholders and patients. Future studies could extend the present findings by investigating the re-test reliability and predictive validity of the MARS.
UR - http://www.scopus.com/inward/record.url?scp=85095405860&partnerID=8YFLogxK
U2 - 10.1371/journal.pone.0241480
DO - 10.1371/journal.pone.0241480
M3 - Article
C2 - 33137123
AN - SCOPUS:85095405860
SN - 1932-6203
VL - 15
JO - PLoS ONE
JF - PLoS ONE
IS - 11
M1 - e0241480
ER -