
Building the case for actionable ethics in digital health research supported by artificial intelligence

Abstract

The digital revolution is disrupting the ways in which health research is conducted, and subsequently, changing healthcare. Direct-to-consumer wellness products and mobile apps, pervasive sensor technologies and access to social network data offer exciting opportunities for researchers to passively observe and/or track patients ‘in the wild’ and 24/7. The volume of granular personal health data gathered using these technologies is unprecedented, and is increasingly leveraged to inform personalized health promotion and disease treatment interventions. The use of artificial intelligence in the health sector is also increasing. Although rich with potential, the digital health ecosystem presents new ethical challenges for those making decisions about the selection, testing, implementation and evaluation of technologies for use in healthcare. As the ‘Wild West’ of digital health research unfolds, it is important to recognize who is involved, and identify how each party can and should take responsibility to advance the ethical practices of this work. While not a comprehensive review, we describe the landscape, identify gaps to be addressed, and offer recommendations as to how stakeholders can and should take responsibility to advance socially responsible digital health research.


Background

The digital revolution is disrupting the ways in which health research is conducted, and subsequently, changing healthcare [1,2,3]. The rise of digital health technologies has resulted in vast quantities of both qualitative and quantitative ‘big data’, which contain valuable information about user interactions and transactions that may potentially benefit patients and caregivers [4]. Digital data ‘exhaust’, or the traces of everyday behaviors captured in our digital experiences, are of particular interest because they contain our natural behaviors gathered in real time. No doubt, important societal conversations are needed to shape how these sociotechnical systems influence our lives as individuals, as well as the impact on society [5]. While not a formal review, this opinion essay provides a selective overview of the rapidly changing digital health research landscape, identifies gaps, highlights several efforts that are underway to address these gaps, and concludes with recommendations as to how stakeholders can and should take responsibility to advance socially responsible digital health research.

Direct-to-consumer wellness products and mobile apps (e.g., Fitbit, Strava), wearable research tools (e.g., SenseCam, ActivPAL), and access to social network data offer exciting opportunities for individuals [6], as well as traditional health researchers [7], to passively observe and/or track individual behavior ‘in the wild’ and 24/7. The volume of granular personal health data gathered using these technologies is unprecedented, and is increasingly leveraged to inform personalized health promotion and disease treatment interventions. The use of artificial intelligence (AI) tools in the health sector is also increasing. For example, electronic health records provide training data for machine learning that inform algorithms, which can detect anomalies more accurately than trained humans – particularly in the fields of cancer, cardiology, and retinopathy [8]. The digital therapeutics sector is also seeking to expand and bring products into the healthcare system, with the goal of complementing or providing an alternative to traditional medical treatments [9]. While the digital health revolution brings transformational promise for improving healthcare, we must acknowledge our collective responsibility to recognize and prevent unintended consequences introduced by biased and opaque algorithms that could exacerbate health disparities and jeopardize public trust [10, 11]. Moreover, it is critical that the minimal requirements used to make a digital health technology available to the public are not mistaken for a product that has passed rigorous testing or demonstrated real world therapeutic value [12].

Although rich with potential, the digital health ecosystem presents new ethical challenges for those making decisions about the selection, testing, implementation and evaluation of technologies in healthcare. Researchers began studying the related ethical issues more than 20 years ago, when electronic health record technology was first being conceptualized [13]. As new forms of pervasive information and communication technology generate data, guiding principles and standards are emerging within academic research centers [14,15,16] and industry sectors [17, 18]. Accepted ethical principles in health research, including respect for persons, beneficence and justice, remain relevant and must be prioritized to ensure that research participants are protected from harm. Applying these principles in practice means that: people will have the information they need to make an informed choice; risks of harm will be evaluated against potential benefits and managed; and no one group of people will bear the burden of testing new health information technologies [19]. However, ethical challenges arise from the combination of new, rapidly evolving technologies; new stakeholders (e.g. technology giants, digital therapeutic start-ups, citizen scientists); the sheer quantity of data; novel computational and analytic techniques; and a lack of regulatory controls or common standards to guide this convergence in the health ecosystem.

Of particular concern is that these technologies are finding their way into both research and clinical practice without appropriate vetting. Consider the adage, "if the product is free, then you're the product": our search terms, swipes, clicks and keyboard interactions produce the data that companies use to inform product improvement. These 'big data' are used to train algorithms to produce, for example, tailored advertisements. Consumers allow this by clicking "I Accept" to confirm their agreement with the Terms and Conditions (T&C), which are not necessarily intended to be easy to read or understand. Why does this matter? When an algorithm is used to serve up a reminder about that yellow jacket you were eyeing, or the summer vacation you mentioned to a friend the other day, it may seem 'creepy', but it might be welcome in terms of convenience. Sometimes the AI gets it right, and other times it is not even close. For example, if you were to write something on Facebook that its proprietary AI interprets as putting you at serious risk, it may send the police to your home! Is Facebook getting it right? We do not know: Facebook has claimed that, even though its algorithm is not perfect and makes mistakes, it does not consider its actions to be 'research' [20]. Aside from threats to one's privacy, we should question the process of informed consent, whether there is an objective calculation of risk of harms against potential benefits, and whether people included in the product testing phase are those most likely to benefit.

Governance in the ‘wild west’

Those involved in the development, testing and deployment of technologies used in the digital health research sector include technology developers or ‘tool makers’, funders, researchers, research participants and journal editors. As the ‘Wild West’ of digital health research moves forward, it is important to recognize who is involved, and to identify how each party can and should take responsibility to advance the ethical practices of this work.

Who is involved?

In the twentieth century, research was carried out by scientists and engineers affiliated with academic institutions in tightly controlled environments. Today, biomedical and behavioral research is still carried out by trained academic researchers; however, they are now joined by technology giants, startup companies, non-profit organizations, and everyday citizens (e.g. do-it-yourself, quantified-self communities). The biomedical research sector now looks very different, and the lines are blurred because the kind of product research carried out by the technology industry has, historically, not had to follow the same rules that protect research participants, creating the potential for elevated risks of harm. Moreover, how and whether research is carried out to assess a product's effectiveness varies in its standards and methods, and, when the technology has health implications, those standards become critically important. In addition, not all persons who initiate research are regulated or professionally trained to design studies. Specific to regulations, academic research environments require the involvement of an ethics board (known as an institutional review board [IRB] in the USA, and a research ethics committee [REC] in the UK and European Union). IRB review is a federal mandate for entities that receive US federal funding to conduct health research. The ethics review is a peer review process to evaluate proposed research, and to identify and reduce potential risks that research participants may experience. An objective peer review process is not required of technology giants, startup companies or those who identify with the citizen science community [10, 21]; yet we have a societal responsibility to get this right.

What questions should be asked?

When using digital health technologies, a first step is to ask whether the tools, be they apps, sensors or AI applied to large data sets, have demonstrated value with respect to outcomes. Are they clinically effective? Do they measure what they purport to measure (validity) consistently (reliability)? For example, a recent review of models that predict suicide attempts and death found that most have a positive predictive value of less than 1%, a level at which they are not yet deemed clinically viable [22]. Will these innovations also improve access for those at highest risk of health disparities? To answer these questions, it is critical that all involved in the digital health ecosystem do their part to ensure that technologies are designed and scientifically tested in keeping with accepted ethical principles; are considerate of privacy, effectiveness, accessibility and utility; and follow sound data management practices. However, government agencies, professional associations, technology developers, academic researchers, technology startups, public organizations and municipalities may be unaware of what questions to ask, including how to evaluate new technologies. In addition, not all tools being used in the digital health ecosystem undergo rigorous testing, which places the public at risk of being exposed to untested and potentially flawed technologies.
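To make concrete why such low values arise, the short sketch below applies Bayes' theorem to a rare outcome; the sensitivity, specificity and prevalence figures are assumptions chosen for illustration, not numbers taken from the review cited above.

```python
def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability that a positive prediction is a true positive (Bayes' theorem)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)


# Assumed figures for illustration only: a model with 80% sensitivity and 90%
# specificity applied to an outcome affecting 0.1% of the population.
ppv = positive_predictive_value(sensitivity=0.80, specificity=0.90, prevalence=0.001)
print(f"Positive predictive value: {ppv:.2%}")  # ~0.79%
```

Under these assumptions, fewer than 1 in 100 people flagged by the model would actually experience the outcome, which is why the base rate matters as much as model accuracy when judging clinical viability.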

Demonstrating value must be a precursor to the use of any technologies that claim to improve clinical treatment or population health. Value is based on the product being valid and reliable, which means that scientific research is needed before a product is deployed within the health sector [12]. We should also not move ahead assuming that privacy and the technology revolution are mutually exclusive. We are in a precarious position in which, without standards to shape acceptable and ethical practices, we collectively run the risk of harming those who stand to benefit most from digital health tools.

Decision-making framework

While there are ongoing discussions about the need for regulations and laws, and incremental progress is being made on that front, it is essential that, until consensus is reached, stakeholders recognize their obligation to promote the integrity of digital health research [23]. The digital health decision-making domains framework (Fig. 1) was developed to help researchers make sound decisions when selecting digital technologies for use in health research [24, 25]. While originally developed for researchers, this framework is applicable to various stakeholders who might evaluate and select digital technologies for use in health research and healthcare. The framework comprises five domains: (1) Participant Privacy; (2) Risks and Benefits; (3) Access and Usability; (4) Data Management; and (5) Ethical Principles. These five domains are presented as intersecting relationships.

Fig. 1 Digital health decision-making framework and excerpts from the companion checklist designed to support researchers [24]

The domains in this framework were developed into a checklist tool to further facilitate decision-making. The checklist was informed by developmental research involving a focus group discussion and a design exercise with behavioral scientists [25]; a minimal illustrative sketch of how such a checklist might be organized appears below. To demonstrate how the decision-making domains can be put into practice, we then present a use case that illustrates the complexities and nuances that are important for stakeholders to consider.
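The following sketch is a hypothetical illustration only, assuming a simple Python representation: the five domain names are taken from Fig. 1, but the prompts are placeholders rather than the actual checklist items published in the companion tool [24, 25].

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Domain:
    """One decision-making domain and the prompts a review team might work through."""
    name: str
    prompts: List[str] = field(default_factory=list)


# Domain names follow Fig. 1; the prompts are illustrative placeholders only.
checklist = [
    Domain("Participant Privacy", [
        "What personal data does the technology collect, and who can access it?",
        "Can participants limit or revoke data collection?",
    ]),
    Domain("Risks and Benefits", [
        "What evidence supports the claimed benefit?",
        "What harms (e.g., stigma, legal or economic consequences) are plausible?",
    ]),
    Domain("Access and Usability", [
        "Is the tool usable by the intended population, including non-English speakers?",
    ]),
    Domain("Data Management", [
        "Who owns the data, and how are sharing and monitoring handled?",
    ]),
    Domain("Ethical Principles", [
        "How are respect for persons, beneficence and justice addressed?",
    ]),
]

# Print the checklist so a review team can walk through it domain by domain.
for domain in checklist:
    print(domain.name)
    for prompt in domain.prompts:
        print(f"  - {prompt}")
```

In practice, a team adopting the published checklist would replace these placeholder prompts with the items from the tool itself and record the answers alongside each domain.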

Use case: MoodFlex for mental health

MoodFlex is a private startup technology company that has developed a mobile app to detect signals of poor mental health by analyzing a person's typing and voice patterns on their smartphone. MoodFlex is negotiating with several municipalities to integrate its product within the public mental healthcare system, with the goal of delivering better services to people with mental illness through predictive analytics. Since MoodFlex does not claim to provide a clinical diagnosis or treatment, approval from the US Food and Drug Administration is not necessary. The vendor claims to have a proven product; however, there are no publications documenting evidence that it is safe, valid or reliable. The only research that is formally acknowledged involves an evaluation of the implementation process and uptake of the product by health providers within the state mental health system. Patients will be invited to download the app after reviewing the vendor's T&C; no other consent process is proposed. The algorithm is proprietary, so an external body cannot determine whether the underlying machine-learning model was trained on representative data or how it reaches its decisions. Data captured about people using the app are owned by the vendor.

Brief analysis

Before introducing MoodFlex into the public healthcare system, decision makers – particularly the funding organization – should evaluate evidence supporting the efficacy of this product. Reproducible evidence is the hallmark of evidence-based practice, and is the first step prior to dissemination and implementation. If a product is supported by evidence, the logical next step is the translational phase, in which a ‘dissemination and implementation’ (D&I) design is appropriate. Unfortunately, many health apps move straight into a D&I phase before the evidence exists to support that direction.

Lacking evidence that the product is effective, decision-makers should recognize that a testing phase is necessary. As with regulated research involving people, a research plan should be developed and reviewed by an external and objective ethics board (i.e., REC or IRB) that will assess the degree to which people who are invited do not bear an inappropriate burden (justice), potential risks are offset by the benefits (beneficence), and individuals are provided with an ability to make an informed choice to volunteer (respect). At this early stage, it is reasonable for the vendor to provide the sponsor with a robust data management plan, with explicit language regarding data ownership, access, sharing and monitoring. When involving vulnerable populations, such as those with a mental health diagnosis, additional precautions should be considered to ensure that those involved in the study are protected from harms – including stigma, economic and legal implications. In addition, it is important to consider whether some people will be excluded because of access barriers. For example, it may be necessary to adapt the technology to be useful to non-English speakers. Informed consent must also be obtained in a way that results in a person making a choice to participate based on having adequate and accessible information – this demonstrates the principle of ‘respect for persons’, and is a hallmark of research ethics. Placing consent language for a research study in the T&C is unacceptable. For patients who become research participants, it is particularly important for them to understand the extent to which the technology will support their healthcare needs. Patients might falsely rely on the technology to provide the care they believe they need when, in reality, they may need to see their healthcare provider.

Digital research gaps and opportunities

This use case reflects the shift in health research associated with digital technologies, in that traditional methods of developing an evidence base may be pushed aside in favor of what appears to be exciting innovation. The landscape is unsettled and potentially dangerous, which makes governance important. We have identified three notable gaps: (1) disciplinary/sector challenges; (2) issues of data and technology literacy; and (3) inconsistent or nonexistent standards to guide the use of AI and other emerging technologies in healthcare settings.

Inter/trans/cross-disciplinary and sector challenges

Emerging technologies and AI systems require diverse expertise when applied to digital medicine, which introduces new challenges. Technology makers may not understand patients’ needs, and develop tools with limited utility in practice [25, 26]. Computational scientists may train AI using datasets that are not representative of the public, limiting the ability to provide meaningful assessments or predictions [27]. Clinicians may not know how to manage the depth of granular data, nor be confident in decisions produced by AI [28]. Research is needed to examine this disconnect, and identify strategies to reduce gaps and improve meaningful connections between these groups that are integral to digital health research and the use of AI in the health care sector.

Digital/tech-literacy

The idea that keystrokes and voice patterns can be used to aid diagnosis of Parkinson’s disease remains impressive, but now it may also be possible to use keystroke dynamics, kinematics and voice patterns to detect mental health problems [29]. Knowing this information may create public concern if not communicated in a way that is useful and contextual, adding to fear, skepticism and mistrust. The ‘public’ includes policy-makers, educators, regulators, science communicators, and those in our healthcare system, including clinicians, patients, and caregivers. Research is needed to increase our understanding of what these stakeholders know, what they want to know, and how best to increase their technology literacy. This information can then be used to inform educational resources targeting specific stakeholders. For example, when reviewing manuscripts reporting digital health research, reviewers and editors should be aware of how to evaluate new methodologies and computational analytics to verify the accuracy and appropriateness of the research and results.

Ethical and regulatory standards

As new digital tools and AI-enabled technologies are developed for the healthcare market, they will need to be tested with people. As with any research involving human participants, the ethics review process is critical. Yet our regulatory bodies (e.g., IRBs) may not have the experience or knowledge needed to conduct a risk assessment that evaluates the probability or magnitude of potential harms [30]. Technologists and data scientists who are building the tools and training the algorithms may not have received ethics education as part of their formal training, which can leave them unaware of privacy concerns, risk assessment, usability and societal impact, and unfamiliar with regulatory requirements to protect research participants [23]. Similarly, the data used to train algorithms are often not considered to qualify as human subjects research, which means that, even in a regulated environment, a prospective review for safety may not occur.

New initiatives – what resources are available for the digital health/medicine community?

Several initiatives have begun to address the ethical, legal and social implications (ELSI) of the digital revolution in healthcare. The most prominent of these concern AI, and their foci are broad, spanning autonomous vehicles, facial recognition, city planning, the future of work and, in some cases, health. A few selected examples of current AI efforts, which appear to be well-funded and collaborative programs, are listed in Table 1.

Table 1 AI initiatives underway to inform broad cross-sector standards

Across these initiatives are efforts to assess the potential ELSI of AI. Much as the European Union's (EU) General Data Protection Regulation (GDPR) has influenced practice in countries beyond the EU, the intention of groups assessing AI through an ELSI lens is to develop standards that can be applied or adapted globally. In practice, however, most current efforts to apply ELSI to AI are quite broad and, as a result, may overlap in scope and lack specificity.

While AI has a place in the digital health revolution, the scope of technologies goes well beyond AI. Other initiatives are looking more specifically at ELSI in mobile apps, social network platforms, and wearable sensors being used in digital research. These include, for example, the Connected and Open Research Ethics (CORE) initiative at the University of California (UC) San Diego Research Center for Optimal Digital Ethics in Health (ReCODE Health), the Pervasive Data Ethics for Computational Research (PERVADE) program at the University of Maryland, and the Mobile Health ELSI (mHealthELSI) project out of Sage Bionetworks and the University of Louisville. What these initiatives have in common is a goal to inform policy and governance in a largely unregulated space. These initiatives are but a few examples, and it is important to note that many laboratories and institutes are working on digital health ELSI.

Conclusion

With new health technologies and new actors in the arena, the gap between known and unknown risks fundamentally challenges the degree to which decision-makers can properly evaluate the probability and magnitude of potential harms against benefits. Now is the time to take a step back and develop the infrastructure necessary for vetting new digital health technologies, including AI, before they are deployed into our healthcare system. Selecting and implementing technologies in the digital health ecosystem requires consideration of ethical principles, risks and benefits, privacy, access and usability, and data management. New technologies have the potential to add important value; however, without careful vetting, they may exacerbate health disparities among those who are most vulnerable.

Availability of data and materials

Not applicable.

Abbreviations

AI: Artificial intelligence
ELSI: Ethical, legal and social implications
IRB: Institutional review board
REC: Research ethics committee

References

  1. Agarwal R, Gao G, DesRoches C, Jha AK. Research commentary – the digital transformation of healthcare: current status and the road ahead. Inform Syst Res. 2010;21(4):796–809. https://doi.org/10.1287/isre.1100.0327. Accessed 15 May 2019.

  2. Herrmann M, Boehme P, Mondritzki T, Ehlers JP, Kavadias S, Truebel H. Digital transformation and disruption of the health care sector: internet-based observational study. J Med Internet Res. 2018;20(3):e104.


  3. Wallin A. Transforming healthcare through entrepreneurial innovations. Int J E-Services Mob Appl. 2017;9(1):1–17. https://doi.org/10.4018/IJESMA.2017010101. Accessed 15 May 2019.

  4. Lupton D. The commodification of patient opinion: the digital patient experience economy in the age of big data. Sociol Health Illn. 2014;36(6):856–69.


  5. Boyd D, Crawford K. Critical questions for big data. Inform Commun Soc. 2012;15(5):662–79.


  6. Lupton D. Self-tracking, health and medicine. Health Sociol Rev. 2017;26(1):1–5.


  7. Arigo D, Jake-Schoffman DE, Wolin K, Beckjord E, Hekler EB, Pagoto SL. The history and future of digital health in the field of behavioral medicine. J Behav Med. 2019;42(1):67–83.


  8. Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230–43.


  9. Joyce M, Leclerc O, Westhues K, Xue H. Digital therapeutics: preparing for takeoff. New York: McKinsey & Company; 2018. https://www.mckinsey.com/industries/pharmaceuticals-and-medical-products/our-insights/digital-therapeutics-preparing-for-takeoff. Accessed 21 Mar 2019


  10. Nebeker C, Harlow J, Giacinto-Espinoza R, Linares-Orozco R, Bloss C, Weibel N. Ethical and regulatory challenges of research using pervasive sensing and other emerging technologies: IRB perspectives. AJOB Empir Bioeth. 2017;8(4):266–76.

  11. Ferryman K, Pitcan M. Fairness in precision medicine. New York: Data & Society; 2018. https://datasociety.net/wp-content/uploads/2018/02/Data.Society.Fairness.In_.Precision.Medicine.Feb2018.FINAL-2.26.18.pdf. Accessed 13 May 2019


  12. Coravos A, Goldsack JC, Karlin DR, Nebeker C, Perakslis E, Zimmerman N, et al. Digital medicine: a primer on measurement. Digit Biomarkers. 2019;3(2):31–71 https://www.karger.com/Article/FullText/500413. Accessed 13 May 2019.


  13. Kluge EH. Health information, the fair information principles and ethics. Methods Inf Med. 1994;33(04):336–45.


  14. Pimple KD. Emerging pervasive information and communication technologies (PICT): ethical challenges, opportunities and safeguards. Dordrecht: Springer Netherlands; 2013.

  15. Vitak J, Proferes N, Shilton K, Ashktorab Z. Ethics regulation in social computing research: examining the role of institutional review boards. J Empir Res Hum Res Ethics. 2017;12(5):372–82.


  16. Van Velthoven MH, Smith J, Wells G, Brindley D. Digital health app development standards: a systematic review protocol. BMJ Open. 2018;8(8):e022969.


  17. Dittrich D, Kenneally E. The Menlo report: ethical principles guiding information and communication technology research. Washington, DC: US Department of Homeland Security; 2012. https://www.caida.org/publications/papers/2012/menlo_report_actual_formatted/menlo_report_actual_formatted.pdf. Accessed 14 May 2019


  18. Jackman M, Kanerva L. Evolving the IRB: building robust review for industry research. Washington Lee Law Rev Online. 2016;72(3) https://scholarlycommons.law.wlu.edu/wlulr-online/vol72/iss3/8. Accessed 14 May 2019.

  19. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont report: ethical principles and guidelines for the protection of human subjects of research, vol. 44. Washington, DC: Department of Health, Education, and Welfare, US Department of Health and Human Services; 1979. p. 23192–7. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html. Accessed 14 May 2019


  20. Barnett I, Torous J. Ethics, transparency, and public health at the intersection of innovation and Facebook’s suicide prevention efforts. Ann Intern Med. 2019;170(8):565.


  21. Bloss C, Nebeker C, Bietz M, Bae D, Bigby B, Devereaux M, et al. Reimagining human research protections for 21st century science. J Med Internet Res. 2016;18(12):e329.


  22. Belsher BE, Smolenski DJ, Pruitt LD, Bush NE, Beech EH, Workman DE, et al. Prediction models for suicide attempts and deaths: a systematic review and simulation. JAMA Psychiatry. 2019. https://doi.org/10.1001/jamapsychiatry.2019.0174.

  23. Pagoto S, Nebeker C. How scientists can take the lead in establishing ethical practices for social media research. J Am Med Informatics Assoc. 2019;26(4):311–3.


  24. Nebeker C, Bartlett Ellis R, Torous J. CORE tools. Digital health decision-making framework and checklist designed for researchers. 2018. https://recode.health/dmchecklist/. Accessed 26 Feb 2019.

  25. Nebeker C, Bartlett Ellis RJ, Torous J. Development of a decision-making checklist tool to support technology selection in digital health research. Transl Behav Med. 2019. https://doi.org/10.1093/tbm/ibz074.

  26. Wang S, Bolling K, Mao W, Reichstadt J, Jeste D, Kim H-C, et al. Technology to support aging in place: the older adult perspective. Healthcare. 2019. https://doi.org/10.3390/healthcare7020060.

  27. Sears M. AI bias and the 'people factor' in AI development. Jersey City: Forbes Media; 2018. https://www.forbes.com/sites/marksears1/2018/11/13/ai-bias-and-the-people-factor-in-ai-development/#24ddaaf79134. Accessed 26 Feb 2019.

  28. Adibuzzaman M, DeLaurentis P, Hill J, Benneyworth BD. Big data in healthcare – the promises, challenges and opportunities from a research perspective: a case study with a model database. AMIA Annu Symp Proc. 2018;2017:384–92.


  29. Rabinovitz J. Your computer may know you have Parkinson’s. Shall it tell you? Stanford: Stanford Magazine; 2018. https://medium.com/stanford-magazine/your-computer-may-know-you-have-parkinsons-shall-it-tell-you-e8f8907f4595. Accessed 19 Feb 2019


  30. The Partnership on AI. https://www.partnershiponai.org/. Accessed 07 June 2019.

  31. AI-100. https://ai100.stanford.edu/. Accessed 07 June 2019.

  32. Ethics and Governance of AI Fund. https://cyber.harvard.edu/topics/ethics-and-governance-ai. Accessed 07 June 2019.

  33. AI Now Institute. https://ainowinstitute.org/. Accessed 07 June 2019.

  34. Initiative on Ethics of Autonomous and Intelligent Systems. https://standards.ieee.org/industry-connections/ec/autonomous-systems.html. Accessed 07 June 2019.

  35. Human Rights, Big Data and Technology Project. https://hrbdt.ac.uk/. Accessed 07 June 2019.

  36. The Institute for Ethics in Artificial Intelligence. 2019. https://www.tum.de/en/about-tum/news/press-releases/details/article/35190. Accessed 07 June 2019.

  37. High-Level Expert Group on Artificial Intelligence. https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence. Accessed 07 June 2019.

  38. Chinese Association for Artificial Intelligence. https://www.iotone.com/organization/chinese-association-for-artificial-intelligence-caai/o209. Accessed 07 June 2019.

  39. AI for Humanity. https://www.aiforhumanity.fr/en/. Accessed 07 June 2019.


Acknowledgements

Not applicable.

Funding

This work was supported, in part, by the Robert Wood Johnson Foundation (grant number 72876, to CN, Principal Investigator [PI] 2015–2019), and by the National Institute of Mental Health (grant number 1K23MH116130–01, to JT, PI 2018–2022).

Author information


Contributions

CN conceptualized and prepared the original manuscript; JT, RJBE and CN wrote, reviewed and edited the manuscript. All authors read and approved the final version of the manuscript.

Authors’ information

CN directs the Research Center for Optimal Digital Ethics (ReCODE.health) at the University of California at San Diego. Her appointment is in the Division of Behavioral Medicine, in the Department of Family Medicine and Public Health. JT directs the Digital Psychiatry Division at Beth Israel Deaconess Medical Center, and is a licensed psychiatrist and clinical informaticist. RJBE is a behavioral scientist at Indiana University who uses sensors and mobile technologies to study biobehavioral mechanisms related to medication adherence and treatment efficacy.

Corresponding author

Correspondence to Camille Nebeker.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no conflicts of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Nebeker, C., Torous, J. & Bartlett Ellis, R.J. Building the case for actionable ethics in digital health research supported by artificial intelligence. BMC Med 17, 137 (2019). https://doi.org/10.1186/s12916-019-1377-7
