A techno-optimistic attitude tells us we’re living at an inflection point where care practices are being transformed by technology. Monitoring and attending to health and well-being are no longer activities bound to physical spaces like hospitals and clinics; they have extended to the basic functions of smartphones. A new labor force has emerged for this digitized health transformation, utilizing open-source engineering platforms, structuring work into two-week Agile design sprints, and recruiting professionals from traditional healthcare settings. In many ways, the practices of these workers appear synonymous with those of other start-up companies across industry spaces. Through ethnographic fieldwork over the last year, I have explored the evolution of this phenomenon within an emergent area of the digital health sphere: digital therapeutics.
What is a Digital Therapeutic?
Digital therapeutics constitute a subset of digital health technologies intended to treat various physical and mental health conditions. Individuals working in connection with this area assign a constellation of terms to a rolling definition of the space, including “clinically validated,” “rigorous,” “focused,” and “FDA-approved.” They operationalize the term in varying ways: diagnostic and therapeutic use-cases are often blurred into one product offering, while many start-ups aspire to translate some type of behavioral intervention into a digital format. With few exceptions, individuals across professional areas working in this labor arena have enthusiastically described the potential of these products to extend therapeutic access to individuals without health care. Digital therapeutics aiming to mimic the product development path of pharmaceutical drugs are often called “software as medicine” or “behavioral medicine in a digital format,” implying that skills-based psychotherapy techniques are delivered online or by phone. Recently approved examples of digital therapeutic products include reSET®, a mobile cognitive behavioral therapy intervention addressing substance use disorder, and Abilify MyCite®, an ingestible sensor to track medication adherence in individuals living with bipolar I disorder and schizophrenia. These products rely on proprietary algorithms for their mechanisms of action, in addition to the algorithmic management of information; the combined functions make possible the accumulation, generation, and output of health data.
Why Now? Policy Context
Methods for categorizing “low-risk medical software,” the U.S. Food and Drug Administration’s (FDA’s) product category for many digital therapeutics, leave significant grey areas for products that do not fit neatly within a specific risk category. These loopholes create considerable space for companies developing digital therapeutics to shape the regulatory environment overseeing them, and the time is quite ripe for regulatory reshaping. Due to recent changes in legislation, the FDA has started to revamp its evaluation pathway for digital health products. In 2016, the United States Congress passed the 21st Century Cures Act to expedite novel drugs and devices to patients and consumers. Extending from this legislation, the FDA’s Center for Devices and Radiological Health began working with industry to create a new framework for evaluating digital health products based on company characteristics rather than individual product efficacy. This type of assessment is new and intended to better accommodate the iterative nature of software development, which, unlike hardware development, may proceed faster and less expensively than carrying a product through linear prototyping procedures. While the FDA is very early in determining the specific markers that define an “excellent” company, the intention is for “excellent” companies to receive precertification to market their products without review, or in a more streamlined fashion.
While the newly envisioned regulatory pathway lifts barriers to market for these types of products, serving as an incentive for technological innovation, regulators are still early in developing the specific ways that companies will be formally evaluated. Thus, for many digital therapeutics, the jury is still out on how their proprietary algorithmic mechanisms will be considered, if at all. Will companies be required to send source code to the FDA prior to selling their intervention? Or, perhaps more aptly, should they be required to do so? Imagining appropriate design ethics for these technology products is challenging because the very thing that distinguishes one product from another, the algorithmic magic behind its program, remains concealed. This concealment also keeps hidden the social processes and belief systems informing that logic; the stakes are amplified when the technology under scrutiny is one aiming to impact health and human bodies.
What do we actually know?
Scholarship demonstrating the value of artificial intelligence (AI) within clinical contexts often stresses the advances that machine learning can make in efficiency, quality, safety, and error prevention (Faggella, 2018; Jiang et al., 2017). However, real-world accounts of the existing impacts of electronic health records reveal more about the clumsy task of human maneuvering between software systems and care practices than about any real ways health data is leveraged to improve cures (Ofri, 2019; Downing et al., 2018). Something similar can be said of digital therapeutics, whose most visible impacts, at least currently, reside separately from the technology’s intended benefit. While they have not yet become institutionalized within health/care settings, digital therapeutic creation points to a world where human suffering can be broken down into constituent parts and corrected with mechanistic tweaks.
Continuing to move ‘up and to the right’
During recent fieldwork, an entrepreneur eagerly explained to me, “We have not talked much about the harm that’s being done by not having [digital therapeutics] already available to patients. I get that we need to prove safety and efficacy, but there is a real urgency here, a lack of an available alternative for people.” There is a widely held implicit assumption within the evaluation literature on digital health that the positive impact of technology on patients, providers, and health systems is absolutely forthcoming. Robert Wachter (2015) has referred to care problems as “adaptability and interoperability problems,” reflecting a position that the central issues facing health systems are the ways people engage with technology and the lack of communication between technical systems. The explanation is a convenient one, since it defines the issues in a way that continues to position technology as a future unlocking agent.
This attitude keeps with the generally promissory aspect of technological enterprise: the future is always bright, already ahead, a nut just waiting to be cracked, an opportunity to be seized (Alam, 2016). Shepherding in new ways to measure and monitor this future-in-the-making seems not only fair but just. Investigating small data, and the varying social dimensions to which they roughly map, is one way to bring greater transparency to the ways humans benefit heterogeneously from these labor shifts, and to more equitably weigh the absence of product results against the varying forms of capital (social, cultural, and financial) that go into their formation. To this end, the question of whether digital therapeutics will succeed as an industry may be less important for public health than how and in what ways these products are being made and marketed.
At a recent digital therapeutics conference, I found myself sitting in the audience between a psychologist and a business consultant listening to an entrepreneur on stage profess, “In order to get income, you’ve got to show outcomes!” Perhaps this drive to demonstrate positive outcomes will, in fact, usher in a new era of digitized cures. Before I take the pill, though, I’d like to know what’s inside.
Downing, N. L., Bates, D. W., & Longhurst, C. A. (2018). Physician burnout in the electronic health record era: Are we ignoring the real cause? Annals of Internal Medicine, 169(1), 50-51.
Alam, S. Y. R. (2016). Promissory Failures: How Consumer Health Technologies Build Value, Infrastructures, and the Future in the Present (Doctoral dissertation, University of California, San Francisco).
Faggella, D. (2018). “Machine learning healthcare applications – 2018 and beyond.” TechEmergence. Accessed March 23, 2018. Available: https://www.techemergence.com/machine-learning-healthcare-applications/
Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., Wang, Y., Dong, Q., Shen, H., & Wang, Y. (2017). “Artificial intelligence in healthcare: Past, present and future.” Stroke and Vascular Neurology, 2(4), 230-243. doi: 10.1136/svn-2017-000101
Ofri, D. (2019). Perchance to think. The New England Journal of Medicine, 380(13), 1197-1199.
Wachter, R. (2015). The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age. New York: McGraw-Hill Education.