Ask most AI researchers, and they'll tell you that one of the biggest advances in artificial intelligence over the past five years has been in the sub-field of computer vision - in particular, image recognition, which now performs as well as humans in many domains. The famous ImageNet repository, which houses hundreds of object categories and millions of example images, has hosted a global competition since 2010. The Large Scale Visual Recognition Challenge (ILSVRC) has attracted roughly 50 competing institutions, culminating in a deep net outscoring a top human performer on a narrow classification task in 2015.
If you feed enough labelled, well-formatted image data through appropriately designed deep net systems, they begin to exhibit impressive image recognition capabilities. Perception, however, is far from solved. Open issues remain (and I would have a healthy debate with anyone who says otherwise), including the binding problem, understanding contextual information, the applicability of transfer learning to new datasets, and iteratively reducing false positives, to name a few. It is important to note, however, that these technological advances are already delivering applications and creating value today in everything from inspection tasks (PreNav, Tractable), to continuous automotive monitoring (Nexar*), to healthcare.
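To make the "feed labelled image data through a deep net" idea concrete, here is a minimal transfer-learning sketch using PyTorch and torchvision: a network pretrained on ImageNet is re-purposed for a new, smaller labelled dataset. The data directory, model choice, and hyperparameters are illustrative assumptions, not details from this post.

```python
# Minimal transfer-learning sketch: fine-tune a pretrained ImageNet model
# on a new, smaller labelled image dataset. Paths and hyperparameters are
# illustrative placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing so the new images match what the
# pretrained network expects.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder of labelled images, one sub-directory per class.
train_data = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from ImageNet weights, freeze the feature extractor, and replace
# the final layer with one sized for the new task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point of the sketch is only to show why labelled data and a pretrained backbone do so much of the work; whether such a model transfers well to a genuinely new domain (say, medical scans) is exactly one of the open issues mentioned above.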
Healthcare needs such technology for numerous reasons. Machine learning can find features within images that are simply not apparent to humans; in fact, much of the recent research literature describes using images and other data to predict genetic and molecular properties that were once thought identifiable only through tissue sampling. Additionally, the wealth of patient information in electronic health records opens up an enormous set of possibilities for applying machine learning to deliver services. Useful data is generated at each step of the care workflow, but today it exists largely as unstructured, clinician-dictated text, so extracting insight from it is difficult. Attempting to do so is a worthwhile endeavour for the healthcare technology and bioinformatics communities, but there is still a long road to the accuracy required for clinical settings -- it's a work-in-progress. What does exist in a standardised format (DICOM) are the image scans (e.g. MRIs, CT scans, ultrasound, etc.) attached to these records, which comprise around 90% of all medical data. The first annual Conference on Machine Intelligence in Medical Imaging (C-MIMI) was held in September 2016, suggesting the time may finally have come for this set of applications.
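Since DICOM is the one standardised piece of the record called out above, a short sketch of what it looks like programmatically may help. This uses the open-source pydicom library; the file name and the specific tags printed are illustrative assumptions, and real scans vary by modality and vendor.

```python
# Minimal sketch of reading a DICOM image scan with pydicom.
import pydicom

ds = pydicom.dcmread("scan.dcm")  # hypothetical file path

# Structured metadata travels with the image in standard DICOM tags.
print(ds.Modality)                           # e.g. 'MR', 'CT', 'US'
print(ds.StudyDate)                          # acquisition date
print(ds.get("BodyPartExamined", "unknown"))

# The pixel data itself, ready to feed into an image-analysis pipeline.
pixels = ds.pixel_array  # numpy array; shape depends on the scan
print(pixels.shape, pixels.dtype)
```

The same standardisation that makes scans easy to read in a few lines is what makes imaging the most immediately tractable slice of the medical record for machine learning.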
Application
Imagine a (not-so-pleasant) world in which you are rushed to the A&E centre in the middle of the night. Post-triage, every extra moment could mean more pain, irredeemable losses to your health, and perhaps even death. Rather than having to wait to see an appropriate specialist, however, a physician assistant is able to administer a relevant image scan immediately, without consulting senior staff. He or she can then act on rich, contextual conclusions that would previously have taken hours, days, or even weeks to reach, and coordinate your care accordingly.
Alternatively, on the way to the hospital, the EMT is able to administer a remote scan in the ambulance. The scan data reaches hospital staff immediately, giving them additional time to prepare and coordinate your care so you can get healthy, fast. Earlier and better diagnosis facilitates easier discharge and reduces the chance of readmission. Lower-cost, quicker answers like these can deliver rippling, sizable benefits to every stakeholder in the healthcare system, in acute as well as non-acute environments.
Core Challenges
All these characteristics make a strong case for deploying modern machine vision technology in medical imaging analysis. Doing so would make the healthcare system more cost-effective while improving outcomes. In the not-so-distant future, when outcomes are matched to patient input data, this will be especially powerful - a topic to be discussed another time. However, unlike other markets that are ripe for technological disruption, the healthcare vertical has a robust and often highly misaligned value chain. Payors, regulators, providers, hardware manufacturers, patients, and others have different incentives, and therefore require different messaging and product positioning for opt-in and usage. A key universal challenge amongst these players, however, lies on the regulatory front (at a later date, I will also go into detail on how to effectively sell technology into different healthcare systems, geographies, stakeholders, etc.).
These constraints explain why startups often position themselves as clinical decision-support and efficiency tools to regulators, customers, and investors alike. Whether it is an early cancer-screening device or an image analysis tool in acute care, technology in this market should not (at least initially) be described as a 'diagnosis' or even a 'triage' tool. Both terms imply removing human decision-making from the clinical process (The New Yorker wrote a compelling piece about this which is worth a look). A 'productivity' tool is the right framing, and it should also help with clinician uptake, which is a bottleneck in its own right. The point is that regulatory constraints play a role in every step from product conception to marketing. Mishandling this process can set the business back years, if not indefinitely. I would encourage every entrepreneur thinking about entering this market to start the conversation with a regulatory consultant on day one, in order to move as quickly as possible. Arterys, backed by our friends at AME and Morado Ventures amongst others, is a shining light in the industry: the first venture-funded startup to receive FDA approval for a deep learning application in a clinical setting.
Market Segmentation Framework
For every market we participate in, Mosaic seeks to back gritty, world-class technology teams. We have strong conviction about the technical capabilities already available for structured image analysis, and we wanted to share how we think about market entry in imaging as thematic investors. We have tried to summarise it in three core components, all of which need to be balanced and carefully considered when participating in the imaging diagnostics market:
Given the resource constraints of a young startup, we believe prioritising market segments with lower regulatory hurdles and quicker sales cycles will be imperative - especially given the cash-consumptive nature of a startup. If you have chosen a route requiring Class III regulatory approval, for example, we would encourage partnering with a larger fund even at seed stage, to ensure you have the capital to be successful and are not slowed down. These aren't entirely separate paths, and there is a way to orchestrate moving closer to a more diagnostic approach over the long term.
We continue to be excited about this market and believe several large and significant companies will be built over the next decade that will save lives and make the world a healthier place. Some argue that, within the next decade, no medical imaging exam will be reviewed by a radiologist until it has been pre-analysed by a machine. There is a lot to do here.
Please get in touch if you are building a company in this market with such ambitions. We'd be happy to try and help.