Samuel Bateman is a senior associate and patent attorney, and Chris Froud is a partner and patent attorney, at European IP firm, Withers & Rogers. Both specialise in advising innovative companies and developers in electronics and computing.
The combination of enhanced imaging technologies and AI-powered robotic tools is improving diagnostics and leading to better patient outcomes. But are patients ready to accept the benefits they could bring?
The core technologies used for medical imaging, such as MRI, X-ray, CT, and ultrasound, have remained largely unchanged for decades. However, advances in AI modelling and the development of sophisticated robotic systems are providing clinicians with more accurate and reliable image data than ever before and giving them access to otherwise hard-to-reach areas of the human body.
A novel robotic bronchoscopy system that uses advanced imaging technology has been heralded as a breakthrough in the safe, timely, and accurate diagnosis of lung cancer. Developed by US robotics and biotechnology company Intuitive Surgical, the patented Ion Endoluminal System is currently being used by doctors at Wythenshawe Hospital in south Manchester in the UK. Designed for use by a human operative, it is essentially a mechanically controlled robotic tool, but its ultrathin design and advanced manoeuvrability mean it can identify very small spots or lesions within hard-to-reach areas of the lung. A key benefit of the system is that it can facilitate the early detection of cancer, leading to better patient outcomes.
The miniaturisation of advanced robotic technologies makes them ideally suited for novel invasive diagnostic tools for use before or during surgery. For example, the idea of swallowing a camera in pill form so doctors can get a close-up view of what is happening inside a patient’s body is nothing new. However, the inability to steer the camera meant that successfully imaging difficult-to-reach parts of the body, such as the point where the small intestine connects directly to the pylorus of the stomach, depended on the chance that the camera happened to be facing the right direction as it passed through the relevant part of the body. To tackle this problem, wirelessly operated robotic systems, taken in the form of a pill, can now perform a precise remote-controlled “capsule endoscopy”, making it much easier to record the data required by the clinician.
For example, US company Endiatx has developed and filed a series of patent applications, including WO2023225228, directed to a pill-sized robot that incorporates a series of motors, allowing the orientation of the robot, and by extension its onboard camera, to be controlled remotely as it passes through a patient’s body. This means the robot is better able to capture the images and data needed by the clinician as part of the diagnostic process.
Other advancements in robotic technologies are improving the efficacy of core imaging technologies such as CT scans and X-rays, whilst protecting patients from excessive exposure to harmful electromagnetic (EM) radiation. In the case of a whole-body CT scan, the patient is typically required to lie on a table, which enters a large ring-shaped scanner. They are then surrounded by a rotating X-ray source, which takes cross-sectional or ‘sliced’ images of the body. Modern robotic tools, which are capable of fine control, can scan the patient and generate images from various angles. This provides the clinician with a better-quality, more accurate 3D image of the patient’s body without excessive exposure to EM radiation.
Among the software advances coming through are various AI-powered platforms designed to accelerate device development. For example, Nvidia has recently launched its Isaac for Healthcare Medical Device Simulation Platform to support the development of the technologies involved in robotic surgery and digital imaging. Utilising pre-trained AI models for sensors and anatomy, the platform allows device manufacturers to test their systems in a virtual environment. As a result of an early-access collaboration, GE HealthCare has confirmed that it intends to use the platform to build autonomous imaging systems comprising both X-ray and ultrasound hardware, controlled by robotic arms that respond to a patient’s position using machine vision technologies. For example, this AI-driven autonomous approach might be applied to GE HealthCare’s OEC 3D imaging C-arm, for which the company has a number of recently filed and granted patents, such as US2024341706A1 and US11266360B2.
A key problem that has slowed the development of useful AI-based platforms for device developers is a lack of high-quality training data. Collecting sufficient high-quality imagery of surgical or diagnostic procedures has proved challenging, and access to new banks of simulated or synthetic training data has provided a breakthrough. However, the use of such training data in the development of surgical robotic tools, and other devices used for invasive clinical applications, is a source of controversy.
Developers of AI-powered and robotic tools for applications in this area are aware that patient acceptance is critical to uptake; however, there is currently a great deal of scepticism. Even though the robotic tools developed for such purposes are at best semi-autonomous, requiring a human operative, patients are naturally concerned about the risks they might pose. A recent study published in Nature, examining the performance of AI-powered analytical models used in clinical diagnosis, has shown that whilst most can outperform a general physician, they cannot match the more nuanced capability of a medical expert with specialist knowledge. In fact, it was found that AI-powered models produce more accurate diagnoses in some areas than others – for example, they are particularly accurate when diagnosing dermatological conditions, but far less so when identifying gastroenterological issues. As more is learnt about the trained capabilities of these fast-evolving technologies, it is likely that patients will become more comfortable with their use, but completely autonomous diagnostic tools are unlikely to be accepted.
Familiarity should encourage public acceptance in time, and as the costs associated with developing advanced robotic systems fall, they will inevitably become more prevalent. Research by ARK Invest suggests that the average price of an industrial robot halved in the decade to 2022, and further significant price reductions have been forecast. For developers, this potential for growth means there is a strong commercial motivation to patent their innovations and, in doing so, secure a 20-year period of exclusivity to profit from their commercialisation in key global markets.
When preparing patent applications for AI-powered and robotic innovations, particularly those that bring together both technologies, it is important to extract all the valuable intellectual property (IP). This can be achieved by seeking patent protection for the constituent technologies, whilst also flagging that they can be used together. For developers seeking patent protection for AI models, it is sometimes wrongly assumed that software isn’t patentable, despite the UK Intellectual Property Office (UKIPO) and the European Patent Office (EPO) making it clear that it can meet the eligibility criteria.
At a time when advanced technologies such as AI, data analytics and robotics are converging, medtech innovators must ensure they know where opportunities exist and what the market is ready to accept. When it comes to AI-powered robotic diagnostic and surgical solutions, early-stage projects have shown them to be accurate and reliable in some areas, but their suitability for widespread clinical use is still being evaluated. From an IP perspective, investing to build a robust patent portfolio in this area now could generate significant value in the future.