This Facial Recognition Technology Could Help ICU Patients
Japanese scientists have used facial recognition technology to help predict when intensive care unit (ICU) patients may need more assistance due to unsafe behavior, such as removing their own breathing tubes.
The system's accuracy is relatively high, at 75 percent.
This new research was presented at the Euroanaesthesia congress, held from 1 to 3 June in Vienna, Austria. The congress is the annual meeting of the European Society of Anaesthesiology.
Hospital staff shortages.
With limited staff availability comes higher risk for patients. The researchers suggest that this risk may be reduced by the automated risk detection tool they have created. It continuously monitors patients' safety and assists staff in observing critically ill patients at the bedside.
The head of the research team, Dr Akane Sato of Yokohama City University Hospital, Japan, said: "Using images we had taken of a patient's face and eyes, we were able to train computer systems to recognize high-risk arm movement."
She continued, “We were surprised about the high degree of accuracy that we achieved, which shows that this new technology has the potential to be a useful tool for improving patient safety, and is the first step for a smart ICU which is planned in our hospital.”
ICU patients and how they are monitored.
Currently, most critically ill patients in the ICU are sedated to reduce pain and discomfort and to keep them safe. Sedation brings its own issues: if a patient is not adequately sedated, they may accidentally remove invasive devices attached to their bodies.
The study involved 24 post-operative patients, with an average age of 67, admitted to the ICU at Yokohama City University Hospital between June and October 2018.
A camera mounted on the ceiling above each patient's bed took the images used to create the proof-of-concept model. The patients' faces, eyes, and body positions all had to be clearly visible. Over 300 hours of footage were recorded.
An algorithm loosely modeled on how the human brain functions and learns was trained using 99 of these images. The end result? The technology was able to recognize, through facial recognition, when a patient was engaging in high-risk behavior.
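To give a sense of the learning principle involved (the team's actual model and features have not been published, so everything below is an illustrative assumption), here is a minimal sketch of training a single artificial "neuron" to separate labeled examples, with synthetic numeric features standing in for patient images:

```python
import math
import random

# Illustrative only: a tiny logistic-regression "neuron" trained on
# synthetic feature vectors standing in for labeled patient images
# (1 = high-risk movement, 0 = safe). Feature names are hypothetical.

random.seed(0)

def make_sample(label):
    # Hypothetical features, e.g. [head turn, arm elevation, eye openness]
    base = [0.8, 0.9, 0.7] if label == 1 else [0.1, 0.2, 0.3]
    return [v + random.uniform(-0.05, 0.05) for v in base], label

data = [make_sample(i % 2) for i in range(99)]  # 99 images, as in the study

weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.5  # learning rate

def predict(x):
    z = sum(w * v for w, v in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Simple gradient-descent training loop
for _ in range(200):
    for x, y in data:
        err = predict(x) - y
        weights = [w - lr * err * v for w, v in zip(weights, x)]
        bias -= lr * err

# Fraction of training examples the trained neuron classifies correctly
accuracy = sum((predict(x) > 0.5) == bool(y) for x, y in data) / len(data)
```

A real system of this kind would use a deep convolutional network over raw camera frames rather than hand-picked features, but the training loop — compare a prediction to a label, nudge the weights, repeat — is the same idea in miniature.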
“Various situations can put patients at risk, so our next step is to include additional high-risk situations in our analysis, and to develop an alert function to warn healthcare professionals of risky behaviour. Our end goal is to combine various sensing data such as vital signs with our images to develop a fully automated risk prediction system”, says Dr Sato.
The study's limitations at this stage.
The authors of the study noted a number of limitations, including the need for more images of patients in different positions to improve the technology's generalizability to real-life settings. Further monitoring of patients' consciousness could also improve accuracy in distinguishing high-risk behavior from voluntary movement.