Facial nerve palsy is a common condition, affecting about one in every two thousand people worldwide. Patients with facial movement dysfunction can suffer from major functional, aesthetic and psychological disabilities. Grading the severity of facial palsy has prognostic and follow-up significance. However, traditional grading systems (e.g. the House-Brackmann scale) are based on clinical observation by the physician and are thus prone to inter- and intra-observer variability.
Several objective grading systems have been proposed in the past, none of which have gained wide clinical usage, partly due to issues of practicality and availability to the physician. Our goal is to create a mobile application which will serve as an objective and easily available diagnostic tool.
As the basis for the development of the diagnostic algorithm and mobile application we are undertaking a large scale clinical study of normal and abnormal facial function. The objective of the study is to amass a large database of subjects (patients and healthy controls) whose facial function has been video-recorded and assessed.
Each subject participating in the study is video-recorded while performing a predetermined set of facial movements under the guidance of a physician. During the recording, a set of thirteen facial landmarks is marked on the subject's face using ordinary stickers to facilitate precise tracking of the points of interest. Both the movements and the facial landmarks were chosen after evaluation of the existing literature. After video-recording, each patient is separately evaluated by three otolaryngologists, and the severity of the palsy is graded using three existing, well-accepted, subjective grading scales (House-Brackmann, Yanagihara, Sunnybrook).
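Because the stickers are high-contrast markers, locating them in a frame reduces to finding bright blobs and taking their centroids. The sketch below illustrates this idea in pure NumPy on a synthetic grayscale frame; it is a simplified stand-in for the actual detection used in the project (the real pipeline works on color video and must be robust to lighting and occlusion), and all names here are illustrative:

```python
import numpy as np

def detect_sticker_centroids(frame, threshold=200, min_pixels=4):
    """Find bright sticker-like blobs in a grayscale frame.

    Thresholds the frame, labels 4-connected components with a simple
    flood fill, and returns the (row, col) centroid of each blob that
    has at least `min_pixels` pixels.
    """
    mask = frame >= threshold
    labels = np.zeros(frame.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue  # pixel already belongs to a labelled blob
        current += 1
        stack = [start]
        labels[start] = current
        while stack:
            y, x = stack.pop()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < frame.shape[0] and 0 <= nx < frame.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    stack.append((ny, nx))
    centroids = []
    for lbl in range(1, current + 1):
        ys, xs = np.nonzero(labels == lbl)
        if len(ys) >= min_pixels:
            centroids.append((ys.mean(), xs.mean()))
    return centroids

# Synthetic 100x100 frame with two bright 5x5 "stickers"
frame = np.zeros((100, 100), dtype=np.uint8)
frame[20:25, 30:35] = 255
frame[60:65, 70:75] = 255
print(detect_sticker_centroids(frame))  # two centroids, one per sticker
```

Repeating this per frame and matching centroids across frames yields the landmark trajectories that the later feature-extraction stage consumes.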
As of October 2014, we have finished the clinical evaluation stage for more than 40 healthy controls and 30 patients, and the study is continually expanding.
Using the database obtained via the clinical study, we are concurrently developing a machine-learning algorithm for the classification of facial palsy based on video recordings. The first stage of the algorithm entails extraction of the facial landmarks of interest from the video, followed by extraction of features relevant to the classification. This stage is performed using various image processing techniques. We are currently considering and testing a number of machine learning models, with a focus on the SVM classifier.
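To make the feature-plus-SVM idea concrete, here is a minimal sketch under stated assumptions: the feature shown (difference in maximal displacement between mirrored left/right landmark pairs) is one plausible asymmetry measure, not necessarily the one the project uses, and the training data below is synthetic:

```python
import numpy as np
from sklearn.svm import SVC

def asymmetry_features(left_tracks, right_tracks):
    """Illustrative asymmetry feature between mirrored landmark pairs.

    left_tracks, right_tracks: arrays of shape (n_pairs, n_frames, 2)
    holding landmark trajectories. For each pair, compute the maximal
    displacement from the resting (first-frame) position on each side,
    and return the absolute left/right difference.
    """
    left_disp = np.linalg.norm(left_tracks - left_tracks[:, :1], axis=2).max(axis=1)
    right_disp = np.linalg.norm(right_tracks - right_tracks[:, :1], axis=2).max(axis=1)
    return np.abs(left_disp - right_disp)

# Synthetic feature vectors for illustration: healthy subjects have small
# asymmetry values, palsy patients large ones.
rng = np.random.default_rng(0)
X = np.vstack([rng.uniform(0.0, 1.0, (20, 6)),    # label 0: healthy
               rng.uniform(5.0, 10.0, (20, 6))])  # label 1: palsy
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict([[0.2] * 6, [8.0] * 6])  # one healthy-like, one palsy-like vector
```

A kernel SVM is a natural first choice here because the dataset is small (tens of subjects) and the feature vectors are low-dimensional, a regime where SVMs tend to generalize well.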
The mobile application is the final realization of the project’s objectives, combining both the recording of the required videos and the classification algorithm. The application is being developed for the Android platform.
Performing diagnosis using the mobile application includes three stages:
First, 13 stickers are placed on the face of the patient at predefined landmarks.
Next, the patient is guided by a physician to perform 9 facial movements. These movements are recorded using the mobile application, which provides a friendly UI to guide the physician through the recording.
Lastly, the application provides a diagnosis by applying the proposed algorithm to the recorded videos.
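The three stages above can be sketched as a single processing pipeline. All function names below are illustrative placeholders for the project's real components, and the stand-ins at the bottom exist only to show the data flow:

```python
def diagnose(movement_videos, track_landmarks, extract_features, classify):
    """Wire the diagnostic stages together: track the stickers in each
    recorded movement, derive classification features from the resulting
    trajectories, and apply the trained classifier to produce a grade.
    """
    tracks = [track_landmarks(video) for video in movement_videos]
    features = extract_features(tracks)
    return classify(features)

# Toy stand-ins, purely to demonstrate the flow of data between stages:
grade = diagnose(
    movement_videos=[1, 2, 3],                  # pretend recordings
    track_landmarks=lambda v: v * 10,           # pretend trajectories
    extract_features=sum,                       # pretend feature vector
    classify=lambda f: "palsy" if f > 25 else "healthy",
)
```

Separating the stages behind this kind of interface lets the recording UI, the landmark tracker, and the classifier evolve independently on the device.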
References:
 – Linstrom, C. J. (2002). Objective Facial Motion Analysis in Patients With Facial Nerve Dysfunction. The Laryngoscope, 112: 1129–1147. doi: 10.1097/00005537-200207000-00001
 – Hadlock, T. A., Urban, L. S. (2012). Toward a Universal, Automated Facial Measurement Tool in Facial Reanimation. Arch Facial Plast Surg, 14(4): 277–282. doi: 10.1001/archfacial.2012.111
This project was initiated by Dr. Ofer Azoulay of the Kaplan Medical Center (KMC) and is currently an ongoing research effort in collaboration with the Technion’s Vision and Image Sciences Laboratory (VISL). The clinical and medical aspects of the research are carried out at the KMC under the supervision of Dr. Azoulay in the Otolaryngology Department. Development of the classification algorithm and Android mobile application is carried out at VISL by Lior Gersi and Yotam Ater under the supervision of Yonatan Glassner and Ori Bryt.
Initiator & Medical Staff:
Dr. Ofer Azoulay, azoulo [at] gmail [dot] com
Students:
Yotam Ater, yater [at] t2 [dot] technion [dot] ac [dot] il
Lior Gersi, liorgersi [at] gmail [dot] com
Supervisors:
Ori Bryt, stinger [at] tx [dot] technion [dot] ac [dot] il
Yonatan Glassner, yonatangl [at] gmail [dot] com