A Mobile Application for the Diagnosis of Facial Nerve Palsy

Facial nerve palsy is a condition that involves paralysis of the structures innervated by the facial nerve; current diagnostics are largely subjective.

Introduction – Facial Nerve Palsy

  • Facial nerve palsy is a condition involving paralysis of the structures innervated by the facial nerve
  • Current diagnostics are largely subjective
  • There is a need for an objective method for evaluating patients (sick/healthy, severity)

Review – Previous Project

  • Finding facial keypoints using stickers
  • Extracting facial features from the right and left sides of the face and comparing them (see the sketch after this list)
  • Using machine learning algorithms to diagnose, based on these features
  • Establishing that facial keypoints are useful for stable diagnostics
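As an illustration of the previous project's approach (not its original code), the sketch below mirrors the left-side keypoints across an assumed facial midline, measures their distance to the matching right-side keypoints, and feeds the resulting asymmetry vector to an off-the-shelf classifier; every name, threshold, and the choice of classifier here is hypothetical.

    import numpy as np
    from sklearn.svm import SVC

    def asymmetry_features(left_pts, right_pts, midline_x):
        # left_pts / right_pts: (N, 2) arrays of matching keypoints,
        # ordered so that left_pts[i] corresponds to right_pts[i].
        mirrored = left_pts.copy()
        mirrored[:, 0] = 2.0 * midline_x - mirrored[:, 0]    # reflect across the midline
        return np.linalg.norm(mirrored - right_pts, axis=1)  # per-point asymmetry distances

    # X: one asymmetry-feature vector per subject; y: 1 = palsy, 0 = healthy.
    # Training data is a placeholder here; any standard classifier could be used.
    clf = SVC(kernel="rbf")
    # clf.fit(X_train, y_train)
    # prediction = clf.predict([asymmetry_features(left, right, midline_x)])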

Project Goals

  • Finding facial keypoints without stickers
  • Adding new features
  • Performing diagnostics (sick/healthy) using the features found
  • Improving diagnostic accuracy
  • Implementing the new algorithm as a mobile application

Algorithm Requirements

  • Fast computation time – for real-time applications
  • Robust to:
    • Face size
    • Slight rotation of face
    • Different facial expressions
  • Low error between detected keypoints and actual facial features

Possible Solutions

  • Active Appearance Model (AAM)
  • Constrained Local Models (CLM)
  • Extracting features using basic image processing tools

AAM – Active Appearance Model

  • Creating a statistical model of face appearance: shape and texture (gray level)
  • Matching the appearance model to new, unseen images (sketched below)
    • The appearance model consists of shape and texture parameters
    • Parameters are updated iteratively
    • Goal: minimize the difference between the real image and the one synthesized by the model
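A minimal sketch of that fitting loop is shown below, assuming a trained model is already available (mean shape/texture vectors, PCA bases, and a precomputed matrix R mapping texture residuals to parameter updates); all names are hypothetical and the image-sampling step is left abstract.

    import numpy as np

    def fit_aam(image_sampler, params, model, R, n_iter=30, tol=1e-4):
        # model: (mean_shape, P_shape, mean_tex, P_tex, n_shape_params), all from training.
        mean_shape, P_shape, mean_tex, P_tex, n_s = model
        for _ in range(n_iter):
            # Synthesize shape and texture from the current appearance parameters.
            shape   = mean_shape + P_shape @ params[:n_s]
            tex_mod = mean_tex + P_tex @ params[n_s:]
            # Sample the real image at the current shape (warped into the mean-shape frame).
            tex_img = image_sampler(shape)
            residual = tex_img - tex_mod            # difference between image and model
            if np.linalg.norm(residual) < tol:
                break
            # Classic AAM update: a fixed linear mapping from residual to parameter change.
            params = params - R @ residual
        return params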

AAM – Active Appearance Model – Results

 

CLM – Constrained Local Model

  • The face model consists of a face shape model plus separate patch models for different facial parts (around keypoints)
  • Patches are matched to the given image to best locate the keypoints while maintaining shape constraints (eyes above nose, nose above mouth, ...); a sketch follows this list
  • Theoretically allows higher flexibility, since it is less constrained by symmetry
  • Implementation by Prof. Tim Cootes of the University of Manchester.
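The loop below is a simplified sketch of one CLM search iteration (not Prof. Cootes' implementation): each patch model scores candidate positions in a small window around its keypoint, and the best local responses are then projected onto a PCA shape model so the global constraints hold; patch_scorers and the shape-model matrices are assumed to come from training.

    import numpy as np

    def clm_iteration(image, points, patch_scorers, mean_shape, P_shape, window=7):
        # 1) Local search: move each keypoint to the best patch response nearby.
        proposals = points.astype(float).copy()
        for i, (x, y) in enumerate(points):
            best_score, best_xy = -np.inf, (x, y)
            for dx in range(-window, window + 1):
                for dy in range(-window, window + 1):
                    score = patch_scorers[i](image, x + dx, y + dy)   # hypothetical patch model
                    if score > best_score:
                        best_score, best_xy = score, (x + dx, y + dy)
            proposals[i] = best_xy
        # 2) Regularization: project the proposals onto the shape model so the
        #    result stays a plausible face (eyes above nose, nose above mouth).
        b = P_shape.T @ (proposals.ravel() - mean_shape)    # shape parameters
        constrained = mean_shape + P_shape @ b              # nearest plausible shape
        return constrained.reshape(-1, 2)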

CLM – Constrained Local Model – Results

 

Extracting Features Using Basic Tools

  • Using the Viola–Jones detector to locate the face, nose, eyes, and mouth
  • Using segmentation in different color channels and edge detection to extract interest points (see the sketch below)
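A minimal sketch of this pipeline with OpenCV is given below; the Haar cascade files are the ones shipped with OpenCV (its standard Viola–Jones implementation), while the image name and threshold values are placeholders.

    import cv2

    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade  = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

    img  = cv2.imread("subject.jpg")                 # placeholder file name
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi)     # eye boxes, relative to the face ROI
        # Edge detection inside the face region as a starting point for
        # locating finer interest points (mouth corners, eyebrow edges, ...).
        edges = cv2.Canny(roi, 50, 150)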

Extracting Features Using Basic Tools – Results

 

Summary

  • Algorithms for facial keypoint extraction exist, but they do not work well for our purpose:
    • High computation time
    • Reliance on symmetry (models are built from healthy subjects)
  • Found 12 keypoints using basic image processing tools
  • Generated good results on specific subjects; adjustments are needed for variations in:
    • Skin color
    • Hair color
    • Beard/mustache
    • Facial proportions

For Future Examination

  • Building face models using examples from both healthy and ill subjects (for AAM or CLM)
  • Creating a table of different parameters and thresholds for different subjects
  • Using 3D analysis (depth images)

Collaboration

In collaboration with Ofer Azulay, M.D., Kaplan Medical Center