Bayesian Tactile Face

1. Z. Wang, X. Xu, B. Li, “Bayesian Tactile Face”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.

Abstract: Computer users with visual impairment cannot access the rich graphical content in print or digital media without visual-to-tactile conversion, which is currently performed primarily by human specialists. Automated approaches to this conversion form an emerging research field, in which only simple graphics such as diagrams are currently handled. This paper proposes a systematic method for automatically converting a human portrait image into its tactile form. We model the face with a deformable Active Shape Model (ASM), enriched by local appearance models in the form of gradient profiles along the shape. The generic face model, including the appearance components, is learnt from a set of training face images. Given a new portrait image, the prior model is updated through Bayesian inference. To facilitate the incorporation of a pose-dependent appearance model, we propose a statistical sampling scheme for the inference task. Furthermore, to compensate for the simplicity of the face model, edge segments of the given image are used to enrich the basic face model in generating the final tactile printout.
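
The sketch below is a minimal illustration (not the paper's implementation) of the two ingredients the abstract names: a PCA shape model in the spirit of ASM, learnt from aligned training landmarks, and a sampling-based Bayesian update of that prior. Importance sampling is used here as one plausible instance of the "statistical sampling scheme"; the function names are ours, and the appearance term is a stub standing in for the paper's gradient-profile model.

import numpy as np

rng = np.random.default_rng(0)

def train_shape_model(landmarks, n_modes=8):
    """PCA shape model from aligned training landmarks.

    landmarks: (N, 2K) array; each row is a flattened, Procrustes-
    aligned set of K (x, y) face landmark coordinates.
    """
    mean = landmarks.mean(axis=0)
    _, s, vt = np.linalg.svd(landmarks - mean, full_matrices=False)
    modes = vt[:n_modes]                      # principal shape modes
    var = s[:n_modes] ** 2 / len(landmarks)   # variance of each mode
    return mean, modes, var

def profile_log_likelihood(shape, image_profiles):
    """Stub appearance term: scores how well gradient profiles sampled
    from the image along the shape agree with the trained profile
    model.  Replace with a real profile model in practice.
    """
    return -0.5 * np.sum(image_profiles(shape) ** 2)

def sample_posterior_shape(mean, modes, var, image_profiles, n_samples=500):
    """Importance-sampling update of the shape prior.

    Draw candidate shapes from the Gaussian PCA prior, weight each by
    its appearance likelihood, and return the posterior-mean shape.
    """
    b = rng.normal(0.0, np.sqrt(var), size=(n_samples, len(var)))
    shapes = mean + b @ modes                 # candidate shapes
    logw = np.array([profile_log_likelihood(s, image_profiles)
                     for s in shapes])
    w = np.exp(logw - logw.max())
    w /= w.sum()                              # normalized importance weights
    return w @ shapes                         # posterior-mean estimate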


2. Z. Wang, B. Li, “A Bayesian Approach to Automated Creation of Tactile Facial Images”, IEEE Transactions on Multimedia, 12(4):233-246, June 2010.

Abstract: Portrait photos (facial images) play important social and emotional roles in our lives. This type of visual media is unfortunately inaccessible to users with visual impairment. This paper proposes a systematic approach for automatically converting human facial images into a tactile form that can be printed on a tactile printer and explored by a user who is blind. We propose a deformable Bayesian Active Shape Model (BASM), which integrates anthropometric priors with shape and appearance information learnt from a face dataset. We design an inference algorithm under this model that processes new face images to create an input-adaptive face sketch. The model is further enhanced with input-specific details through semantic-aware processing. We report experiments evaluating the accuracy of face alignment under the proposed method, in comparison with other state-of-the-art results. Furthermore, subjective evaluations of the produced tactile face images were performed by 17 persons, including six visually-impaired users, confirming the effectiveness of the proposed approach in conveying the vital visual information in a face image via haptics.
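
As an illustration of the final rendering step described in both abstracts (model contours enriched with input-specific edge segments), the sketch below composes fitted shape contours with edges detected in the photo into a binary image for tactile printing. It is a hypothetical reconstruction using standard OpenCV calls, not the authors' code; the Canny thresholds and the minimum-segment-area filter are arbitrary choices.

import cv2
import numpy as np

def render_tactile_image(gray, shape_contours, canny_lo=60, canny_hi=150,
                         min_area=30):
    """Compose a binary tactile image: white raised lines on black.

    gray:           8-bit grayscale portrait, shape (H, W).
    shape_contours: list of (K_i, 2) integer arrays, one polyline per
                    facial feature (jaw, eyes, brows, nose, mouth)
                    taken from the fitted shape model.
    """
    canvas = np.zeros_like(gray)
    # 1. Draw the fitted model contours: the guaranteed face structure.
    for pts in shape_contours:
        cv2.polylines(canvas, [pts.astype(np.int32).reshape(-1, 1, 2)],
                      isClosed=False, color=255, thickness=2)
    # 2. Enrich with input-specific edge segments (hair, glasses, ...)
    #    that the simple shape model cannot represent.
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    # Keep only coherent edge segments; drop small noisy components.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(edges,
                                                           connectivity=8)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            canvas[labels == i] = 255
    return canvas

White pixels correspond to raised lines when the binary image is sent to a tactile embosser.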


3. N. Li, Z. Wang, J. Yuriar, B. Li, “TactileFace: A System for Enabling Access to Face Photos by Visually-impaired People” (live demo), International Conference on Intelligent User Interfaces (IUI), February 2011.


See related media reports below:

MSNBC http://www.msnbc.msn.com/id/41624232/41626743

Discovery News

ABC News http://abcnews.go.com/Technology/printed-photos-blind/story?id=12951372