The dissertation is devoted to the development of a technology for analyzing a person's psychophysiological state from emotions on the face and for synthesizing facial expressions using three-dimensional models of the human head. The psychophysiological state and its expression on the human face are an integral element of communication in new computer interfaces that involve the nonverbal components of communication and gestures. Beyond their use in computer systems, facial expressions open up a wide range of possibilities in medicine, psychology, and linguistics, including sign languages. Although scientific institutions and leading information technology companies around the world currently conduct a large amount of research in this area, a number of problems remain unresolved. They concern the modeling of psycho-emotional states, in particular their number, their relationships, the principles of their occurrence, and their expression on the human face, including muscle contractions and skin deformations. Therefore, in order to create technologies for modeling and recognizing facial expressions, their elements must be recognized using mathematical methods and algorithms for analyzing the psycho-emotional state of the face. To this end, it is necessary to develop new, or improve existing, methods of formalizing emotions and their expression on the face, and of identifying the instantaneous state of the face from its individual features.
In the dissertation, a parametric model of the facial muscular structure was created for the first time; it describes the movements of points on the facial surface through a scalar deformation field and its features (gradients, isolines, and normals). This model made it possible to describe the process of forming a facial expression, from the appearance of a nerve impulse, through muscle contraction and deformation of the facial surface, to obtaining an image of the face with the resulting deformations, and thus to build a technology for the analysis and synthesis of facial expressions. Based on the analysis of the face of a real person and on anatomical atlases, a set of model parameters was proposed; these parameters are responsible for changes in the vicinity of the largest deformations of the facial surface. This set of parameters was used in the information technology for facial expression analysis, in particular to localize the characteristic features of facial expressions in the task of capturing changes in facial expression over time with the use of additional markers.
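To illustrate the kind of quantities the model operates with, the following is a minimal sketch of a scalar deformation field sampled on a regular (u, v) parameterization of a facial surface patch, together with its gradient, an isoline, and surface normals. The grid, the Gaussian "muscle bulge", and all numeric values are illustrative assumptions, not the dissertation's actual model parameters.

```python
import numpy as np

# Hypothetical scalar deformation field D(u, v) on a facial surface patch.
u = np.linspace(-1.0, 1.0, 200)
v = np.linspace(-1.0, 1.0, 200)
U, V = np.meshgrid(u, v)

# A localized bulge approximating the effect of one contracting facial muscle.
D = np.exp(-((U - 0.2) ** 2 + (V + 0.1) ** 2) / 0.05)

# Gradient of the scalar field: direction of steepest change of deformation.
dD_dv, dD_du = np.gradient(D, v, u)

# Isoline: grid points whose deformation is close to a chosen level.
level = 0.5
isoline_mask = np.isclose(D, level, atol=1e-2)

# Normals of the deformed height-field surface z = D(u, v).
normals = np.dstack((-dD_du, -dD_dv, np.ones_like(D)))
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

print("max |grad D|:", np.hypot(dD_du, dD_dv).max())
print("grid points on the 0.5 isoline:", int(isoline_mask.sum()))
```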
For the first time, a technology for capturing facial movements with the use of additional markers and computer vision methods was proposed, which made it possible to represent the motion parameters of facial points. Based on this technology, a software implementation was created that applies the Lucas–Kanade method to track anthropometric facial features and classifies facial expressions by the proposed set of features. The resulting software implementation allowed experimental testing of the algorithms for localizing anthropometric features with the use of additional markers and of classification algorithms based on known data classification methods, and made it possible to choose the algorithms best suited to the proposed technology.
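As an illustration of this tracking step, the sketch below tracks point features across video frames with the pyramidal Lucas–Kanade method as implemented in OpenCV (cv2.calcOpticalFlowPyrLK). The input file name and the use of corner detection to seed the points are assumptions; in the proposed technology the tracked points would correspond to the additional markers placed at anthropometric facial features.

```python
import cv2
import numpy as np

# Hypothetical input video of a face with additional markers.
cap = cv2.VideoCapture("face_with_markers.mp4")
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

# Seed points: here simply strong corners; in practice, the detected markers.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=40,
                                 qualityLevel=0.01, minDistance=10)

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

trajectories = [points.reshape(-1, 2)]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                 points, None, **lk_params)
    trajectories.append(points.reshape(-1, 2))  # status == 0 marks lost points
    prev_gray = gray

cap.release()
# Each trajectory is the time series of one point's image coordinates and
# serves as input to the subsequent classification of facial expressions.
```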
The Karhunen–Loève method has been improved, in particular the calculation of the covariance matrix and its decomposition on the basis of characteristic functions, which relies on the properties of the proposed parametric model and on the analysis of the trajectories of anthropometric features over time. This method was used to recognize changes in the state of the face over time from the trajectories of anthropometric features, which made it possible to represent these changes by a set of characteristic functions. The existing method of data dimensionality reduction was compared with its modification, using different methods of calculating the eigenvalues based on reference vectors.
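For reference, the following is a minimal sketch of the classical Karhunen–Loève (principal component) step applied to trajectory data: each observation is one frame, formed by flattening the (x, y) coordinates of all tracked features. The random data, the numbers of frames and markers, and the number of retained components are stand-ins; the dissertation's modified decomposition via characteristic functions is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_markers = 300, 40
X = rng.normal(size=(n_frames, 2 * n_markers))     # frames x flattened coordinates

X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)             # covariance matrix of the features

eigvals, eigvecs = np.linalg.eigh(cov)             # symmetric eigendecomposition
order = np.argsort(eigvals)[::-1]                  # sort by decreasing eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 8                                              # number of retained components
components = eigvecs[:, :k]
scores = X_centered @ components                   # compact representation of the
                                                   # face-state trajectory over time

explained = eigvals[:k].sum() / eigvals.sum()
print(f"variance explained by {k} components: {explained:.2%}")
```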
Methods of modeling the elements of sign language were further developed, in particular methods of three-dimensional modeling and human motion capture, as well as methods of modeling the nonverbal elements of sign language. This made it possible to model their common features and to develop a three-dimensional computer animation model of the human head using skeletal computer animation.
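As an illustration of the deformation step underlying skeletal computer animation, the sketch below implements linear blend skinning, a standard technique in which each mesh vertex follows a weighted blend of bone transforms. The toy mesh fragment, the two "bones" (skull and jaw), and the weights are assumptions for demonstration only and do not reproduce the head model developed in the dissertation.

```python
import numpy as np

def skin_vertices(vertices, bone_matrices, weights):
    """Linear blend skinning.
    vertices: (N, 3), bone_matrices: (B, 4, 4), weights: (N, B)."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])   # homogeneous coords (N, 4)
    per_bone = np.einsum("bij,nj->bni", bone_matrices, homo)    # each bone applied (B, N, 4)
    blended = np.einsum("nb,bni->ni", weights, per_bone)        # per-vertex weighted blend (N, 4)
    return blended[:, :3]

# Toy head-mesh fragment and a jaw bone rotated about the x-axis.
verts = np.array([[0.0, -0.5, 0.1], [0.0, 0.5, 0.1], [0.0, 0.0, 0.2]])
angle = np.radians(15)
jaw = np.array([[1, 0, 0, 0],
                [0, np.cos(angle), -np.sin(angle), 0],
                [0, np.sin(angle),  np.cos(angle), 0],
                [0, 0, 0, 1]])
skull = np.eye(4)
bones = np.stack([skull, jaw])
w = np.array([[1.0, 0.0], [0.2, 0.8], [0.6, 0.4]])  # per-vertex bone weights

print(skin_vertices(verts, bones, w))
```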
The results of the modeling and experimental research obtained in this work can be used to develop advanced technologies for user communication with computers and in alternative communication systems based on sign language. These results were used in the information technology of nonverbal communication in sign language by including tools for modeling the psycho-emotional state and the nonverbal elements of sign language, and were implemented at the R&D company “NVP “Infoservice”, as confirmed by documents acknowledging this implementation.