Facial Expression and Speech Pattern Computing using Modern Techniques of Computer Vision



Bibliographic Details
Published in: Journal of information systems engineering & management, Vol. 10, No. 5s, pp. 602-614
Main Author: Anant Kaulage
Format: Journal Article
Language: English
Published: 24.01.2025
ISSN: 2468-4376
Online Access: Full text
Description
Abstract: Facial emotion recognition and speech emotion recognition are two crucial areas of research in affective computing. In this study, we tackle the challenge of precisely identifying and analyzing emotions from speech and facial expressions. Detecting emotions from facial expressions and verbal input is a compelling problem in human-computer interaction, mental health monitoring, social robotics, and related fields. By combining modalities, we obtain a more complete picture of emotion, exploiting the complementary information contained in facial expressions and speech patterns. Our approach achieves several key goals. First, it applies state-of-the-art computer vision techniques to accurately extract facial features and analyze facial expressions. Second, it uses modern speech processing algorithms to extract emotional indicators from speech data and capture the relevant acoustic features. The solution yields two main outcomes. First, it enables real-time emotion recognition, allowing prompt, context-sensitive responses in applications such as virtual assistants and emotion-aware systems. Second, it offers insight into human emotion that can support psychological research, clinical diagnosis, and therapeutic intervention. For facial feature extraction, the proposed method uses the Haar cascade classifier, a widely used object detection algorithm. Haar-like features represent numerous facial characteristics, including the regions around the eyes, the nose, and the lips. The classifier is trained on a dataset of facial images labeled with the corresponding emotions; the labeled dataset is preprocessed to extract the Haar-like features and to produce the positive and negative samples. The trained classifier can then recognize and categorize emotions from facial images or video streams in real time.
The findings point to the potential of real-time emotion recognition in a variety of real-world settings, enabling more empathetic and context-sensitive human-machine interfaces.
DOI: 10.52783/jisem.v10i5s.726