relation: http://repository.upi.edu/138377/
title: PENGENALAN EKSPRESI WAJAH PESERTA DIDIK DI RUANG KELAS MENGGUNAKAN VISION TRANSFORMER (VIT)
creator: Muhammad Fakhri Fadhlurrahman, -
creator: Munir, -
creator: Yaya Wihardi, -
subject: L Education (General)
subject: LB Theory and practice of education
subject: QA Mathematics
subject: TK Electrical engineering. Electronics. Nuclear engineering
description: Facial expressions are an essential form of non-verbal communication for understanding students' emotional states in the classroom. This understanding enables educators to adjust their teaching methods to students' emotions, making the learning process more effective. This study aims to develop and implement a real-time facial expression recognition system for classroom settings using the Vision Transformer (ViT) architecture. Two system approaches were developed: a dual-stage system combining a YOLOv11s face detection model with a HybridViT (ResNet-50) facial expression recognition model, and a single-stage system using a YOLOv11s model to detect emotions directly from facial images. The datasets used are the Real-world Affective Faces Database (RAF-DB) and the Facial Expression in Classroom dataset, employed for initial training and fine-tuning, respectively. Evaluation results show that the dual-stage system achieves superior classification performance, with a mean Average Precision (mAP) of 0.2846 compared to 0.1603 for the single-stage system.
However, in terms of inference efficiency, the single-stage system outperforms the dual-stage system, achieving a lower average latency per face of 0.290 ms (6,539 FPS) on GPU and 1.862 ms (545 FPS) on CPU. The evaluation also highlights an imbalance in classification performance across emotion classes, primarily due to the uneven distribution of the training and fine-tuning data. Overall, both approaches show promising potential for facial expression recognition in classroom environments. Further improvements in accuracy, generalization across emotions, and computational efficiency can be achieved through better dataset quality, balanced emotion representation, and exploration of advanced training techniques.
date: 2025-08-27
type: Thesis
type: NonPeerReviewed
format: text
language: id
identifier: http://repository.upi.edu/138377/1/S_KOM_2105997_Title.pdf
format: text
language: id
identifier: http://repository.upi.edu/138377/2/S_KOM_2105997_Chapter1.pdf
format: text
language: id
identifier: http://repository.upi.edu/138377/3/S_KOM_2105997_Chapter2.pdf
format: text
language: id
identifier: http://repository.upi.edu/138377/4/S_KOM_2105997_Chapter3.pdf
format: text
language: id
identifier: http://repository.upi.edu/138377/5/S_KOM_2105997_Chapter4.pdf
format: text
language: id
identifier: http://repository.upi.edu/138377/6/S_KOM_2105997_Chapter5.pdf
identifier: Muhammad Fakhri Fadhlurrahman, - and Munir, - and Yaya Wihardi, - (2025) PENGENALAN EKSPRESI WAJAH PESERTA DIDIK DI RUANG KELAS MENGGUNAKAN VISION TRANSFORMER (VIT). S1 thesis, Universitas Pendidikan Indonesia.
relation: https://repository.upi.edu/
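For readers who want a concrete picture of the two approaches compared in the abstract, the following Python sketch outlines how a dual-stage pipeline (YOLOv11s face detection followed by per-face expression classification) differs from a single-stage pipeline (one YOLOv11s detector that predicts emotion classes directly). This is a minimal illustration under stated assumptions, not the thesis code: it assumes the Ultralytics YOLO API, hypothetical weight filenames, a torchvision ResNet-50 as a stand-in for the thesis's HybridViT (ResNet-50) classifier, and the seven basic RAF-DB emotion labels.

import cv2
import torch
from torchvision.models import resnet50
from ultralytics import YOLO

# Seven basic emotion classes (assumed here from RAF-DB; the thesis may use a different set or order).
EMOTIONS = ["surprise", "fear", "disgust", "happiness", "sadness", "anger", "neutral"]

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

def dual_stage(frame, face_detector, expression_model):
    # Stage 1: localize faces with a YOLOv11s face detector.
    detections = face_detector(frame, verbose=False)[0]
    predictions = []
    for x1, y1, x2, y2 in detections.boxes.xyxy.cpu().numpy().astype(int):
        # Stage 2: classify each cropped face with the expression model.
        crop = cv2.resize(frame[y1:y2, x1:x2], (224, 224))
        tensor = torch.from_numpy(crop).permute(2, 0, 1).float().div(255).unsqueeze(0).to(DEVICE)
        with torch.no_grad():
            logits = expression_model(tensor)
        predictions.append(((x1, y1, x2, y2), EMOTIONS[int(logits.argmax())]))
    return predictions

def single_stage(frame, emotion_detector):
    # One detector trained to output a bounding box and an emotion class per face in a single pass.
    detections = emotion_detector(frame, verbose=False)[0]
    return [
        (tuple(map(int, box)), detections.names[int(cls)])
        for box, cls in zip(detections.boxes.xyxy.cpu().numpy(), detections.boxes.cls.cpu().numpy())
    ]

if __name__ == "__main__":
    frame = cv2.imread("classroom.jpg")                # hypothetical classroom frame
    face_detector = YOLO("yolov11s_face.pt")           # hypothetical face-detection weights
    emotion_detector = YOLO("yolov11s_emotion.pt")     # hypothetical single-stage weights
    # Stand-in classifier; the thesis uses a HybridViT built on a ResNet-50 backbone.
    expression_model = resnet50(num_classes=len(EMOTIONS)).to(DEVICE).eval()

    print("dual-stage:", dual_stage(frame, face_detector, expression_model))
    print("single-stage:", single_stage(frame, emotion_detector))

The sketch also makes the efficiency trade-off visible: the dual-stage path runs one classifier forward pass per detected face, whereas the single-stage path produces boxes and emotion labels in a single detector pass, which is consistent with the lower per-face latency reported for the single-stage system.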
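For reference, the mean Average Precision values quoted above (0.2846 vs. 0.1603) follow the standard object-detection metric, assuming the conventional definition (the abstract does not state the IoU threshold used): the average precision of each emotion class is the area under its precision-recall curve, and mAP is the mean over the C classes. In LaTeX:

\mathrm{AP}_c = \int_0^1 p_c(r)\, dr, \qquad \mathrm{mAP} = \frac{1}{C} \sum_{c=1}^{C} \mathrm{AP}_c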