eprintid: 138377
rev_number: 23
eprint_status: archive
userid: 218123
dir: disk0/00/13/83/77
datestamp: 2025-09-09 09:38:09
lastmod: 2025-09-09 09:38:09
status_changed: 2025-09-09 09:38:09
type: thesis
metadata_visibility: show
creators_name: Muhammad Fakhri Fadhlurrahman, -
creators_name: Munir, -
creators_name: Yaya Wihardi, -
creators_nim: NIM2105997
creators_nim: NIDN0025036602
creators_nim: NIDN0025038901
creators_id: mfakhrif@upi.edu
creators_id: munir@upi.edu
creators_id: yayawihardi@upi.edu
contributors_type: http://www.loc.gov/loc.terms/relators/THS
contributors_type: http://www.loc.gov/loc.terms/relators/THS
contributors_name: Munir, -
contributors_name: Yaya Wihardi, -
contributors_nidn: NIDN0025036602
contributors_nidn: NIDN0025038901
contributors_id: munir@upi.edu
contributors_id: yayawihardi@upi.edu
title: PENGENALAN EKSPRESI WAJAH PESERTA DIDIK DI RUANG KELAS MENGGUNAKAN VISION TRANSFORMER (VIT)
ispublished: pub
subjects: L1
subjects: LB
subjects: QA
subjects: TK
divisions: ILKOM
full_text_status: restricted
keywords: Classroom, Dual-Stage, Facial Expression Recognition, Real-Time, Single-Stage, Vision Transformer, YOLOv11s
note: https://scholar.google.com/citations?user=AMES4EIAAAAJ&hl=id ; SINTA IDs of the supervisors: Munir: 5974517, Yaya Wihardi: 5994413
abstract: Facial expressions serve as an essential form of non-verbal communication in understanding students' emotional states in the classroom.
This understanding enables educators to adjust their teaching methods according to students' emotions, thus improving the effectiveness of the learning process. This study aims to develop and implement a real-time facial expression recognition system in classroom settings by utilizing the Vision Transformer (ViT) architecture. Two system approaches were developed: a dual-stage system combining a YOLOv11s face detection model with a HybridViT (ResNet-50) facial expression recognition model, and a single-stage system using a YOLOv11s model to detect emotions directly from facial images. The datasets used include the Real-world Affective Faces Database (RAF-DB) and the Facial Expression in Classroom Dataset, which were employed for initial training and fine-tuning, respectively. Evaluation results demonstrate that the dual-stage system achieves superior classification performance, with a mean Average Precision (mAP) of 0.2846 compared to the single-stage system's mAP of 0.1603. However, in terms of inference efficiency, the single-stage system outperforms the dual-stage system, achieving a lower average latency per face of 0.290 ms (6,539 FPS) on GPU and 1.862 ms (545 FPS) on CPU. The evaluation also highlights an imbalance in classification performance across emotion classes, primarily due to the uneven distribution of the training and fine-tuning data. Overall, both approaches exhibit promising potential for facial expression recognition applications in classroom environments. Further improvements in accuracy, generalization across emotion classes, and computational efficiency can be achieved through enhanced dataset quality, balanced emotion representation, and exploration of advanced training techniques.
date: 2025-08-27
date_type: published
institution: Universitas Pendidikan Indonesia
department: KODEPRODI55201#Ilmu Komputer_S1
thesis_type: other
thesis_name: other
official_url: https://repository.upi.edu/
related_url_url: https://perpustakaan.upi.edu/
related_url_type: org
citation: Muhammad Fakhri Fadhlurrahman, - and Munir, - and Yaya Wihardi, - (2025) PENGENALAN EKSPRESI WAJAH PESERTA DIDIK DI RUANG KELAS MENGGUNAKAN VISION TRANSFORMER (VIT). S1 thesis, Universitas Pendidikan Indonesia.
document_url: http://repository.upi.edu/138377/1/S_KOM_2105997_Title.pdf
document_url: http://repository.upi.edu/138377/2/S_KOM_2105997_Chapter1.pdf
document_url: http://repository.upi.edu/138377/3/S_KOM_2105997_Chapter2.pdf
document_url: http://repository.upi.edu/138377/4/S_KOM_2105997_Chapter3.pdf
document_url: http://repository.upi.edu/138377/5/S_KOM_2105997_Chapter4.pdf
document_url: http://repository.upi.edu/138377/6/S_KOM_2105997_Chapter5.pdf
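As a rough illustration of the dual-stage pipeline described in the abstract (YOLOv11s face detection followed by ViT-based expression classification on each face crop), the Python sketch below uses the Ultralytics YOLO API and a generic Hugging Face ViT image classifier as a stand-in for the thesis's HybridViT (ResNet-50) model. The weight file name, the classifier checkpoint, and the returned emotion labels are assumptions for illustration only, not artifacts of the thesis.

# Minimal sketch of the dual-stage approach: detect faces first, then classify
# the expression of each crop. "yolo11s_face.pt" is a hypothetical fine-tuned
# face-detection checkpoint, and the Hugging Face model id is only a stand-in
# for the thesis's HybridViT (ResNet-50) expression classifier.
import cv2
from PIL import Image
from ultralytics import YOLO
from transformers import pipeline

face_detector = YOLO("yolo11s_face.pt")  # hypothetical YOLOv11s face detector
expression_clf = pipeline("image-classification",
                          model="trpakov/vit-face-expression")  # stand-in ViT

def recognize_expressions(frame_bgr):
    """Return (box, label, score) for every face found in a BGR video frame."""
    detections = face_detector(frame_bgr, verbose=False)[0]
    results = []
    for x1, y1, x2, y2 in detections.boxes.xyxy.cpu().numpy().astype(int):
        crop = frame_bgr[y1:y2, x1:x2]
        if crop.size == 0:
            continue
        # The classifier expects an RGB image; OpenCV frames are BGR.
        rgb = cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)
        pred = expression_clf(Image.fromarray(rgb), top_k=1)[0]
        results.append(((x1, y1, x2, y2), pred["label"], pred["score"]))
    return results

The single-stage variant described in the abstract would instead load one YOLOv11s model trained with emotion classes as detection targets, so the per-face cropping and the separate classifier call would be skipped entirely.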