PENGENALAN EKSPRESI WAJAH PESERTA DIDIK DI RUANG KELAS MENGGUNAKAN VISION TRANSFORMER (VIT)

    Muhammad Fakhri Fadhlurrahman, Munir, and Yaya Wihardi (2025) PENGENALAN EKSPRESI WAJAH PESERTA DIDIK DI RUANG KELAS MENGGUNAKAN VISION TRANSFORMER (VIT). S1 thesis, Universitas Pendidikan Indonesia.

    Abstract

    Facial expressions are an essential form of non-verbal communication for understanding students' emotional states in the classroom. This understanding enables educators to adjust their teaching methods to students' emotions, making the learning process more effective. This study aims to develop and implement a real-time facial expression recognition system for classroom settings using the Vision Transformer (ViT) architecture. Two system approaches were developed: a dual-stage system combining a YOLOv11s face detection model with a HybridViT (ResNet-50) facial expression recognition model, and a single-stage system using a YOLOv11s model to detect emotions directly from facial images. The datasets used are the Real-world Affective Faces Database (RAF-DB) and the Facial Expression in Classroom dataset, employed for initial model training and fine-tuning, respectively. Evaluation results show that the dual-stage system achieves better classification performance, with a mean Average Precision (mAP) of 0.2846, compared to 0.1603 for the single-stage system. In terms of inference efficiency, however, the single-stage system is superior, with an average per-face latency of 0.290 ms (6,539 FPS) on GPU and 1.862 ms (545 FPS) on CPU, whereas the dual-stage system exhibits higher latency. The evaluation also reveals an imbalance in performance across emotion classes, caused primarily by the uneven distribution of the training and fine-tuning data. Overall, both approaches show promising potential for facial expression recognition in classroom environments. Further improvements in accuracy, generalization across emotions, and inference efficiency can be achieved through enhanced dataset quality, balanced emotion representation, and exploration of advanced training techniques.
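
    To make the dual-stage arrangement described above concrete, the following is a minimal sketch of how such a pipeline is typically wired together in Python, assuming the Ultralytics YOLO API for the face detector and a generic PyTorch checkpoint for the expression classifier; the weight file names, the emotion label list, and the preprocessing are illustrative placeholders, not the artifacts from this thesis.

        # Sketch of a dual-stage pipeline: stage 1 detects faces with YOLOv11s,
        # stage 2 classifies each face crop with a separately trained expression model.
        # Weight file names, label order, and preprocessing are assumptions for illustration.
        import cv2
        import torch
        from ultralytics import YOLO

        EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]  # placeholder order

        face_detector = YOLO("yolov11s_face.pt")                              # hypothetical fine-tuned face-detection weights
        expression_model = torch.load("hybrid_vit.pt", map_location="cpu")    # hypothetical HybridViT (ResNet-50 backbone) checkpoint
        expression_model.eval()

        def recognize_expressions(frame):
            """Return a list of (bounding box, emotion label) pairs for one BGR frame."""
            results = face_detector(frame, verbose=False)[0]
            outputs = []
            for x1, y1, x2, y2 in results.boxes.xyxy.cpu().numpy().astype(int):
                crop = cv2.resize(frame[y1:y2, x1:x2], (224, 224))            # ViT-style input size
                tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
                with torch.no_grad():
                    logits = expression_model(tensor)
                outputs.append(((x1, y1, x2, y2), EMOTIONS[int(logits.argmax())]))
            return outputs

    The single-stage variant described in the abstract collapses both steps into one YOLOv11s model whose detection classes are the emotion labels themselves; removing the per-crop classification pass is what accounts for its lower per-face latency.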

    S_KOM_2105997_Title.pdf (1MB)
    S_KOM_2105997_Chapter1.pdf (659kB)
    S_KOM_2105997_Chapter2.pdf (507kB, restricted to library staff)
    S_KOM_2105997_Chapter3.pdf (1MB)
    S_KOM_2105997_Chapter4.pdf (2MB, restricted to library staff)
    S_KOM_2105997_Chapter5.pdf (517kB)
    Official URL: https://repository.upi.edu/
    Item Type: Thesis (S1)
    Additional Information: https://scholar.google.com/citations?user=AMES4EIAAAAJ&hl=id; supervisor SINTA IDs: Munir 5974517, Yaya Wihardi 5994413
    Uncontrolled Keywords: Classroom, Dual-Stage, Facial Expression Recognition, Real-Time, Single-Stage, Vision Transformer, YOLOv11s.
    Subjects: L Education > L Education (General)
    L Education > LB Theory and practice of education
    Q Science > QA Mathematics
    T Technology > TK Electrical engineering. Electronics. Nuclear engineering
    Divisions: Fakultas Pendidikan Matematika dan Ilmu Pengetahuan Alam > Program Studi Ilmu Komputer
    Depositing User: Muhammad Fakhri Fadhlurrahman
    Date Deposited: 09 Sep 2025 09:38
    Last Modified: 09 Sep 2025 09:38
    URI: http://repository.upi.edu/id/eprint/138377
