eprintid: 143714
rev_number: 25
eprint_status: archive
userid: 220605
dir: disk0/00/14/37/14
datestamp: 2025-10-21 01:32:34
lastmod: 2025-10-21 01:32:34
status_changed: 2025-10-21 01:32:34
type: thesis
metadata_visibility: show
creators_name: Rafli Zaki Rabbani, -
creators_name: Siscka Elvyanti, -
creators_name: Nurul Fahmi Arief Hakim, -
creators_nim: NIM2100415
creators_nim: NIDN0022117303
creators_nim: NIDN0405099302
creators_id: raflizaki@upi.edu
creators_id: sisckael@upi.edu
creators_id: nurulfahmi@upi.edu
contributors_type: http://www.loc.gov/loc.terms/relators/THS
contributors_type: http://www.loc.gov/loc.terms/relators/THS
contributors_name: Siscka Elvyanti, -
contributors_name: Nurul Fahmi Arief Hakim, -
contributors_nidn: NIDN0022117303
contributors_nidn: NIDN0405099302
contributors_id: sisckael@upi.edu
contributors_id: nurulfahmi@upi.edu
title: PENERAPAN COMPUTER VISION DAN WEBRTC SEBAGAI MEDIA PEMBELAJARAN MENGGUNAKAN GESTUR TANGAN
ispublished: pub
subjects: L1
subjects: LB
subjects: QA75
subjects: T1
divisions: PTOIS
full_text_status: restricted
keywords: CNN, computer vision, hand gesture, monitoring, natural interaction, WebRTC
note: https://scholar.google.com/citations?hl=id&authuser=1&user=Thil7SoAAAAJ Supervisors' SINTA IDs: Siscka Elvyanti: 6722063; Nurul Fahmi Arief Hakim: 6725597
abstract: The application of computer vision-based learning technology is becoming increasingly feasible, offering enhanced effectiveness, interactivity, and efficiency in educational settings as hardware and internet capabilities improve. This study observes and develops solutions for applying computer vision and Web Real-Time Communication (WebRTC) to learning, specifically presentations and tests/quizzes. The research presents a dynamic and accessible solution that reduces reliance on operators or additional hardware in traditional presentation methods while enabling natural interaction. Furthermore, it applies the same technology to prevent cheating in online proctored exams. System development follows the Research and Development (R&D) method with a spiral approach. The system uses hand landmark detection with MediaPipe and gesture classification with a Convolutional Neural Network (CNN) model trained on angular features derived from hand joint landmarks. The system's two main features are gesture-based presentation control (e.g., cursor movement and presentation navigation) and an interactive multiple-choice quiz answered through finger-count gestures. The application is developed on two platforms: a PyQt5-based desktop application and a web application built with React and Django, using WebRTC for real-time webcam video transmission during online tests. Evaluation was conducted through functional (black-box) testing and technical performance testing. Results show that the CNN model achieved an accuracy of 0.9733, precision of 0.9739, recall of 0.9733, an F1-score of 0.9731, and a mean Average Precision (mAP) of 0.9916. The system recorded an average response time of 145.3 ms, corresponding to a frame rate of 22.3 FPS on the desktop application, and average inbound and outbound frame rates of approximately 15-15.2 FPS on the web application, with RTT and frame-assembly latency of about 480-500 ms, indicating reliable performance and responsiveness.
date: 2025-08-25
date_type: published
institution: Universitas Pendidikan Indonesia
department: KODEPRODI56203#Pendidikan Teknik Otomasi Industri dan Robotika_S1
thesis_type: other
thesis_name: other
official_url: https://repository.upi.edu
related_url_url: https://perpustakaan.upi.edu
related_url_type: org
citation: Rafli Zaki Rabbani, - and Siscka Elvyanti, - and Nurul Fahmi Arief Hakim, - (2025) PENERAPAN COMPUTER VISION DAN WEBRTC SEBAGAI MEDIA PEMBELAJARAN MENGGUNAKAN GESTUR TANGAN. S1 thesis, Universitas Pendidikan Indonesia.
document_url: http://repository.upi.edu/143714/1/S_PTOIR_2100415_Title.pdf
document_url: http://repository.upi.edu/143714/2/S_PTOIR_2100415_Chapter1.pdf
document_url: http://repository.upi.edu/143714/3/S_PTOIR_2100415_Chapter2.pdf
document_url: http://repository.upi.edu/143714/4/S_PTOIR_2100415_Chapter3.pdf
document_url: http://repository.upi.edu/143714/5/S_PTOIR_2100415_Chapter4.pdf
document_url: http://repository.upi.edu/143714/6/S_PTOIR_2100415_Chapter5.pdf
document_url: http://repository.upi.edu/143714/7/S_PTOIR_2100415_Appendix.pdf
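The abstract describes a pipeline in which MediaPipe detects hand landmarks and a CNN classifies gestures from joint-angle features. Since the full text is restricted, the sketch below is only a minimal illustration of how such a pipeline could be wired together in Python: the JOINT_TRIPLETS list, the GESTURES label set, and the build_model architecture are assumptions for illustration, not the thesis's actual feature definition or network.

# Minimal sketch (not the thesis's code): joint-angle features from MediaPipe
# hand landmarks, classified by a small 1D CNN. All names marked "assumed"
# are illustrative placeholders.
import cv2
import numpy as np
import mediapipe as mp
import tensorflow as tf

# Assumed landmark index triplets (parent, joint, child) whose inner angle is measured,
# one angle per joint along each of the five fingers (21 MediaPipe hand landmarks).
JOINT_TRIPLETS = [
    (0, 1, 2), (1, 2, 3), (2, 3, 4),          # thumb
    (0, 5, 6), (5, 6, 7), (6, 7, 8),          # index
    (0, 9, 10), (9, 10, 11), (10, 11, 12),    # middle
    (0, 13, 14), (13, 14, 15), (14, 15, 16),  # ring
    (0, 17, 18), (17, 18, 19), (18, 19, 20),  # pinky
]

def joint_angles(landmarks):
    """Return one angle (radians) per triplet from the 21 (x, y, z) hand landmarks."""
    pts = np.array([[lm.x, lm.y, lm.z] for lm in landmarks], dtype=np.float32)
    angles = []
    for a, b, c in JOINT_TRIPLETS:
        v1, v2 = pts[a] - pts[b], pts[c] - pts[b]
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-6)
        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.array(angles, dtype=np.float32)

# Assumed gesture classes; the thesis defines its own label set.
GESTURES = ["next_slide", "prev_slide", "cursor", "one", "two", "three", "four"]

def build_model(n_features=len(JOINT_TRIPLETS), n_classes=len(GESTURES)):
    """Small 1D CNN over the angle vector; layer sizes are placeholders."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features, 1)),
        tf.keras.layers.Conv1D(32, 3, activation="relu"),
        tf.keras.layers.Conv1D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

if __name__ == "__main__":
    model = build_model()  # in practice, trained weights would be loaded here
    hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            feats = joint_angles(result.multi_hand_landmarks[0].landmark)
            probs = model.predict(feats.reshape(1, -1, 1), verbose=0)[0]
            print(GESTURES[int(np.argmax(probs))], float(probs.max()))
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
    cap.release()

Classifying joint angles rather than raw pixel coordinates makes the features largely invariant to where the hand sits in the frame and how large it appears, which is a common motivation for this kind of feature engineering in landmark-based gesture recognition.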