Hanisah Fildza Annafisah (2025) IMPLEMENTASI FINE-TUNING DAN BASE MODEL GPT-3.5 DALAM MENGHASILKAN INTERVENSI AFEKTIF PADA PEMBELAJARAN DARING SINKRONIS [Implementation of Fine-Tuning and the GPT-3.5 Base Model for Generating Affective Interventions in Synchronous Online Learning]. S1 thesis, Universitas Pendidikan Indonesia.
Abstract
Digital technology supports synchronous online learning through video conferencing, but the lack of physical interaction challenges students' emotional well-being, which in turn affects learning success. This study explores the use of Generative AI, specifically the GPT-3.5 model, to generate relevant and efficient affective interventions matched to students' emotions, with the aim of improving their emotional well-being during synchronous online learning. Implementation with the base model first provides a baseline understanding of the model's capabilities before fine-tuning, which is expected to adapt the model to the specific task of generating relevant and efficient affective interventions according to students' emotions. In the initial stage, the GPT-3.5 base model with Prompt Engineering via the API yielded an average of 405.51 prompt tokens, 40.05 completion tokens, and 445.56 total tokens, with average BERTScore Precision of 0.73, Recall of 0.71, and F1-Score of 0.72. These initial results indicated the need for further adaptation, so a fine-tuning process was carried out. Fine-tuning involved data preparation through data augmentation using ChatGPT, producing 1400 examples split into an 80% training set and a 20% testing set, and was performed with the Instruction-Tuning and Supervised Fine-Tuning methods on OpenAI's GPT-3.5 model. Although the training loss fluctuated during training, it decreased to 0.8341, indicating that the model learned the training data well. Evaluation of the fine-tuned model showed improved performance, with an average of 125.51 prompt tokens, 47.71 completion tokens, and 173.22 total tokens, and average BERTScore Precision, Recall, and F1-Score of 0.78 each. The final model was served via the API and deployed to Google Cloud Platform.
The results demonstrate that the fine-tuned GPT-3.5 model produces significantly more relevant and efficient affective interventions matched to students' emotions than the base model.
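The data-preparation step described in the abstract (1400 augmented examples split 80% / 20% before Supervised Fine-Tuning) can be sketched as follows. The field names, system prompt, and toy data below are illustrative assumptions, not the thesis's actual dataset schema; only the chat-messages JSONL layout reflects what OpenAI's supervised fine-tuning endpoint expects.

```python
import json
import random

# Hypothetical system prompt for the affective-intervention task (an
# assumption for illustration, not the prompt used in the thesis).
SYSTEM_PROMPT = ("You generate a short, relevant affective intervention "
                 "for a student's reported emotion.")

def to_chat_example(emotion: str, intervention: str) -> dict:
    """Format one (emotion, intervention) pair in the chat-messages
    layout used by OpenAI supervised fine-tuning JSONL files."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Student emotion: {emotion}"},
            {"role": "assistant", "content": intervention},
        ]
    }

def split_dataset(pairs, train_frac=0.8, seed=42):
    """Shuffle and split examples into training and testing sets
    (80/20 by default, mirroring the split described in the abstract)."""
    rng = random.Random(seed)
    examples = [to_chat_example(e, i) for e, i in pairs]
    rng.shuffle(examples)
    cut = int(len(examples) * train_frac)
    return examples[:cut], examples[cut:]

def write_jsonl(path, examples):
    """Write one JSON object per line, as the fine-tuning API expects."""
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(ex, ensure_ascii=False) + "\n")

# Toy pairs standing in for the 1400 ChatGPT-augmented examples.
pairs = [("bored", "Try a two-minute stretch before the next activity.")] * 10
train, test = split_dataset(pairs)
```

The resulting training JSONL would then be uploaded and passed to a fine-tuning job; the hyperparameters and model snapshot used in the thesis are not specified here.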
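BERTScore, the evaluation metric reported above, scores a candidate text against a reference by greedy matching of token embeddings: precision averages each candidate token's best cosine similarity to any reference token, recall swaps the roles, and F1 is their harmonic mean. A minimal sketch of that scoring rule with stand-in embedding matrices (the real metric uses contextual BERT embeddings, typically via the `bert-score` library):

```python
import numpy as np

def bertscore_prf(cand_emb: np.ndarray, ref_emb: np.ndarray):
    """Greedy-matching P/R/F1 over token embeddings, one row per token.
    Illustrates BERTScore's scoring rule only; the embeddings here are
    arbitrary vectors, not BERT outputs."""
    cand = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    ref = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = cand @ ref.T                  # pairwise cosine similarities
    precision = sim.max(axis=1).mean()  # best reference match per candidate token
    recall = sim.max(axis=0).mean()     # best candidate match per reference token
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

With identical candidate and reference embeddings all three scores are 1.0; the 0.72 vs. 0.78 F1 figures in the abstract come from applying this metric (with BERT embeddings) to generated interventions against reference interventions.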
Text: S_RPL_2103609_Title.pdf | Download (10MB)
Text: S_RPL_2103609_Chapter1.pdf | Download (1MB)
Text: S_RPL_2103609_Chapter2.pdf | Restricted to Library Staff | Download (1MB)
Text: S_RPL_2103609_Chapter3.pdf | Download (1MB)
Text: S_RPL_2103609_Chapter4.pdf | Restricted to Library Staff | Download (964kB)
Text: S_RPL_2103609_Chapter5.pdf | Download (430kB)
Text: S_RPL_2103609_Appendix.pdf | Restricted to Library Staff | Download (574kB)
Item Type: Thesis (S1)
Additional Information: https://scholar.google.com/citations?view_op=new_profile&hl=en | Supervisors' SINTA IDs: Asyifa Imanda Septiana: 6681802; Indira Syawanodya: 6681751
Uncontrolled Keywords: Affective Intervention, Generative AI, Large Language Model, Generative Pre-trained Transformer 3.5, Fine-Tuning
Subjects: B Philosophy. Psychology. Religion > BF Psychology; L Education > L Education (General); Q Science > QA Mathematics > QA76 Computer software; T Technology > T Technology (General)
Divisions: UPI Kampus Cibiru > S1 Rekayasa Perangkat Lunak
Depositing User: Hanisah Fildza Annafisah
Date Deposited: 06 Mar 2025 02:37
Last Modified: 09 Apr 2025 04:27
URI: http://repository.upi.edu/id/eprint/130149