Person: DERDİYOK, ŞEYMA
Email Address:
Birth Date:
Research Projects:
Organizational Units:
Job Title: Arş.Gör. (Research Assistant)
Last Name: DERDİYOK
First Name: ŞEYMA
Name: DERDİYOK, ŞEYMA
Search Results
Now showing 1 - 2 of 2
Publication (Restricted)
Biosignal Based Emotion-Oriented Video Summarization (Springer, 2023)
DERDİYOK, ŞEYMA; AKBULUT, FATMA PATLAR
Digital video is a crucial component of multimedia that enhances presentations with accurate, engaging visual and aural data, affecting several industries. The transition of video storage from analog to digital is driven by several factors, including improved compression methods, cheaper technology, and growing network demands. This paper presents a novel video summarization approach based on physiological signals elicited by emotional stimuli. Through these stimuli, 15 emotions are analyzed using physiological signals. The dataset was gathered from 15 participants who watched 61 episodes of 14 television series while wearing a wristband. We built several deep-learning models with the main purpose of recognizing emotions to summarize video. Among the established networks, the best performance was obtained with the 1D-CNN, at 92.87% accuracy. The work was carried out through a series of empirical experiments; because the physiological signals have different sampling frequencies, each experiment used models with both original and resampled configurations. A comprehensive comparison indicates that the oversampling approach yields the highest accuracy as well as the lowest computational complexity. The performance of the proposed video summarization approach was evaluated through a participant survey, and the results showed that the summaries contained the critical moments of the video. The proposed approach may be useful and effective in physiological-signal-based applications requiring emotion recognition, such as emotion-based video summarization or film genre detection. Additionally, viewing such summaries makes it easier to form rapid judgments regarding likes, ratings, comments, etc.

Publication (Metadata only)
Turkish Sign Language Recognition Using a Fine-Tuned Pretrained Model (Springer Science and Business Media Deutschland GmbH, 2024)
ÖZGÜL, GİZEM; DERDİYOK, ŞEYMA; AKBULUT, FATMA PATLAR
Many members of society rely on sign language because it provides them with an alternative means of communication. Hand shape, motion profile, and the relative positioning of the hand, face, and other body parts all contribute to the uniqueness of each sign across sign languages. Visual sign language recognition is therefore a particularly challenging area of computer vision. In recent years, many models have been proposed by various researchers, and deep learning approaches have greatly improved upon them. In this study, we employ a fine-tuned CNN for vision-based sign language recognition, trained on a dataset of 2062 images. Machine-learning-based sign language recognition systems often struggle to reach the desired accuracy because annotated datasets are scarce. The goal of this study is therefore to improve model performance through knowledge transfer. The dataset used in the research contains images of the 10 digits from 0 to 9, and in testing, signs were detected with 98% accuracy using the pre-trained VGG16 model. © 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.
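
The first abstract names a 1D-CNN as the best-performing model (92.87% accuracy) for recognizing emotions from wristband physiological signals, but it does not describe the architecture. The sketch below is a minimal, hypothetical Keras 1D-CNN classifier for windowed signals; the window length, channel count, layer sizes, and training settings are assumptions, not the authors' configuration, and only the 15 emotion classes come from the abstract.

```python
# Minimal sketch of a 1D-CNN emotion classifier for windowed physiological
# signals (e.g., wristband channels). All shapes and hyperparameters below
# are illustrative assumptions, not the configuration used in the paper.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW_LEN = 256    # samples per window (assumed)
N_CHANNELS = 4      # number of physiological channels (assumed)
N_EMOTIONS = 15     # the paper analyzes 15 emotions

def build_1d_cnn() -> keras.Model:
    model = keras.Sequential([
        layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
        layers.Conv1D(32, kernel_size=5, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(N_EMOTIONS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random stand-in data; real inputs would be resampled signal windows
    # with integer emotion labels.
    x = np.random.randn(128, WINDOW_LEN, N_CHANNELS).astype("float32")
    y = np.random.randint(0, N_EMOTIONS, size=128)
    build_1d_cnn().fit(x, y, epochs=1, batch_size=32, verbose=0)
```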
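
The second abstract reports fine-tuning a pre-trained VGG16 to recognize sign-language digits 0 to 9. Below is a minimal transfer-learning sketch in that vein; the input size, classifier head, and optimizer settings are assumptions, with only the use of VGG16 and the 10 classes taken from the abstract.

```python
# Minimal sketch of fine-tuning a pre-trained VGG16 for 10-class sign-digit
# recognition (digits 0-9). Image size, head layers, and training settings
# are illustrative assumptions; only VGG16 and the 10 classes come from
# the abstract.
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG16

N_CLASSES = 10
IMG_SHAPE = (224, 224, 3)  # standard VGG16 input size (assumed here)

def build_vgg16_classifier() -> keras.Model:
    base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SHAPE)
    base.trainable = False  # freeze the convolutional base for transfer learning
    model = keras.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (hypothetical): train the head on labeled sign-digit images, then
# optionally unfreeze the last VGG16 block and continue training at a lower
# learning rate to fine-tune the pre-trained features.
```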