Bilgisayar Mühendisliği Bölümü / Department of Computer Engineering
Permanent URI for this collection: https://hdl.handle.net/11413/6817
Browsing Bilgisayar Mühendisliği Bölümü / Department of Computer Engineering by Title
Now showing 1 - 20 of 216
Publication Metadata only: 2D UAV path planning with radar threatening areas using simulated annealing algorithm for event detection (2018). Basbous, Bilal.
Path planning for Unmanned Aerial Vehicles (UAVs) can be used for many purposes. However, the problem becomes increasingly complex when dealing with a large number of points to visit for detecting and catching different types of events, combined with simple threat avoidance such as radar areas. In the literature, different types of algorithms (especially evolutionary algorithms) are preferred. In this project, the Simulated Annealing (SA) algorithm is used to solve the path planning problem. First, the problem is converted into a variant of the Travelling Salesman Problem (TSP), and then the solutions are optimized with the 2-opt approach and other simple algorithms. The code is implemented in MATLAB using its visualization tools. A circular avoidance approach is developed and applied together with Simulated Annealing in order to escape circular radar threats. Tests were performed to observe the results of the SA algorithm and the radar threat avoidance approaches; the results show that, after a period of time, the SA algorithm gives acceptable solutions with the capacity to escape radar area threats, and that it gives better solutions in less time when there are no radar threats. Experimental results showed that the proposed model can produce an acceptable solution for UAVs in sufficient execution time. This model can be used as an alternative to similar evolutionary algorithms.

Publication Open Access: A Bayesian Deep Neural Network Approach to Seven-Point Thermal Sensation Perception (IEEE, 2022). Çakır, Mustafa; Akbulut, Akhan.
To create and maintain comfortable indoor environments, predicting occupant thermal sensation is an important goal for architects, engineers, and facility managers.
The link between thermal comfort, productivity, and health is common knowledge, and researchers have developed many state-of-the-art thermal-sensation models from dozens of research projects over the last 50 years. In addition to these, the use of intelligent data-analysis techniques, such as black-box artificial neural networks (ANNs), is receiving research attention with the aim of designing building thermal-behavior models from collected data. With the convergence of the internet of things (IoT), cloud computing, and artificial intelligence (AI), smart buildings now protect us and keep us comfortable while saving energy and cutting emissions. These types of smart buildings play a vital role in building smart cities of the future. The aim of this study is to help facility managers predict the thermal sensation of the occupants under the given circumstances. To achieve this, we applied a data-driven approach to predict the thermal sensation of occupants of an indoor environment using previously collected data. Our main contribution is to design and evaluate a deep neural network (DNN) for predicting thermal sensations with a high degree of accuracy regardless of building type, climate zone, or a building's heating and/or ventilation methods. We used the second version of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Global Thermal Comfort Database to train our model. 
The hyperparameters of the proposed model are tuned using a Bayesian strategy, and the model predicts the thermal sensation of occupants with 78% accuracy, much higher than the traditional predicted mean vote (PMV) model and the other shallow and deep networks compared.

Publication Embargo: A Comparative Study of Smoothing a Vehicle's Trajectory which is Calculated by an Evolutionary Algorithm (2016-06). Buran, Bayram Ali; Çağlar, Süleyman Hikmet; Şahingöz, Özgür Koray.
Determining a vehicle's trajectory is a complex and hard-to-solve problem; in the literature it is identified as an NP-hard optimization problem studied in different engineering disciplines such as computer, electrical, and industrial engineering. Such complex problems can be solved using various approaches, and many of them focus on evolutionary algorithms, especially when a large number of control points need to be visited. Although these algorithms provide near-optimal solutions, in the real world vehicles cannot follow the determined path (trajectory) without deviation, because they are moving objects, each travelling at a certain speed. It is therefore impossible for a vehicle to make a sharp turn after visiting a control point; vehicles need to make smoothed turns over these points, so there will be a certain difference between the calculated path and the real path. The real path must be determined using appropriate mathematical methods for smoothing these paths. To ensure motion continuity, vehicles need to follow paths determined according to a certain criterion.
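Of the three smoothing methods compared in the trajectory entry above, the Bezier curve is the simplest to illustrate. A minimal De Casteljau evaluation (a hypothetical sketch, not the authors' code) looks like this:

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeated interpolation."""
    pts = [tuple(map(float, p)) for p in ctrl]
    while len(pts) > 1:
        # Each pass linearly interpolates between consecutive control points.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```

Sampling t over [0, 1] with control points placed around a sharp waypoint yields a smooth turn instead of a corner, which is exactly the motion-continuity requirement described above.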
In this study, the most common smoothing methods used to ensure this continuity (Bezier, B-spline, and Dubins) have been compared, with the aim of showing the different approaches to an application area of path planning problems as a comparative study.

Publication Metadata only: A Comparative Study to Determine the Effective Window Size of Turkish Word Sense Disambiguation Systems (Springer, 2013). Adalı, Eşref; Tantuğ, Ahmet Cüneyd; İlgen, Bahar.
In this paper, the effect of different windowing schemes on word sense disambiguation accuracy is presented. The Turkish Lexical Sample Dataset has been used in the experiments. We took the samples of ambiguous verbs and nouns from the dataset and used bag-of-words properties as context information. The experiments were repeated for different window sizes with several machine learning algorithms. We follow a 2/3 splitting strategy (2/3 for training, 1/3 for testing) and determine the most frequently used words in the training part. After removing stop words, we repeated the experiments using the most frequent 100, 75, 50, and 25 content words of the training data. Our findings show that using the most frequent 75 words as features improves accuracy for Turkish verbs; similar results were obtained for Turkish nouns with the most frequent 100 words of the training set. With this information, the selected algorithms were tested on varying window sizes {30, 15, 10, 5}. Naive Bayes and Functional Tree methods yielded better accuracy results, and a window size of +/-5 gives the best average results for both the noun and verb groups.
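The windowing scheme studied in this entry can be illustrated with a small sketch (hypothetical names; the paper's own feature pipeline is not shown): bag-of-words counts are collected from at most `window` tokens on each side of the ambiguous word.

```python
def window_features(tokens, target_index, window, vocabulary):
    """Bag-of-words counts from a +/-window context around the ambiguous word."""
    lo = max(0, target_index - window)
    hi = min(len(tokens), target_index + window + 1)
    # Context excludes the target word itself.
    context = tokens[lo:target_index] + tokens[target_index + 1:hi]
    return {w: context.count(w) for w in vocabulary}
```

With a window of 5 and a vocabulary restricted to the most frequent content words, this produces exactly the kind of feature vector the experiments above feed to the classifiers.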
It is observed that the best results of the two groups are 65.8% and 56%, above the most frequent sense baselines of the verb and noun groups, respectively.

Publication Metadata only: A component-oriented process model (IEEE Computer Society, 2003-07). Altunel, Yusuf.

Publication Metadata only: A compressed sensing based approach on discrete algebraic reconstruction technique (IEEE, 2015). Demircan Türeyen, Ezgi; Kamaşak, Mustafa Erşel.
Discrete tomography (DT) techniques are capable of computing better results, even with fewer projections, than continuous tomography techniques. The Discrete Algebraic Reconstruction Technique (DART) is an iterative reconstruction method proposed to achieve this goal by exploiting prior knowledge of the gray levels and assuming that the scanned object is composed of a few different densities. In this paper, the DART method is combined with an initial total variation minimization (TvMin) phase to ensure a better initial guess, and extended with a segmentation procedure in which the threshold values are estimated from a finite set of candidates to minimize both the projection error and the total variation (TV) simultaneously. The accuracy and robustness of the algorithm are compared with the original DART in simulation experiments performed under (1) a limited number of projections, (2) limited-view, and (3) noisy-projection conditions.

Publication Embargo: A decision support system to determine optimal ventilator settings (BioMed Central, 2014). Akkur, Erkan; Akan, Aydın; Yarman, B. Sıddık; Akbulut, Fatma Patlar.
Background: Choosing the correct ventilator settings for the treatment of patients with respiratory tract disease is quite an important issue.
Since the task of specifying the parameters of ventilation equipment is carried out entirely by a physician, the physician's knowledge and experience in selecting these settings directly affect the accuracy of his/her decisions. Nowadays, decision support systems are used for these kinds of operations to eliminate errors. Our goal is to minimize errors in ventilation therapy and prevent deaths caused by incorrect configuration of ventilation devices. The proposed system is designed to assist less experienced physicians working in facilities, such as cottage hospitals, that lack lung-mechanics equipment. Methods: This article describes a decision support system that proposes the ventilator settings to be applied in treatment according to the patient's physiological information. The proposed model has been designed to minimize the possibility of making a mistake and to encourage more efficient use of time in support of the decision-making process while physicians make critical decisions about the patient. An Artificial Neural Network (ANN) is implemented to calculate the frequency, tidal volume, and FiO2 outputs, and a classification model has been used for the estimation of pressure support / volume support outputs. To obtain the highest performance in both models, different configurations were tried, with various tests on the training methods and the number of hidden layers, the factors that most affect ANN performance. Results: The physiological information of 158 respiratory patients over the age of 60, treated in three different hospitals between 2010 and 2012, was used in the training and testing of the system.
The diagnosed disease, core body temperature, pulse, arterial systolic pressure, diastolic blood pressure, PEEP, PSO2, pH, pCO2, and bicarbonate data, as well as the frequency, tidal volume, FiO2, and pressure support / volume support values suitable for use in the ventilator device, have been recommended to the physicians with an accuracy of 98.44%. The experiments show that sequential order weight/bias training was the most suitable ANN learning algorithm for the regression model, and Bayesian regularization backpropagation was the most suitable for the classification models. Conclusions: This article aims to make the choice of ventilator parameters independent of the individual physician in the treatment of respiratory tract patients through the proposed decision support system. The system's prediction accuracy increases as data from more patients are used in training. Therefore, non-physician operators can use the system to determine ventilator settings in emergencies.

Publication Metadata only: A discretized tomographic image reconstruction based upon total variation regularization (Elsevier, 2017-09). Demircan Türeyen, Ezgi; Kamaşak, Mustafa E.
The tomographic image reconstruction problem has an ill-posed nature, like many other linear inverse problems in the image processing domain. Discrete tomography (DT) techniques are developed to cope with this drawback by utilizing the discreteness of an image. The discrete algebraic reconstruction technique (DART) is a DT technique that alternates between an inversion stage, employed by algebraic reconstruction methods (ARM), and a discretization (i.e., segmentation) stage. Total variation (TV) minimization is another popular technique that deals with the ill-posedness by exploiting the piece-wise constancy of the image, and it basically requires solving a convex optimization problem.
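The total variation term at the heart of the TV-minimization stage just described can be written, for a discrete image, as the sum of absolute differences between neighboring pixels. A minimal Python sketch (illustrative only, using the anisotropic form):

```python
def total_variation(img):
    """Anisotropic TV of a 2-D image: sum of |horizontal| + |vertical| differences."""
    rows, cols = len(img), len(img[0])
    horiz = sum(abs(img[r][c + 1] - img[r][c])
                for r in range(rows) for c in range(cols - 1))
    vert = sum(abs(img[r + 1][c] - img[r][c])
               for r in range(rows - 1) for c in range(cols))
    return horiz + vert
</imports>```

A piece-wise constant image has low TV, which is why penalizing this quantity favors the discrete, few-density reconstructions that DART also targets.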
In this paper, we propose an algorithm that also performs successive sequences of inversion and discretization, but estimates the continuous reconstructions under TV-based regularization instead of using ARM. Our algorithm incorporates DART's idea of reducing the number of unknowns through the subsequent iterations, within a 1-D TV-based setting. As a second contribution, we also suggest a procedure for selecting the segmentation parameters automatically, which can be applied when the gray levels (corresponding to the different densities in the scanned object) are not known a priori. We performed various experiments using different phantoms to show that the proposed algorithm yields better approximations than DART, as well as three other continuous reconstruction techniques. In assessing performance, we considered limited-projection, limited-view, and noisy-projection scenarios, as well as the lack of prior knowledge of the gray levels. (C) 2017 Elsevier Ltd. All rights reserved.

Publication Embargo: A fault detection strategy for software projects (University of Osijek, 2013-02). Çatal, Çağatay; Diri, Banu.
Existing software fault prediction models require metrics and fault data belonging to previous software versions or similar software projects. However, there are cases when previous fault data are not present, such as a software company's transition to a new project domain. In such situations, supervised learning methods using fault labels cannot be applied, leading to the need for new techniques. We propose a software fault prediction strategy using method-level metrics thresholds to predict the fault-proneness of unlabelled program modules. This technique was experimentally evaluated on the NASA datasets KC2 and JM1.
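The thresholds-plus-OR idea described in the fault prediction entry can be sketched as follows (hypothetical metric names and threshold values; the paper derives its thresholds from the data):

```python
def fault_prone(module_metrics, thresholds):
    """Flag a module as fault-prone if ANY metric exceeds its threshold.

    The OR composition means a single out-of-range metric is enough,
    so no human expert has to inspect clusters of modules afterwards.
    """
    return any(module_metrics[name] > limit for name, limit in thresholds.items())
```

For example, a module whose cyclomatic complexity exceeds its threshold is flagged even if its size metric is well within range.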
Some existing approaches implement several clustering techniques to cluster modules, a process followed by an evaluation phase in which a software quality expert analyses a representative of each cluster and then labels the modules as fault-prone or not fault-prone. Our approach does not require a human expert during the prediction process: it is a fault prediction strategy that combines method-level metrics thresholds as a filtering mechanism with an OR operator as a composition mechanism.

Publication Metadata only: A Genetic Algorithm Approach to a General Category Project Scheduling Problem (IEEE, 1999-02). Özdamar, Linet.
A genetic algorithm (GA) approach is proposed for the general resource-constrained project scheduling model, in which activities may be executed in more than one operating mode and renewable as well as nonrenewable resource constraints exist. Each activity operating mode has a different duration and requires different amounts of renewable and nonrenewable resources. The objective is the minimization of the project duration, or makespan. The problem under consideration is known to be one of the most difficult scheduling problems, and it is hard to find a feasible solution for such a problem, let alone the optimal one. The GA approach described here incorporates problem-specific scheduling knowledge through an indirect chromosome encoding that consists of selected activity operating modes and an ordered set of scheduling rules. The scheduling rules in the chromosome are used in an iterative scheduling algorithm that constructs the schedule resulting from the chromosome. The proposed GA is denoted a hybrid GA (HGA) approach since it is integrated with traditional scheduling tools and expertise specifically developed for the general resource-constrained project scheduling problem.
The results demonstrate that the HGA approach produces near-optimal solutions within a reasonable amount of computation time.

Publication Metadata only: A graph-based web service composition technique using ontological information (IEEE Computer Society, 2007). Aydoğan, Reyhan; Zirtiloğlu, Hande.
We investigate Web service composition as a planning problem and use the input-output parameter relations to select the constituent services that make up the composite service. Furthermore, we make use of ontological information between the input-output parameters, so that a more specific concept can be used instead of a general one to make the process more flexible. Our proposed approach is based on constructing a dependency graph that includes the service parameters and the Web services themselves. Using this dependency graph, we perform backward chaining, searching from the desired output parameters (the goal) back to the available input parameters. In addition to using semantic information during the search, our approach considers non-functional attributes of the services such as service quality. Considering the quality measures, we find the constituent services by making use of depth-first search. After finding the required services, our algorithm generates a plan that shows the execution order of each service.

Publication Metadata only: A hierarchical planning approach for a production-distribution system (Taylor & Francis, 1999-11-10). Özdamar, Linet; Aktin, Ayşe Tülin.
A production-distribution model involving production and transportation decisions in a central factory and its warehouses is developed. The model is based on the operating system of a multi-national company producing detergents in a central factory from which products are distributed to geographically distant warehouses.
The overall system costs are optimized considering factory and warehouse inventory costs and transportation costs. Constraints include production capacity, inventory balance, and fleet size integrity. Here, a hierarchical approach is adopted in order to make use of medium-range aggregate information, as well as to satisfy weekly fluctuating demand with an optimal fleet size. Thus, a model which involves an aggregation of products, demand, capacity, and time periods is solved. In the next planning phase, the aggregate decisions are disaggregated into refined decisions in terms of time periods, product families, and inventory and distribution quantities related to warehouses. Consistency between the aggregate and disaggregation models is obtained by imposing additional constraints on the disaggregation model. Infeasibilities in the disaggregated solution are resolved through an iterative constraint relaxation scheme which is activated in response to infeasible solutions with different causes. Here, we investigate the robustness of the hierarchical model in terms of infeasibilities occurring due to the highly fluctuating nature of demand in the refined time periods and also due to the aggregation process itself.

Publication Metadata only: A Hierarchical Planning System for Energy Intensive Production Environments (Elsevier, 1999-01-15). Özdamar, Linet; Birbil, S. I.
This paper describes a hierarchical production planning approach with decision support features for energy-intensive industries, with particular reference to a tile manufacturing factory. In the tiling industry, the facilities which contribute most to the consumption of energy (and, hence, to the production costs) are usually the kilns where the curing operation is carried out. Frequently, the kilns are also the bottleneck in terms of capacity utilization.
Thus, in order to save on energy costs, a planning approach which aims at minimizing the number of active kilns throughout the year is needed, besides optimizing the process design in the curing department. To achieve the latter goal, it is necessary to take into account demand fluctuations as well as detailed capacity restrictions while deciding on the lot sizes of the products and the kilns on which the products are loaded. Rather than adopting a monolithic mathematical model for developing a desirable production plan, a hierarchical approach which decomposes the problem into two sub-problems is preferred. In the first level, products and capacity are aggregated over the planning horizon to achieve an overall consideration of demand fluctuations over time. Then, the solution provided by the aggregate model for the current planning period is disaggregated into a detailed lot sizing and loading solution. The disaggregated problem is difficult to solve, and hence a heuristic is proposed here. This planning approach is sustained by a Decision Support System which enables the elimination of possible inconsistencies in the production plan by providing effective interaction with the decision maker. (C) 1999 Elsevier Science B.V. All rights reserved.

Publication Metadata only: A new contour reconstruction approach from dexel data in virtual sculpting (IEEE Computer Society, 2008). Yüksel, Kemal; Zhang, Weihan; Ridzalski, Boryslaw Iwo; Leu, Ming C.
This paper presents a novel method of contour reconstruction from dexel data that resolves shape anomalies for complex geometry in virtual sculpting. Grouping and traversing processes are developed to find connectivity between dexels along every two adjacent rays. After traversing all the rays on one slice, sub-boundaries are connected into full boundaries, which are the desired contours.
The complexity of the new method has been investigated and determined to be O(n). We also demonstrate the ability of the described method to view a sculpted model from different directions.

Publication Metadata only: A Note on the Use of a Fuzzy Approach in Adaptive Partitioning Algorithms for Global Optimization (IEEE, 1999-08). Demirhan, Melek; Özdamar, Linet.
In global optimization, adaptive partitioning algorithms (APA) operate by partitioning the feasible region into subregions, sampling and evaluating each subregion, and selecting one or more subregions for repartitioning. The purpose of the repartitioning process is to locate a narrow neighborhood around the global optimum. In this correspondence, we propose to use a fuzzy approach in the assessment of subregions using random samples taken from these subregions. We discuss the different types of uncertainties involved in APA and conclude that the use of a fuzzy approach in the assessment of subregions is in concurrence with APA's convergence property. We provide numerical results for the fuzzy approach on 13 test functions from the literature.

Publication Metadata only: A Novel Input Set for LSTM based Transport Mode Detection (2019-03). Güvensan, M. Amaç; Aşcı, Güven.
The capabilities of mobile phones are increasing with the development of hardware and software technology. In particular, the sensors on smartphones make it possible to collect environmental and personal information, so smartphones have become key components of ambient intelligence. Human activity recognition and transport mode detection (TMD) are the main research areas for tracking a person's daily activities. This study introduces a novel input set for daily activities, mainly transportation modes, in order to increase the detection rate.
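A frame-based input set of time- and frequency-domain features, like the one this transport-mode entry describes, can be sketched as follows (an illustrative sketch with assumed feature choices, not the authors' exact input set):

```python
import cmath
import statistics

def frame_features(frame):
    """Simple time- and frequency-domain features for one sensor frame."""
    n = len(frame)
    feats = {
        # Time domain
        "mean": statistics.fmean(frame),
        "std": statistics.pstdev(frame),
        "range": max(frame) - min(frame),
    }
    # Frequency domain: magnitude spectrum via a naive DFT (O(n^2), fine for sketches)
    spectrum = [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                        for i, x in enumerate(frame)))
                for k in range(n // 2)]
    feats["dominant_bin"] = max(range(len(spectrum)), key=spectrum.__getitem__)
    feats["spectral_energy"] = sum(m * m for m in spectrum) / n
    return feats
```

Computing such a vector per fixed-length frame yields the kind of sequence an LSTM network can consume, one feature vector per time step.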
In this study, the frame-based novel input set, consisting of time-domain and frequency-domain features, is fed to an LSTM network. As a result, the classification accuracy on the HTC public dataset climbs to 97%, which is 2% higher than the state-of-the-art method in the literature.

Publication Open Access: A pipeline for adaptive filtering and transformation of noisy left-arm ECG to its surrogate chest signal (MDPI, 2020-05). Tanneeru, Akhilesh; Lee, Bongmook; Misra, Veena; Mohaddes, F.; Zhou, Y.; Lobaton, E.; Akbulut, Fatma Patlar.
The performance of a low-power single-lead armband in generating electrocardiogram (ECG) signals from the chest and left arm was validated against a BIOPAC MP160 benchtop system in real time. The filtering performance of three adaptive filtering algorithms, namely least mean squares (LMS), recursive least squares (RLS), and extended kernel RLS (EKRLS), in removing white (W), power line interference (PLI), electrode movement (EM), muscle artifact (MA), and baseline wandering (BLW) noise from the chest and left-arm ECG was evaluated with respect to the mean squared error (MSE). The filter parameters of the algorithms were adjusted to ensure optimal filtering performance. LMS was found to be the most effective adaptive filtering algorithm in removing all noises with minimum MSE; however, for removing PLI with a maximal signal-to-noise ratio (SNR), RLS showed lower MSE values than LMS when the step size was set to 1 × 10^-5. We propose a transformation framework to convert the denoised left-arm and chest ECG signals to their low-MSE and high-SNR surrogate chest signals.
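The LMS algorithm evaluated in this ECG entry adapts a small FIR filter so that a noise reference cancels the correlated noise in the measured signal. A minimal sketch (illustrative filter order and step size, not the paper's settings):

```python
def lms_denoise(signal, reference, order=4, mu=0.05):
    """LMS adaptive noise canceller: estimate the noise from `reference`
    and subtract it; the error sequence is the cleaned signal."""
    w = [0.0] * order
    cleaned = []
    for n in range(len(signal)):
        # Tap-delay line over the noise reference (zero-padded at the start).
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(order)]
        y = sum(wi * xi for wi, xi in zip(w, x))   # noise estimate
        e = signal[n] - y                          # error = cleaned sample
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]
        cleaned.append(e)
    return cleaned
```

In a synthetic check where the contaminating "signal" is simply 0.5 times the reference, the error output decays toward zero as the weights converge, which is the cancellation behavior the entry measures via MSE.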
With wide applications in wearable technologies, the proposed pipeline was found capable of establishing a baseline for comparing left-arm signals with original chest signals, getting one step closer to using the left-arm ECG in clinical cardiac evaluations.

Publication Metadata only: A sentiment classification model based on multiple classifiers (Elsevier, 2017-01). Çatal, Çağatay; Nanğır, Mehmet.
With the widespread use of social networks, forums, and blogs, customer reviews have emerged as a critical factor in customers' purchase decisions. Since the beginning of the 2000s, researchers have focused on automatically categorizing these reviews into polarity levels such as positive, negative, and neutral; this research problem is known as sentiment classification. The objective of this study is to investigate the potential benefit of the multiple classifier systems concept for the Turkish sentiment classification problem and to propose a novel classification technique. A Vote algorithm has been used in conjunction with three classifiers, namely Naive Bayes, Support Vector Machine (SVM), and Bagging. The parameters of the SVM were optimized when it was used as an individual classifier. Experimental results showed that multiple classifier systems increase the performance of individual classifiers on Turkish sentiment classification datasets and that meta classifiers contribute to the power of these multiple classifier systems. The proposed approach achieved better performance than Naive Bayes, previously reported as the best individual classifier for these datasets, and than Support Vector Machines. Multiple classifier systems (MCS) are a good approach for sentiment classification, and parameter optimization of the individual classifiers must be taken into account while developing MCS-based prediction systems. (C) 2016 Elsevier B.V.
All rights reserved.

Publication Metadata only: A smart wearable system for short-term cardiovascular risk assessment with emotional dynamics (Elsevier, 2018-11). Akan, Aydın; Akbulut, Fatma Patlar.
Recent innovative treatment and diagnostic methods developed for heart and circulatory system disorders do not provide the desired results, as they are not supported by long-term patient follow-up. Continuous medical support in a clinic or hospital is often not feasible for elderly or aging populations; yet collecting medical data is still required to maintain a high quality of life. In this study, a smart wearable system design called Cardiovascular Disease Monitoring (CVDiMo) is presented, which provides continuous medical monitoring and creates a health profile with the risk of disease over time. Systematic tests were performed with analysis of six different biosignals from two test groups with 30 participants. In addition to examining the patients' biosignals, using the physical activity results and stress levels deduced from the emotional state analysis achieved higher performance in risk estimation. In our experiments, the highest accuracy in determining short-term health status was 96%.

Publication Metadata only: A Survey on White Box Cryptography Model for Mobile Payment Systems (2017-12-28). Aydın, Muhammed Ali; Sertbaş, Ahmet; Şengel, Öznur.
Technology is developing rapidly, and these developments are changing our lives, our habits, and our needs. As the electronic devices that are indispensable to our daily lives become more intelligent, we can carry out almost every operation through them. Mobile payment technologies and services are among these innovations: consumers all over the world have started to use their mobile devices as a means of payment as well as for communication services.
With rapidly developing technology, one of the most important needs of many systems, such as electronic, mobile, and banking systems, is to move and store data safely. In addition to data security in electronic transactions, the speed of system operations is becoming very important. Whether a mobile payment system is developed by installing an application or by using existing hardware, the most important issue in both cases is the creation of a reliable system based on protecting the consumer and the confidentiality of their information.