https://spast.org/techrep/issue/feed SPAST Abstracts 2021-11-22T14:08:13+00:00 Office-SPAST office@spast.org Open Journal Systems <p><strong>SPAST Abstracts </strong>publishes the extended abstracts of the papers presented at the First International Conference on Technologies for Smart Green Connected Society 2021.</p> <p><strong>SPAST Abstracts </strong>are open access and provide the audience with insights into the latest research trends in the field.</p> <p><strong>SPAST Abstracts </strong>is part of the SPAST Open Access Research series and is indexed in Google Scholar.</p> <p> </p> https://spast.org/techrep/article/view/1112 A Systematic Review on Features Extraction Techniques for Aspect Based Text Classification using Artificial Intelligence 2021-09-21T09:22:21+00:00 Nagendra nagendra.n@res.christuniversity.in <p>Nowadays, people express their opinions on various social media sites and commercial platforms as reviews, comments, and feedback. Feedback from end-users dramatically impacts the development of the next version of a product or service. For companies that invest in clients, manually analyzing each piece of feedback can be overwhelming. Similarly, rating a company's performance against the usual quantitative feedback system is challenging for an organization. Text classification and context analysis can help solve these problems early and increase sales and productivity.</p> <p>Additionally, reviews written in natural language are often unstructured and time-consuming to process. Because the data arrives in large volumes, it is not feasible to process and analyze the information manually. Many machine learning techniques and deep learning models have been proposed for extraction and analysis to solve this problem.
As technology advances, businesses, organizations, social media, and e-commerce sites can benefit from analyzing this detailed information.</p> <p>In natural language processing, aspect extraction is an important, challenging, and meaningful task in aspect-based text classification. One of the common goals of NLP is to identify text and extract insights, and numerous methods exist for performing text analysis, especially for text classification problems in domains such as e-commerce and customer feedback. Several existing works address aspect-extraction-based sentiment analysis. The work in [1] proposed a feature-based review summarizer for product reviews; it identified frequent feature sets of at most three words using association rule mining. The work in [2] combined WordNet and statistical analysis in a multi-knowledge-based approach to feature-class opinion summarization of reviews. A practical and flexible system that utilized syntactic and semantic information to extract product features and opinions was proposed in [3]. These papers also used pre-trained models to build their research models and worked with small data sets.</p> <p>Aspect extraction techniques often apply variants of topic models to the task; while reasonably successful, these methods usually do not produce highly coherent aspects. This review presents a novel neural approach to discovering coherent aspects, exploiting the distribution of word co-occurrences through neural word embeddings. Unlike topic models, which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to lie close to each other in the embedding space. An attention mechanism is also used to de-emphasize irrelevant words during training, further improving aspect coherence.
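The attention-over-embeddings idea described above can be illustrated with a toy computation. This is a hedged sketch, not the reviewed method's code: the vocabulary and random vectors below are stand-ins for trained word embeddings, and the attention score is simply each word's similarity to the average sentence vector.

```python
# Illustrative sketch of attention over word embeddings for aspect
# discovery: words similar to the sentence's overall context receive
# higher weight, and irrelevant words are de-emphasized.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["battery", "life", "the", "is", "great"]
emb = {word: rng.normal(size=8) for word in vocab}  # stand-in embeddings

def attention_weights(words):
    E = np.stack([emb[w] for w in words])  # (n_words, dim)
    context = E.mean(axis=0)               # average sentence vector
    scores = E @ context                   # relevance of each word
    exps = np.exp(scores - scores.max())   # numerically stable softmax
    return exps / exps.sum()

sentence = ["the", "battery", "life", "is", "great"]
w = attention_weights(sentence)
print(dict(zip(sentence, w.round(3))))
```

With real trained embeddings, aspect-bearing content words such as "battery" would tend to receive the larger weights while function words like "the" are suppressed; with the random stand-ins here the individual weights are arbitrary, but they always form a valid distribution.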
Experimental results on benchmark datasets demonstrate that the approach discovers more meaningful and coherent aspects and substantially outperforms baselines. Aspect-based text analysis aims to determine people's attitudes towards different aspects of a review: extracting specific aspect mentions, referred to as aspect term extraction, and detecting the sentiment orientation towards each extracted aspect term, referred to as aspect-level text classification. Generally, aspect extraction approaches can be classified into three categories: rule-based, supervised, and unsupervised. Rule-based methods usually do not group extracted aspect terms into categories. Supervised learning requires data annotation and suffers from domain adaptation problems. Unsupervised methods are adopted to avoid reliance on the labeled data needed for supervised learning. The more feedback is added, the more reflective the text classification score is of the models' performance.</p> 2021-09-21T00:00:00+00:00 Copyright (c) 2021 Nagendra https://spast.org/techrep/article/view/1979 The Sentiment analysis on Social network data and its marketing Strategies: A review 2021-10-09T11:30:19+00:00 Priyanka Dash priyankadash2018@gmail.com Jyotirmaya Mishra jyoti@giet.edu Suresh Dara darasuresh@live.in <p>Any social media plan should include the creation of sticky content. Marketers produce viral content in the expectation that it will spread rapidly. Customers should be encouraged, through social media marketing, to create and distribute their own content, such as product reviews or comments. Influencer marketing on social media is very popular and effective. The main issue is that influencer marketing efforts are difficult to track and can have catastrophic ramifications. Sentiment analysis may be used to assess influencer marketing efforts and assist brands in making educated decisions.
The goal of the study is to determine how effective an influencer is at creating or boosting intangible assets, as well as to provide practical data for brands looking to hire the ideal influencer for their products. Through sentiment analysis, this study identifies the ideal conditions for influencer marketing. The research also outlines the opportunities and challenges encountered along the way. The study is conceptual in nature: the researcher analysed and drew conclusions using secondary data from reliable sources and conceptual demonstration. Social media is the next consistent promotional field. Currently, Facebook dominates the digital advertising space, closely followed by Twitter; despite the evident benefits these platforms provide, other platforms such as YouTube and MySpace are less popular. The objectives are to investigate the effects of various internet promotional efforts on brand awareness, to see how social media sentiment analysis affects business growth, to investigate the audience's reaction to a brand in order to develop a fresh marketing strategy, and to investigate the impact of a social media campaign on the target audience. With knowledge of the public's opinion toward a product or service, one can decide whether or not to buy the product or service.
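The polarity idea underlying such an assessment can be sketched with a minimal lexicon-based scorer. This is a hedged illustration, not the study's method: the word lists below are illustrative inventions.

```python
# Minimal lexicon-based polarity scoring: count positive vs negative
# words and report the sign of the difference. Word lists are toy examples.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "poor", "hate", "terrible", "angry"}

def polarity(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("I love this brand the campaign was great"))  # positive
print(polarity("terrible service I hate the new product"))   # negative
```

Real systems replace the word lists with learned sentiment lexicons or trained classifiers, and handle negation and punctuation, but the positive-minus-negative scoring shape is the same.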
By processing and analysing public sentiment gathered from internet reviews and social media, the polarity of the sentiment can be determined.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Priyanka Dash, Jyotirmaya Mishra, Suresh Dara https://spast.org/techrep/article/view/1145 Solar Tracker coding using C language 2021-09-22T09:40:58+00:00 Hardik Sharma hardiksharma756@gmail.com Saumya saumya3632@gmail.com Harshika Singhal singhalharshi0910@gmail.com Vaishali Kharpuse vaishalikharpuse@gmail.com <p>This work presents an AI-based C program to compute the number and size of payloads, parabolic troughs, Fresnel reflectors, and the configuration details of the lenses or mirrors needed to design an efficient solar tracker for commercial use. The program suggests all possible spare parts, with capacity, size, and cost, based on the needs of the buyer.</p> 2021-09-23T00:00:00+00:00 Copyright (c) 2021 Hardik Sharma, Saumya, Harshika Singhal, Vaishali Kharpuse https://spast.org/techrep/article/view/507 A Machine Learning Based Framework for Heart Disease Detection 2021-09-15T12:12:56+00:00 Harikumar Pallathadka abhishek14482@gmail.com Dr. Mohd Naved abhishek14482@gmail.com Khongdet Phasinam abhishek14482@gmail.com Myla M. Arcinas abhishek14482@gmail.com <p>In industrialized nations, heart disease affects around 5% of persons under the age of 35 and more than 20% of those over the age of 75 [1] [2]. Around 3 to 5 percent of hospital admissions are due to heart failure, which is the most prevalent reason for doctors to admit patients to the hospital in the course of their professional practice. Affluent countries spend up to 20% of their entire healthcare budget on these expenses.</p> <p>Computer algorithms that use machine learning can extract new information, in the form of patterns, from a database's history. Using data mining techniques, it is possible to predict illnesses at an early stage.
Costly tests are otherwise necessary to examine a patient's symptoms and arrive at an accurate disease diagnosis; data mining and machine learning algorithms can reduce the number of tests performed on patients [3].</p> <p>As a result, heart disease prediction is crucial, since it allows healthcare practitioners to examine the attributes needed for diagnosis, such as blood pressure and diabetes. Although many data mining algorithms are already in use in the medical field, more study is needed on the performance evaluation of these classification approaches in order to enhance and adjust their accuracy. The goal of the project is to address the problems of developing prediction models that forecast heart disease promptly from among the best candidate algorithms. Aiming to improve the accuracy of cardiac disease prediction, this research addresses the open problems of the field. [4][5]</p> <p>The primary goal of this project is to create a more accurate prediction system by improving the classification algorithm. In order to improve the accuracy of heart disease prediction, it is necessary to tune the relevant components. In addition, the work illustrates that data mining may be applied to healthcare datasets in order to forecast or categorize data with realistic accuracy.</p> <p>A disease prediction framework is shown in Figure 1. Patient data are fed into the system as a set of performance metrics. To decrease noise and make the input data set consistent, the data set is preprocessed. A variety of machine learning algorithms are then applied to the input data, including SVM, ID3, and C4.5; this constitutes the classification of the data. Finally, the classification results of the different techniques are compared.
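As one hedged illustration of that classification step, scikit-learn's SVC can be trained on a small synthetic stand-in for the patient records. The features and labels below are fabricated, not the 303-record UCI Cleveland data the paper uses.

```python
# Sketch of the SVM classification stage on synthetic "patient" features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n = 300
# Fabricated stand-ins for attributes like age, blood pressure, cholesterol.
X = rng.normal(size=(n, 3))
# Synthetic label tied to the features so the task is learnable.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # RBF-kernel support vector machine
acc = clf.score(X_te, y_te)              # held-out accuracy
print(f"held-out accuracy: {acc:.2f}")
```

Swapping ID3/C4.5 for SVC amounts to replacing the estimator with a `DecisionTreeClassifier`; the surrounding split/fit/score comparison loop stays identical, which is how the framework compares techniques.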
</p> <p><strong>Fig.1. Framework for disease prediction</strong></p> <p>This research relies on the UCI heart disease data set [6]. ID3, C4.5, and SVM use the 303-record Cleveland database as input. The attained precision is shown in Figure 2.</p> <p><strong>Fig.2. </strong>Disease Data Classification Results</p> 2021-09-15T00:00:00+00:00 Copyright (c) 2021 Harikumar Pallathadka, Dr. Mohd Naved, Khongdet Phasinam, Myla M. Arcinas https://spast.org/techrep/article/view/1179 Ionospheric Model Development for Indian Region: a survey paper 2021-09-24T11:13:53+00:00 Parinda Prajapati parindaprajapati2011@gmail.com <p>The ionosphere plays a vital role in robust satellite communication and accurate navigation positioning. It contains diverse layers, distinguished by their electron density as a function of altitude. Various ionospheric models for forecasting electron density at different temporal resolutions are cited in the literature, and GPS data are frequently used by these models. There is therefore a need to develop ionospheric models at different time scales for the low latitudes of India. Ionospheric tomography is, moreover, considered an ill-posed problem.
Ionospheric TEC measured simultaneously at numerous locations can be processed with several algorithms to recover electron density. This research proposes to develop a model to forecast 3D tomography of the total electron density for the whole Indian region. The satellite data used can be collected by various means, and the management of these vast statistics is planned using data mining and artificial neural network techniques for estimation. This paper is the outcome of a detailed survey of ionospheric model development.</p> 2021-09-24T00:00:00+00:00 Copyright (c) 2021 Parinda Prajapati https://spast.org/techrep/article/view/1217 The Heterogeneity Paradigm in Big Data Characterised Under Variety of Voluminous Data - A Literature Review 2021-09-27T09:10:56+00:00 Bhavana Hotchandani bhavana.mca@gmail.com Disha Parekh Parekh disha.hparekh213@gmail.com Dr. Vishal Dahiya cs.hod@indusuni.ac.in <p><em><span style="font-weight: 400;">Big data is a buzzword in almost every sector of the world today, be it business, education, research, healthcare, or the spatial sciences. Big data refers to collections of datasets so large, complex, and swiftly changing that they become difficult to process using extant database management tools or traditional data processing applications. "Big" is a vague term here; the word really denotes data that is highly complex in nature. The data we deal with in our day-to-day lives is huge and highly voluminous to handle, and many research institutes and academics are now focused on research in the field of big data. Big data is simply a combination of voluminous data with a wider variety and a greater velocity. These 7Vs, i.e. Volume, Velocity, Value, Veracity, Variability, Visibility, and Variety, are the base of any big data we talk about today.
Big data essentially consists of many varieties of data, in which heterogeneity of format is observed at an extensive rate. Such heterogeneous data is sometimes of poor quality and has many missing attributes, which leads to untruthful data analysis. This poorness and the issues associated with it can be reduced at three different levels of the data cycle: data processing, where machine learning is usually applied today; data integration, where big data and its tools are applied; and data analytics, where researchers today follow deep learning. Of these three methods for heterogeneous data processing, our paper aims to show big data and its implementation strategies for reducing vagueness in data. In this review paper, we focus mainly on the characteristics of big data, concluding with the important 7Vs. We have also focused on the architecture of big data, and we list the challenges faced in the heterogeneity paradigm of big data. As our future research interest lies in heterogeneity and its types, we have examined various data processing methods for heterogeneous data. Big data applications are observed immensely in healthcare, where medical data abounds, and also in aviation and IoT-based techniques. Hence, this paper will also enhance the insights of researchers focusing on healthcare, aviation, or IoT-based research perspectives on the heterogeneity of big data. This paper, which carries out an extensive review of big data, its 7Vs, and heterogeneous data, shows the levels of heterogeneity and describes the tools used at each level. We have also discussed possible research directions that one may be interested in pursuing further on big data and its heterogeneity.
This paper is aimed at researchers and scholars who are new to the concepts of big data and wish to learn its characteristics and future directions.</span></em></p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 Bhavana Hotchandani, Ms. Disha H. Parekh, Dr. Vishal Dahiya https://spast.org/techrep/article/view/70 An Online Retail Market Analysis for Social Development with Machine Learning 2021-10-22T17:49:12+00:00 Prof.(Dr.) Bhavana Narain narainbhawna@gmail.com Dr. Manjushree Nayak nayaksai.sairam@gmail.com <p>The present era is a digital era in which retail marketing &amp; online marketing play an important role in people's lifestyles. Filling the gap between customer &amp; market is a technological responsibility of technocrats. In our work we have collected online and retail data from the last 5 years. These data were collected from two major organizations that deal with online marketing and retail marketing. Unsupervised learning techniques were implemented to analyze the collected data, and the knowledge gained from this analysis is used for marketing upliftment &amp; social development. A new modified K-means clustering algorithm (NMKMCA) is used for the data analysis, and the accuracy of retail marketing &amp; online marketing is compared in our work. We have taken I/O time and computational time as our working parameters; the results for these parameters are analyzed &amp; discussed. In the last section of our work we find that NMKMCA takes less time in computing a very large dataset.</p> 2021-10-22T00:00:00+00:00 Copyright (c) 2021 Prof.(Dr.) Bhavana Narain, Dr. Manjushree Nayak https://spast.org/techrep/article/view/121 FUTURE PREDICTION OF HEART DISEASE THROUGH EXPLORATORY ANALYSIS OF DATA 2021-08-23T08:04:56+00:00 Dr T LALITHA t.lalitha@jainuniversity.ac.in <p>This research paper aims to give an in-depth analysis of the healthcare field and data analysis related to healthcare. The healthcare industry generates enormous amounts of data.
These data are used for decision-making, so they must be very accurate. In order to identify errors in healthcare data, Exploratory Data Analysis (EDA) is proposed in this research. EDA tries to detect mistakes, find the correct data, check for errors, and determine correlations. It is among the most relied-upon analytical techniques and tools for improving healthcare performance in the areas of operations, decision making, disease prediction, etc. In most situations, a complicated combination of pathological and clinical evidence is used to diagnose cardiac disease. Because of this complexity, clinical practitioners and scientists are keen to learn how to anticipate cardiac disease efficiently and accurately. With the use of the K-means algorithm, the factors that cause heart-related disorders and problems are considered and forecasted in this study. The research is based on publicly available medical information about heart disease. There are 208 entries in this dataset, each with eight characteristics: the patient's age, type of chest discomfort, blood glucose level, BP level, heart rate, ECG, and so on. The K-means grouping technique, as well as visualisation and analytics tools, are utilised to forecast cardiac disease. The proposed model's prediction is more accurate than that of the other models, according to the results.</p> 2021-08-23T00:00:00+00:00 Copyright (c) 2021 Dr T LALITHA https://spast.org/techrep/article/view/2287 Dentistry Using AR 2021-10-05T19:58:10+00:00 Ammar Shareef ammarshareef28@gmail.com Srujith Rao Ambati srujithraoambati@gmail.com Mohammed Syed Akbar Hashmi akbar6127@gmail.com G. Shanmukhi Rama shanmukhi.rama_cse@cbit.ac.in <p>The process of getting the teeth aligned is a tedious and expensive one. We aim to educate the patient about the process of treating malocclusion before the actual treatment starts.
The dentist first makes a 3D model of the teeth to help the patient decide whether or not to go ahead with the correction. In addition to the 3D model, the latest innovation is to capture the teeth with an intraoral scanner, achieving the highest accuracy and better clarity for the dentist. We aim to make the process faster and cheaper, as using intraoral scanners as the first step can be expensive and time-consuming. In this project the application allows users to view their teeth alignment properly using Augmented Reality. To do so, we import 3D models of the teeth from Primary Dental Clinic Sydney, Australia. These models depict natural, clean teeth that are ideal for the dataset. The aim is to present a neat 3D model of the teeth which can be viewed in real time using Augmented Reality. This first step allows patients to see their new smiles prior to treatment, which saves a lot of time and money before proceeding to the intraoral scanning of the teeth. This visualization of the teeth serves not only the patient but also the dental professionals.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Ammar Shareef, Srujith Rao Ambati, Mohammed Syed Akbar Hashmi, G. Shanmukhi Rama https://spast.org/techrep/article/view/1631 Automated Detection of Pneumothorax using Frontal Chest X-Rays 2021-09-30T04:58:26+00:00 Sivakumar Rajagopal rsivakumar@vit.ac.in Mathumetha P mathumetha.p2019@vitstudent.ac.in Shailly Vaidya shailly.vaidya2018@vitstudent.ac.in Basim Alhadidi b_hadidi@bau.edu.jo <p>Pneumothorax is the medical term for a collapsed lung. Pneumothorax occurs when air enters the space around the lungs, medically termed the pleural space. Air can find its way into the pleural space when there is an open injury in the chest wall or a rupture in the lung tissue, disrupting the negative pressure that keeps the lungs inflated.
The incidence of pneumothorax has been reported as 10% in patients with acute respiratory distress syndrome (ARDS), 24% in patients receiving mechanical ventilation, and 56% in patients requiring invasive mechanical ventilation, with 80% of those patients dying; all 5 such patients were male and aged from 54 to 79 years [1]. Numerous imaging modalities are available, such as the standard erect PA chest X-ray, lateral X-rays, expiratory films, supine and lateral decubitus X-rays, thoracic ultrasound scanning, and digital imaging [2-3]. The sensitivity of thoracic ultrasound is found to be 81.8 percent and its specificity 100 percent, while the sensitivity of chest X-ray is found to be 31.8 percent and its specificity 100 percent [4]. A study conducted in 2020 had a huge number of patients suspected of and admitted for COVID-19 pneumonia; on examination, 0.66% of them developed spontaneous pneumothorax. The study concludes that spontaneous pneumothorax is a rare complication of COVID-19 viral pneumonia and may occur in the absence of mechanical ventilation [5]. Practitioners should be observant for the development of such complications.</p> <p>In this paper, automated methods of detecting pneumothorax are explored, in which image segmentation techniques are employed for detection. The preprocessing is handled by image processing techniques, and classification by a Support Vector Machine. SVM is a supervised machine learning algorithm that can be used for classification or regression problems. A database of around 10.5k images has been utilized. Initially, the images were preprocessed to remove noise artifacts. Each image was then segmented to filter out the region of interest. The images were further passed through a Sobel filter to view the tilts and shifts of the patient's chest. Furthermore, morphological operations were performed on them to add pixels to image boundaries, which greatly helps in estimating the size of the pneumothorax.
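The Sobel-plus-morphology preprocessing just described can be sketched on a synthetic array. This is a hedged illustration using scipy.ndimage, not the authors' pipeline; the "image" is a toy bright square standing in for a chest X-ray region.

```python
# Sketch of edge detection followed by morphological dilation,
# the preprocessing chain described for the pneumothorax images.
import numpy as np
from scipy import ndimage

# Synthetic "image": a bright square region on a dark background.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0

# Sobel gradients highlight region boundaries (tilts and shifts).
gx = ndimage.sobel(img, axis=0)
gy = ndimage.sobel(img, axis=1)
edges = np.hypot(gx, gy) > 0  # boolean edge map

# Morphological dilation adds pixels to the boundary, which helps when
# estimating the extent of a region such as a pneumothorax.
dilated = ndimage.binary_dilation(edges, iterations=2)
print(int(edges.sum()), int(dilated.sum()))  # dilation grows the region
```

The textural features extracted in the next phase would then be computed from maps like `edges` and fed to the SVM classifier.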
In the first phase, the textural features of the X-ray images were extracted. In the second and last phase, the images were classified based on the severity of the pneumothorax, resulting in a clearer view of the lung in RGB for simplified discrimination of the normal condition from the pneumothorax-affected lung.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Sivakumar Rajagopal, Mathumetha P, Shailly Vaidya, Basim Alhadidi https://spast.org/techrep/article/view/166 BChain-Driven Scalable Approach to Big Data Verification of Db. Applications Processing 2021-09-02T06:22:15+00:00 akrati Sharma akrati.sharma@jlu.edu.in abhilasha singh abhilasha.singh@jlu.edu.in <p>In the current world, data is the key ingredient in all organizations, including the IT sector, academia, medical records, and more. Every sector deals with one common problem: the management of the huge volumes of data being generated, commonly called big data [1-2]. Data becomes historical with time, as new data is generated every day, every hour, every minute. The main issues arise in handling such a huge amount of data: it can neither be deleted, as it might be useful to the organization, nor kept indefinitely, as it leaves no storage space for newly generated data [4]. One trending technology that can help solve these issues is blockchain. Blockchain [10-11] is a promising technology that deals readily with the problems that arise in managing big data.</p> <p><br /> Two Qs must be taken care of when combining these two technologies: quality and quantity [15]. Quality refers to the efficient management of data, so that it can be managed without degrading its quality and without causing any issues for newly generated data. Quantity, as the name suggests, refers to the generation of big data itself: every day, organizations produce data in terabytes and zettabytes.
The reason for using these two technologies together is that one's disadvantage becomes the other's advantage. The first aspect is security: huge data generation can open security loopholes, and maintaining the security of such big data is a hazardous task, while blockchain deals with security issues [31] quite well. The most important positive feature of blockchain is decentralization [17], meaning the data does not belong to one single party; hence the chances of data breaches go down. The second aspect is flexibility: big data contains every type of data, structured, unstructured, and semi-structured, including image, video, and audio files. Working on various kinds of data at the same time and in the same place is otherwise not feasible; here blockchain helps, as there is no such limitation on blockchain. Blockchain [25] can easily handle a variety of data.</p> <p><br />Big data analysis over the blockchain is still a relatively young field concerned with collecting massive datasets and discovering patterns in the data. Since the data involved is tremendously big and difficult, it cannot be handled by traditional data processing systems. To analyze such an ever-growing amount of data, this work argues that blockchain analysis should be treated as a new type of application for big data platforms, in particular Map/Reduce [7-8], to extract and analyze information from the blockchain. Since all the information is stored in the blockchain, it is convenient to access these details.
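The tamper-evidence property behind these security claims can be sketched with a toy hash-linked chain. This is an illustrative sketch, not a production blockchain: each block simply stores the hash of its predecessor, so any edit to historical data breaks the downstream links.

```python
# Toy hash-linked chain: tampering with any block invalidates the links.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(records):
    chain, prev = [], "0" * 64  # genesis predecessor hash
    for i, data in enumerate(records):
        block = {"index": i, "data": data, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain) -> bool:
    # Each block must reference the hash of the block before it.
    return all(b["prev"] == block_hash(a) for a, b in zip(chain, chain[1:]))

chain = make_chain(["tx1", "tx2", "tx3"])
print(is_valid(chain))         # True
chain[1]["data"] = "tampered"  # any edit breaks the links downstream
print(is_valid(chain))         # False
```

Verifying historical transactions therefore reduces to recomputing hashes, which is exactly the kind of scan a Map/Reduce job over the chain can parallelize.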
Because the blockchain is designed to be a critical operational data source, users can view historical transactions effortlessly and ingest and analyze them as part of their analytics.</p> <p> </p> 2021-09-02T00:00:00+00:00 Copyright (c) 2021 akrati Sharma, abhilasha singh https://spast.org/techrep/article/view/204 Real-time Detection of Anomalies on Performance Data of Container Virtualization Platforms 2021-09-08T06:36:32+00:00 Venkata daya sagar Ketaraju sagar.tadepalli@kluniversity.in <p>Application virtualization platforms are virtualization technologies that allow applications to run independently. Applications running on application virtualization platforms may exhibit abnormal working conditions from time to time. Such situations can be caught by system administrators examining the application log files in detail, but this means abnormal operating conditions are captured long after they occur. Within the scope of this research, a method is proposed that detects abnormal running conditions of applications on application virtualization platforms in real time. The proposed method uses unsupervised learning and supervised learning algorithms together. A prototype application was developed to demonstrate the usability of the proposed method, and the tests we performed on the prototype yielded high accuracy in real-time detection of abnormal operating conditions.</p> 2021-09-08T00:00:00+00:00 Copyright (c) 2021 Venkata daya sagar Ketaraju https://spast.org/techrep/article/view/2354 Mr Secured Multiparty Access Control Model for Online Social Networking using Machine Learning 2021-10-08T06:19:52+00:00 MADHU NAKEREKANTI madhu0903@gmail.com <p>In recent years, Online Social Networks (OSNs) have registered significant growth.
They have become part of daily life, a phenomenon that has received serious attention from the academic, technological, and social research communities [1] [2]. Many OSNs allow their users to post multimedia content, communicate in various ways, and share many aspects of their lives, in addition to building a virtual network of social relationships. People also use various social networking sites to keep up with everyday news [4]. Current social networks provide a platform for essential communication, business promotion, advertising, product campaigns, and multiple other forms of effective outreach. Yet social networking as a system has several flaws, and most people are unaware of the issues it produces. The social network is responsible for maintaining the privacy of its user community and addressing other security issues [8]. People should be aware of the benefits and drawbacks of social media and try to use it more securely in order to reduce the danger of hacking, cybercrime, threats, and attacks.</p> <p>Users of OSNs reveal personal information such as their profile photos, relationship status, phone numbers, dates of birth, and other social activities without being aware of the risks and thefts which may occur [15]. When data is shared with multiple users, there is no mechanism in place to enforce privacy and address security issues in OSNs. To secure the information, automated annotation of images or texts has been introduced, aiming to create metadata about the images or text using a novel approach for retrieving them [21]. This paper therefore aims to capture the concepts of multiparty authorization and policy enforcement in an access control model [29].</p> <p>The paper addresses broad-spectrum multi-user issues such as privacy and security, data integrity, scalability, and data authentication while sending data.
To overcome this situation, we develop an innovative framework, the Novel Adaptive Privacy Policy Prediction (NA3P) framework, for multi-user access control policies, using machine learning classification techniques to provide security and policy prediction for shared content [3] [6].</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 MADHU NAKEREKANTI https://spast.org/techrep/article/view/2390 An Analysis of Emotion Recognition Based on GSR Signal 2021-10-10T14:02:54+00:00 Stobak Dutta stobak.dutta@gmail.com Brojo Mishra brojomishra@gmail.com Anirban Mitra mitra.anirban@gmail.com Amartya Chakraborty amartya3@gmail.com <p>In our day-to-day lives, emotion plays an important role in decision-making. According to different psychologists, human behaviour is also very much dependent on emotions: we perceive the outer world on the basis of emotion. For a long time psychologists have been trying to develop a model which can explain the effect of emotion and the different emotional states. Nowadays much work is going on in automated emotion recognition systems, which can be utilized in different arenas such as education, marketing, health, and human-robot interaction. Mental health is an important vertical of research in today's scenario, as the world tries to cope with the post-Covid situation. Emotion recognition will help in understanding the mental condition of those who cannot express themselves, as well as people who do not know the reason behind their illness. There are different ways to detect the emotion of a person. Much work exists on emotion detection through face recognition and speech modulation; however, there is a big question over the accuracy and effectiveness of these results, as such features can be controlled by the person: one can pretend, hide one's emotions behind facial gestures, and control one's voice. The next approach is therefore physiological signals.
These signals are generated by the central nervous system and cannot be controlled by the person. The physiological signals that can be used for this purpose include ECG, EEG, GSR, and PPG. In our work we use the GSR signal for emotion detection: the sensor is available off the shelf, non-invasive, and easy to use. The GSR sensor measures the activity of the sweat glands, which is closely related to emotional arousal; it captures the variation of skin resistance with sweat-gland activity, working on the calculation of skin conductance. In our work we extract different features from GSR data collected from several people, who were shown videos designed to elicit different emotions. We have used different machine learning models to classify the emotional states with better accuracy. The classifiers used are KNN, SVM, Decision Tree, and Logistic Regression.</p> 2021-10-11T00:00:00+00:00 Copyright (c) 2021 Stobak Dutta, Brojo Mishra, Anirban Mitra, Amartya Chakraborty https://spast.org/techrep/article/view/1831 Fruit image segmentation using Teacher-Learner optimization algorithm and fuzzy Entropy 2021-10-09T14:08:51+00:00 Harmandeep Singh Gill faisalsyedmtp@gmail.com Surender Kumar faisalsyedmtp@gmail.com Sudakshina Chakrabarti faisalsyedmtp@gmail.com <p>Robust fruit image segmentation is a challenging task due to the wide variance across varieties. Thresholding is one of the most popular segmentation methods, and the fuzzy entropy scheme is the most widely applied approach to image thresholding. Fuzzy membership functions are the main instrument for segmenting an image by fuzzy entropy and thresholding. In this paper, the Teacher-Learner Based Optimization (TLBO) algorithm is employed to search for optimal combinations of threshold values for fruit image segmentation based on fuzzy entropy.
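The fuzzy-entropy thresholding objective just described can be sketched in a few lines. The linear membership ramp and the exhaustive scan below are illustrative stand-ins only: the paper uses its own membership functions and a TLBO search over threshold combinations.

```python
import numpy as np

def fuzzy_entropy(hist, t, spread=20.0):
    """Fuzzy entropy of the two-class partition induced by threshold t.

    mu is the membership of each grey level in the 'bright' class (a linear
    ramp around t; an illustrative choice, not the paper's functions).
    q0 and q1 are the fuzzy probability masses of the dark/bright classes.
    """
    levels = np.arange(len(hist), dtype=float)
    mu = np.clip((levels - t) / (2.0 * spread) + 0.5, 0.0, 1.0)
    p = hist / hist.sum()
    q1 = float((p * mu).sum())
    q0 = 1.0 - q1
    h = 0.0
    for q in (q0, q1):
        if q > 0:
            h -= q * np.log(q)   # Shannon entropy of the fuzzy class masses
    return h

def best_threshold(hist):
    """Exhaustive scan standing in for the paper's TLBO search."""
    return max(range(1, len(hist) - 1), key=lambda t: fuzzy_entropy(hist, t))
```

On a bimodal fruit/background histogram the maximiser lands between the two modes; a population-based search such as TLBO becomes important when several thresholds must be optimised jointly and a full scan is infeasible.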
The proposed scheme is evaluated on red apple, green apple, golden apple, guava, and orange fruit images. For comparison of the segmentation results, the Genetic Algorithm (GA), Honey-Bee Mating Optimization (HBMO), and Bacteria Foraging Optimization (BFO) approaches have been considered. From experimental and simulation results, it is observed that TLBO-FE performs more effectively than the existing approaches.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Harmandeep Singh Gill, Surender Kumar, Sudakshina Chakrabarti https://spast.org/techrep/article/view/2500 An Investigation of Various Versions of AODV Protocol for Discovering Routing Path and Eliminating Packet Loss 2021-10-14T04:55:27+00:00 Veena Trivedi abhishek14482@gmail.com Dr. Padmalaya Nayak ieeemtech@gmail.com <p><strong>Abstract</strong></p> <p>A Mobile Ad hoc Network (MANET) is a network made up of mobile nodes that communicate over radio. Each node is a self-contained computer that can be individually programmed, ordered, and controlled, and the nodes move at their own pace. Distant nodes are unable to communicate directly with one another due to their short propagation ranges. The topology of the network changes as nodes move, resulting in frequent route failures. The nodes of an ad hoc network are powered by batteries with a limited energy supply; when a node's battery dies, the network lifespan is shortened, resulting in network loss. Routing is therefore a critical component of MANETs, and an energy-efficient routing protocol is chosen to boost network efficiency. Among the many routing protocols used to transfer data packets in wireless ad hoc networks, the most important is the Ad hoc On-demand Distance Vector (AODV) Routing Protocol. The key issue with AODV is that connection errors occur as a result of unpredictable node movement.
When either the source node or an intermediate node moves during data packet transfer, the energy in the nodes is lost again due to the additional path exploration process, which occurs several times. This paper provides a literature review of various implementations of the AODV protocol with respect to routing path exploration and packet loss reduction [1-5].</p> <p>Reactive MANET protocols are on-demand routing protocols and are also used in large ad-hoc wireless networks. They are designed to minimize routing overhead: each node sends routing packets only when a link request arrives. The majority of on-demand routing protocols start with a route discovery process, in which packets are flooded through the network in order to find the right path to the target node. Reactive MANET routing protocols are suitable for networks with many nodes or infrequent data transfer. Ad hoc On-demand Distance Vector (AODV) routing and Dynamic Source Routing (DSR) are the most important reactive MANET protocols. Both protocols have been carefully assessed in the MANET literature and are being studied by the IETF MANET Working Group [6].</p> <p>This article provides an in-depth analysis of the basic AODV protocol along with its various modern variants. This study mainly considers packet loss and energy consumption of basic AODV and its versions, which will help in identifying the current progress of the AODV protocol in handling packet drop ratio and minimizing energy consumption.</p> 2021-10-15T00:00:00+00:00 Copyright (c) 2021 Veena Trivedi, Dr.
Padmalaya Nayak https://spast.org/techrep/article/view/1867 AI Based Covid Pneumonia Classifier using Machine Learning 2021-10-08T11:29:42+00:00 A Sowmiya sowmiya.eie@rmd.ac.in C Shilaja shilaja.research@gmail.com G Nalinashini gns.eie@rmd.ac.in N Padmavathi npi.eie@rmd.ac.in Mayakannan Selvaraju kannanarchieves@gmail.com <p>This paper offers a convolutional neural network model trained from scratch to categorize and detect the presence of pneumonia or COVID-19 in a set of chest X-ray pictures. Other methods rely on transfer learning approaches or traditional handcrafted techniques to achieve remarkable classification performance. In our work, however, we built a convolutional neural network model to extract features from a given chest X-ray image and classify it to determine whether a person is affected by pneumonia or COVID-19. This approach could help with the dependability and interpretability issues that come up frequently when dealing with medical images. It is difficult to obtain a large pneumonia dataset for this classification task, unlike other deep learning classification tasks with sufficient image repositories. As a result, we used several data augmentation algorithms to improve the CNN model's validation and classification accuracy, and we achieved remarkable validation accuracy.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 A Sowmiya, C Shilaja, G Nalinashini, N Padmavathi, Mayakannan Selvaraju https://spast.org/techrep/article/view/1026 Studies on leak detection in process pipelines using artificial neural networks/machine learning 2021-09-20T08:12:46+00:00 Ujwal Shreenag Meda ujwalshreenagm@rvce.edu.in Harshitha N harshithan.ch18@rvce.edu.in Vinayak Hulake vinayakhulake.ch18@rvce.edu.in Ashwin Rao Padubidri ashwinraop.ch18@rvce.edu.in <p>In process industries, fluids are commonly transported through pipelines. Leakages in pipelines are common and are mostly due to external factors [1].
These leakages can cause hazardous disasters and loss of lives if not monitored regularly, as witnessed by several gas leak accidents such as those in Bhopal and Visakhapatnam. It is an environmental, health, and economic issue to be addressed without fail. Safe transportation of fluids in a pipeline can be achieved by accurate leak detection and leak prevention. Identifying leaks with less human intervention, low false alarm rates, and accurate leak location identification is the need of the hour.</p> <p>Several conventional techniques are available to detect the presence of leaks in pipelines, ranging from manual investigation to improved imaging through satellite, and numerous methods based on various working principles have been reported. From the literature, conventional techniques for leak detection can be classified mainly into hardware-based, visual, and software-based methods. Hardware-based and visual methods are frequently employed in the process industries, but they have drawbacks. Therefore, researchers have focused more on software-based methods because of their simple and reliable operation. Among software-based methods, machine learning algorithms, a data-driven strategy for detection and localization of leaks, are becoming popular because of their learning capabilities. Machine learning algorithms such as Artificial Neural Networks (ANN), the Neuro-Fuzzy Approach, and Support Vector Machines (SVM) are used for leak detection in process pipelines [2-4].</p> <p>In this review, several recent conventional methods for the detection of leaks and machine learning-based leak detection methods are described.
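To make the data-driven strategy concrete, a minimal leak classifier can be trained on simulated inlet/outlet measurements. The synthetic features and the plain logistic-regression model below are a sketch only, not any of the cited ANN, neuro-fuzzy, or SVM systems:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic features per sample: [pressure drop along the pipe, in/out flow-rate imbalance]
n = 400
healthy = rng.normal([1.0, 0.0], [0.2, 0.05], size=(n, 2))  # leak-free operation
leaking = rng.normal([2.0, 0.6], [0.3, 0.10], size=(n, 2))  # leak: larger drop, flow mismatch
X = np.vstack([healthy, leaking])
y = np.array([0] * n + [1] * n)

# Plain logistic regression trained by gradient descent
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted leak probability
    g = p - y                                # gradient of the log-loss
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

accuracy = ((p > 0.5).astype(int) == y).mean()
```

In a real system the same pipeline applies: features derived from sensor streams at the pipe ends, a trained classifier, and an alarm threshold chosen to trade false alarms against missed leaks.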
An attempt is made to identify machine learning-based leak detection techniques along with the common methodology followed for building a leak detection system, as shown in fig. 1. In one recent work, leak diagnosis was carried out using combined ANN networks with a cascade-forward back-propagation model to locate and measure leaks from pressure and flow-rate measurements at the pipe ends [2]. The application of data science and machine learning algorithms for detecting leaks in pipelines in a process plant can therefore help in handling issues effectively and swiftly, minimizing wasted time and increasing the efficiency of the plant. Automation of leak detection based on machine learning models yields a fast, reliable, accurate, and economical solution with less human intervention. The future approach is to build a leak detection system using machine learning algorithms and apply it to real-time analysis in process industries, which has not yet been accomplished.</p> 2021-09-20T00:00:00+00:00 Copyright (c) 2021 Ujwal Shreenag Meda, Harshitha N, Vinayak Hulake, Ashwin Rao Padubidri https://spast.org/techrep/article/view/1065 Convolution Neural Network Based Bone Cancer Detection 2021-09-20T08:34:10+00:00 Sivakumar Rajagopal rsivakumar@vit.ac.in Kanimozhi S kanimozhi.s2019@vitstudent.ac.in Apala Chakrabarti apala.chakrabarti2018@vitstudent.ac.in Dimiter Georgiev Velev dgvelev@unwe.bg <p>Bone cancer is an uncommon type of cancer in which cells in the bone start to grow out of control [1-4]. It destroys normal bone tissue. In a survey [1], 52 out of 100 radiologists reported more than 10 cases of bone tumour per year. A benign tumour does not threaten life and will not spread to other body parts, whereas a malignant tumour can. Fig. 1-A depicts the 19 variants of bone cancer that occur in the human body.
Each type has unique characteristics and is seen in people of different age groups. According to Cancer Research UK [5], the survival rate for patients with bone cancer is 40%. Early detection of tumours can increase the chances of survival by allowing treatment at the initial stages of the cancer. This paper explores various techniques of medical image processing and deep learning and applies them to detect tumours and classify them as benign or malignant. Techniques used include image pre-processing using filtering methods, K-means segmentation, and edge detection to detect cancerous regions in Computed Tomography (CT) images for the parosteal osteosarcoma, enchondroma, and osteochondroma types of bone cancer. After segmentation of the tumour, classification of benign and cancerous cells is done with the help of a deep learning model based on a Convolution Neural Network (CNN) classifier. The accuracy of the model is given by Accuracy = (TP+TN)/(TP+FP+FN+TN), as seen in fig. 2. Table 1 depicts the prediction percentages of the confusion matrix. The accuracy of the developed model is 98.6%. The detected tumour areas are displayed using a Graphical User Interface (GUI) as shown in fig. 1 (B-C). This paper aims to give an overall idea of how image processing techniques and a deep learning-based CNN classifier can be used to detect and classify bone cancer at earlier stages to prevent complications and fatalities.</p> 2021-09-20T00:00:00+00:00 Copyright (c) 2021 Sivakumar Rajagopal, Kanimozhi S, Apala Chakrabarti, Dimiter Georgiev Velev https://spast.org/techrep/article/view/1336 A PRUDENT PROVOKE OF DATA ANALYTICS IN AUTOMOTIVE INDUSTRY-A SURVEY 2021-09-28T10:04:04+00:00 Samsudeen S Samaskani ss9614@srmist.edu.in <div class="page" title="Page 1"> <div class="layoutArea"> <div class="column"> <p>The internet plays a vital role in our daily lives. The rapid evolution of technology is the cause of drastic change all over the world.
Countless devices are now connected to the internet. Compared with a decade ago, people are using biometric wearables, advanced home appliances, and audio-visual equipment. The automobile industry has entered this space in recent years to meet its own requirements. Big data is becoming the key asset for the whole production and manufacturing cycle, as well as for the services it provides in the automotive industry. The opportunities of big data for automotive are enormous, and engineers can use data analytics tools to provide services to customers in the automotive field. The impact of big data on logistics and production also plays a vital role in the elevation of the automotive industry. This survey deals with various opportunities for big data in the automotive industry, such as connected cars, crash analytics, road safety, location and infrastructure analysis, traffic management, custom insurance, predictive analytics, mobility and connectivity, and autonomous driving, helping researchers to initiate work in any of these areas that aid services to customers in the automobile industry through data analytics technology.</p> </div> </div> </div> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 Samsudeen S Samaskani https://spast.org/techrep/article/view/2661 USE OF 3D CORONAL AND SAGITTAL IMAGES TO IMPROVE THE DIAGNOSIS OF BRAIN TUMOR 2021-10-18T06:34:18+00:00 Kunal Singh vtu11767@veltech.edu.in Dr. Shailendra Kumar Mishra shailendra@veltech.edu.in Praveen Kumar vtu14350@veltech.edu.in Raushan Kumar vtu14349@veltech.edu.in <p>A brain tumour is a cancerous or non-cancerous growth of abnormal brain cells, described as benign (adenoma) or malignant (pernicious). A benign tumour does not contain active cells, whereas active cells are present in a malignant cancer. Tumours are also classified as primary and metastatic brain cancer. In a primary brain tumour, the cell of origin is normally a brain cell.
However, in metastatic brain tumours, the tumour cells have spread from other parts of the body. The most critical type is glioma, which occurs in different grades classified as high-grade (HG) and low-grade (LG) tumours, namely glioblastoma multiforme and oligodendrogliomas or astrocytomas. A brain tumour cannot easily be detected by a doctor in its early stages, as the shape and size of the tumour are generally unknown. Classification of brain tumours by serologic analysis is not usually conducted before conclusive brain surgery. Brain tumours are normally predicted from Magnetic Resonance Imaging (MRI) images; however, this is time-consuming and costly. Nowadays many datasets covering several brain-tumour types, such as glioma, meningioma, and pituitary tumour, are available for training machine learning models. Conventional ML models such as logistic regression, SVM, CNN, and RNN can predict the location of tumours in the brain and create a tumour pattern mask, but existing models deal mainly with 2D image datasets. An optimal-contrast model takes original and reference images and produces a more visible image, while non-linear stretching boosts the textural information and compresses the level of local brightness in the images. One reported dataset consists of 3200 images of size 512*512 and achieves 96% accuracy; GoogLeNet classifies the different tumour categories with 99.57%, 99.78%, and 99.56% accuracy for meningioma, glioma, and pituitary tumours respectively [2], while the challenging task remains extracting the precise tumour structure present in 3D MRI images. A 3D model using the BraTS 2018 dataset achieves 80% accuracy [3], which is still low. In this paper, an effective Machine Learning (ML) model (3D U-net) has been developed that can generate a tumour pattern mask for any type of tumour present in the brain.
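A predicted tumour mask of this kind is normally scored against a ground-truth mask with overlap metrics; the Dice coefficient and voxel accuracy below are a minimal sketch of such an evaluation (the abstract does not specify its exact metric):

```python
import numpy as np

def dice(pred, truth):
    """Dice overlap between two binary masks (1 = tumour voxel)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def voxel_accuracy(pred, truth):
    """Fraction of voxels labelled correctly; inflated when tumours are small."""
    return (pred.astype(bool) == truth.astype(bool)).mean()
```

Dice is usually preferred over plain accuracy for segmentation, because the tumour occupies a small fraction of the volume and an all-background mask would already score high on voxel accuracy.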
The overall procedure is as follows: first, the BraTS dataset consisting of 3D MRI images is fed to the 3D U-net neural network, which generates a brain tumour mask. Finally, the model predicts the survival days of people affected by a brain tumour. The architecture of 3D U-net is similar to that of U-net: the analysis path is on the left side and the synthesis path on the right. In the analysis path, the 3D U-net consists of 3*3*3 convolutions with ReLU activations and 2*2*2 max pooling. As per our evaluation, the proposed model provides better accuracy than the conventional method; simulation results achieve 86% accuracy.</p> 2021-10-19T00:00:00+00:00 Copyright (c) 2021 Kunal Singh, Dr. Shailendra Kumar Mishra, Praveen Kumar, Raushan Kumar https://spast.org/techrep/article/view/562 Traffic Sign Recognition System under Improved R-CNN Model Trained With Artificially Generated Environment Conditions 2021-10-14T07:54:45+00:00 R.VIJAY vr4046@srmist.edu.in <p>Traffic signals are intended to maintain an orderly flow of traffic, let pedestrians and cars cross an intersection safely, and limit the likelihood of collisions between vehicles approaching crossings from opposite directions. Traffic sign recognition primarily involves the use of vehicle cameras to capture real-time road photos and then detect and identify traffic signs encountered on the road, providing precise information to the driving system. Traffic sign identification is typically based on the shape and colour of traffic signs, and traffic sign recognition is frequently done with classifiers such as convolutional neural networks (CNNs) and SVMs with discriminative characteristics. This work offers a Faster R-CNN-based cross-layer fusion multi-object detection and recognition system that collects additional characteristic information via the VGG16 (Visual Geometry Group) five-layer structure.
It puts this principle into practice by laterally embedding a 1x1 convolution kernel, max pooling, and deconvolution, as well as weighted balanced deconvolution, and uses a multi-class cross-entropy loss function with Soft-NMS to balance hard and easy samples. The purpose of traffic sign detection is to find the region of interest (ROI) where a traffic sign should be present and to verify the sign after a large-scale search for candidates inside a picture. To detect the ROI, researchers employ a variety of colour- and shape-based techniques. Existing datasets are restricted in the types of traffic signs and difficult conditions they contain, and because of the lack of metadata about these conditions it is impossible to analyse the effect of a single element when several climatic conditions change at the same time. To overcome the existing datasets' flaws, we take a different approach: an improved convolutional neural network (CNN) technique that automates the entire recognition process. We offer a number of enhancements that are tested on traffic sign detection and lead to improved performance. The method recognises 1000 traffic-sign categories included in a real-time image dataset, and findings are presented for some of the most difficult categories addressed in earlier research. We present a detailed investigation of a machine learning strategy for detecting signs with considerable intra-category variation, demonstrating error rates of less than 3% for the proposed method. Examining the average performance of a traffic sign recognition system employing the upgraded CNN architecture, we found that detection performance can drop dramatically under difficult conditions, which motivates enhancing precision and accuracy in difficult weather conditions such as snow, haze, rain, darkness, noise, and blur.
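Artificially generated conditions of this kind (noise, darkness, blur) can be simulated with elementary image transforms; the numpy functions below are a hedged sketch, not the generation pipeline used in the work:

```python
import numpy as np

def add_noise(img, sigma=10.0, rng=None):
    """Additive Gaussian sensor noise, clipped to the valid intensity range."""
    rng = rng or np.random.default_rng()
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255)

def darken(img, gain=0.4):
    """Simulate low-light conditions by scaling intensities down."""
    return img * gain

def box_blur(img, k=3):
    """Crude blur: average over a k x k neighbourhood (borders left unchanged)."""
    out = img.copy().astype(float)
    h, w = img.shape
    for i in range(k // 2, h - k // 2):
        for j in range(k // 2, w - k // 2):
            out[i, j] = img[i - k // 2:i + k // 2 + 1, j - k // 2:j + k // 2 + 1].mean()
    return out
```

Applying such transforms to clean training images yields labelled examples of each degradation in isolation, which is exactly the per-condition metadata the existing datasets lack.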
Our cross-domain analysis revealed the effect of the simulated difficult conditions on detector performance.</p> <p><img src="https://spast.org/public/site/images/editorchief/mceclip0.png"></p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 R.VIJAY https://spast.org/techrep/article/view/2111 Universal Dependency Treebank for Santali Language 2021-10-01T18:57:10+00:00 Satya Dash sdashfca@kiit.ac.in Sunil Sahoo sunilsahoobbsr77@gmail.com Brojo Kishore Mishra bkmishra@giet.edu Shantipriya Parida shantipriya.parida@idiap.ch Jatindra Nath Besra jatindra.nathbesra@gmail.com Atul Kr. Ojha shashwatup9k@gmail.com <p>Santhali is a low-resource Indian language that belongs to the Austroasiatic language group. Santals are the largest adivasi (indigenous) community in the Indian subcontinent, with a population of more than 10 million; they reside mostly in the Indian states of Jharkhand, Orissa, West Bengal, Assam and Bihar, and sparsely in Bangladesh and Nepal. With the rise of language technology for Indian languages, significant developments have been achieved in the major languages, but contributions towards research in lesser-known/low-resourced languages remain minimal. Most parsers and treebanks have been developed for the scheduled (official) languages; the non-scheduled and lesser-known languages still have a long way to go. In its endeavour to fill this gap, the present paper discusses the creation and development of a Santhali Universal Dependency (UD) treebank and parser. UD is acknowledged as an emerging framework for cross-linguistically consistent grammatical annotation. The primary aim of this project is to facilitate multilingual parser development. The system will also take into account cross-lingual learning and perform parsing research from the perspective of language typology. A major effort is currently underway to develop a large-scale treebank for Indian low-resource languages (ILRLs).
The lack of such a resource has been a major limiting factor in the development of good natural language tools and applications for ILRLs. Moreover, a rich, large-scale treebank can be an essential resource for linguistic investigations. This paper presents the first publicly available treebank of Santhali, a low-resource Indian language. The treebank contains approximately 500 tokens (50 sentences) in Santhali. All the selected sentences are manually annotated following the "Universal Dependency" guidelines. The morphological analysis of the Santhali treebank was performed using machine learning techniques. The annotated Santhali treebank will enrich Santhali language resources and will help in building language technology tools for cross-lingual learning and typological research. We also build a preliminary Santhali parser using a machine learning approach. Finally, the paper briefly discusses the linguistic analysis of the Santhali Universal Dependencies (UD) treebank.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Satya Dash, Sunil Sahoo, Brojo Kishore Mishra, Shantipriya Parida, Jatindra Nath Besra, Atul Kr. Ojha https://spast.org/techrep/article/view/2774 The Impact of Work-From-Home and Sustainability Concerns on Residential Electricity Consumptions During COVID-19 2021-10-17T18:07:13+00:00 Padma Priya R padmapriya.r@vit.ac.in <p>Around the world during the COVID-19 pandemic there was a common, indeed milestone, transition in employee working-style practices. It was a period in which employees of most businesses adopted and continued working in a new style known as Work-From-Home (WFH) [1-2], as opposed to commuting to their office premises. With the growing adoption of WFH patterns, the energy usage of smart devices such as laptops, monitors, desktop CPUs, and mobile phones has grown rapidly.
At the same time, countries around the world are planning to harness more electricity generation through <em>renewables</em>-based establishments, to act more responsibly towards their sustainability goals. Before the WFH era, the main contributors considered for residential electricity load prediction in the literature were electrical devices: washing machines, saunas, air conditioners, dishwashers, and TVs. The usage of smart devices such as laptops, monitors, desktop CPUs, and mobile phones was largely ignored. With the recent rise in WFH working patterns, there is a pressing need to consider the electricity consumed by devices connected to the Internet. In this paper we predict the total household energy consumption load and identify anomalous behaviours in the predicted load patterns of households from the perspective of internet-based smart devices. In our proposed architecture, we adopt federated learning instead of a centralized learning model. The proposed federated learning model consists of two phases: 1) a clustering phase, and 2) a federated learning and network-usage device electricity-load prediction phase. In the <em>first phase</em>, smart meters are clustered based on the energy consumed by network-connected residential devices, particularly mobile phones, home-office devices (monitors, laptops, tablets, etc.) and security (surveillance, etc.) devices. The features collected from each smart meter (representing an individual home) are transferred to an RNN-based regression model running on the fog devices (a gateway device in the apartment building). During the <em>second phase</em>, these residential houses are aggregated separately for each cluster to create cluster-specific models.
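The cluster-then-federate idea can be shown in miniature: each home fits a local model on its own readings, and only the model weights are shared and averaged per cluster. The linear model and synthetic usage data below are deliberately simplified stand-ins for the RNN-based federated phase:

```python
import numpy as np

def local_fit(X, y):
    """Least-squares fit on one household's private data (stays on-device)."""
    A = np.c_[X, np.ones(len(X))]               # add a bias column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def federated_average(models):
    """Aggregate per-home weight vectors into a single cluster model."""
    return np.mean(models, axis=0)

rng = np.random.default_rng(1)
true_w = np.array([0.5, 2.0])                   # kWh per device-hour, base load (made-up numbers)
local_models = []
for _ in range(5):                              # five homes in one cluster
    hours = rng.uniform(0, 8, size=(30, 1))     # daily hours of laptop/monitor use
    load = hours[:, 0] * true_w[0] + true_w[1] + rng.normal(0, 0.05, 30)
    local_models.append(local_fit(hours, load))

cluster_model = federated_average(local_models)  # close to true_w without pooling raw data
```

Only the fitted coefficients leave each home, which is the privacy argument for federating rather than centralising the smart-meter data.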
Further, the proposed RNN-based regression model predicts internet-oriented energy consumption for the clustered smart meters. In addition, to the best of our knowledge this paper is the first to identify the energy deficit that may be incurred when renewables-based generation (both solar and wind turbines) supplies homes in which network-based devices are powered for prolonged periods, on top of the other household devices in residential buildings, and the first to consider the energy consumption of residential network devices. The aim is to understand whether a given amount of renewables-based electricity production established in residential communities will satisfy the load demands during a rise in the WFH working pattern, and if there is a deficit, how much more energy must be met through renewables. We also envision a fog-distributed network paradigm in which the smart meters, acting as edge devices, communicate with the fog network where the federation phase takes place. Thus, this paper aims to give countries insight into the increased residential power demands to be considered when planning and constructing renewable energy generation systems, should WFH continue to be a plausible working style in future.</p> <p>In this paper, the “HUE: The Hourly Usage of Energy Dataset for Buildings in British Columbia” [3] dataset has been used to train the model and predict energy consumption patterns. The dataset contains hourly energy usage data along with housing attributes for twenty-two households in British Columbia, Canada. We have also used the meteorological data for British Columbia from the National Solar Radiation Database (NSRDB) [4]. Figure 1 specifies our system architecture for the two phases to be carried out in this proposed work.
RQ1, RQ2 and RQ3 in Figure 1 represent the research questions that will be answered in this work: network-connected device load prediction, anomaly detection, and green-energy deficit detection respectively.</p> <p><img src="https://spast.org/public/site/images/padmapriya_r/mceclip0.png"></p> 2021-10-21T00:00:00+00:00 Copyright (c) 2021 Padma Priya R https://spast.org/techrep/article/view/1607 Seismic Horizon estimation based on deep learning technique 2021-09-29T20:15:10+00:00 Vineela Chandra Dodda vineelachandra_dodda@srmap.edu.in Lakshmi Kuruguntla lakshmi_kuruguntla@srmap.edu.in Karthikeyan Elumalai karthikeyan.e@srmap.edu.in <p>Seismic horizon estimation from seismic reflection data is the basic requirement for structural and stratigraphic modeling of reservoirs. Until now, manual interpretation and semi-automated techniques have been used to estimate seismic horizons from seismic data, but these techniques take more time for interpretation and need a human expert for analysis. To overcome those limitations, we propose a novel method to estimate the seismic horizon automatically based on an autoencoder neural network, an unsupervised deep learning technique. The training data contain time-segmented seismic horizons with unique symbols assigned as labels. The network is trained with the target values equal to the input values. However, merely learning to copy input to output does not yield accurate, meaningful features from the seismic data; placing a constraint such as limiting the number of hidden units and imposing sparsity on the hidden neurons helps the network learn features from the data. An autoencoder neural network consists of an encoder part and a decoder part with a series of hidden layers.
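This encoder-decoder idea can be sketched as a tiny linear autoencoder trained to reproduce its input. The bottleneck below stands in for the capacity/sparsity constraint discussed here, and the data is a synthetic placeholder for seismic trace windows:

```python
import numpy as np

def train_autoencoder(X, n_hidden=3, lr=0.01, steps=500, seed=0):
    """One-hidden-layer linear autoencoder trained to reproduce its input.

    The bottleneck (n_hidden < input dimension) plays the role of the
    capacity constraint; the explicit sparsity penalty is omitted here.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W1 = rng.normal(scale=0.1, size=(d, n_hidden))  # encoder weights
    W2 = rng.normal(scale=0.1, size=(n_hidden, d))  # decoder weights
    losses = []
    for _ in range(steps):
        H = X @ W1                      # encode
        err = H @ W2 - X                # reconstruction error
        losses.append(float((err ** 2).mean()))
        # gradient descent on the mean squared reconstruction error
        W1 -= lr * (X.T @ (err @ W2.T)) / len(X)
        W2 -= lr * (H.T @ err) / len(X)
    return W1, W2, losses

# Toy stand-in for trace windows: 8-dimensional samples with 3 latent factors
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 8))
W1, W2, losses = train_autoencoder(X)
```

Because the data has only three latent factors, a three-unit bottleneck suffices for near-perfect reconstruction; on real seismic traces the hidden size and sparsity penalty govern which features the encoder keeps.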
If sparsity is imposed on the network, the encoder part learns sparse features and suppresses the components associated with low-amplitude neurons, while the decoder part reconstructs the encoded information from the encoder. In addition, we use a deep structured neural network with dense layers, i.e. more than one fully connected layer, to learn more complex features from the seismic data. Once the training process is finished, the well-trained network hierarchy is applied to the test data: the trained network performs trace segmentation and label assignment trace by trace, and we finally extract the boundaries of the predicted labels as the interpreted horizons. After training, the network is tested on test data and model performance is evaluated by the mean squared error (MSE) between the estimated and true values.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Vineela Chandra Dodda, Mrs. Lakshmi Kuruguntla, Dr. Karthikeyan Elumalai https://spast.org/techrep/article/view/223 Application of Hyperparameter Algorithms using Big Data Platforms 2021-09-08T16:23:27+00:00 Venkata daya sagar Ketaraju sagar.tadepalli@kluniversity.in <p>Algorithms in each step of a data analytics application include hyperparameters that are independent of the data itself. The choice of hyperparameters is one of the most time-consuming parts of data analytics, since it cannot be performed precisely without heuristic or empirical methods. This paper implements the hyperparameter selection algorithms Simulated Annealing, Bayesian Search, Tree-structured Parzen Estimators, Differential Evolution, and Basin Hopping on Spark, a distributed big data processing platform. The performance is measured by comparing the results of each algorithm with those of the Random Search algorithm.
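The Random Search baseline used for comparison is simple to state: sample hyperparameter settings uniformly and keep the best. In the sketch below the objective is a toy stand-in for a real model-validation score, and the search runs serially rather than on Spark:

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Sample hyperparameters uniformly from `space`, keep the best setting."""
    rng = random.Random(seed)
    best, best_score = None, float("inf")
    for _ in range(n_trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(params)          # e.g. validation loss of a fitted model
        if score < best_score:
            best, best_score = params, score
    return best, best_score

# Toy objective standing in for a cross-validated model loss (hypothetical optimum)
space = {"lr": (1e-4, 1e-1), "reg": (0.0, 1.0)}
obj = lambda p: (p["lr"] - 0.01) ** 2 + (p["reg"] - 0.3) ** 2
best, score = random_search(obj, space, n_trials=200)
```

Because each trial is independent, random search parallelises trivially, which is what makes it a natural baseline for the Spark-based comparison described here.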
We have also tested the scalability and parallelizability of these algorithms.</p> 2021-09-20T00:00:00+00:00 Copyright (c) 2021 Venkata daya sagar Ketaraju https://spast.org/techrep/article/view/262 Devanagari Characters Strings Extraction using Morphology Method of Images 2021-09-10T18:07:52+00:00 Gaurav Goel goyals24@gmail.com <p>The extraction of alphabetic characters from images has gained significant prominence in applications and fields such as automatic text detection, handwriting recognition, and cyber analysis. Although a large volume of work exists on this topic for foreign languages, research focusing on Devanagari script characters has been rather limited. Text extraction from images is challenging because the images contain both the characters themselves and significant clutter in the background, viz. noise, photographs, textures, gradients, or paintings. In this paper, we employ a text extraction method based on the highly popular connected-components technique, using morphological operations to extract Devanagari text from natural scene images. The method employed is robust to font size and colour. The outcomes of the algorithm demonstrate that it extracts image text very effectively.</p> 2021-09-10T00:00:00+00:00 Copyright (c) 2021 SHASHI KANT GUPTA, Gaurav Goel https://spast.org/techrep/article/view/930 Data mining analysis for Precision Agriculture: A Comprehensive Survey 2021-09-16T12:09:03+00:00 Dr.M.A.Jabbar jabbar.meerja@gmail.com Sonia Sharma sonia@hgcjagadhri.com <p>Agriculture remains a vital sector for most countries. Growth and development in the agriculture sector are essential not only for food security but also for employment generation.
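The connected-components technique at the heart of the Devanagari extraction paper above can be illustrated on a toy binary image. This is a simplified sketch (4-connectivity, plain Python, no morphological pre-processing), not the paper's full pipeline.

```python
from collections import deque

def connected_components(grid):
    """Label the 4-connected components of 1-pixels in a binary grid (list of lists)."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and labels[r][c] == 0:
                current += 1                      # start a new component
                q = deque([(r, c)])
                labels[r][c] = current
                while q:                          # breadth-first flood fill
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return current, labels

# Two separate blobs, standing in for two candidate character regions
img = [[1, 1, 0, 0],
       [1, 0, 0, 1],
       [0, 0, 1, 1]]
n, lab = connected_components(img)
print(n)  # 2
```

In the actual method, each labelled component would then be filtered by size, aspect ratio, and colour consistency to keep only character-like regions.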
However, the growth of agriculture depends on various factors, and investment contributes most.</p> <p>In the past, farmers applied traditional methods of farming involving intensive use of indigenous knowledge, traditional tools like axes, hoes, and sticks, natural resources, organic fertilizer, and the farmers' cultural beliefs. Traditional farming uses slash-and-burn and shifting-cultivation methods with little accountability, which harms the environment through nutrient depletion, deforestation, and soil erosion.</p> <p>Therefore, it is extremely important to switch from traditional agricultural methods to modern agriculture using the latest information technology, considering two important factors. One helps farmers by providing historical crop yield records together with a forecast, reducing the risk of loss; the other helps the government design crop insurance policies and supply chain policies so that farmers can make quick decisions.</p> <p>Easier data extraction from electronic sources and transfer to a secure electronic documentation system will reduce production costs and improve tracking of yield and market price.</p> <p>&nbsp;Data mining helps in the detection, classification, and prediction of crop diseases, yield prediction, input management (planning of irrigation and pesticides), and fertilizer suggestion. This paper discusses various applications of data mining in the agriculture sector. Applying these emerging technologies in the agriculture sector will help farmers increase crop yields.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Dr.M.A.Jabbar, Sonia Sharma https://spast.org/techrep/article/view/299 Prediction of Myopia Progression Based on Artificial Intelligence Model 2021-09-11T18:56:21+00:00 Sivakumar Rajagopal rsivakumar@vit.ac.in Subhalaxmi Swain subhalaxmi.swain2019@vitstudent.ac.in <p>Myopia is a leading socio-economic issue that threatens mostly the pediatric population.
This issue can lead to visual impairment if prediction and prevention are not done properly. According to the World Health Organization (WHO), around 27% of the world's population suffers from myopia. In some nations, the prevalence reaches a maximum of 52% of the population. Among the pediatric population, up to 80%-90% suffer from different stages of myopia [1-3]. The increasing rate of myopia has several distinct reasons behind it. As children are more prone to this visual error, time spent on electronic devices (TV, smartphones) and on studying and reading can be one important cause of myopia. Genetic and environmental factors can also be considered among the causes. Research in the field of ophthalmology shows that structural changes in the visual components can also lead to myopia: if the cornea is slightly more curved than in a normal eye, light rays are focused incorrectly, which leads to myopia [4-5]. This paper proposes predicting myopia progression with Artificial Intelligence (AI). It gives an overview of myopia, which has become a leading health concern and socio-economic issue posing a big threat to the pediatric population, and of the genetic and environmental factors in myopia progression. Clinical and environmental markers are used for predicting this visually threatening disease. The paper covers the involvement of corneal biomechanics, different modes of managing myopia and, most importantly, different approaches for predicting myopia, together with efficient model selection for handling the huge volumes of data from EMR and predicting the disease accurately with the help of AI [Fig. 1]. We conclude that myopia is a very serious socio-economic issue that has recently become a leading public health problem, mostly threatening the pediatric population. Given today's huge volumes of medical records, AI is a boon to medical science.
With the help of an efficient AI model, it is possible to predict myopia with greater efficacy than manual prediction.&nbsp;&nbsp;&nbsp;</p> 2021-09-14T00:00:00+00:00 Copyright (c) 2021 Sivakumar Rajagopal, Subhalaxmi Swain https://spast.org/techrep/article/view/967 Novel Authentication Mechanism Using Blockchain 2021-09-17T12:43:44+00:00 N. JEYANTHI njeyanthi@vit.ac.in Shreya Chatterjee shreya.chatterjee2018@vitstudent.ac.in Emmadi Divya Srujana emmadidivya.srujana2018@vitstudent.ac.in R. Thandeeswaran rthandeeswaran@vit.ac.in <p>The World Wide Web lacks an overall identity platform. The problem of people creating fake accounts and performing illegal and unethical activities has been evident ever since its origin. Given the ease with which fake accounts can be created on social media platforms, it becomes extremely difficult to find the culprit. Currently, spam and fake account creation are prevented using email and mobile verification: an email or an SMS containing a one-time password (OTP) is sent to the user’s email address or mobile number, and the OTP is used to verify the user's account and identity. However, there is no reliable way to link an email address to one particular person; one user can have multiple email addresses and mobile numbers, causing the whole system to fail. The ease with which temporary emails are created proves to be another bottleneck in the system. This can be solved using Unique Identification Number verification on an Ethereum-based blockchain platform. The user registers on the network using his Aadhaar number, which is then verified by the admin. Once verified, the user can use his account address to sign up, authenticate, and log in to various social media platforms. Unless the user’s account is verified and his social media apps are authenticated, he cannot sign up on a new app, and if his current application is not authenticated, he cannot sign in to that either.</p> 2021-09-18T00:00:00+00:00 Copyright (c) 2021 N.
JEYANTHI, Shreya Chatterjee, Emmadi Divya Srujana, R. Thandeeswaran https://spast.org/techrep/article/view/2644 Z-score normalized features with maximum distance measure based k-NN automated blood cancer diagnosis system 2021-10-17T15:20:55+00:00 Umarani P Ponnusamy umarani.magesh@gmail.com Viswanathan P viswatry2003@gmail.com <p>Leukemia is a cancer of the blood-forming tissues characterized by abnormal growth of white blood cells. Early prediction of cancer decreases the death rate and increases the survival rate. Despite new technological advancements in both medicine and research, accurate diagnosis of this deadly disease remains a challenging task.&nbsp; Combinations of two or more features of the nucleus and cytoplasm, such as geometric, statistical, color, and texture features, are mainly used as training and testing features for predicting leukemia with the k-Nearest Neighbor (k-NN) algorithm[1]. The k-NN approach depends primarily on the k-value and on the distance measure used to compute the similarity between the training and testing features[2]. The most commonly used distance metric is the Euclidean distance, which measures the minimum distance between the features; however, it faces difficulties on larger datasets. Even though different combinations of features with varying distance measures have been used for prediction[4], there has been no significant improvement in accuracy. Moreover, the attributes of the leukemia dataset lie in different ranges, which may lead to misclassification[5].&nbsp; To overcome these problems, an automatic diagnostic system is proposed to distinguish cancerous from non-cancerous cells based on the characteristics of the WBC nucleus and cytoplasm. The proposed methodology comprises a k-NN algorithm that utilizes the Chebyshev distance measure with z-score normalization and hyperparameter optimization by the Grid Search cross-validation method.
Z-score normalization rescales the data to zero mean and unit standard deviation, improving the comparability of features and avoiding misclassification.&nbsp; k-NN with the Chebyshev distance measure is applied to the statistical and geometric features of the nucleus and cytoplasm of the white blood cells to improve accuracy and reduce the computational burden. The dataset of statistical and geometric features of the WBC nucleus and cytoplasm consists of twelve attributes that have been utilized for prediction and evaluation. The Chebyshev distance computes the maximum coordinate difference between feature vectors, which leads to higher accuracy here than the other distance measures. The model is then optimized by grid-search 5-fold cross-validation, which identifies the best parameters for the k-nearest neighbor model, including an optimal value of k. The proposed approach is experimentally evaluated with various distance measures, namely Euclidean, Minkowski, Cosine, City-block (Manhattan), Correlation, and Chebyshev, with k varying from 1 to 10. Without cross-validation, Euclidean distance achieves 93.75% classification accuracy, Minkowski 93.75%, Cosine 95.83%, City-block 91.67%, Correlation 95.83%, and Chebyshev 97.92%; Chebyshev distance thus achieves the highest accuracy among these measures. With cross-validation, accuracy increases by 6% for Euclidean, 2% for Minkowski, and 1% for Chebyshev distance. Across different train-test proportions (50:50 and 70:30), Chebyshev distance again achieved higher accuracy than the other metrics. The final classifier evaluation (precision 100%, recall 97%, F1-score 98%, and accuracy 98%) identifies k-NN as the best model.
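The core of the pipeline described in this abstract, z-score normalization followed by Chebyshev-distance k-NN, can be sketched as follows. The two-feature toy data and the choice k=3 are purely illustrative, not the paper's twelve-attribute WBC dataset.

```python
import numpy as np

def zscore(X, mean, std):
    """Rescale features to zero mean and unit standard deviation."""
    return (X - mean) / std

def knn_predict(X_train, y_train, x, k=3):
    """k-NN with Chebyshev distance: the maximum absolute coordinate difference."""
    d = np.max(np.abs(X_train - x), axis=1)   # Chebyshev distance to each training sample
    nearest = np.argsort(d)[:k]               # indices of the k closest samples
    votes = y_train[nearest]
    return int(np.bincount(votes).argmax())   # majority vote among the neighbours

# Toy 2-feature data: class 0 near the origin, class 1 in a distant cluster
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.0, 0.3],
              [5.0, 5.1], [5.2, 4.9], [4.8, 5.0]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

mean, std = X.mean(axis=0), X.std(axis=0)
Xn = zscore(X, mean, std)                     # fit the normalization on the training data

query = zscore(np.array([4.9, 5.2]), mean, std)  # apply the same transform to new data
print(knn_predict(Xn, y, query, k=3))  # -> 1
```

Note that the query is normalized with the training mean and standard deviation, exactly the point of z-scoring before distance computation: no single raw feature scale dominates the maximum.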
This overall system facilitates the early diagnosis of blood cancers.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Umarani P Ponnusamy, Viswanathan P https://spast.org/techrep/article/view/1473 REVERSE GUARD 2021-09-29T12:23:33+00:00 MEENALAKSHMI S vaishumeena93@gmail.com <p>The objective of this project is to avoid the problems faced by a driver while reversing. The main objective is to reduce the manual work done by conductors to reverse trucks or other heavy vehicles, minimizing the chances of accidents or dents. The proposed solution uses a proximity sensor, which detects the presence of an object within a specified range of approximately 30 cm without any physical contact, and therefore causes no damage to the vehicles. If an object is detected within range, a light together with an alarm sound warns the driver to stop the vehicle.</p> <p>&nbsp;The sensors have high reliability and a long functional life because of the absence of mechanical parts. An Arduino board connects the alarm and the sensor. The sensor converts information about an object into an electrical signal. The operating range may be limited, but the sensor is robust and resistant to harsh environments. This will be most helpful for drivers of heavy vehicles. The proximity sensor is fixed at the back end of the vehicle, and the light alarms are visible to the driver at the front. The project can be implemented at reasonable cost.
The paper proposes an approach that reduces road accidents and guides drivers to reverse heavy vehicles without physical contact or third-party guidance.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 MEENALAKSHMI S https://spast.org/techrep/article/view/2163 Exploring Sustainable and affordable Cancer Care through Prediction of Disease using Artificial Intelligence 2021-10-01T16:45:35+00:00 Mausumi Goswami mausumi.goswami@christuniversity.in Saurav Thakur saurav.kumar@christuniversity.in <p><strong>Abstract </strong></p> <p>In recent decades, AI and ML have become a major part of developing and maintaining healthcare systems, providing massive help to healthcare workers. AI and ML help healthcare workers make better decisions, and in some practical areas, such as radiology, they may take the place of human decision-making. They can help gather medical knowledge from journals, textbooks, or clinics, reducing the time needed for study and research. AI and ML help in the early diagnosis of disease based on patient data and even help prevent disease. Breast cancer is the most frequent category, with an estimated 2,38,908 cases by 2025, followed by lung cancer (1,11,328) and mouth cancer (90,060). These statistics have triggered this research. Breast cancer is found in one woman among every eight. A few factors that improve the odds against this type of cancer are the following: creating awareness about nursing, minimizing hormone therapy, managing and maintaining a healthy body weight, and limiting alcohol (a daily intake of one drink is estimated to increase the risk factor by 30%, and the effect is dose-dependent: more alcohol consumption leads to a higher risk of breast cancer). In addition, 3 to 4 days of exercise a week or moderate-intensity physical activity such as a power walk can significantly reduce the risk of breast cancer, a lower intake of saturated fat can lower the risk, and a higher intake of antioxidant foods can lower the risk of breast cancer among women. This research investigates the use of AI-ML to predict the disease. Future work will focus on transfer learning and other AI-ML models to help society and the mothers of nations fight the increasing spread of cancers.</p> <p>Artificial Intelligence and Machine Learning are emerging fields that have proven very promising in the area of health care. In this work, cancer is considered in order to find avenues for applying AI-ML techniques using the Support Vector Machine algorithm. It is estimated that the number of cancer patients in India is 13.9 lakhs and will rise to 15.7 lakhs by 2025. As per the reports of the “National Centre for Disease Informatics and Research”, the estimated count among men is 6,79,421 in 2020 and is expected to reach 7,63,575 in 2025; the count among women is 7,12,758, expected to reach 8,06,218 by 2025. Aizawl district in India has the highest cancer incidence rate among males: per 1,00,000 population, it is 269.4 in Aizawl versus 39.5 in the Osmanabad and Beed districts. The reasons for the highest value occurring in Aizawl district could be investigated. Among females, Papumpare district in Arunachal Pradesh has the highest rate; estimates say the cancer incidence rate among females varies from 219.8 to 49.4 per 1,00,000 population, with Osmanabad recording the lowest value of 49.4. An ML algorithm is used to predict the chances of having the disease. Research could help in the direction of early prediction using a large number of features and a large dataset. Noninvasive techniques could also be helpful: exposing patients to harmful radiation can be avoided by using machine learning algorithms to predict breast cancer.&nbsp; The appropriate usage of a few algorithms is reviewed in this research, which uses machine learning algorithms to detect and diagnose the tumor non-invasively with high accuracy. &nbsp;</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Mausumi Goswami, Saurav Thakur https://spast.org/techrep/article/view/1325 Personalizing Customer Experience by Implementation of Hybrid Recommendation System 2021-09-29T09:42:33+00:00 Sneha Bohra snehab30@gmail.com Mahip Bartere mahip.bartere@gmail.com Shankar Amalraj shankar.amalraj@ghru.edu.in Martin Sagayam martinsagayam@karunya.edu <p>With the rapid use of the internet, a huge amount of data is generated over the network with every passing second, while the user demands information relevant to his personal search.
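The cancer-prediction abstract above names the Support Vector Machine algorithm. A minimal linear SVM trained by sub-gradient descent on the regularized hinge loss can be sketched as follows; the two-dimensional toy clusters are invented for illustration and stand in for the clinical feature vectors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-D data, labels in {-1, +1} as SVMs expect: two well-separated clusters
X = np.vstack([rng.normal(loc=-2.0, size=(50, 2)),
               rng.normal(loc=+2.0, size=(50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

w, b = np.zeros(2), 0.0
lam, lr = 0.01, 0.1

# Sub-gradient descent on the regularized hinge loss:
#   L(w, b) = lam/2 * ||w||^2 + (1/N) * sum_i max(0, 1 - y_i * (w.x_i + b))
for _ in range(200):
    margins = y * (X @ w + b)
    mask = margins < 1                       # samples violating the margin
    gw, gb = lam * w, 0.0
    if mask.any():                           # hinge contributes only for violators
        gw = gw - (y[mask, None] * X[mask]).sum(axis=0) / len(X)
        gb = -y[mask].sum() / len(X)
    w -= lr * gw
    b -= lr * gb

acc = float((np.sign(X @ w + b) == y).mean())
print(acc)  # training accuracy on the separable toy data
```

A practical system would of course use a tuned library implementation with kernels and cross-validation; the sketch only shows the optimization idea behind the classifier named in the abstract.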
The processing of such huge data is a challenging task. To serve the user with the specific information he requires, there is a need for an information retrieval mechanism that can process this large volume of data. A recommendation system is such a technology: it retrieves information to improve users' access, recommending items relevant to the user's explicitly expressed behaviour and preferences. The recommendation algorithm analyses the huge dataset and aims to recommend accurate content to the user. Several recommendation systems are in use today, popular examples being Netflix, YouTube, Tinder, and Amazon. This article discusses various types of recommendation systems, the issues they face, and use cases of widely used recommendation engines and their potential benefits. The work introduced in this paper integrates a domain-specific and an item-based recommendation system. The proposed approach is evaluated on the Amazon Product Dataset, and performance is measured with the Precision and Recall metrics. Experimental results show that the proposed approach performs well compared to existing methods.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Sneha Bohra, Dr. Mahip Bartere, Dr.Shankar Amalraj, Dr.Martin Sagayam https://spast.org/techrep/article/view/2282 Math Accessibility for Blind People in Society using Machine Learning 2021-10-05T11:05:49+00:00 sagar shinde sagar.shinde5736@gmail.com <p>Math plays a crucial role in every sector of society. It is sometimes very difficult to recognize math equations and symbols due to variations in writing, changes in stroke, touching symbols, and more. Blind people achieve very low success in math recognition compared to character and digit recognition.
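Item-based recommendation, one half of the hybrid approach described in the personalization abstract above, can be sketched with a cosine-similarity item-item matrix. The tiny ratings matrix here is invented for illustration, not the Amazon Product Dataset used in the paper.

```python
import numpy as np

# Rows = users, columns = items; 0 means "not rated"
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

def cosine_sim(A):
    """Pairwise cosine similarity between the columns (items) of A."""
    norms = np.linalg.norm(A, axis=0)
    return (A.T @ A) / np.outer(norms, norms)

S = cosine_sim(R)

def recommend(user, R, S):
    """Score unrated items by a similarity-weighted sum of the user's ratings."""
    rated = R[user] > 0
    scores = S[:, rated] @ R[user, rated]
    scores[rated] = -np.inf          # never re-recommend items already rated
    return int(np.argmax(scores))

print(recommend(0, R, S))  # item 2 is the only unrated item for user 0
```

The domain-specific half of the hybrid would then re-rank or filter these candidates using category knowledge, which is where the integration described in the paper comes in.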
There is a need to develop a blind-math application in which various math equations and symbols are recognized. The proposed system uses a machine learning approach to identify the various math equations and symbols by extracting statistical and complex features, with well-known classifiers, viz. support vector machine, neural network, and k-nearest neighbor. The confusion matrix and receiver operating characteristic (ROC) curves measure the accuracy and efficiency of the proposed system. The math documents are scanned and recognized, and finally a text-to-speech converter reads out the contents of the math documents for blind people. The proposed system will be helpful for blind-math applications without straining users' eyes to recognize the content, so health can be maintained while documents are read. The system can be treated as a smart system for society, focused on the problems associated with blind people. The dataset needed to implement the proposed system can be generated by collecting handwritten math equations and symbols from various age groups in society. The implemented system can be used by blind people to read math documents, read the digits on bank cheques, calculate currency values, and so on. It can be treated as a smart-society application enabled by modern technology.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 sagar shinde https://spast.org/techrep/article/view/2939 An INTEGRATING ADVANCED STATISTICS TO PREDICT THE OUTCOME OF NATIONAL BASKETBALL ASSOCIATION GAMES WITH MACHINE LEARNING 2021-10-26T13:47:25+00:00 SUNANDA DAS das.sunanda2012@gmail.com Rishab Suresh sachinsudhir24@gmail.com Sachin S Saligram sureshrishab6@gmail.com Nakul J Krishnan nakul11999@gmail.com Somya Vashisht somyav45@gmail.com <p>Sports statistics and analysis have been propelled by sports fans through the ages.
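The math-accessibility abstract above evaluates its classifiers with a confusion matrix; computing one from predictions takes only a few lines. The three symbol classes and the label vectors below are invented for illustration.

```python
def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true class, columns = predicted class."""
    M = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        M[t][p] += 1
    return M

# Toy 3-class example, e.g. three math-symbol classes
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
M = confusion_matrix(y_true, y_pred, 3)
accuracy = sum(M[i][i] for i in range(3)) / len(y_true)
print(M)         # [[1, 1, 0], [0, 2, 0], [1, 0, 1]]
print(accuracy)  # 4 of 6 correct
```

Off-diagonal cells show exactly which symbol classes are confused with which, which is the information a per-class error analysis of touching or similar symbols needs.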
What started as a method of simply jotting down shots and other details about teams and players on a sheet of paper has evolved into a far more complex and technologically advanced system of statistical encapsulation. Team management, ticket pricing, and betting have become dependent on these statistics; as a result, a simple game has been elevated to a whole new level. Due to the plethora of available data and its computerization, the demand for sports advice and evaluation has drastically increased, and the National Basketball Association (NBA) has acclimatized to these standards. In this study, the win-loss percentage for a hybrid basketball game scheme was calculated using seven designed features: Elo ratings, power rankings, defensive +/-, offensive +/-, player ratings, box +/-, and basic stats (ppg, apg, rpg, bpg). The study takes three NBA seasons into account: 2018-19, 2019-20, and 2020-21. Because of Covid-19, the regular 2019-20 season was interrupted midway and is considered an anomaly; the NBA devised a tournament with teams selected through statistical analysis that played the playoffs directly to determine that season's winner. This was followed by an injury-riddled 2020-21 season (due to a shortened offseason after the 2020 season, players had less rest, so the number of injuries went up and teams did not perform the way they were expected to). Therefore, using these three seasons of NBA data was deduced to be the right method, which can help build an accurate model for the season to follow.
The final score of the hybrid model was predicted by multivariate regression, XGBoost, stochastic gradient boosting, random forest, k-nearest neighbour, and Extreme Learning Machine. The results of this study aid future researchers and statisticians in pursuing newer and better models to broaden the scope of sports analysis for sports moguls and fans alike.</p> <p>In initial results from a previous study, the variables were similar, although from earlier years and using different models (SVM, Random Forest, and Gradient Boosting); the accuracies came out to be 0.50, 0.65, and 0.79 respectively. The aim of this experiment is to achieve better or similar rates using the selected time frame in order to predict the current season.</p> 2021-10-26T00:00:00+00:00 Copyright (c) 2021 SUNANDA DAS, Rishab Suresh, Sachin S Saligram, Nakul J Krishnan, Somya Vashisht https://spast.org/techrep/article/view/1669 The Fetal Distress Classification using Deep CNN 2021-09-30T07:09:48+00:00 Rutuja Jadhav rutujaj96@gmail.com <p>Fetal distress and hypoxia (oxygen deprivation) is considered a serious condition and one of the main factors leading to caesarean section in obstetrics and gynecology departments. It is considered the third most common cause of death in newborn babies. Fetal distress occurs in about 1 in 20 pregnancies. Many fetuses that experience some form of hypoxia face serious risks, such as damage to the cells of the central nervous system that may lead to life-long disability (cerebral palsy) or even death. Continuous labor monitoring is essential to observe fetal wellbeing during labor. Many studies have used data from fetal surveillance obtained by monitoring the fetal heart rate with cardiotocography, which has succeeded traditional methods of fetal monitoring. To detect such fetal distress, a CNN model is implemented that detects whether the baby is receiving an adequate amount of oxygen or not, and results are observed with both a CNN and the LeNet-5 model.
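Elo ratings, the first of the seven features listed in the NBA study above, follow a standard update rule with a logistic expected-score formula. The K-factor of 20 below is an arbitrary illustrative choice, not necessarily the one used in the study.

```python
def expected_score(r_a, r_b):
    """Probability that team A beats team B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a, r_b, score_a, k=20):
    """Return updated ratings after a game; score_a is 1 for an A win, 0 for a loss."""
    e_a = expected_score(r_a, r_b)
    r_a_new = r_a + k * (score_a - e_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - e_a))
    return r_a_new, r_b_new

# Underdog (1400) upsets the favourite (1600): a large rating swing in both directions
a, b = elo_update(1400, 1600, 1)
print(round(a, 1), round(b, 1))  # -> 1415.2 1584.8
```

Note that rating points are conserved: whatever A gains, B loses, which keeps league-wide ratings centred over a season.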
Observations are made on the accuracy scores with different hyperparameters and their values. The project is divided into data pre-processing, feature extraction, and training and testing phases over the five data classes. The unbalanced data are balanced by oversampling and undersampling and then fed to the CNN model so that the classes contribute comparable numbers of input samples. The model describes the state of the fetus using a classification based on pH values, training a CNN over five classes, i.e., non-ectopic beats, fusion beats, supraventricular beats, ventricular beats, and unknown beats. The outcome of this project is to detect fetal distress at an early stage and help doctors provide proper treatment to the mother before or after delivery. The accuracy obtained with the CNN model is 76.91%, and the accuracy obtained with the LeNet-5 model is 80%.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Rutuja Jadhav https://spast.org/techrep/article/view/201 Detecting Fake Faces in smart Cities Security surveillance using Image Recognition and Convolutional Neural Networks 2021-09-08T06:35:29+00:00 Venkata daya sagar Ketaraju sagar.tadepalli@kluniversity.in <p>Millions of sensors and devices are expected to be connected to the Internet in intelligent cities, and sensors in a variety of applications can generate a large volume of data. Connected cars are an important element of an intelligent city, and citizen safety is an essential part of quality of life in a Smart City's new urban environments. The safety issue has been a significant concern for everyone for a long time, and violations of safety in private spaces have become a danger everyone wants stopped. If traditional security systems detect a safety violation, they sound a warning. Image processing, combined with a thorough understanding of convolutional neural networks for identifying and classifying images, helps an advanced model recognize violations, thereby significantly improving future protection.
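The class-balancing step in the fetal-distress abstract above (random oversampling of minority classes before training) can be sketched in plain Python. The class names and sample counts are invented toy values, not the actual beat dataset.

```python
import random
from collections import Counter

def oversample(samples, labels, seed=0):
    """Randomly duplicate minority-class samples until every class matches the largest."""
    rng = random.Random(seed)
    by_class = {}
    for s, l in zip(samples, labels):
        by_class.setdefault(l, []).append(s)
    target = max(len(v) for v in by_class.values())   # size of the largest class
    out_s, out_l = [], []
    for l, group in by_class.items():
        extra = [rng.choice(group) for _ in range(target - len(group))]
        out_s += group + extra
        out_l += [l] * target
    return out_s, out_l

# Imbalanced toy set: 4 "normal" beats vs 1 "fusion" beat
X = ["n1", "n2", "n3", "n4", "f1"]
y = ["normal"] * 4 + ["fusion"]
Xb, yb = oversample(X, y)
print(Counter(yb))  # both classes now have 4 samples
```

Undersampling is the mirror image (discard majority samples down to the smallest class); oversampling keeps all data at the cost of duplicated minority examples.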
Accurate algorithms for facial and body detection make it possible to extract complex characteristics from images, and the output of modern machine learning, particularly deep learning, is exceptional. Combined and applied in the area of defence, the two can do much more than is thought feasible, and this paper seeks to demonstrate exactly that.</p> 2021-09-08T00:00:00+00:00 Copyright (c) 2021 Venkata daya sagar Ketaraju https://spast.org/techrep/article/view/1750 Towards Sustainable Living and Health Care through Sentiment Analysis during Covid19 and Impact of Social Media 2021-09-30T10:10:36+00:00 Mausumi Goswami mausumi.goswami@christuniversity.in Ahini Abraham ahini.abraham@mtech.christuniversity.in <p>Artificial intelligence enables machines to perform tasks by simulating human intelligence, and computing with emotions paves the way to sentiment analysis. Sentiment analysis is the method of capturing the emotion behind a text: whether it is positive, negative, or neutral. The technology is also referred to as opinion mining or affective computing. Sentiment analysis uses the ideas of machine learning alongside an AI-based process called NLP to extract and analyse the data, emotions, and information from the text being analysed. It seeks the polarity of the emotions associated with the subjective data in the text. Deep learning has gained popularity, and many algorithms are used to analyse texts in a broader manner. In sentiment analysis, a weighted score is assigned to each entity, whether a topic, word, theme, or class within the subjective statement.
The essential goal of sentiment analysis is to train a model to predict sentiment by observing word connections and categorising them as positive, negative, or neutral.</p> <p>People have become more vocal about their desires and demands in recent years with the growth of access to social media platforms. Platforms such as blogs, reviews, Twitter posts, and discussions can be used to monitor emotions and can serve as an excellent marketing tool in campaigns. Attitudes, ideas, feelings, and opinions, collectively referred to as sentiments, play a crucial role in understanding a person's conduct. This research attempts to review the most crucial health-related issues among social media users during Covid-19. The objective is to help society by identifying a few features from social media data and proposing a possible solution framework or strategy for more sustainable health care. It is proposed to use machine learning techniques, specifically deep learning techniques. The proposed hypothesis is: the use of deep learning techniques can further improve the effectiveness and efficiency of the sentiment analysis task on health care data.</p> <p>The bigger picture is to understand the positive and negative emotions of social media users by effectively scraping the web for health care data to further strengthen health care.
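The polarity scoring described in this abstract can be illustrated with a minimal lexicon-based scorer. The word lists below are invented toy entries, not a real sentiment lexicon, and a deep model would learn such weights from data rather than look them up.

```python
# Toy lexicon: +1 for positive words, -1 for negative words (illustrative only)
LEXICON = {"good": 1, "great": 1, "recovered": 1, "safe": 1,
           "bad": -1, "sick": -1, "worried": -1, "shortage": -1}

def polarity(text):
    """Sum per-word scores and map the total to a polarity label."""
    score = sum(LEXICON.get(w, 0) for w in text.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("great news my father recovered and is safe"))  # positive
print(polarity("worried about the oxygen shortage"))           # negative
```

The weighted-score idea in the abstract generalizes this directly: instead of fixed unit weights per word, a trained model assigns learned weights to words, topics, and classes.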
This could be a big step towards more sustainable health care in India.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Mausumi Goswami, Ahini Abraham https://spast.org/techrep/article/view/1787 A Comparative Study on Pre-Training Models of Deep Learning Techniques to Detect Lung Cancer 2021-10-08T13:07:34+00:00 Mayakannan Selvaraju kannanarchieves@gmail.com Madhavi Aluka alukamadhavi2000@gmail.com Sumanthi Ganesan summi.ganesan@gmail.com Vijaya Pal Reddy drvijaypalreddy@gmail.com <p>Detection of lung cancer using neural network-based systems has seen reasonable improvement. However, the possibility of false cancer detection remains a worrying factor in recent times due to various technical reasons. Recent research has revealed that machine learning (ML) based techniques also make a great contribution to lung cancer detection, while deep learning (DL) techniques seem to provide enhanced accuracy in various medical research areas. Therefore, in this work, different types of pre-trained DL prediction models are tested to study the accuracy of each model. The pre-trained models are applied to a dataset of nearly 3000 images of cancerous and non-cancerous cases. In particular, the VGG-16, Inception V3, and ResNet50 models were considered for this research. The results show that the VGG-16 model with fine-tuning and image augmentation obtained the greatest accuracy: 96% on training data and 93% on validation data.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Mayakannan Selvaraju, Madhavi Aluka; Sumanthi Ganesan; Vijaya Pal Reddy https://spast.org/techrep/article/view/353 LITERATURE SURVEY ON VIDEO SURVEILLANCE CRIME ACTIVITY RECOGNITION 2021-09-13T20:29:52+00:00 K Kishore Kumar kishorkadari@vardhaman.org <p>Presently, a video surveillance system is an important tool for identifying crimes.
Past works related to crime detection using video surveillance are discussed here. The goal of this investigation is to provide a literature review of crime activity recognition using different techniques. The main demerits of video surveillance are facial expression recognition and the time the method consumes in detecting crime. An alert system provided in video surveillance improves crime prediction and also reduces criminal activity. This paper presents an overview of present and past reviews for developing future research. Journals published from 2000-2020 were analyzed to understand video surveillance and crime detection methods in different sectors. A review of the analyzed research and techniques is provided in this paper. This survey is useful for improving crime detection techniques using video surveillance. Moreover, it is a useful tool to gather information.</p> 2021-09-14T00:00:00+00:00 Copyright (c) 2021 K Kishore Kumar https://spast.org/techrep/article/view/430 Development of Fault Diagnostics, and Prognosis System based on Digital Twin and Blockchain 2021-09-14T18:47:07+00:00 Laxmisha Rai laxmisha@ieee.org Gnagliga B Jonne bababejohn@gmail.com Doucoure Ibrahim i.doucour1@yahoo.fr Qingguang Chen chenqg@sdust.edu.cn Fasheng Liu fashengliu@163.com <p>The black box of airplanes has attracted increasing interest for several decades, especially for its role in identifying aviation accidents. A black box can be defined as a device which records the operation of the aircraft and the exchanges between the controllers and the pilots. However, there is less chance of recovering the black box after every accident. Therefore, this paper proposes the design of a system which can help diagnose faults and obtain proposed solutions. The system is able to provide information simultaneously as it operates on a virtual twin, or representation, of the machine. 
For retaining the data, the capabilities of blockchain are incorporated. Explicitly, the system is split into two parts: the real-time digital twin (DT) and the copy of the digital twin. The real DT is composed of the actual machine, data-collecting sensors, data processing and analysis units, and the virtual twin of the real machine. The second part of the system is composed of several independent databases, data processing and analysis sections, and its virtual twin. The purpose of this system is to let the DT be easily accessed from anywhere in the world. With this, the system can retain otherwise lost data, and the retained data can help to reconstruct the accident situation using simulation. Moreover, machine dysfunction issues can be identified, and the information can be used while manufacturing new systems. To achieve these tasks, substantial practical knowledge of the operation of fault diagnostics and prognosis systems, and of the concepts of signal processing, digital twin, and blockchain, is essential.</p> <p>In this paper, initially the various prerequisites required by an engineer to conduct machinery condition monitoring are discussed. These include concepts of instrumentation and signal processing techniques such as vibration monitoring, motor current signature analysis, and thermography, where debris analysis and detection techniques are discussed. Furthermore, principles for carrying out maintenance operations in particular conditions, illustrating the best techniques available for making good maintenance decisions, known as FMECA (Failure Mode, Effects and Criticality Analysis), are described.</p> <p><br>The concept of prognosis helps to determine the remaining predictable lifespan of the machine or its components using mathematical modelling and machine learning approaches. Briefly, we show how machine learning can help to make a foolproof system through which other faults can be diagnosed [1]. 
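As a rough illustration of why a blockchain-style structure makes retained DT data tamper-evident, here is a minimal hash-chain sketch (hashlib-based; the telemetry records and field names are invented, and this is not the system's actual implementation):

```python
import hashlib
import json

def make_block(record: dict, prev_hash: str) -> dict:
    """Append a sensor record to the chain, linking it to the previous block."""
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"record": record, "prev_hash": prev_hash, "hash": digest}

def verify(chain: list) -> bool:
    """Recompute every link; any altered record breaks the chain."""
    prev = "0" * 64  # genesis value
    for block in chain:
        payload = json.dumps(block["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if block["prev_hash"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

# Hypothetical telemetry from the real machine's sensors.
chain = []
prev = "0" * 64
for reading in [{"t": 0, "vibration": 0.12}, {"t": 1, "vibration": 0.47}]:
    block = make_block(reading, prev)
    chain.append(block)
    prev = block["hash"]

print(verify(chain))                   # True: chain is intact
chain[0]["record"]["vibration"] = 9.9  # tamper with retained data
print(verify(chain))                   # False: alteration is detected
```

A real blockchain adds distributed consensus on top of this linking, which is what gives the decentralized, immutable properties the paper targets.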
In the past, several researchers have focused on diagnostics, predictive maintenance, digital twin, and blockchain. In [2], researchers developed a simplified software model which can simulate the digital twin for the application of diagnosing the status of a power transformer. In [3], researchers designed and implemented a blockchain-based creation process for digital twins to guarantee security, trust, accessibility, and data provenance; the process they created is decentralized, tamper-proof, and immutable. In [4], recommending precautionary measures ahead of critical events by identifying faults is studied. Here, the role of integrating blockchain and digital twin for fault diagnosis is discussed as a key challenge.</p> <p>An overview of the implementation of digital twin and blockchain for the proposed system is shown in Fig.1, with the fundamental objectives of achieving confidentiality and accessibility, and of avoiding any possibility of alteration of transactions.</p> 2021-09-15T00:00:00+00:00 Copyright (c) 2021 Laxmisha Rai, Gnagliga B Jonne, Doucoure Ibrahim, Qingguang Chen, Fasheng Liu https://spast.org/techrep/article/view/1334 Towards a Smarter Connected Society by Enhancing Internet Service Providers' QoS metrics using Data Envelopment Analysis 2021-09-28T10:00:33+00:00 Gracia S s.gracia@res.christuniversity.in P. Beaulah Soundarabai beaulah.s@christuniversity.in Pethuru Raj peterindia@gmail.com <p><strong>Towards a Smarter Connected Society by Enhancing Internet Service Providers' QoS metrics using Data Envelopment Analysis</strong></p> <p>Gracia S<sup>1</sup>, P. 
Beaulah Soundarabai<sup>2</sup>, Pethuru Raj<sup>3</sup></p> <p>¹<sup>,</sup>²Christ University, Bangalore, Karnataka, India<br>³Reliance Jio Platforms Ltd., Bangalore, Karnataka, India</p> <p>Email ID: s.gracia@res.christuniversity.in</p> <p><strong>Abstract</strong></p> <p>The recent pandemic has resulted in a paradigm shift [1] that has led to the need for a smarter connected workplace, not just in the office but at home as well.</p> <p>Since 2010, there has been considerable competitive pressure from wireless networks on new wireline broadband networks. This competition further depends on the extent of market share of legacy xDSL technology and infrastructure owned by monopolistic operators. Besides infrastructure-based competition from both wireline and wireless networks, there is also service-based competition. Additionally, mandatory access regulations create added competition by allowing new entrants in the broadband market to utilize existing infrastructure [2]. With rising competition, it is important for telecom service providers to evaluate the Quality of Service (QoS) they provide on the network. This research focuses on benchmarking QoS metrics of small and medium wireline service providers, so they can have sustained growth and keep pace with the top-tier providers. 
Small and medium providers may not have a strong analytical presence like the top players; many studies have been carried out on large service providers with in-house analytics and data science teams, and this study provides managers of small and medium service providers with a similar quality of analysis and insights. This study, therefore, helps them achieve feasible targets for key QoS metrics. The proposed Data Envelopment Analysis – Slack Based Measure (DEA-SBM) [3] technique helps these managers in goal setting. DEA is a tried and tested technique that has been used in many industries [4], but hardly used in the Indian telecom sector. Hence, the application of this technique adds value to the evaluation of network performance, keeps service providers from pursuing unrealistic output targets, and enables them to be on par with their competitors. Wireline service providers have been chosen for the study since, although mobile (wireless) networks have been deployed nationwide in most countries, the average quality of data transmission available to individual users today is still below the levels provided by wireline hybrid-fiber networks [2]. India, at the end of January 2021, had 20.08 million wireline subscribers, per the Telecom Regulatory Authority of India (TRAI) [5].</p> <p>Key Performance Indicators (KPIs) used in this analysis are: Fault repair (&gt;90% in 1 working day and &gt;=99% in 3 working days), Response time to customer for voice-to-voice operator assistance (&gt;60% in 60 sec. and &gt;90% in 90 sec.), Broadband connection speed from ISP to node (download speed), and Service availability/uptime [6]. Benchmarks are arrived at using the Slack Based Measure (SBM) in Data Envelopment Analysis (DEA). Twenty Decision Making Units (DMUs – ISPs) were used in the analysis, with eight of them needing to improve their QoS on some of the mentioned parameters. 
Relative benchmark providers, with their weightages, are found for all providers needing improvement, and optimal targets for each QoS metric are mathematically arrived at.</p> <p>Below are two examples highlighting key aspects of using this technique. Figure 1 shows the performance of the providers for the metric ‘Response time to customer for voice-to-voice operator assistance in 60 sec. (&gt;60%)’, which needs to be worked upon the most by the majority of the service providers identified as needing improvement, along with its corresponding output slacks and output target set by the DEA-SBM approach.</p> <p><img src="https://spast.org/public/site/images/graciasamuel/mceclip0.png"></p> <p><strong>Fig.1.</strong> Target setting using SBM for QoS metric - Response time to customer.</p> <p>Table 1 gives an example of how two of the eight service providers requiring improvement are relatively benchmarked with the relevant providers along with their weightages.</p> <p>Table 1. Optimal benchmarks using the Envelopment model</p> <p>&nbsp;</p> <table style="height: 274px;" width="599"> <tbody> <tr> <td width="168"> <p><strong>Service Provider</strong></p> </td> <td colspan="4" width="438"> <p><strong>Optimal Lambdas with Benchmarks</strong></p> </td> </tr> <tr> <td width="168"> <p>Airlink Communications Pvt. Ltd</p> </td> <td width="54"> <p>0.160</p> </td> <td width="176"> <p>Meghbela Cable &amp; Broadband Services</p> </td> <td width="51"> <p>0.840</p> </td> <td width="158"> <p>ONEOTT Intertainment Limited</p> </td> </tr> <tr> <td width="168"> <p>Indinet Service Pvt Ltd</p> </td> <td width="54"> <p>0.333</p> </td> <td width="176"> <p>RailTel Corporation of India Ltd</p> </td> <td width="51"> <p>0.667</p> </td> <td width="158"> <p>World Phone Internet Services Pvt Ltd</p> </td> </tr> </tbody> </table> <p>&nbsp;</p> <p>This helps managers to save time and plan in a focused way to take the organization forward. 
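Concretely, the envelopment model expresses each lagging provider's target as a lambda-weighted (convex) combination of its benchmark peers. A minimal sketch using Airlink's lambda weights from Table 1; the QoS metric values assigned to the two benchmark peers below are hypothetical, not data from the study:

```python
# Targets from the envelopment model are lambda-weighted combinations of
# benchmark peers' outputs. Lambdas are Airlink's weights from Table 1;
# the peers' QoS metric values are invented for illustration.
lambdas = [0.160, 0.840]
# Each peer: [fault repair %, response-in-60s %, download Mbps, uptime %]
peers = [
    [99.0, 92.0, 48.0, 99.5],   # Meghbela Cable & Broadband Services (assumed)
    [98.0, 88.0, 52.0, 99.2],   # ONEOTT Intertainment Limited (assumed)
]

# Target for metric j = sum over peers of lambda_i * peer_i[j].
targets = [
    sum(lam * peer[j] for lam, peer in zip(lambdas, peers))
    for j in range(len(peers[0]))
]
print([round(t, 2) for t in targets])
```

Because the lambdas sum to 1, each target lies between the two peers' values, which is what makes the targets realistic and attainable rather than aspirational.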
While there are several ways to theoretically reach full efficiency, the DEA technique employed in the study provides managers with the optimal path for each lagging QoS metric to attain full efficiency. It is to be noted that in the absence of such an optimal/best path, it would take managers significant time and effort to arrive at realistic and attainable targets.</p> <p><strong>&nbsp;</strong></p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 Gracia S, P. Beaulah Soundarabai, Pethuru Raj https://spast.org/techrep/article/view/2109 Intelligent Prediction of Loan Eligibility using Soft Computing Towards Digital Banking Sector 2021-10-02T11:53:46+00:00 Priyanka ppshinde.gcek@gmail.com Kavita S. Oza skavita.oza@gmail.com R. K. Kamat rkk_eln@unishivaji.ac.in <p><strong>Machine learning algorithms can be used in a variety of fields for prediction and decision making. The banking sector has vast scope where machine learning algorithms can be implemented to predict better solutions. Loans play an important role in the financial position of the general public. In order to satisfy additional needs which cannot be afforded within a person's income, credit can be taken from banks and other financial institutions against an agreement to return additional money in the form of interest. But while providing a loan to any person, the bank should check the person's eligibility in order to ensure that the loan can be repaid well within time. The bank should check eligibility in order to secure the amount paid. Loan eligibility is checked against different criteria, and the bank comes up with a decision regarding loan disbursement. Certainly, a person who does not meet the defined criteria will not be paid the loan amount and is rejected by the bank. A person who fits the bank's entire checklist of predefined conditions can be paid the approved loan amount. 
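The predefined-criteria check described above can be sketched as a simple pre-screen; the thresholds and field names below are invented for illustration and are not the bank criteria used in the study:

```python
# Hypothetical pre-screening rules; thresholds are invented and would in
# practice come from the bank's policy and the trained model.
RULES = {
    "min_monthly_income": 25_000,
    "min_credit_score": 650,
    "max_existing_defaults": 0,
}

def pre_screen(applicant: dict) -> bool:
    """Return True only if the applicant passes every predefined criterion."""
    return (
        applicant["monthly_income"] >= RULES["min_monthly_income"]
        and applicant["credit_score"] >= RULES["min_credit_score"]
        and applicant["existing_defaults"] <= RULES["max_existing_defaults"]
    )

ok = {"monthly_income": 40_000, "credit_score": 720, "existing_defaults": 0}
bad = {"monthly_income": 18_000, "credit_score": 700, "existing_defaults": 0}
print(pre_screen(ok), pre_screen(bad))   # True False
```

The machine learning stage the abstract describes would then refine decisions on the applicants who pass this hard-rule filter, using the historical repayment data.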
The decision is made by the bank alone, and all rights are reserved with the bank. Sometimes checking a customer's eligibility becomes tricky and time consuming, as it requires separate efforts by bank executives to reach a decision to approve or deny the loan. While predefined eligibility includes monthly income, real assets in hand, previous loan history, and a number of other important criteria, the person's credit card information plays an important role through the history of credit card payments. The CIBIL score is calculated based on credit card transactions and bill repayment. The implemented algorithm uses all the credit card information and calculates the person's eligibility for a loan. A machine learning algorithm has been implemented on the gathered data to obtain results. Data were collected from various sources in the banking sector, and data pre-processing was done to filter out unwanted facts and figures. Hence a training set was obtained, on which a machine learning algorithm was applied to arrive at a model which helps decide on loan approval. </strong></p> 2021-10-03T00:00:00+00:00 Copyright (c) 2021 Priyanka, Kavita S. Oza, R. K. Kamat https://spast.org/techrep/article/view/1490 Detection and Transmission of Arrhythmia Symptoms Using Portable Single-Lead ECG Devices 2021-09-30T19:27:00+00:00 Bhuvaneswari Arunachalan abh.mca@psgtech.ac.in <p>Arrhythmia is an abnormality in the heartbeat rhythm that causes severe and fatal complications in personal health and well-being. This problem arises from irregular activity of the heart, which typically maintains a steady heartbeat: a double “ba-bum” beat with even spacing in between. One of these beats is the heart contracting to provide oxygen to blood that has already circulated, and the other involves the heart pushing oxygenated blood around the body. 
Atrial fibrillation (AF) is a type of arrhythmia that occurs when there is irregular beating in the atrial chambers, and it nearly always involves tachycardia. Instead of producing a single, strong contraction, the chamber fibrillates, or quivers, often producing a rapid heartbeat. This is the most common type of serious arrhythmia, affecting millions of people worldwide, and is associated with increased all-cause mortality, mainly in adults over 65 years of age. Up to 20% of patients with ischemic stroke have underlying AF, and detection allows the initiation of anticoagulation, which is associated with a significant reduction in stroke recurrence [1]. But AF is often asymptomatic in most patients with stroke. Other patients have troubling symptoms such as palpitations or dizziness, but traditional monitoring has been unable to detect AF instantly [3]. Early diagnosis of AF may have several benefits, including individualized lifestyle intervention and anticoagulation treatment, and may be associated with a reduction in complications and healthcare costs [2]. However, AF detection is difficult because it may be episodic. Therefore, in case of emergency, periodic sampling and monitoring of heart rate and rhythm could be helpful for better diagnosis. Timely identification of AF can help in providing life-saving treatment.</p> <p>A 12-lead electrocardiograph (ECG) is the most commonly used diagnostic device for identifying abnormalities in heart rhythms, especially arrhythmia. The characteristic sign of AF is the absence of a P wave in the ECG signals. The P wave is formed when the atria (the two upper chambers of the heart) contract to pump blood into the ventricles. In the presence of AF, there will be many “fibrillation” beats instead of one P wave. The normal duration of a QRS complex, which is formed when the ventricles (the two lower chambers of the heart) are contracting to pump out blood, is between 0.08 and 0.10 seconds [4][5]. 
In the case of AF, the QRS complexes are “irregularly irregular”, with varying R-R intervals. This results in chaotic T waves, reflecting an irregular resting period of the ventricles. Figure 1 shows sample output ECG signals in normal and AF conditions.</p> <p>Recent advances in technology have allowed for the development of single-lead portable ECG monitoring devices. A person can measure their heart rate using their pulse in different locations of the body: the wrists, the insides of the elbows, the side of the neck, and the top of the foot. Portable ECG devices use finger contact to create a single-lead ECG trace and have a high degree of sensitivity for identifying AF [6]. The in-built memory of these devices allows for single or multiple time-point screening. These devices permit multiple 30–60s recordings to be captured and downloaded to a computer. Most interface with a web-based cloud system where ECG rhythms can be transmitted to remote specialists, allowing rapid analysis and diagnosis [7][9]. Interpretation by a healthcare specialist or by automated machine learning algorithms has achieved high sensitivity and specificity for AF detection. However, in the case of continuous data collection and the presence of signal noise, distinguishing the presence of AF is a real challenge.</p> <p><strong>Figure 1 Sample ECG output signals: a) normal condition b) Atrial fibrillation condition</strong></p> <p>To mitigate this cognitive challenge of computing multiple aspects of the ECG signals, modern machine learning algorithms and decision support tools are developed [8]. These tools can assist healthcare professionals in time of need to identify AF signals and provide much needed treatment. 
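As a rough illustration of how the varying R-R intervals mentioned above can be quantified, a common screening heuristic flags strips whose R-R variability exceeds a threshold (this heuristic is not the classifier proposed in the paper, and the 0.10 threshold and interval values are assumptions for illustration):

```python
from statistics import mean, stdev

def rr_coefficient_of_variation(rr_ms):
    """Coefficient of variation of R-R intervals (std / mean)."""
    return stdev(rr_ms) / mean(rr_ms)

def looks_irregular(rr_ms, threshold=0.10):
    """Flag a strip whose R-R variability exceeds a (hypothetical) threshold.

    A screening heuristic only; clinical AF detection needs far more care."""
    return rr_coefficient_of_variation(rr_ms) > threshold

regular = [800, 810, 795, 805, 800, 798]      # steady sinus rhythm (ms)
irregular = [620, 910, 540, 1040, 700, 860]   # AF-like varying intervals

print(looks_irregular(regular), looks_irregular(irregular))   # False True
```

Learning-based detectors effectively generalize this idea, combining R-R statistics with morphological features such as the missing P wave.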
This paper proposes a convolutional neural network (CNN) model that classifies the ECG recordings from a single-channel handheld ECG device and detects four distinct categories of rhythm: normal sinus rhythm (N), atrial fibrillation (A), other rhythm (O), or too noisy to be classified (~). The samples of AF rhythm (A) signals are collected and transmitted to clinicians for instant diagnosis. The aim is to present a system for detecting AF signals accurately and for predicting AF measurements through real-time data transmission, enabling life-saving treatment on time.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Bhuvaneswari Arunachalan https://spast.org/techrep/article/view/2180 FRFT LOW PASS FILTER BASED MAINLOBE ANALYSIS IN WIDE BAND ARRAY 2021-10-01T16:21:24+00:00 bharathasreeja bharathasreejaece@rmkcet.ac.in <p>This paper presents a new approach to the frequency-domain processing of a wideband linear array, using an elemental low-pass filter through the Fractional Fourier Transform (FRFT). This approach attempts to overcome the main drawbacks of the distorted spectrum of the received signal and the variation of the main lobe pattern with input frequency. In this approach, a low-pass FIR filter is designed using different windows such as the Hamming, Kaiser-Bessel, and Nuttall weighting methods. The performance of the low-pass filter is then evaluated by main lobe detection using the different window methods.</p> 2021-10-03T00:00:00+00:00 Copyright (c) 2021 bharathasreeja https://spast.org/techrep/article/view/2218 An Interactive Learning Educational Platform 2021-10-01T13:25:02+00:00 sathya A sathya.a@rajalakshmi.edu.in <p>Programming is one of the most required skill sets in today’s IT world. Students find it difficult to learn complex programming structures with differing syntax. In general, they can learn through online courses or by using books. 
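The window-based low-pass FIR design compared in the FRFT abstract above can be sketched with a standard windowed-sinc construction (Hamming and Kaiser windows from NumPy; the Nuttall window and the FRFT stage itself are omitted here, and the tap count and cutoff are illustrative choices, not the paper's design):

```python
import numpy as np

def lowpass_fir(num_taps: int, cutoff: float, window: np.ndarray) -> np.ndarray:
    """Windowed-sinc low-pass FIR; cutoff is normalized (0..0.5, fs = 1)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n)   # ideal low-pass impulse response
    h *= window                                # taper to control sidelobes
    return h / h.sum()                         # normalize for unity DC gain

taps = 51
h_hamming = lowpass_fir(taps, 0.1, np.hamming(taps))
h_kaiser = lowpass_fir(taps, 0.1, np.kaiser(taps, beta=8.6))

# A wider main lobe trades against lower sidelobes; inspect the magnitude
# response to compare the windows' main-lobe behaviour.
H = np.abs(np.fft.rfft(h_hamming, 1024))
print(round(float(H[0]), 3))   # DC gain of the normalized filter
```

The choice of window governs exactly the main-lobe/sidelobe trade-off the abstract evaluates: Kaiser exposes a tunable beta, while fixed windows like Hamming and Nuttall each fix one point on that trade-off.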
However, such approaches can dampen the drive to learn programming, as they follow traditional methodologies, and there is no innovative or interesting way to learn programming conceptually [8-9]. Hence, an interactive and gamified mode of learning is essential and has been developed using Unity 3D [6-7]. Educators face major challenges as a result of the shift from the Information Age to the Experience Age. For example, students are passive and disengaged and may struggle to see the relevance of what they are learning to their lives; also, important skills needed for 21st century learners – such as empathy, systems thinking, creativity, computational literacy, and abstract reasoning – are difficult to teach. Virtual reality, an immersive, hands-on tool for learning [1-5], can play a unique role in addressing these educational challenges. In this paper, we present examples of how the affordances of virtual reality lead to new opportunities that support learners. We conclude with a discussion of recommendations and next steps.</p> 2021-10-03T00:00:00+00:00 Copyright (c) 2021 sathya A https://spast.org/techrep/article/view/2921 Dr CYBER ATTACK ANALYSIS USING ARTIFICIAL NEURAL NETWORKS 2021-10-22T17:30:55+00:00 Fahmina Taranum ftaranum@mjcollege.ac.in <p>In today's world, one of the great challenges is the development of successful and automatic cyber attack detection. This project presents Artificial Intelligence techniques for cyber threat detection. The proposed methodology transforms crowds of assembled security events into individual event profiles and uses a deep-learning-based cyber attack recognition approach. To this end, event profiling of the data for preparation and pre-processing, along with various Artificial Neural Networks such as CNN and LSTM, is developed. The benchmark dataset NSL-KDD is considered for appraisal. 
To assess performance against existing approaches, various experiments are conducted using conventional machine learning methods (SVM, k-NN, RF, NB, and DT). The evaluation outcomes of the study confirm that the proposed approach has the potential to operate as a learning-based model for network intrusion detection, and show that its performance exceeds that of the standard machine learning approaches.</p> 2021-10-22T00:00:00+00:00 Copyright (c) 2021 Fahmina Taranum https://spast.org/techrep/article/view/928 Design of Efficient Algorithms for Secure Communication Contextual to Internet of Things 2021-09-16T12:09:27+00:00 Jyoti Neeli jyoti.neeli@gmail.com <p>The Internet of Things (IoT) has become well known in recent years as a trending topic. Many researchers around the world are working hard to address security-related issues in IoT. However, due to the heterogeneous nature and scale of nodes and devices in the IoT ecosystem, addressing security issues is a major challenge. The Internet of Things is a fusion of many technologies, each with its own traditional security vulnerabilities that need to be addressed in an IoT environment.&nbsp;The proposed study has reviewed the existing literature on IoT security and explored security vulnerabilities in existing techniques. The critical findings show that ensuring the topmost level of resistance to a variety of threats and potential security attacks in IoT is still an open and unresolved issue, and the underlying reason is the computational complexity associated with designing security mechanisms.</p> <p>The paper proposes a lightweight and responsive encryption technique that requires minimal resource consumption from sensor nodes. 
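A lightweight key-refresh for sensor nodes might look like the following textbook hash-ratchet sketch (hashlib-based and illustrative only; it is not the paper's bootstrapping algorithm, and a deployed scheme would also need to synchronize the fresh randomness between the parties):

```python
import hashlib
import os

def ratchet(state):
    """Advance the key state one step.

    Domain-separated hashes derive the next state and the message key, so a
    compromised current state does not reveal earlier keys (forward secrecy),
    and mixing in fresh randomness heals the state after a compromise
    (backward secrecy). A textbook construction, not the paper's scheme."""
    fresh = os.urandom(16)
    next_state = hashlib.sha256(b"state" + state + fresh).digest()
    msg_key = hashlib.sha256(b"key" + state).digest()
    return next_state, msg_key

state = os.urandom(32)      # stand-in for the bootstrapped shared secret
state, k1 = ratchet(state)
state, k2 = ratchet(state)
print(k1 != k2)             # each step yields a distinct message key
```

Only cheap hash operations are needed per step, which is what makes ratchet-style designs attractive for resource-constrained sensor nodes.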
A significant contribution is the introduction of a novel key-bootstrapping mechanism, which has a unique secret key generation capability that can maintain forward and backward secrecy simultaneously.</p> <p>The proposed security mechanism is based on an analytical approach and implemented on a numerical computing tool. The framework in Fig.1 introduces a safe and trustworthy environment for performing seamless communication and data sharing processes. The framework uses two distinct forms of the algorithm: Algorithm-1 is subjected to key bootstrapping as a primary operation for enrolling sensor nodes into the IoT gateway node, and Algorithm-2 performs key generation using a low-cost encryption mechanism.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Jyoti Neeli https://spast.org/techrep/article/view/2554 Face Mask Detection for Real Time Video Streams 2021-10-14T18:04:25+00:00 Vedant Pople vedantpople4@gmail.com Ayush Kanaujia ayushkanaujia2000@gmail.com Vijayan R rvijayan@vit.ac.in Mareeswari V. vmareeswari@vit.ac.in <p>The Face Mask Detection model is used to make sure a person is wearing a mask or not. This model results from the grappling situation presented by the COVID-19 pandemic, which has made the use of masks at public places mandatory. This research helps us understand a broader perspective on face detection models by using different state-of-the-art models and fine-tuning them to get better results. Through this we also aim to make a lightweight model which can be implemented in resource-starved IoT devices and mobile applications. The novelty lies in the fact that, as this model deals with face detection in the first place, the privacy of the user’s face is maintained: no storage is required, and all results are displayed in real time. 
This is achieved using object detection technology, which integrates the camera and computer to realize face mask detection, so the purpose of non-contact automatic detection is achieved. This is important because security agencies otherwise need to post actual personnel to make sure everyone in public is wearing a mask; this model will reduce their work and lessen the risk of contracting COVID-19. There are also many other large-scale implications in the social, economic, and environmental spheres, such as creating a safe environment even for the people commuting to workplaces amidst the pandemic. There are many challenges in building this model. There are currently very few datasets available that contain both with-mask and without-mask images. The next challenge is making it feasible for resource-starved devices so it is highly compatible with a variety of systems. Nonetheless, the major problems lie in the face detection part, <br>where the algorithm's difficulty can be attributed to various factors like face occlusion, face scale variations, improper illumination, and density. Furthermore, the traditional object detection algorithm adopts selective search methods in feature extraction, leading to problems like poor generalization ability, redundant information, low accuracy, and poor real-time performance. This model addresses the flaws in existing systems: it is fine-tuned to give best-in-class performance, the outlines of face masks are distinguished better, and the model can also recognise various tricky circumstances such as an improperly worn mask or a face covered with a hand. The desired performance is achieved using the MobileNet V2 model, which is a very effective feature extractor for object detection and segmentation and is known to meet the resource constraints of a variety of use cases. 
The proposed model also works better than the novel nearest feature line (NFL) classification model for face recognition, a method based on the nearest distance from query feature points to each <br>feature line. The suggested model does not revolve around a deep convolutional network trained directly to optimize the embedding itself; it uses an intermediate bottleneck layer. The complete model is built in two phases: first, a face mask detection model is trained to detect the face and mask, and then it is placed in the real-time environment using OpenCV to actually predict the usage of face masks. Through this process, a model trained on images can be used to detect face masks in real-time video streams. The final model is able to mark the region of interest with a red or green coloured box and display the confidence with which the mask prediction is being made.<br>Hence, the novelty of this project lies in the fact that the model is deployed on real-time video streams, although it is trained using images. The model uses MobileNet V2, a lightweight architecture based on an inverted residual structure, with the input and output of the residual blocks being thin bottleneck layers, unlike other traditional <br>models; it also uses depth-wise convolutions to filter features in the intermediate expansion layer. Commonly used models for feature extraction include a set of fully connected layers at the end. Architectures such as Inception V3 have reduced the number of parameters in their last layers by including a Global Average Pooling operation. Other modern architectures like Xception leverage a combination of residual modules and depth-wise separable convolutions. The model is then compared with other models like ResNet, Inception V3, and Xception to measure performance on various parameters. 
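The parameter savings behind the depth-wise separable convolutions mentioned above can be illustrated with a quick count (the layer shape chosen below is hypothetical, not MobileNet V2's exact configuration):

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias terms ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depth-wise k x k filter per input channel, then a 1 x 1 pointwise
    convolution that mixes information across channels."""
    return k * k * c_in + c_in * c_out

# Example layer shape: 3x3 kernel, 64 input channels, 128 output channels.
k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)        # 73728
sep = depthwise_separable_params(k, c_in, c_out)  # 8768
print(std, sep, round(std / sep, 1))              # ~8x fewer weights
```

This roughly order-of-magnitude reduction in weights (and in multiply-accumulates) is what lets MobileNet-style backbones run on the resource-constrained devices targeted here.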
Face mask detection on real-time video streams is a big boon for people as well as for authorities ensuring proper enforcement of COVID-appropriate behaviour. People nowadays are bound to wear masks and maintain social distancing in public places. This face mask detection model will help people take care of this in public places. As there are various stages of unlocking, with some relaxation in rules, and public places like malls, cinema halls, and government and private offices are allowed to open, this face mask detection model can be used at the entrance of such places to mass-check the usage of masks by people entering. This will also save human labour in the form of guards, so social distancing can be maintained at entrances. This model will further ensure the safety of personnel involved in enforcing COVID-appropriate behaviour, as it will reduce the risk of direct contact with infected persons.&nbsp;</p> 2021-10-15T00:00:00+00:00 Copyright (c) 2021 Vedant Pople, Ayush Kanaujia, Vijayan R, Mareeswari V. https://spast.org/techrep/article/view/1106 Payment Card Fraud Detection using Machine Learning Techniques 2021-09-21T09:26:09+00:00 Shubhra Prakash shubhraprakash@live.com Sangeet Moy Das sangeet.das@gatech.edu <p><strong>Abstract</strong><br>The exponential growth in electronic transactions represents a fundamental shift in the way people purchase goods and services and the transition to digitization and a moneyless economy. With the spread of e-transactions, financial fraud has grown, causing billions in losses. Also, fraudsters are inventing new ways to make fraudulent transactions appear legitimate. This has led to the growth of unknown vectors of fraud where existing detection methods have been less effective. 
In this context, a review of published methods for detecting payment card fraud, with a focus on benchmarking artificial intelligence and machine learning techniques, is required.<br><strong>Objective</strong>: This study presents the conduct and results of a systematic review that aims to investigate payment card fraud detection using machine learning techniques.<br><strong>Methods</strong>: The systematic review was carried out according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Science Direct and IEEE Xplore were used as the scientific databases to search for research articles published in English, which were filtered based on defined inclusion and exclusion criteria. At the time of writing (September 20, 2021), 901 studies were retrieved, out of which 223 were included.<br><strong>Results and Conclusion:</strong> This review provides a systematic analysis of the current status of machine learning techniques for payment card fraud detection. The detection methods were compared based on several evaluation metrics, and pressing issues in the domain (such as class imbalance and the availability of public datasets) were addressed.</p> 2021-09-21T00:00:00+00:00 Copyright (c) 2021 Shubhra Prakash, Sangeet Moy Das https://spast.org/techrep/article/view/540 Automated Grading Of Fruits Based On Non-Destructive Quality Assessment Using Hyperspectral Imaging and Deep CNN Model 2021-09-16T11:19:03+00:00 Rahul Ganesh P reonrahul8@gmail.com <p>Demand for good-quality fruit is expanding due to the rise in population, and the gross domestic product of many nations depends on fruit exports. After harvesting, fruits are washed, sorted, graded, packed and stored. Of all these stages, grading and sorting are the vital steps. 
The main aim is to design an automated system that improves quality, increases production efficiency, reduces the labour cost of the process and assesses the internal quality of the fruits. As per the Agricultural and Processed Food Products Export Development Authority, pomegranates, mangoes, bananas, papayas and oranges account for the larger portion of fruits exported from our country. Efficient detection is achieved using hyperspectral imaging and a CNN framework; the proposed architecture, which treats sorting and grading as extra classes within the CNN framework, yields superior performance.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Rahul Ganesh P https://spast.org/techrep/article/view/2051 Metaheuristic Techniques for Classification used in Identification of Plant Diseases 2021-09-30T19:10:21+00:00 Madhu Bala madhuanand87@yahoo.com <p>Agriculture is one of the important fields on which the economic development of a country depends. If proper care is given to the growth of crops, the year-round efforts of the farmers are rewarded. However, over the past few years it has been observed that the quality and yield of crops are degrading due to diseases caused by biotic and abiotic agents. Biotic agents include bacteria, viruses and fungi, whereas abiotic agents are climatic conditions such as humidity, temperature and soil. It is therefore necessary to detect crop diseases in time. Conventional methods of identifying a crop disease require a dedicated individual or an expert who can detect such diseases manually, but this is a very time-consuming process and increases the cost. Hence arises the need to develop automated systems that not only detect the diseases but also save time and cost. Several works have applied machine learning and deep learning techniques effectively and contributed much to this domain. However, most of these techniques have worked on small datasets, and very little research has been done using hybrid techniques. 
Further, it is observed that most of the research has focused on identification of diseases in food grains; very little has worked on disease identification in fruits, so this area can be explored in further research.</p> <ol> <li>Ramesh et al. (2018) [1] implemented an ANN classification method to detect diseases of rice plants. Specific features are extracted to distinguish between healthy and diseased leaves. They worked specifically on rice blast disease.</li> </ol> <p>Papers [2] and [3] also worked on detection of rice plant diseases using the SVM classification method. The former compared different techniques such as SVM, discriminant analysis, KNN, Naive Bayes, decision tree, RF and logistic regression; SVM gave higher accuracy than the other approaches. The latter used a combined approach (SVM with HOG features) and achieved an accuracy of 94.6%. From this, it is clear that SVM can be considered one of the good classification techniques for achieving high accuracy. A few more classification techniques are listed in Table 1.</p> <p>Table 1: Classification techniques used for identification of Plant Diseases</p> <table> <tbody> <tr> <td width="84"> <p><strong>Reference</strong></p> </td> <td width="72"> <p><strong>Year</strong></p> </td> <td width="81"> <p><strong>Crop</strong></p> </td> <td width="196"> <p><strong>Classification Technique</strong></p> </td> <td width="80"> <p><strong>Disease Detected</strong></p> </td> <td width="96"> <p><strong>Result</strong></p> </td> </tr> <tr> <td width="84"> <p>[4]</p> </td> <td width="72"> <p>2020</p> </td> <td width="81"> <p>Tomato</p> </td> <td width="196"> <p>i) Multilayer Perceptron (MLP)</p> <p>ii) Stepwise Discriminant Analysis (STDA)</p> </td> <td width="80"> <p>Target Spot,</p> <p>Bacterial Spot</p> </td> <td width="96"> <p>MLP: 99%</p> <p>STDA: 96%</p> <p>&nbsp;</p> </td> </tr> <tr> <td width="84"> <p>[5]</p> </td> <td width="72"> <p>2020</p> </td> <td width="81"> <p>Not Specific</p> </td> <td width="196"> 
<p>CNN</p> </td> <td width="80"> <p>Any kind of disease</p> </td> <td width="96"> <p>96.50%</p> </td> </tr> <tr> <td width="84"> <p>[6]</p> </td> <td width="72"> <p>2021</p> </td> <td width="81"> <p>Citrus Plants</p> </td> <td width="196"> <p>i)SVM</p> <p>ii)RF</p> <p>iii) Stochastic Gradient Descent (SGD)</p> <p>iv)DL((Inception-v3, VGG-16, VGG-19)</p> <p>&nbsp;</p> </td> <td width="80"> <p>Black Spot,</p> <p>Melanose,</p> <p>Canker</p> </td> <td width="96"> <p>VGG-16–89.5%.</p> </td> </tr> <tr> <td width="84"> <p>[7]</p> </td> <td width="72"> <p>2018</p> </td> <td width="81"> <p>Rice</p> </td> <td width="196"> <p>ANN</p> </td> <td width="80"> <p>Rice Blast</p> </td> <td width="96"> <p>Training:99%</p> <p>Testing:90%</p> </td> </tr> </tbody> </table> <p>&nbsp;</p> <p>This paper will reflect the comparative analysis of classification techniques in terms of accuracies shown in Fig.1 and will help the researchers to identify best practices for their research so as to enhance the efficiency and accuracy of diseases detection systems being developed.</p> <p><img src="https://spast.org/public/site/images/madhuanand87/capture.jpg" alt="" width="482" height="204"></p> <p>&nbsp;</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Madhu Bala https://spast.org/techrep/article/view/2712 PREDICTING AIR TRAFFIC DENSITY IN AN AIR TRAFFIC CONTROL SECTOR 2021-10-17T11:03:37+00:00 Tina Vimala Asirvadam tina.asv1187@gmail.com S Sonali Rao kannanarchieves@gmail.com T. Balachander kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p>Aviation Industry plays a very important role in the economic development of a nation. An efficient Air Transport system results in economic and social benefits. In order to ensure benefit for both the industry and the economic sectors it interacts with, a proper assessment of air transport needs to be made, taking into consideration the associated resources that are to be provided. 
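Forecasting sector traffic from hourly data of this kind is typically framed as a sequence-prediction problem: the RNN/LSTM models used for it learn from sliding windows of past counts. A minimal sketch of that windowing step (the traffic values are hypothetical, not data from this study):

```python
# Turn an hourly traffic series into (input window, next value) training
# pairs, the standard data preparation for RNN/LSTM sequence models.

def make_windows(series, lookback):
    """Return (window, next_value) pairs for supervised sequence training."""
    return [(series[i:i + lookback], series[i + lookback])
            for i in range(len(series) - lookback)]

hourly_counts = [12, 15, 19, 22, 18, 14]   # hypothetical sector counts
pairs = make_windows(hourly_counts, lookback=3)
print(pairs[0])  # ([12, 15, 19], 22)
```

Each window would then be fed to the recurrent model, which predicts the next hour's count; the sketch only covers the data-shaping step, not the network itself.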
Air traffic over the Indian skies and at airports has seen a rapid increase and is expected to keep growing. The resultant increase in demand requires a corresponding effort in effectively balancing demand with capacity. The concept of Air Traffic Flow Management (ATFM) enables improved management of demand and capacity and helps stakeholders deal with the increased complexity of Indian air routes. In the civil aviation industry, forecasts are vital to the planning processes of states, airports, airlines, Air Navigation Service Providers (ANSP) and other allied organizations. Forecasting helps states in the orderly development of civil aviation and in the planning of airspace and airport infrastructure, and it assists airlines in air route planning and flight scheduling. Being able to predict the traffic in Air Traffic Control (ATC) sectors is thus vital for effectively managing the flow of air traffic: it gives an indication of an impending breach of sector capacity, so that flow restrictions, delay procedures or bifurcation of the ATC sector can be implemented to avoid potential overloading of the sector, which would otherwise lead to a safety breach. Determining the traffic in an ATC sector involves analysis of various hidden factors that require careful evaluation. Given a sector and an hourly analysis of its real-time air traffic data, the prediction task is implemented using two machine learning algorithms, a Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM), which analyze the data and detect a pattern that is then used to predict the traffic in the sector at any given time in the near future. Given the real-time data and the experimental results obtained from this study, it is evident that the Long Short-Term Memory model better serves the purpose of this study.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Tina Vimala Asirvadam, S Sonali Rao, T. 
Balachander, Mayakannan Selvaraju https://spast.org/techrep/article/view/1471 An Approach for Statement Coverage Based on Test Case Prioritization Technique 2021-09-29T12:08:20+00:00 Santosh Kumar Kar santoshkumarkar@gmail.com Brojo Kishor Mishra brojomishra@gmail.com Susanta Kumar Das bususanta@gmail.com <p>Currently, we are in the age of digitalization. The facilities available through fifth-generation data communication for high-speed portable devices bring the benefits of digitalization to everyone for daily business operations, education, financial transactions and healthcare monitoring. Fabrication technology can now produce low-power, low-cost, high-speed processing devices to support this. The software supporting this kind of operation therefore needs to be smart and robust, and its development requires proper planning, a sound development cycle and a well-defined software testing cycle, which is an integral part of deployment in real time. Hence, software plays a key role in modern society. The use of software platforms is rapidly increasing due to digitalization and touches every person in society, so software must be robust enough to handle online and offline processes in a precise manner. Testing of the software must therefore be planned carefully and conducted effectively and efficiently for better precision in the complex processing to be handled according to the task to be performed. Testing of a software product is thus an integral and important phase of any SDLC (Software Development Life Cycle) [1]. The testing phase of the SDLC considers all possible input combinations for the functionality and the corresponding outputs according to the product definition; the similar and dissimilar views of the software product depend upon its utility and design specification. 
The test pattern for a system depends upon the skill level of the tester and the testing environment. In comprehensive testing of a software product, all possible and alternative combinations of the variables in the SDLC are considered, and those combinations are applied under the resource constraints [2]. The testing phase of any SDLC is where the tester or testing group holds the responsibility of formulating a test plan that considers all combinations of the input variable set, divided into the test cases needed to exercise the functionality of the product. Validation of the test cases checks the degree of robustness in use and reflects sustainability on deployment. Deploying test cases with proper priority must be considered for better results in a testing cycle. While planning the testing of a software product or environment, the tester needs to consider the deployment platform and handle its complexity properly; the planning phase must also assign skilled testing personnel according to need, and the leader or manager must address testing issues properly. Considering all of these issues and factors, in our work we deploy an effective and distinctive test case prioritization technique, based on the statement-coverage criterion, to prioritize all the test cases under consideration [3]. The experimental work carried out with this concept shows that the process is effective and a cost-effective solution to deploy in the testing phase. 
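Statement-coverage-based prioritization of this kind is often realised as a greedy "additional coverage" ordering: repeatedly pick the test case that covers the most statements not yet covered. A minimal sketch, with hypothetical test names and coverage sets (the paper's actual criterion may differ in detail):

```python
# Greedy additional-statement-coverage prioritization: at each step choose
# the test case that adds the most not-yet-covered statements.

def prioritize(coverage):
    """Order test case names by greedy additional statement coverage.

    coverage: dict mapping test name -> set of statement ids it covers.
    Ties are broken by name order for determinism.
    """
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        best = max(sorted(remaining),
                   key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

tests = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {1, 2, 3, 4, 5},
    "t4": {6},
}
print(prioritize(tests))  # ['t3', 't4', 't1', 't2']
```

Running `t3` first reaches five statements immediately, `t4` adds the only remaining one, and the rest are ordered by name since they add nothing new.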
This will help towards the optimization of cost and time in the testing phase of the SDLC.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Santosh Kumar Kar, Brojo Kishor Mishra, Susanta Kumar Das https://spast.org/techrep/article/view/1544 The Use of Digital Technologies by the MSMEs to Preserve Cultural Heritage of India 2021-10-02T06:34:28+00:00 Lipika Mohanty sailansu_das@yahoo.co.in <p><strong><u>Abstract</u></strong></p> <p>Digital technologies are electronic instruments, systems, devices and resources that store data; they support communication, social networking and computing. Any information used in a computer falls under digital technology, and there are many different digital technologies, such as websites, online buying and selling, smartphones and e-books. The present time has been challenging for mankind, as movement has been restricted: organizations associated with arts, culture and heritage have been closed due to the Covid-19 pandemic, and we are isolated and cut off from our physical networks. But digital technology has saved us and helped connect us with people and places. Digital technology is now changing many areas of our lives, such as the cultural field and tangible items of art. Digital art forms such as 3D virtual reality (VR) now permit artists to overturn conventional art forms, and are essential tools for the creation and imagination of varied artistic needs. The development of digital technology has also changed the traditional mode of buying and selling through e-commerce stores. They offer large varieties of products from sellers all over the world, letting us compare the quality and quantity of specific items at a given price point and making it easy for us to buy any product we need. Nowadays digital art is becoming quite popular all over the world because of digitalization, and artists are upgrading themselves in line with the new varieties of art forms and gaining appreciation through digitalization. 
We can generally work faster in digital art than in traditional art: in digital art, a huge, complicated painting is easily achievable, which is not the case in the traditional mode. But switching from a pencil or brush to a digital screen and stylus will not magically make us artists; we still have to study and work hard at digital art.</p> <p>However, through digitalization many art-form samples come preloaded in every digital tool, by which the art forms are losing their uniqueness and artists are losing their creativity. A digital culture is the product of the endlessly pervasive technology around us and the result of disruptive technical innovation. It applies to many topics, but they come down to one overarching theme: the relationship between humans and technology. Digital heritage is made up of computer-based durable materials to be kept for future generations, and it is spread over different communities, industries, stores and regions. The use of established tools embedded with advanced technology has also been a limitation for MSMEs. Digital lending, not seen historically, is transforming the MSME sector: records in digital mode can be accessed directly for processing, and distinct credit-scoring mechanics can be followed in lieu of the incumbent credit rating model. In this way, digital lending can vastly aid the MSME segment. Digital technology increases the quality of service provided by business owners; it reduces consumption of our time and money, helps us communicate better and reduces cybercrime risks for common people. But it makes us extremely dependent on artificial intelligence, it is highly expensive for the common man to afford, and it creates significant shortages of jobs. 
The risk of malfunction is also extremely high, because all the tasks are machine driven and a minor lapse in functioning can create a threat that may not be controllable.</p> <p>Research areas: innovation in art, design, technology and cultural heritage. I use critical discourse analysis and case study research methodology.</p> <p>Keywords: Digitalization, Innovation, Diversity management, e-commerce, MSMEs.</p> <p>&nbsp;</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Lipika Mohanty https://spast.org/techrep/article/view/1584 Analysis of the Impact of Yoga on Health Care Applications and Human Pose Recognition 2021-09-29T19:22:56+00:00 Nagalakshmi Vallabhaneni lakshmi999.vallabhaneni@gmail.com P. Prabhavathy pprabhavathy@vit.ac.in <p><strong>Abstract</strong></p> <p>Human pose recognition is a powerful computer vision strategy that has opened up several problems. Analysing human exercise is beneficial in various fields, including health monitoring, biometrics, and a wide range of medical care applications. These days, yoga poses are popular to practise because they can improve muscular strength and breathing. However, because evaluating yoga postures is complicated, practitioners may be unable to profit from the activities in the long run without expert feedback. For people who want to practise yoga at home, IoT-based yoga systems are required. Several studies have suggested that camera-based or wearable devices can classify yoga poses more accurately. On the other hand, camera-based methods raise security and privacy issues, and wearable-device-based strategies have proven impractical in existing applications. A solid foundation and ongoing research in pose assessment are required to construct such a framework. First, using real-time data, this paper investigates the effect of yoga on people experiencing various anxiety levels. 
Second, a comprehensive survey of yoga pose detection frameworks, ranging from machine learning to deep learning techniques and assessment measurements, was carried out.</p> <p>Keywords: Deep learning, Impact of Yoga, healthcare applications, pose recognition, IoT</p> 2021-10-01T00:00:00+00:00 Copyright (c) 2021 Nagalakshmi Vallabhaneni, Dr P.Prabhavathy https://spast.org/techrep/article/view/769 Gender Prediction based on Morphometry of eyes using Deep Learning Models 2021-09-15T15:21:42+00:00 Talluri Aruna Sri atalluri@gitam.edu Dr. Sangeeta Gupta sangeetagupta_cse@cbit.ac.in <p style="text-align: justify;">In the modern days, the growth of online social networking websites and social media leads to an increasing adoption of computer-aided image recognition systems that automatically recognize and classify the human subjects. One such familiar one is the anthropometric analysis of the human face that performs craniofacial plastic and reconstructive surgeries. To analyze the impact on facial anthropometrics, it is also essential to consider various factors such as age, gender, ethnicity, socioeconomic status, environment and region. The repair and reconstruction of facial deformities to find the anatomical dimensions of the facial structures as used by Plastic surgeons for their surgeries are a result of the physical or facial appearance of an individual. In addition, the factors like culture, personality, ethnic background, age, eye appearance and symmetry contributes greatly to the facial appearance or aesthetics. Gender classification based on biometric images is one of the prominent modes to identify the person as either male or female. Classification based on the spectral division yields varying results based on the adopted imaging technique amongst single or multi spectral one. 
However, it is essential to capture the facial features that help in the detection process, as if the features are too many, then the time taken to train them will drastically increase and if the features are too low, then the training carried out may not result in accurate analysis. For example, the description of gender can be classified within several individuals using their voice and facial features. The frequency of a male voice is highly different from the frequency of a female voice so using this pointer, the system can identify the gender of an individual using its voice and also their facial features by scanning features, texture etc. The main goal is to be user interactive with the system so that the gender differences are produced effectively and in an accurate manner. The correct and relevant information will be enough for the system to recognize the particular gender or the data output that is intended to be received upon. This is helpful in a way to constructing the data so that no inaccuracy is detected by the data. Hence, it is essential to select the features in an optimal manner to achieve better accuracy. Also, if the samples from training and test set are picked infrequently, then the accuracy to capture gender based classification will be unstable. It is also essential to assess the adopted spectral based techniques work well for increased number of subjects. Hence, analysis of human activities using data mining or machine learning techniques can be useful to infer properties such as the gender or age of the people involved. Towards this end, the proposed work focuses on&nbsp;gender recognition thereby building a model to scan the eye image of a patient and determine if the gender of the patient is either male or female by applying deep learning methods. 
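The morphometry idea behind this approach can be illustrated with a tiny nearest-centroid classifier over hand-picked eye measurements. All feature names and values below are hypothetical, and the actual work applies deep networks to eye images rather than explicit measurements; this is only a sketch of the underlying intuition.

```python
# Nearest-centroid classification over illustrative eye-morphometry
# features: classify a sample by the closest class mean in feature space.

def centroid(rows):
    """Per-feature mean of a list of feature tuples."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def nearest_centroid_predict(sample, centroids):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# features: (eye-fissure width, inter-canthal distance) in mm, hypothetical
train = {
    "male":   [(30.1, 33.0), (31.0, 34.2)],
    "female": [(28.0, 31.1), (27.5, 30.6)],
}
centroids = {label: centroid(rows) for label, rows in train.items()}
print(nearest_centroid_predict((30.5, 33.5), centroids))  # male
```

A deep network replaces the hand-crafted features here with representations learned directly from the eye images, which is what gives it the edge reported in the abstract.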
It is identified from the work that deep learning network yields a better performance for gender&nbsp;based classification based on the morphometry of eyes.</p> <p>&nbsp;</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Talluri Aruna Sri, Dr.Sangeeta Gupta https://spast.org/techrep/article/view/2280 Analysis of Algorithms to Control the Congestion by Improve Energy Efficiency in WSN 2021-10-15T16:27:36+00:00 Vanitha G vanithaphd19@gmail.com <p>Wireless Sensor Networks (WSNs) are likely to have a wide variety of applications and develop their usage in recent years because they provide the minimum cost solution for deployment and management. These networks generally consist of distributed independent sensor nodes that interact with each other to process and broadcast the sensed information via a wireless transfer channel. In WSNs, there are one or more sink or base stations and many sensor nodes circulated over the broad region known as the transfer region. During data transfer from the number of sensor nodes, the network can be congested due to the high amount of traffic. As a result, congestion is a major significant challenge in WSNs because it directly impacts energy efficiency and the network lifetime of sensor nodes in the network. Besides, it degrades an overall channel capacity and increases the risk of packet loss. To solve these challenges, an effective congestion-aware routing protocol is essential in WSNs. Over the past few decades, many congestion control or avoidance routing protocols have been designed by different researchers. This paper aims to discuss those different congestion control or avoidance routing protocols in WSNs with their drawbacks for identifying the upcoming scope of congestion-aware routing protocols in WSNs.</p> <p>. Here, to prevent congestion depending on the traffic priorities of RT packets, various algorithms have been developing in the previous decades. 
Congestion due to the mix of RT and non-RT (NRT) packets must be handled effectively. WPDDRC algorithms combine the DDR of a particular node with the WP traffic class. However, WPDDRC does not consider buffer occupancy and queue size; when the queue length exceeds the buffer occupancy, this leads to high packet loss and delay. This article proposes an adaptive queuing system on top of WPDDRC, called the Proficient Rate Control (PRC) algorithm, to tackle this issue. In this algorithm, two independent virtual queues share a single physical queue and accumulate the input packets from every child node depending on the source's traffic significance and priority. When a packet arrives, PRC detects congestion from the virtual queue status and adjusts the child's transmission rate accordingly. Finally, the results indicate the efficiency of the PRC algorithm compared to existing congestion control algorithms.</p> <p>The Proficient Rate Control (PRC) technique is developed using traffic-type priority and virtual queue conditions; however, it does not consider the problem of fair bandwidth assignment while handling congestion in WSNs. Here, a PRC with Fair Bandwidth Allocation (PRC-FBA) technique is proposed, considering both traffic-type priority and fair bandwidth assignment. First, the challenge of bandwidth assignment in WSNs is investigated under the Signal-to-Noise-plus-Interference Ratio (SINR) model, with the aim of discovering a trade-off between fairness and network efficiency. Then a novel bandwidth utility factor is defined in terms of fairness and efficiency, along with an approximate solution based on node relationships and time-slot assignment. The problem is formulated as a non-linear program and split into two sub-problems, so a two-phase technique is introduced: in the primary phase, the relationships of the nodes are computed; in the secondary phase, time slots are assigned to maximize the utility factor and give fair bandwidths in WSNs. 
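The virtual-queue idea behind PRC can be sketched simply: per-class virtual queues share one physical buffer, and when a virtual queue's occupancy crosses a threshold the parent throttles that class's child transmission rate. All numbers, the threshold, and the proportional-throttling rule below are illustrative assumptions, not the published algorithm.

```python
# Sketch of virtual-queue-based rate adjustment: each traffic class (RT /
# NRT) has a virtual backlog inside one shared physical buffer; classes
# whose occupancy exceeds a threshold are throttled proportionally.

def adjust_rates(queues, capacity, base_rate, threshold=0.8):
    """Return a per-class transmission rate from virtual-queue occupancy.

    queues: dict mapping traffic class -> queued packet count
    capacity: size of the shared physical buffer
    """
    rates = {}
    for cls, backlog in queues.items():
        occupancy = backlog / capacity
        if occupancy > threshold:
            # congested: scale the rate down in proportion to the overshoot
            rates[cls] = base_rate * (threshold / occupancy)
        else:
            rates[cls] = base_rate
    return rates

queues = {"rt": 45, "nrt": 90}   # illustrative packets queued per class
print(adjust_rates(queues, capacity=100, base_rate=10.0))
# NRT traffic is throttled below 10.0; RT keeps the full base rate
```

In the actual protocol this decision would be fed back to each child node so the source slows down before the physical buffer overflows.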
Finally, the simulation outcomes exhibit the effectiveness of the PRC-FBA technique compared to conventional congestion handling techniques.</p> <p>&nbsp;</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Vanitha G https://spast.org/techrep/article/view/199 Survey of sentiment analysis and its impact on data extraction 2021-09-07T10:50:45+00:00 Narahari Ajmeera ajmeera.narahari@gmail.com Kamakshi P hod.it@kitsw.ac.in Vishnu Vardhan B mailvishnu@jntuh.ac.in <p>With the growth of information exploration over the years, the most important input to a business decision is the opinion of the people. These opinions, transformed into sentiments, play a vital role in the aforementioned areas. People share their opinions through e-commerce sites and social media networks such as Twitter, Facebook, blogs and forums; opinion is categorized as positive, negative or neutral. To extract sentiments and opinions, different approaches are available in the literature, such as support vector machines, Naïve Bayes, neural networks, n-grams and lexicon-based approaches [12-15]. This paper aims to compare different types of machine learning approaches, namely supervised [7], unsupervised and semi-supervised learning algorithms, which are useful for extracting the various opinions found on the Net. It further surveys the various performance measures used when the extracted opinions are transformed into sentiments.</p> 2021-09-08T00:00:00+00:00 Copyright (c) 2021 Narahari Ajmeera, Dr. P. Kamakshi, Dr. B. Vishnu Vardhan https://spast.org/techrep/article/view/2421 Analysis of Traffic Prediction using Machine Learning for Intelligent Transportation System 2021-10-12T01:32:05+00:00 Armstrong Joseph armstrongjoseph30@gmail.com Bhanu chandra bhanusmart369@gmail.com <p>The goal of this study is to create a tool for predicting accurate and timely traffic flow data. 
Everything that can affect the flow of traffic on the road is considered part of the traffic environment, including traffic signals, accidents, rallies, and even road repairs that can generate a traffic bottleneck. A driver or rider can make an informed decision if we have prior information that is very close to approximate about all of the above and many more daily life situations that can affect traffic. It also aids the development of driverless vehicles in the future. Traffic data has been growing tremendously in recent decades, and we have moved toward big data concepts for transportation. Available traffic flow prediction approaches use some traffic prediction models but are still unsuitable for real-world applications. This fact prompted us to pursue a solution to the traffic flow forecasting problem based on traffic data and models. Because the amount of data available for the transportation system is enormous, effectively forecasting traffic flow is difficult. We planned to employ machine learning, genetic, soft computing, and deep learning techniques to analyse massive data for the transportation system with a lot less complexity in this project. Image Processing techniques are also used in traffic sign identification, which aids in the proper training of robots.</p> 2021-10-12T00:00:00+00:00 Copyright (c) 2021 Armstrong Joseph, Bhanu chandra https://spast.org/techrep/article/view/2458 Profitability Visualization in Catalogue Management System 2021-10-13T05:18:56+00:00 Gurjapna gurjapna.kaur@gmail.com <p>In the past few years, the amount of product has increased exponentially on e-commerce [1].<br>Retailers, wholesalers have struggled to keep pace with the rise in the bulk of products. This<br>research intends to carry out the possibilities for deciding the product to be used for promoting<br>business or brand on any e-commerce platforms [2]. 
It also stresses how much product and profit visualization [3] is one of the foremost factors to keep in mind before moving into the online e-commerce industry. The major challenge for a seller is to decide which products should be considered for selling on different platforms, and to understand the relation between a product and its profit. The approach we present in this paper assesses which product within a category should be used for promotion on an online e-commerce platform in order to earn online face value for the brand or the product. The model used in this paper for visualizing products is captured and assessed using the data visualization tool Tableau, by observing which product has the most buyers or the strongest inclination towards it. Once a category is finalized, new products can easily be targeted in order to attract more customers. This addresses the importance of product visualization, as it can notify the seller in advance whether he is selling the right product or not.<br>Visualizing a product and its profitability is also crucial, as it gives knowledge about the relationship of the product with the business: how much a chosen product is enhancing the growth of the business, so that at an early stage a person can decide whether to continue or discontinue the chosen product. Product and profit visualization thus synthesizes a huge amount of data, absorbs its essence, and saves a lot of manual team and research work.<br>Fig 1: Catalogue Management System<br>Profitability Visualization in Catalogue Management System<br>SPAST Abstracts Ms. Gurjapna Anand, Mr. Amar Saraswat, IGCSTS-1, 2021<br>In this paper, we introduce an application framework named the catalogue management system, developed using PHP and the CodeIgniter framework. 
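The product-profit relationship discussed above can be modelled directly. The sketch below computes per-channel profit and margin for one catalogue item in Python (the actual system is a PHP/CodeIgniter application, and all names and figures here are illustrative assumptions):

```python
# Per-channel profit for one catalogue item: each selling channel (retail,
# wholesale, website, e-commerce) has its own listed price against one
# common cost price.

def channel_profits(cost_price, prices):
    """Return absolute profit and margin percentage for each channel."""
    return {channel: {"profit": price - cost_price,
                      "margin_pct": round(100 * (price - cost_price) / price, 1)}
            for channel, price in prices.items()}

item = channel_profits(cost_price=50.0, prices={
    "retail": 90.0, "wholesale": 65.0, "website": 85.0, "ecommerce": 95.0,
})
print(item["wholesale"])  # {'profit': 15.0, 'margin_pct': 23.1}
```

A table of such rows across all products is exactly the kind of dataset the paper then feeds into Tableau to visualize which channel and product drive growth.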
The catalogue management system (Fig. 1) is developed to ease the work of any retailer or wholesaler. It is used to gather the vast volume of data needed for visualizing products and their growth (profitability). The model maintains five (5) price points: (a) the actual cost or cost price, (b) the retail price, (c) the wholesale price, (d) the website price and (e) the e-commerce price. The actual cost or cost price is the real cost of the product, the retail price is the price given to a retailer, the wholesale price is the price given to a wholesaler, the website price is the price listed on the seller's own website, and the e-commerce price is the price used for listing on an e-commerce platform [5]. We demonstrate the approach using sales data from a retail-cum-wholesale organization named Sans Classic parts. The visualization helps reveal trends and patterns in how a particular product enhances business growth, and evaluates the profit generated from it. The analysis of sales data suggested here can help identify the right products and see the relationship between a product and its profit.</p> 2021-10-13T00:00:00+00:00 Copyright (c) 2021 Gurjapna https://spast.org/techrep/article/view/1021 Application of model programming on N-tier platforms in the context of large databases 2021-09-20T09:09:04+00:00 Aziz SRAI aziz.srai.dev@gmail.com <p>The main goal of model-driven software development is to design applications by separating concerns and placing the notions of models, metamodels and model transformations at the center of the development process.
In this article, devoted to applying the MDE approach (specifically MDA) to multilayer applications, we are particularly interested in the concepts, languages and tools associated with model transformation, the central paradigm of MDE. This focus on transformation should not obscure the fact that a transformation is ultimately an executable program, so all the concerns of software development (testing, verification, traceability, etc.) apply to it. We have also shown that applying this type of approach to multilayer platforms yields a remarkable gain, particularly in development time and development cycle.</p> <p>This work also presents a major entry point for research on applying model programming to multilayer software architectures. We first show the applicability of the MDA approach to N-tier platforms through the study of several different technology architectures. Second, this work enriches the field of software engineering with a new research direction in model-driven programming.</p> 2021-09-20T00:00:00+00:00 Copyright (c) 2021 Aziz SRAI https://spast.org/techrep/article/view/391 Contributions to Hadoop File System Architecture by revising the file system Usage Along with Automatic Service 2021-09-14T09:01:43+00:00 Praveen Kumar Mannepalli udit.mamodiya@poornima.org <p><span style="font-weight: 400;">The use of unstructured data by companies has become commonplace, and social media usage has risen heavily over the past decade. The sharing of images, audio and video content by individual users and corporations can be observed everywhere. The current work focuses on revisions to the Hadoop framework to improve the performance of the ecosystem in terms of space and time.
The architecture builds on the Hadoop Distributed File System (HDFS) and MapReduce (MR); we propose certain revisions so that importing and processing tasks benefit from more effective and efficient use of time and space [12]. The work runs the service in two different ways, which reduces the time requirements of cluster management; in a distributed environment this revision reduces the waiting time for service start-up. We also focus on the local file system handler for data storage and processing: using the file system according to the proposed architecture handles the CPU context switches performed during import and export while jobs are running [13-14]. The outcome of the work is a revised architecture in which the service is initiated by all machines in the cluster, and a file system revision approach that minimizes CPU context switches during the storage and processing aspects of the Hadoop cluster [15].</span></p> 2021-09-14T00:00:00+00:00 Copyright (c) 2021 Udit Mamodiya https://spast.org/techrep/article/view/2573 Chatbot Development using Deep Learning Techniques 2021-10-17T12:23:19+00:00 Dr. Nagaratna Hegde nagaratnaph@staff.vce.ac.in V. Sireesha v.sireesha@staff.vce.ac.in V. Sireesha v.sireesha@staff.vce.ac.in K. Chandra Sravanthi chandrasravanthik@gmail.com <p class="Default"><span style="font-size: 11.0pt;">As technology evolves, companies are looking for interesting ways to communicate with their customers and users. Chatbots are becoming quite common in the business world. Chatbots provide a platform that evolves with artificial intelligence. They are capable of mimicking a conversation with a human being and are widely used in e-commerce and other messaging applications. </span></p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Dr.
Nagaratna Hegde, V. Sireesha, V. Sireesha, K. Chandra Sravanthi https://spast.org/techrep/article/view/1332 The PREDICTION OF HEART DISEASE USING MACHINE LEARNING 2021-09-28T09:14:14+00:00 Bhavyashree R bhavyashreer.sse20@rvce.edu.in <p>In today's era, deaths attributed to congestive heart failure have become a primary concern; roughly one person dies per minute due to congestive heart disease. Enormous amounts of detail are available and must be stored continuously so that the information can be retrieved when needed. A large amount of data is available in the healthcare system.</p> <p>Machine learning techniques are used to determine whether heart disease is present or not. In this paper, the Cleveland database from the UCI repository is used. First, the heart disease dataset is collected, extracted and preprocessed. The intent of this paper is to predict the presence of congestive heart disease from basic patient attributes, using a Logistic Regression model on the information given in the dataset.</p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 Bhavyashree R https://spast.org/techrep/article/view/519 Camera Self-Triggering Mechanism for Optimal Image Capturing 2021-09-16T11:17:38+00:00 Hasitha Bandara hasithae@wyb.ac.lk Susantha Wijesinghe susantha@wyb.ac.lk Manjula Wickramasinghe manjulaw@wyb.ac.lk <p>Triggering a camera to capture images at the optimal capturing point is vital for vision-based detection systems such as product quality monitoring systems [1], object tracking systems [2], driving assistance systems [3], etc. It increases the accuracy of the decisions made by the system, and reduces both the computational burden of the system and distortion in the captured images.
There are numerous tracking algorithms proposed for different tracking tasks, but the problem remains [4], and some existing tracking systems require a high computational cost [5]. Defect detection of conveyor targets is a common activity of product quality monitoring systems in the manufacturing industry. Therefore, special attention is paid to tracking conveyor targets at the correct position. Most available systems have been developed using a photoelectric sensor [6-7] or a multi-camera-based sensing system for triggering the camera. This requires additional hardware [8], and may add computational burden to the system. In this paper, we propose a self-triggering mechanism for optimal image capturing of conveyor targets on a production line that eliminates additional hardware requirements and computational burden. This self-triggering mechanism can easily be integrated into any vision-based detection system.</p> <p>The proposed self-triggering mechanism has three steps: background subtraction based conveyor target identification; Scale Invariant Feature Transform (SIFT)-based keypoint mapping [9] and reference point calculation; and estimation of the optimal capturing point according to the speed of the conveyor target.</p> <p>Background subtraction is the first step of the proposed self-triggering mechanism. It is used for initial identification of changes between the background image and the target image frame. The background image is captured by the same camera without any target object; the target image frame is captured while the production line is running. It is assumed that the camera used to capture both images has passed its latency time. The background subtraction is a simple binary-level XOR operation between the binarized background image and the binarized target image frame.
The decision is taken by comparing the black-to-white pixel ratio of the resultant XOR image against a pre-defined threshold. The theoretical threshold value is zero, but in real situations it is a small non-zero value due to internal and external noise. In our experiment, the threshold value was estimated experimentally by trial and error. If a conveyor target is found, the SIFT-based keypoint identification algorithm is applied to the corresponding target image frame to identify the target graphic; otherwise the system captures the next image frame and applies the background subtraction process to it. Applying background subtraction before the SIFT-based algorithm reduces the processing time of target detection, minimizes the number of targets passing through the system undetected, and reduces the computational burden of the system.</p> <p>Target graphic identification is done by counting the keypoints of the image frame matched against a pre-defined template. The template is a photograph of a sample non-defective target graphic. If the target frame has a sufficient number of matched keypoints, the system identifies that the image frame contains a target graphic; otherwise it captures the next frame for background subtraction again. The threshold on matched keypoints depends on the quality of both the template and target images and on system requirements such as image resolution and the nature of the target. If the system detects a target graphic, the x-coordinate of a reference point is calculated to identify the position of the identified graphic. The reference point is a single pixel used to represent all matched keypoints; it is calculated as the average of the x-coordinates of the matched keypoints, rounded off to the nearest integer.
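The first two steps above can be sketched in plain NumPy: an XOR between the binarized background and frame with a pixel-ratio threshold, and the reference-point average over matched keypoint x-coordinates. The threshold value and the keypoint list below are made-up examples (in the paper the keypoints come from SIFT matching against the template, e.g. via OpenCV):

```python
# Illustrative sketch of the XOR background subtraction and the
# reference-point calculation described above. The 8x8 images, the
# threshold, and the keypoint x-coordinates are made-up examples.
import numpy as np

def target_present(background_bin, frame_bin, ratio_threshold=0.007):
    """Binary XOR of background and frame; flag a target when the
    fraction of differing (white) pixels exceeds the threshold."""
    diff = np.logical_xor(background_bin, frame_bin)
    return bool(diff.mean() > ratio_threshold)

def reference_x(matched_keypoints_x):
    """Average x-coordinate of matched keypoints, rounded to nearest int."""
    return int(round(float(np.mean(matched_keypoints_x))))

background = np.zeros((8, 8), dtype=bool)
frame = background.copy()
frame[2:5, 2:5] = True                     # a target entering the view
print(target_present(background, frame))   # True: 9/64 pixels differ
print(reference_x([310, 318, 333]))        # 320
```

The system would then compare `reference_x` against the frame centre to schedule the trigger, as the next step describes.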
After identifying the position of the target graphic, the system estimates the time taken for the reference point of the target graphic to move to the centre point of the frame according to the speed of the target, and the camera triggering signal is generated accordingly.</p> <p>The proposed self-triggering mechanism was experimentally tested on a custom conveyor system at two different speeds with samples of transparent bottles. The experimental setup consisted of a 2 MP general-purpose camera, an LED light source with a light-diffusing plate, a single-board computer and a typical conveyor belt. In our test, we used 640×640 images as the target image frame and background image, and a 400×180 image as the template. The threshold values for the black-to-white pixel ratio and matched keypoints were 0.7% and 6 respectively. The frame rate of the camera was 30 fps. According to the test results, the proposed system identified all test targets, with 95% and 90% accuracy of triggering point identification for conveyor targets at speeds of 0.1 ms<sup>−1</sup> and 0.15 ms<sup>−1</sup> respectively. We therefore conclude that the proposed self-triggering mechanism has high potential in machine vision applications, eliminating the need for additional triggering hardware.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Hasitha Bandara, Susantha Wijesinghe, Manjula Wickramasinghe https://spast.org/techrep/article/view/1382 Early Diabetes Detection Using Combination Polynomial Features and SelectKBest Classifier 2021-09-28T15:25:56+00:00 Sai Vinay Naidu ratnavinayam@gmail.com Chaitanya Mullapudi chaitanya.krishna1002@gmail.com Hemprasad Yashwant Patil hemprasadpatil@gmail.com <p>Because of unhealthy and excessive food habits, the number of people suffering from diabetes rose from 108 million in 1980 to 480 million in 2014 (WHO) [1]. Blindness, kidney failure, heart attacks, strokes, and lower limb amputation are major complications of diabetes.
Between 2000 and 2016, there was a 5% increase in premature mortality from diabetes [1]. Detecting the symptoms and taking precautions should be a top priority in the fight against diabetes, and early detection is possible by analysing an individual's medical record for certain symptoms. In this work, we introduce early detection of diabetes using artificial intelligence and deep learning with polynomial function features [2-4].</p> <p>We aim to create a model with little room for error. Many of the models we surveyed have considerable error rates, which can lead to wrong prediction of the disease, a decision that can impact a patient's life. We strive to eliminate these errors by prioritizing the most important features before applying polynomial functions, which is the novelty of our paper. In this work we took the dataset from the UCI machine learning repository and fed it to the SelectKBest selector to obtain a priority order of symptoms. The top 8 scoring features were passed into our pre-defined polynomial functions; we used binomial, cubic and quaternary functions as polynomial features. Using the new data from the polynomial functions, prediction is performed with different algorithms: Logistic Regression, Random Forest, K-Nearest Neighbours (KNN), Support Vector Machine (SVM) and Artificial Neural Networks (ANN). Among these, we found ANN to be the most precise, with 99.04% accuracy.</p> <p>Our objective is an almost perfect model that can successfully diagnose diabetes in a person. The polynomial features play a key role in increasing the accuracy of the model, and prioritizing features with the SelectKBest selector yields the most important symptoms to which the polynomial functions are applied. The implementation of the model will benefit the public.
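The feature pipeline described above can be sketched with scikit-learn: SelectKBest picks the top-scoring symptoms, polynomial features are derived from them, and a classifier is trained on the expanded data. The synthetic data and the choice of logistic regression here are illustrative assumptions; the paper evaluates several classifiers and reports an ANN as the most accurate.

```python
# Minimal sketch, assuming synthetic data: SelectKBest -> PolynomialFeatures
# -> classifier, mirroring the pipeline described in the abstract.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                 # 16 candidate symptom features
y = (X[:, 0] + X[:, 3] ** 2 > 1).astype(int)   # outcome driven by two of them

model = make_pipeline(
    SelectKBest(f_classif, k=8),    # keep the 8 highest-scoring features
    PolynomialFeatures(degree=3),   # binomial and cubic terms
    LogisticRegression(max_iter=1000),
)
model.fit(X, y)
print(round(model.score(X, y), 2))
```

In the paper the expanded features feed the ANN that achieves the reported 99.04% accuracy; the pipeline shape is the same.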
A perfect application of this paper would be to install "diabetes-check machines" in hospitals where people can learn whether they have diabetes, or a simple application freely accessible to the public. All users would have to do is answer a few questions on the symptoms they might have and submit them. This would reduce unnecessary costs for patients who do not have diabetes.</p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 Sai Vinay Naidu, Chaitanya Mullapudi, Hemprasad Yashwant Patil https://spast.org/techrep/article/view/1452 OWLISH 2021-09-29T12:22:32+00:00 Salomi Samsudeen salomi.m@kpriet.ac.in <p>According to statistics, over 1.32 lakh road accidents occurred in 2020, the lowest in the past 11 years, and 1.2 lakh people die due to accidents. More than 1.35 million lives are lost each year and 50 million people sustain injuries. Globally, road accidents are the tenth leading cause of death. Nowadays the Tamil Nadu Government provides coffee to drivers, especially between 3 AM and 5 AM, to keep them alert and to reduce accidents caused by drowsiness. Truck drivers who transport cargo and heavy materials over long distances day and night often suffer from lack of sleep, and drowsiness is one of the leading causes of major accidents on highways. The automobile industry is working on technologies that can detect drowsiness and alert the driver. In this paper, we propose a model that senses sleep<strong> and alerts drowsy drivers.</strong> The basic purpose of this system is to track the driver's facial condition and eye movements; if the driver is feeling drowsy, the system triggers an alert sound to wake the driver.
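The eye-movement tracking mentioned above is commonly implemented with the eye aspect ratio (EAR) over six eye landmarks; this is a standard heuristic offered here as an assumption, not necessarily the authors' exact method. The EAR drops toward zero as the eyelid closes:

```python
# Hypothetical EAR-based eye-closure check; the landmark coordinates and
# threshold below are made-up examples, not the authors' data.
import math

def eye_aspect_ratio(pts):
    """pts: six (x, y) eye landmarks ordered p1..p6 as in common
    facial-landmark schemes; EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    p1, p2, p3, p4, p5, p6 = pts
    d = math.dist
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))

open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (3, 0.2), (4, 0), (3, -0.2), (1, -0.2)]
EAR_THRESHOLD = 0.25   # illustrative; tuned per camera setup in practice
print(eye_aspect_ratio(open_eye) < EAR_THRESHOLD)    # False: eye open
print(eye_aspect_ratio(closed_eye) < EAR_THRESHOLD)  # True: likely drowsy
```

A real system would track the EAR over consecutive frames and trigger the alert sound only after it stays below the threshold for some duration.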
This will be especially useful for drivers who travel at night.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Salomi Samsudeen https://spast.org/techrep/article/view/2880 A COLLABORATIVE AND EARLY DETECTION OF EMAIL SPAM USING MULTITASK LEARNING 2021-10-19T13:15:10+00:00 Balika J Chelliah kannanarchieves@gmail.com Anand Sasidharan kannanarchieves@gmail.com Dharmesh Kumar Singh kannanarchieves@gmail.com Nilesh Dangi kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p>Purpose: This paper presents a unique solution that uses a deep neural network, a machine learning technique, to detect patterns of recurrent words that may be classified as spam.</p> <p>Methodology: The algorithm used in this paper is a Deep Neural Network. Neural networks work quite similarly to a human brain: they are capable of producing extraordinary amounts of output from limited input, because they are constantly learning and improving from each input provided. Data loss is not a problem for neural networks because the learned knowledge is stored in the network itself rather than in a database. As mentioned earlier, they are constantly learning and can therefore produce solutions for real-time problems by comparing them with problems seen before.</p> <p>Findings: In the proposed system, two important neural network techniques are dropout and activation. When detecting spam, especially in larger neural networks, a lot of generalization error arises; the dropout technique reduces this error to a great extent. With generalization error under control, it becomes easier for the neural network to learn the regularities of the English language and to discover relationships between words, for example that a CPU is to a computer as a brain is to a human. This opens up a large field of play in various ways.
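The dropout technique mentioned above can be sketched in plain NumPy (a generic illustration of the idea, not the authors' network): during training each unit is zeroed with probability p and the survivors are rescaled so the layer's expected activation is unchanged at test time.

```python
# Minimal inverted-dropout sketch; shapes and the drop probability are
# illustrative assumptions.
import numpy as np

def dropout(activations, p=0.5, rng=None, training=True):
    """Inverted dropout: zero units with prob. p, scale the rest by 1/(1-p)."""
    if not training or p == 0.0:
        return activations
    rng = rng or np.random.default_rng()
    keep_mask = rng.random(activations.shape) >= p
    return activations * keep_mask / (1.0 - p)

rng = np.random.default_rng(42)
h = np.ones((4, 8))                  # a batch of hidden activations
dropped = dropout(h, p=0.5, rng=rng)
# Surviving units are scaled to 2.0 so the layer's expectation stays 1.0.
print(sorted(set(dropped.ravel().tolist())))
```

Because each forward pass sees a random sub-network, the model cannot rely on any single co-adapted feature, which is why dropout curbs the generalization error described in the findings.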
When the neural network learns to function as above, it can be used on much smaller datasets, improving its efficiency and bringing its functionality closer to that of a human brain. Essentially, it learns that when two words frequently occur next to each other they form some meaning, and the entire learning process of the neural network is based on this, similar to how humans think.</p> <p>Originality/value: This work provides conclusive evidence that deep neural networks are superior to other methods and techniques for spam detection. Future work will address detecting botnet attacks on mobiles; botnets are malicious machines that attack the user's device.</p> 2021-10-21T00:00:00+00:00 Copyright (c) 2021 Balika J Chelliah, Anand Sasidharan, Dharmesh Kumar Singh , Nilesh Dangi, Mayakannan Selvaraju https://spast.org/techrep/article/view/86 An Autonomous Driving Based on Deep Learning Image Recognition 2021-07-28T18:40:11+00:00 Muhammad Afzal Nazim muhammadafzalnazim@gmail.com <p>Image recognition refers, in the context of machine vision, to software's ability to recognize objects, places, people, writing and activities in images. Computers can use camera vision technology and artificial intelligence software to recognize pictures. The purpose of this research is to present the latest image recognition techniques for drivers, who can use this technology to detect different objects while driving. In the years before 2010, researchers used hand-crafted local picture characteristics with hybrid machine learning methods to address problems of image identification and classification. Since 2010, however, several deep learning techniques for image recognition have been developed and put through their paces.
When it comes to general object identification contests, techniques that employ deep learning to identify images surpass, by a significant margin, the tactics that were used before deep learning was introduced. In addition to explaining how deep learning is used to address problems in photo identification, this article covers the latest deep learning imaging technologies available at the time of writing. In image recognition it is hard and time-consuming to learn the appropriate mapping function from large amounts of training data and teacher labels. This article discusses how deep learning is being used in the area of image identification, as well as the most recent advances in deep-learning-based autonomous driving that result from it. The aim of the research is to present the latest deep learning technology for image recognition.</p> <p> </p> 2021-07-31T00:00:00+00:00 Copyright (c) 2021 Muhammad Afzal Nazim https://spast.org/techrep/article/view/133 Deep Learning Based Solution and Artificial Intelligence to Enhance Cyber Security 2021-08-29T06:54:30+00:00 Muhammad Afzal Nazim muhammadafzalnazim@gmail.com <p>Human cognitive processes are replicated by machines, in particular computer systems, in what is known as artificial intelligence. Applications of artificial intelligence include expert systems, language processing, voice recognition and machine learning. Artificial intelligence and cybersecurity have a wide variety of interdisciplinary connections. The latest artificial intelligence breakthroughs have brought huge growth for both cyber defenders and criminals. Cybersecurity experts enhance the safety of AI in the cyber environment through concept development. Cybercriminals used to launch cyber-attacks using Petri net formalizations, but today, as technology progresses, they employ sophisticated techniques such as deep learning and machine learning.
The emphasis of this research is on AI and cyber safety, as well as AI applications in many fields. First, the objective is to integrate AI with cybersecurity, ensuring that it can combat many attacks and not let a single attack successfully penetrate a system's security measures. Then a thorough treatment of defences against cyber assaults is given, covering models such as deep belief networks, recurrent neural networks, and convolutional neural networks. Finally, it is shown that AI is not limited to cybersecurity and is utilized in a range of industries, such as education, robotics, automation, and many more.</p> 2021-09-02T00:00:00+00:00 Copyright (c) 2021 Muhammad Afzal Nazim https://spast.org/techrep/article/view/823 Noise Estimation Using Back Propagation Neural Networks 2021-09-15T19:37:08+00:00 Devinder Singh faisalsyedmtp@gmail.com <p>In this paper, a new Backpropagation Neural Network-based noise estimation method is proposed to estimate Rician noise in MRI images. To train the BPNN, features of the MRI image such as contrast, homogeneity, dissimilarity, asm, energy, entropy, meanx, meany, meanglcm, varx, vary, varglcm, correlation, skewx, skewy, skew, kurtosisx, kurtosisy and kurtosis are used. Four hundred and fifty images downloaded from BrainWeb are used for training.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Devinder Singh https://spast.org/techrep/article/view/1684 Skin Disease Detection using Image Processing and Soft Computing 2021-10-08T06:32:19+00:00 Ritika Sharma rmoudgil16@gmail.com <p>Skin is the most important and most exposed part of the human body. It protects the human body in many ways, such as controlling and regulating temperature, protecting from microbes, etc. Skin is sensitive to touch and hence requires more care and attention. It is found that, despite much care, skin diseases are the most common type of disease in the human population.
These diseases are caused by various agents such as bacteria, fungi, viruses, etc. [1]. The effects of these diseases on the skin can sometimes become dangerous. At the same time, prediction and detection of skin disease is a very challenging task even for skin professionals.</p> <p>Many advanced technologies have emerged to detect diseases much more quickly and accurately, but they are costly and limited in availability. They also require very advanced equipment that may be out of reach of common people. Hence, there is a need to identify such diseases at an initial stage to prevent them from spreading, and to develop a disease detection system using image processing that does not require a high-precision camera or equipment and is accessible to all. This will also enable disease detection at early stages for low-income groups without the need for an expert, making detection and prevention easy and cheap.</p> <p>Image processing techniques play an important role by enabling automated systems that detect such diseases at initial stages, saving cost and time. Much research has been done in this area, but there is a need for accurate and precise diagnosis that can be applied further in the diagnostic process, not only by experts but by less-experienced clinicians too [2][3]. The majority of researchers nowadays use supervised and unsupervised learning algorithms, neural networks, and genetic algorithms to identify skin diseases. In the literature, a dual-stage approach has been proposed using rule-based and image processing techniques, a combination of computer vision and machine learning. This approach is used to detect diseases like Eczema, Melanoma and Impetigo with an accuracy of 95 percent and high solidity [4].
Identification of skin lesions plays an important role in the diagnosis of skin cancer (Melanoma); the ADAM technique [5] removes limitations of convolutional neural networks and improves accuracy. The rest of the literature is summarized in Table 1.</p> <p>This paper presents a comparative study of various diseases found in human skin and the techniques used to recognize those diseases as early as possible. The study also compares the techniques based on the total number of diseases identified. The analysis shows that a combination of different techniques gives the highest accuracy and enhances the performance of the designed system.</p> <p>&nbsp;</p> <p>Table 1. Comparative study of techniques used to predict and detect skin diseases.</p> <p>&nbsp;</p> <table width="605"> <tbody> <tr> <td width="97"> <p><strong>Ref. No./Year</strong></p> </td> <td width="129"> <p><strong>Number of diseases detected</strong></p> </td> <td width="151"> <p><strong>Name of Disease</strong></p> </td> <td width="132"> <p><strong>Techniques used</strong></p> </td> <td width="95"> <p><strong>Accuracy</strong></p> </td> </tr> <tr> <td width="97"> <p>[6]/2017</p> </td> <td width="129"> <p>2</p> </td> <td width="151"> <p>Acne, Psoriasis</p> </td> <td width="132"> <p>GLCM and wavelet decomposition using the classifier K-NN</p> </td> <td width="95"> <p>Acne - 100%</p> <p>Psoriasis - 92%</p> </td> </tr> <tr> <td width="97"> <p>[7]/2018</p> </td> <td width="129"> <p>3</p> </td> <td width="151"> <p>Chronic Eczema, Lichen Planus, and Plaque Psoriasis</p> </td> <td width="132"> <p>GLCM, RGB, LDA, SVM, ANN, KNN</p> </td> <td width="95"> <p>87%</p> </td> </tr> <tr> <td width="97"> <p>[8]/2018</p> </td> <td width="129"> <p>4</p> </td> <td width="151"> <p>Bullous, Psoriasis, Cellulitis, Acne</p> </td> <td width="132"> <p>InceptionV3, Inception ResnetV2, MobileNet</p> </td>
<td width="95"> <p>88%</p> </td> </tr> <tr> <td width="97"> <p>[9]/2019</p> </td> <td width="129"> <p>6</p> </td> <td width="151"> <p>Psoriasis, Chronic Dermatitis, Seborrheic Dermatitis, Pityriasis Rosea, Lichen Planus, and Pityriasis Rubra Pilaris</p> </td> <td width="132"> <p>Fast Correlation-based Filter (FCBF) and Correlation Feature Selection (CFS)</p> </td> <td width="95"> <p>91.2%</p> </td> </tr> </tbody> </table> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Ritika Sharma https://spast.org/techrep/article/view/219 Identification of the Quality of Tea Leaves by using Artificial Intelligence Techniques: A Review 2021-09-15T19:31:31+00:00 ira gaba ira.gaba@res.christuniversity.in B. Ramamurthy ramamurthy.b@christuniversity.in <p>This paper summarizes the outcome of the survey carried out on quality identification of tea leaves and, eventually, price prediction. Quality identification allows categorizing leaves into different grades, which helps buyers and sellers acquire the quality suited to their needs. Price prediction is an important feature that can bring certainty to pricing, so farmers can benefit more from reasonably good quality. Additionally, if leaf disease is identified at an initial stage, farmers can resolve the issues in time and save their crops. In the field of agriculture, identifying and predicting the quality of tea leaves has always been a research area. Various artificial intelligence techniques are hot topics in the field of recognition, and their effective combination can not only solve the problem but also enhance recognition accuracy. Therefore, there is an imminent need for a detailed survey compiling the techniques used for identification of different varieties of tea plants.</p> <p>In this research, we aim to propose a review of the various techniques which can be utilized for quality determination and price prediction.
The survey is hybrid in nature, combining different artificial intelligence techniques, which is a suitable approach for effective tea leaf identification. Further, for the classification of tea leaf images, various algorithms can be combined to obtain better results, and different algorithms can be used for feature extraction based on texture, colour, and shape.</p> <p>In summary, this paper explains the algorithms and artificial intelligence techniques in context and presents an overview of the research.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 ira gaba, Dr https://spast.org/techrep/article/view/2367 Comparative Analysis of Air quality Prediction using Artificial Intelligence Techniques 2021-10-15T15:32:44+00:00 ShreeNandhini P shreenandhini2016@gmail.com <p><strong>Abstract </strong></p> <p>Air pollution is the primary concern in most urban areas because of its notable impact on the economy and health across the world. The rise of industry and automobiles has made air pollution a worldwide, highly critical issue with a significant impact on human health. It causes health problems such as lung-related diseases, namely respiratory problems and cardiovascular disease, and increases cancer risk. Accurate monitoring of air quality is of great importance to daily human life, minimizing life threats through timely multidimensional warnings derived from non-linear data and feature extraction. Air quality prediction plays an essential role in this process. A prediction approach called Improved Sparse Auto encoder with Deep Learning (ISAE-DL) was developed with diverse neural networks, an improved sparse network and Long Short-Term Memory (LSTM) for retrieving spatio-temporal relations for better air quality prediction.
In ISAE-DL, spatially and temporally similar locations were collected by applying the k-Nearest Neighbor Dynamic Time Warping Distance (kNN-DTWD) method. kNN-DTWD selects exact candidate locations but neglects the time delay; accounting for long time delays is essential for long-term predictions. The experimental datasets are merged and transferred to ISAE for air quality prediction. Concentric circle-based clustering and terrain information are processed along with Particulate Matter (PM) and meteorological data and fed to an Artificial Neural Network (ANN), LSTM, and Convolutional Neural Network (CNN). In this paper, the proposed Enriched ISAE-DL (EISAE-DL) improves prediction accuracy by considering the long time delay based on locations, and it is compared against the existing Improved Sparse Autoencoder with Deep Learning (ISAE-DL). The experimental results show the effectiveness of the proposed EISAE-DL in terms of accuracy, precision, sensitivity, specificity, Area Under Curve (AUC), and Matthew's correlation coefficient (MCC).</p> <p>Air quality prediction has used machine learning methods such as linear regression [3], neural networks [4], etc. However, the accuracy of these air quality prediction systems is hampered by a complicated array of aspects, namely meteorological conditions, emissions, and traffic patterns. The variables in the air quality dataset are collected from sensors, and these data contain much noisy or anomalous information. Such information is not handled in the Spatio-Temporal Deep Neural Network (ST-DNN) [5], so its efficiency is degraded by anomalous data. An Improved Sparse Autoencoder with Deep Learning (ISAE-DL) [6] was developed to learn the distribution of data spanning many dimensions and to find the anomalous data. It was flexible in its ability to handle a variety of data types and distributions.
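<p>The kNN-DTWD neighbour selection described above can be sketched in pure Python as follows (an illustrative sketch only; the function names and scalar toy series are our assumptions, not taken from the paper):</p>

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two scalar time series."""
    n, m = len(a), len(b)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def knn_dtw(query, candidates, k=3):
    """Return the k candidate locations whose series are closest to `query`
    under DTW -- the neighbour-selection step of kNN-DTWD."""
    ranked = sorted(candidates, key=lambda c: dtw_distance(query, c[1]))
    return [name for name, _ in ranked[:k]]
```

<p>For example, <code>knn_dtw([1, 2, 3], [("A", [1, 2, 3]), ("B", [9, 9, 9])], k=1)</code> returns <code>["A"]</code>, since the DTW distance to an identical series is zero.</p>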
The features learned by the improved sparse autoencoder were fed into the networks to establish a complete system model in which this additional information is combined. The learning technique is used to connect continuous and discrete features of the data.</p> <p>In addition, spatially and temporally similar data are combined by the k-Nearest Neighbor Euclidean Distance (kNN-ED) and kNN Dynamic Time Warping Distance (kNN-DTWD) methods. kNN-DTWD chooses exact candidate locations but neglects the time delay; however, handling long delay intervals is essential for long-term predictions. So, this paper proposes a concentric circle-based distance partition method to handle long-time-delay locations in the forecast. In this approach, the Manhattan distance is applied for grouping spatially and temporally similar places. The first step of this algorithm is to divide the spatially and temporally similar locations into four regions around a centre, whose initial value is taken as 0. The remaining distances in spatially and temporally similar areas are calculated from this initial centre. Concentric circle-based clustering differs from Euclidean distance-based kNN clustering: here, a single centroid value is used to separate or group similar locations based on their characteristics.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 ShreeNandhini P https://spast.org/techrep/article/view/1729 HIGHLY DELICATE PIN ACCESSIBILITY FOR ATM USING HUMAN BODY COMMUNICATION 2021-09-30T09:14:46+00:00 Mayakannan Selvaraju kannanarchieves@gmail.com Sushmitha K sushmitha33@gmail.com M.Mohana mpr_0802@yahoo.co.in Srinarayani K k.srinarayani22@gmail.com Ramya T k.srinarayani22@gmail.com <p>Frauds in ATMs have increased over time.
With the evolution of mobile technology and smartphones, wireless devices can be used for PIN authentication and transmission, but they require network connectivity, and although they provide security, only an active user can perform successful transactions. Researchers are striving hard to eradicate such frauds by coming up with new technologies. In this paper, we use face and fingerprint recognition combined with RedTacton technology, which provides additional security and minimizes the chances of fraud in ATMs. Here, a body-based communication is performed.&nbsp;</p> <p>A system for secure transactions in an ATM machine is proposed. These processes reduce the processing time for identification and authentication. We use a technology called RedTacton, which has transmitter and receiver sections that communicate information through the human body. First, the client must place a fingerprint to access the ATM; when the client touches the transmitter module where the password is stored, the receiver automatically receives the information (password). In this system, hacking is not possible, security is high, and fraudulent transactions are reduced.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Mayakannan Selvaraju, Sushmitha K, M.Mohana, Srinarayani K, Ramya T https://spast.org/techrep/article/view/3037 Hybrid Multi-User Based Cloud Data Security for Medical Decision Learning Patterns 2021-11-06T11:09:48+00:00 Manish Gupta 1990uditmamodiya@gmail.com Ihtiram Raza Khan 1990uditmamodiya@gmail.com B Gomathy 1990uditmamodiya@gmail.com Dr.
Ansuman Samal 1990uditmamodiya@gmail.com <p>Machine learning plays a vital role in real-time cloud-based medical computing systems.&nbsp; However, most computing servers lack data security and recovery schemes across multiple virtual machines due to high computing cost and time.&nbsp; Also, these cloud-based medical applications rely on static security parameters for cloud data security. Cloud-based medical applications require multiple servers to store medical records or machine learning patterns for decision making. Due to high computational memory and time, these cloud systems require an efficient data security framework to provide strong data access control among multiple users. In this paper, a hybrid cloud data security framework is developed to improve data security for large machine learning patterns in a real-time cloud computing environment. The work is implemented in two phases: a data replication phase and a multi-user data access security phase. Initially, machine decision patterns are replicated among multiple servers for the data recovery phase. In the multi-access cloud data security framework, a hybrid multi-access key based data encryption and decryption model is applied to the large machine learning medical patterns for data recovery and security. Experimental results show that the present two-phase data recovery and security framework has better computational efficiency than conventional approaches on large medical decision patterns.</p> 2021-11-06T00:00:00+00:00 Copyright (c) 2021 Manish Gupta, Ihtiram Raza Khan, B Gomathy, Dr. Ansuman Samal https://spast.org/techrep/article/view/2440 An Outlook for Traffic Congestion using Tunneling Technology 2021-10-12T12:49:15+00:00 Dr. V.Sireesha v.sireesha@staff.vce.ac.in <p>Generally, we see a lot of traffic these days.
Traffic congestion is the major issue most citizens face despite the measures taken to reduce and control it, and it is one of the most challenging situations for engineers and planners. Rapid growth in vehicle ownership has established congestion as an inescapable fact of urban life. Many attempts have been made to develop congestion reduction indices for heavily motorized countries, yet even after metro rail systems were introduced in many metropolitan cities, little change has been observed. Because there is no unified definition of congestion, several parameters are used to measure it, such as speed, travel time, delay time, and level of service. These are evaluated to identify congestion and to choose an appropriate measure to reduce it. Therefore, in our paper we address this issue with a system that transports vehicles between cities through underground tunnels. In this regard, our project aims to develop a prototype to alleviate traffic congestion and enable rapid transit across densely populated areas.</p> 2021-10-12T00:00:00+00:00 Copyright (c) 2021 Dr. V.Sireesha https://spast.org/techrep/article/view/963 EARLY HEART STROKE DETECTION USING K-NN ALGORITHM 2021-09-17T12:44:28+00:00 KUMAR KANDUKURI kumar.k@bvrit.ac.in A. Sandhya sandhya.a@bvrit.ac.in <p>Diagnosis of heart disease has improved in recent years with the help of machine learning (ML). Early prediction of heart disease is possible by analyzing the important parameters with the help of data mining techniques. &nbsp;In this study, K-Nearest Neighbor (K-NN) is used for heart stroke classification with parameter weighting methods to improve accuracy, and 11 parameters were identified for training the K-NN algorithm. The results show that the K-NN algorithm (with 11 parameters) predicts early heart stroke more accurately.
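<p>A minimal sketch of distance-weighted K-NN classification of the kind described above (pure Python; the toy data, parameter weights, and function names are our own illustrative assumptions, not the study's 11 clinical parameters):</p>

```python
import math
from collections import defaultdict

def weighted_knn_predict(train, query, k=5, weights=None):
    """Distance-weighted K-NN vote. `train` holds (features, label) pairs;
    `weights` optionally scales each parameter, mimicking parameter weighting."""
    if weights is None:
        weights = [1.0] * len(query)
    def dist(x):
        return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, query)))
    neighbours = sorted(train, key=lambda t: dist(t[0]))[:k]
    votes = defaultdict(float)
    for x, label in neighbours:
        votes[label] += 1.0 / (dist(x) + 1e-9)  # closer neighbours vote harder
    return max(votes, key=votes.get)
```

<p>With two well-separated clusters, e.g. class 0 near the origin and class 1 near (5, 5), a query close to either cluster is assigned that cluster's label.</p>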
The proposed algorithm is found to outperform existing algorithms such as Random Forest and decision tree, with an average accuracy of 91.4%.</p> 2021-09-18T00:00:00+00:00 Copyright (c) 2021 KUMAR KANDUKURI, A. Sandhya https://spast.org/techrep/article/view/1080 Detection of PCOS in Ultrasound Images using Digital Imaging 2021-09-20T13:27:22+00:00 Sivakumar Rajagopal rsivakumar@vit.ac.in Ramyalakshmi K ramyalakshmi.k2013@vit.ac.in Prabadevi B prabadevi.b@vit.ac.in N Deepa deepa.rajesh@vit.ac.in <p>Many women today suffer from early puberty, improper ovulation, excessive weight gain, and other hormonal imbalances. One of the major causes is polycystic ovary syndrome (PCOS), a complex hormonal condition [1]. 'Polycystic' literally means 'multiple cysts'. Women with PCOS can have insulin resistance due to hereditary factors, insulin resistance due to being overweight (related to diet and inactivity), or a combination of both. Women with PCOS have elevated levels of insulin or of male hormones known as 'androgens', or both. The reason for this is unclear, but insulin resistance is believed to be the key issue driving this syndrome. Accurate diagnosis of PCOS is essential. Currently, specialists diagnose it by manually counting the number of follicular cysts in the ovary, which is used to judge whether PCOS exists or not. Ultrasound plays a vital role in examining various diseases [2]. The ultrasound image is taken as input, as shown in fig.1 A, and further processed using different filters to extract the values accurately. This paper aims to find the factor causing PCOS and to process the ultrasound images to visualize the cyst, as shown in fig.1.B.
Statistical data on PCOS and its symptoms were collected from women of different age groups.&nbsp; From the analysis, it is evident that the causes of PCOS vary considerably based on different factors [3]. About 92 real patient records were processed, and the results of various image filters and image segmentation techniques were compared. CART gave clearly better results than other classifier methods in detecting the causative factor. The experimental results show that testosterone is the main causative factor for the given input, but this factor may change based on food habits and physical activity. This study will help medical practitioners and researchers take the necessary precautions in treating affected women.</p> 2021-09-20T00:00:00+00:00 Copyright (c) 2021 Sivakumar Rajagopal, Ramyalakshmi K, Prabadevi B, N Deepa https://spast.org/techrep/article/view/1173 Prediction of water quality in Cauvery River by using PCA-SVM Method 2021-09-24T11:20:45+00:00 K.Kalaivanan kalaivanan.karuppannan@gmail.com Dr.J.Vellingiri vellingiri.J@vit.ac.in <p><strong><em>Abstract</em></strong></p> <p>Water pollution is a major source of infections, and people's health is being jeopardized as a consequence of its rise.&nbsp;Improved water quality predictions may contribute to a decrease in the number of deaths. Various prediction methods have been developed to forecast the quality of the available water. This article proposes a water quality prediction technique using the Principal Component Analysis (PCA) and Support Vector Machine (SVM) models. Starting with feature selection using Principal Component Analysis, a Support Vector Machine water quality prediction model is constructed, taking into account the sequence of water quality data. Finally, the proposed model is applied to an actual&nbsp;Cauvery River water quality dataset.
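<p>The PCA reduction step described above can be sketched with NumPy (an illustrative sketch under our own naming; the article's actual dataset, component count, and SVM configuration are not specified here):</p>

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the water-quality feature matrix X (samples x features)
    onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)                   # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T           # scores in the reduced space
```

<p>The reduced scores would then serve as inputs to the SVM prediction model; SVD returns components in decreasing order of explained variance, so the first column always carries the most variance.</p>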
The suggested procedure thus provides an assessment of the water quality of the Cauvery River.</p> 2021-09-24T00:00:00+00:00 Copyright (c) 2021 K.Kalaivanan, Dr.J.Vellingiri https://spast.org/techrep/article/view/1431 A Contemporary review of the Cognitive Radio technique for Spectrum Sensing 2021-09-29T09:44:55+00:00 Ashish Chouhan ashish.chouhan87@gmail.com <p>The rapid development of wireless technologies (WiFi, WiMAX, UMTS, etc.) has created such strong demand for spectral resources that current bandwidth (spectrum) management techniques have reached their limit, and the spectrum is not being used optimally. To overcome this problem, good spectrum management, and therefore more efficient use of the spectrum, is required. In this context, research has been carried out in the field of cognitive radio. Cognitive radio is a system that allows terminals to interact with their environment, so that free frequencies can be recognized and used, contributing to better spectral efficiency. Cognitive radio (CR) is an intelligent radio device that explores the radio environment, makes decisions, and can be tuned to optimize spectrum use or other criteria such as the efficiency of a communication system. This development, inevitable in the modern world of radio communication, allows increasingly autonomous communication devices to choose the best communication conditions.</p> <p>Allocation of frequencies in cognitive radio networks is an important phase for decreasing latency, increasing data throughput, coverage, capacity, and bandwidth, and optimizing frequency utilization. Quality of service is required for demanding real-time applications.
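<p>As one concrete instance of the family of allocation algorithms such a review surveys, a classical greedy graph-colouring formulation of channel assignment can be sketched as follows (our own illustrative example; the article does not name this specific algorithm):</p>

```python
def greedy_channel_allocation(interference, channels):
    """Assign each secondary user the first channel not already taken by an
    interfering neighbour. `interference` maps a user to its neighbours."""
    assignment = {}
    for user in sorted(interference):
        busy = {assignment[n] for n in interference[user] if n in assignment}
        free = [c for c in channels if c not in busy]
        assignment[user] = free[0] if free else None  # None = blocked user
    return assignment
```

<p>For a three-user chain A-B-C with two channels, A and C can reuse channel 1 while B, which interferes with both, takes channel 2.</p>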
This article provides an overview of spectrum allocation algorithms in cognitive radio networks and describes the most appropriate algorithms and their classification according to the current literature.</p> <p>This review is based on an analysis of recent publications with relevant references and attempts to provide a basis for the current literature on spectrum allocation algorithms in cognitive radio networks. The most important outcomes highlight the importance of adaptive spectrum allocation that considers user behavior, traffic load, spectrum properties, interference levels, the requirements of various frequency channels, and the type of application. Adaptive frameworks are therefore needed for efficient utilization of the available spectrum.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Ashish Chouhan https://spast.org/techrep/article/view/3605 ARTIFICIAL INTELLIGENCE FRAMEWORK BASED ON DECONVNET FOR SKIN CANCER DETECTION 2021-11-22T07:54:50+00:00 M. Sangeetha sangeetk@srmist.edu.in C. Karthikeyini ckarthiraja@gmail.com S. Vasundhara vasucall123@gmail.com D. Saravanan saranmds@gmail.com <p>Skin cancer is one of the most common cancers in many countries and among the most dangerous: it is lethal, and its incidence has risen dramatically over time. It has a high mortality rate, and earlier approaches to diagnosing melanoma, one of its most hazardous forms, from dermoscopic criteria are not up to the mark. Therefore, this research is carried out in three stages to detect melanoma efficiently. In the first stage, prior to image segmentation, noise elimination and pre-processing steps are carried out to remove noise and achieve better results.
The segmentation model focuses on separating the regions of interest from the background and collects the necessary information from neighboring pixels of the same category. Gaussian analytical patterns are used to handle the heterogeneous regions of dermoscopy images, whose mean and variance can be dynamic; this helps extract the required features efficiently and achieve accurate segmentation. The implemented gradient and feature adaptive contour (GFAC) model is noise-free and yields a smooth border. The efficiency of the implemented segmentation model for skin images is evaluated on the PH2 dataset, and the improved gradient and feature adaptive contour methodology is tested against multiple state-of-the-art methods on the final segmented image, with several parameters measured, including segmentation accuracy on various dermoscopic images. In the second stage, we implement a novel model known as the CSC-Mel Identification Model, which extracts color and shape features and applies a classifier to identify melanoma at an early stage. The CSC-Mel recognition method uses a cross-validation strategy and is tested on the PH2 dermoscopy image dataset. The advantage of the CSC-Mel Identification Method is demonstrated by comparing the proposed model's results with the best existing techniques across all features to effectively estimate its performance. In addition, the third stage of the work is based on texture classification and is proposed as the CSTC-Mel Identification Model. This method is robust and low-dimensional for texture description and consists of three phases: feature computation, feature encoding, and feature representation. In feature computation, we characterize textured regions and their similarity by applying first- and second-order Gaussian operations based on steerable filters.
Feature encoding then applies one or more levels of thresholding (binarization) to the computed features to encode the texture, and feature representation converts the discrete texture codes into a histogram. This CSTC-Mel Identification Model is tested on the PH2 dataset. Finally, it is shown that the proposed model produces better results than all recent and advanced techniques in terms of all relevant parameters.</p> 2021-11-22T00:00:00+00:00 Copyright (c) 2021 M. Sangeetha, C. Karthikeyini, S. Vasundhara, D. Saravanan https://spast.org/techrep/article/view/1284 Detection of Covid-19 using X-ray Images 2021-09-27T10:30:20+00:00 Vidya Shirodkar vidyashirodkar2@gmail.com Smitha G R smithagr@rvce.edu.in Smitha G R smithagr@rvce.edu.in <p>Covid-19 has caused a great loss of human life across the world. To fight the disease, it is necessary to detect it at the earliest stage. Due to the limited availability of testing kits, it was not easy to test each individual with RT-PCR; this technique also takes a long time and is not always accurate. Detecting Covid-19 infection from an individual's chest X-ray would help avoid the quarantine risk posed to other patients while results are awaited. Since X-ray machines are present and digitized in almost all healthcare systems, an individual's X-ray can be obtained quickly and without travel. We therefore propose the CheXNet model, deploying a CNN algorithm, which gives good accuracy in detecting infection from chest X-ray images. The resulting model outputs the patient's status: infected or uninfected.
The machine learning model CheXNet is built, trained, and evaluated, and an accuracy of 91.72% is observed.</p> <p>[1] Wang, L., Wong, A.: Covid-Net: A tailored deep convolutional neural network design for detection of Covid-19 cases from chest radiography images.</p> <p>[2] Huang, G., Liu, Z., van der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.</p> <p>[3] Wang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., Summers, R.M.: ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.</p> <p>[4] Y. Zhao and Y. Su, "Comparison of Three Prediction Models for the Incidence of Epidemic Diseases,"&nbsp;<em>2020 International Conference on Communications, Information System and Computer.</em></p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 Vidya Shirodkar, Smitha G R, Smitha G R https://spast.org/techrep/article/view/2193 The Eco-Design Tools Based for Life Cycle Sustainable Assessment 2021-10-01T16:15:35+00:00 moseed mohammed qutamee@gmail.com Awanis Romli awanis@ump.edu.my Rozlina Mohamed rozlina@ump.edu.my <p class="Abstractfont">Sustainability assessment is an emerging research area that provides opportunities to measure and evaluate sustainability achievement. Sustainability focuses on integrating economic, environmental, and social aspects in developing sustainable products. Sustainability aspects have become essential throughout the entire product life cycle, from material selection to end-of-life strategies, to contribute to product innovation, environmental protection, and public health.
Lack of knowledge is the critical challenge in assessing sustainability during product design: environmental goals, social requirements, and financial information in the manufacturing industry cannot easily be shared between phases of the product life cycle. In particular, reliable information related to manufacturing and sustainability assessment is often not available at the design stage, and there is a lack of integration between sustainability criteria and the product design process. The eco-design ontology is expected to contribute to the development of an efficient and practicable sustainability tool for product design. It also offers a complete view that addresses the lack of information sharing across the product life cycle and provides high-quality, comprehensive recommendations to support the design of sustainable products. The significance of this research is to facilitate sustainable engineering tools that forecast and solve problems in product design.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 moseed mohammed, Awanis Romli, Rozlina Mohamed https://spast.org/techrep/article/view/1581 WATER SOURCE DETECTION USING SATELLITE IMAGE PROCESSING 2021-09-29T18:57:59+00:00 Usha Kiruthika usha.kiruthika@gmail.com Kanaga Suba Raja.S Subramanian skanagasubaraja@gmail.com V.Balaji balaji.iniyan@gmail.com R.S.Kumar cmc@eec.srmrmp.edu.in <p>Water resources have a major impact on many day-to-day activities. Whether for drinking or for commercial purposes, gallons of water are used all over the world.&nbsp; To use the resource to the fullest, it should be planned properly with effective water management techniques. Satellite image processing is one of the most effective ways of detecting water on the earth's surface: from the images received from the satellite, water can be detected easily. However, due to minor effects, we may face difficulties in differentiating the characteristics of water.
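<p>One common way to separate water pixels from land in multispectral satellite imagery is a spectral index such as the Normalized Difference Water Index (NDWI); the abstract does not name a specific index, so the NumPy sketch below is purely illustrative:</p>

```python
import numpy as np

def ndwi(green, nir):
    """NDWI = (G - NIR) / (G + NIR). Open water reflects green light but
    absorbs near-infrared, so water pixels score near +1 and land below 0."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + 1e-9)

def water_mask(green, nir, threshold=0.0):
    """Boolean mask of likely water pixels (the threshold is scene-dependent)."""
    return ndwi(green, nir) > threshold
```

<p>Index-based masks like this are exactly where shadow confusion arises, since dark shadow pixels can mimic water's low near-infrared response.</p>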
For example, when there is a shadow of tall buildings on the water surface, it is difficult to read the image of the water body, as the water surface creates a mirror reflection. Hence, it is important to differentiate between water bodies and shadows. The main objective of this paper is to examine various approaches for extracting information from different satellite images using satellite image processing.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Usha Kiruthika, Kanaga Suba Raja.S Subramanian, V.Balaji, R.S.Kumar https://spast.org/techrep/article/view/841 IMPACT OF ARTIFICIAL INTELLIGENCE ON E-BANKING AND FINANCIAL TECHNOLOGY DEVELOPMENT 2021-09-15T19:35:51+00:00 Abhishek Thommandru abhishekthommandrumtp@gmail.com <p>When it comes to financial technology, artificial intelligence plays a key role, and machine learning, a subset of artificial intelligence (AI), is an important tool. Through machine learning, data structures can be better understood and adjusted based on customer information. While conventional computing techniques are still used in the IT industry, machine learning has its own unique set of advantages. Conventional techniques are collections of well-written instructions used by computer programs to describe or solve a problem, whereas computers can use machine learning approaches to prepare data inputs for statistical analysis and produce conclusions within a specified range.
Test data is modeled on computers using frameworks to automate decision making based on input data.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Abhishek Thommandru https://spast.org/techrep/article/view/1707 CONVOLUTIONAL NEURAL NETWORK BASED PLANT NUTRIENT DEFICIENCY DETECTION 2021-10-08T07:55:23+00:00 A.Abirami kannan.maya1986@gmail.com <p>Purpose: The objective of this paper is to detect nutrient deficiency, a condition in which a plant lacks the amounts of particular nutrients essential to its health. If this situation persists, it has an adverse effect on plant growth.</p> <p>Methodology: An automated nutrient deficiency detection system is proposed using a Convolutional Neural Network (CNN): after pre-processing, the input dataset images are processed through various CNN layers and the deficient nutrient is detected.</p> <p>Findings: The conventional manual method of detecting nutrient deficiency is tedious and does not detect individual deficiency classes. The existing system, which used a conventional pre-trained Inception CNN model to detect nutrient deficiency in the okra plant, also does not detect individual deficiency classes and has a lower accuracy of about 86%. In this paper, an automated nutrient deficiency detection system is proposed using a Convolutional Neural Network, which captures the spatial information in an image. Following training, the input dataset images are processed through various CNN layers and the deficient nutrient is detected. The proposed system can identify whether nitrogen, phosphorus, and potassium are present in proper amounts and achieves an accuracy as high as 95%. The final output is a user interface made very simple and reliable for farmers to understand.
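<p>The per-nutrient check behind such an interface can be sketched as a simple threshold comparison (the nutrient percentages and threshold values below are hypothetical, chosen only for illustration):</p>

```python
def check_deficiencies(levels, thresholds):
    """Flag every nutrient whose estimated leaf percentage falls below its
    deficiency threshold."""
    return [n for n, v in levels.items() if v < thresholds[n]]
```

<p>For example, <code>check_deficiencies({"N": 2.1, "P": 0.1, "K": 1.5}, {"N": 2.5, "P": 0.2, "K": 1.0})</code> returns <code>["N", "P"]</code>.</p>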
Just by providing the respective input image, farmers learn which nutrient deficiency the plant is suffering from and the remedial measures they can take to overcome it. The system also displays the threshold percentage for each nutrient deficiency, i.e., the percentage below which the plant starts suffering from that particular deficiency. This system aims to serve as an effective tool for nutrient deficiency detection.</p> <p>Sufficient intake of water, sunlight, and nutrients is essential for plants in any agricultural system. The required quantities of macronutrients and micronutrients vary from plant to plant; macronutrients are needed in larger amounts than micronutrients for tissue and cell development. Nitrogen (N), Phosphorus (P), and Potassium (K) are the main macronutrients. To diagnose the condition of plants from images, a deep convolutional neural network is used. The process starts by resizing and pre-processing the input images, after which the datasets are trained. The dataset represents the color variation in the plant leaves, which helps identify the deficient nutrient. Further CNN layers are created, and thus the deficient nutrient is classified.</p> <p>Originality/value: In this study, the proposed system can identify whether nitrogen, phosphorus, and potassium are present in proper amounts or deficient, and achieves an accuracy as high as 95%.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 A.Abirami https://spast.org/techrep/article/view/274 Quantum Cursed Fingerprinting (QCF) 2021-09-11T07:56:28+00:00 PRANJAL SHARMA pranjaldub@gmail.com <p><em><span style="font-weight: 400;">Quantum communication has shown astonishingly fast development, taking advantage of the quantum computation power being developed by technology giants like IBM and Google.
The security issues and communication gaps caused by these quantum computers must be curbed by quantum computing methods. There are situations where the security of data is of less concern than knowing whether the data has been tampered with or not. The fingerprinting technique provides such knowledge by creating fingerprints of important data and keeping a record of any tampering with it. Quantum Fingerprinting (</span></em><strong><em>QF</em></strong><em><span style="font-weight: 400;">) follows a procedure of creating fingerprints of data and uses a referee that tells whether the fingerprint strings match or not. Faced with fingerprinting's dilemma of compromising either security or speed, this paper focuses on removing the referee while adding security. It uses the power of Zero-Knowledge Proof (</span></em><strong><em>ZKP</em></strong><em><span style="font-weight: 400;">) to improve security as well as to remove the use of a referee in QF. A Zero-Knowledge Proof is a protocol by which one party can prove to another party that they know a value “X” without conveying any information apart from the fact that they know “X”.
Thus, embedding the "curse" of the Zero-Knowledge Protocol into quantum fingerprinting gives Quantum Cursed Fingerprinting (</span></em><strong><em>QCF</em></strong><em><span style="font-weight: 400;">) both the power of added security and the removal of the referee for matching fingerprints.</span></em></p> <p>&nbsp;</p> 2021-09-11T00:00:00+00:00 Copyright (c) 2021 PRANJAL SHARMA https://spast.org/techrep/article/view/1861 Smart Chronic Disease Consultation using Machine Learning 2021-10-09T14:07:33+00:00 Mayakannan Selvaraju kannanarchieves@gmail.com M.Aravindan aravind.eie@rmd.ac.in C.Shilaja shilaja.research@gmail.com G.Nalinashini gns.eie@rmd.ac.in <p>In today's fast-moving world of smart systems, the most attractive aspect of embedded development is applications that ordinary people can use for everyday prediction purposes, which help them greatly. An intelligent clinical assistant framework is implemented here with embedded systems and the MATLAB IDE for simulation. A powerful integrative MATLAB model is designed and implemented for identifying common chronic diseases and the symptoms present in patients. The proposed framework acts as a pre-screening application, so patients can self-analyze and obtain medication suggestions for commonly occurring chronic diseases from their live symptoms.
The proposed model focuses on implementing a Weighted Bias Network that runs and compares several iterative loops to predict the probable diseases and their symptoms.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Mayakannan Selvaraju, M.Aravindan , C.Shilaja, G.Nalinashini https://spast.org/techrep/article/view/389 Using Static and Dynamic Malware features to perform Malware Ascription 2021-09-14T09:00:57+00:00 Jashanpreet Singh Sraw udit.mamodiya@poornima.org <p><span style="font-weight: 400;">Malware ascription is a relatively unexplored area, and it is rather difficult to attribute malware and detect authorship. In this paper, we employ various static and dynamic features of malicious executables to classify malware by family. We leverage Cuckoo Sandbox and machine learning to make progress in this research. After analysis, classification is performed using various deep learning and machine learning algorithms. Using the features gathered from VirusTotal (static) and Cuckoo (dynamic) reports, we trained and tested Naive Bayes and Support Vector Machine classifiers. In a follow-up experiment, we converted our malware into grayscale and coloured images to feed into a Convolutional Neural Network (CNN) for classification. For each classifier, we tuned the hyper-parameters using exhaustive search methods. 
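The exhaustive hyper-parameter search mentioned above can be sketched with scikit-learn's grid search; the synthetic feature matrix here is a hypothetical stand-in for the real VirusTotal/Cuckoo feature vectors, and the parameter grid is illustrative, not the authors' actual settings:

```python
# Hedged sketch of an exhaustive (grid) hyper-parameter search for an SVM
# malware-family classifier; features and grid values are assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)   # 4 stand-in "families"
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
search = GridSearchCV(SVC(), param_grid, cv=3)            # exhaustive search, 3-fold CV
search.fit(X, y)
print(search.best_params_)
```

Every grid point is cross-validated, so the cost grows with the product of the per-parameter value counts.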
Our reports can be extremely useful in malware ascription.</span></p> <p><span style="font-weight: 400;">Classification using VirusTotal&nbsp; Features (95,000 Samples)</span></p> <table> <tbody> <tr> <td> <p><strong>Accuracy</strong></p> </td> <td> <p><strong>Precision</strong></p> </td> <td> <p><strong>Recall</strong></p> </td> <td> <p><strong>F-score</strong></p> </td> <td> <p><strong>Time (s)</strong></p> </td> </tr> <tr> <td> <p><span style="font-weight: 400;">84.99</span></p> </td> <td> <p><span style="font-weight: 400;">83.98</span></p> </td> <td> <p><span style="font-weight: 400;">84.99</span></p> </td> <td> <p><span style="font-weight: 400;">83.72</span></p> </td> <td> <p><span style="font-weight: 400;">3341</span></p> </td> </tr> </tbody> </table> <p><span style="font-weight: 400;">&nbsp;</span></p> <p><span style="font-weight: 400;">Classification using VirusTotal and Cuckoo Features (1,936 Samples)</span></p> <table> <tbody> <tr> <td> <p><strong>Accuracy</strong></p> </td> <td> <p><strong>Precision</strong></p> </td> <td> <p><strong>Recall</strong></p> </td> <td> <p><strong>F-score</strong></p> </td> <td> <p><strong>Time (s)</strong></p> </td> </tr> <tr> <td> <p><span style="font-weight: 400;">67.98</span></p> </td> <td> <p><span style="font-weight: 400;">69.79</span></p> </td> <td> <p><span style="font-weight: 400;">67.98</span></p> </td> <td> <p><span style="font-weight: 400;">66.66</span></p> </td> <td> <p><span style="font-weight: 400;">1946</span></p> </td> </tr> </tbody> </table> <p>&nbsp;</p> 2021-09-15T00:00:00+00:00 Copyright (c) 2021 Udit Mamodiya https://spast.org/techrep/article/view/2570 Sarcasm Detection: A Contemporary Research Affirmation of Recent Literature 2021-10-17T12:23:09+00:00 Dr. 
Nagaratna Hegde nagaratnaph@staff.vce.ac.in S.Fouzia Sayeedunnisa fouzia.qadri@gmail.com Khaleel Ur Rahman Khan khaleelrkhan@aceec.ac.in <p>&nbsp;Sarcasm identification is a niche research area in NLP, a specific case of opinion mining where the focal point is the identification of sarcasm rather than sentiment extraction. Sarcasm is a specific type of opinion expressed as a negative feeling, in the form of anger, frustration or derision, veiled by intensely positive words in the text. Detection of sarcasm, an elusive problem for machines, has gained wide popularity in the research community in recent years. Accurate identification and analysis of sarcasm improves the performance of sentiment identification models. This manuscript details various sarcasm detection approaches, the models and features used, and the issues, challenges and further research scope.</p> <p>The various machine learning and deep learning models used to identify sarcasm are detailed in this manuscript.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Dr. Nagaratna Hegde, S.Fouzia Sayeedunnisa , Khaleel Ur Rahman Khan https://spast.org/techrep/article/view/2617 An Efficient AES Algorithm for Cryptography using VLSI 2021-10-17T11:18:23+00:00 T. Ramya ramyat1@srmist.edu.in KarthikRaju kannanarchieves@gmail.com Ravi J kannanarchieves@gmail.com Deepak Verma kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p>Purpose: The objective of this paper is to examine the encryption process used in information security. Information security has become a crucial component of data communication systems, and encryption is a fundamental tool for it. Organizations of all kinds rely on these encryption algorithms to protect their data. Initially, the DES (Data&nbsp;Encryption&nbsp;Standard) algorithm was used; it is a symmetric block cipher, meaning the same key is used for both encryption and decryption. 
The main disadvantage of that algorithm is that it is easily crackable and vulnerable to attacks. Algorithms comparable to DES, such as Triple DES, demand huge memory space and cannot be implemented efficiently on a hardware platform. The AES (Advanced Encryption Standard) algorithm has replaced DES in a number of ways: AES is an efficient cryptographic algorithm which supports various key lengths and outperforms DES. By utilizing Field Programmable Gate Arrays (FPGAs) we can implement the design on a hardware platform owing to their reconfigurable nature, low cost and fast time to market. The Rijndael cryptographic algorithm is a block cipher used to encrypt/decrypt digital data and is capable of using cryptographic keys of 128, 192 and 256 bits.</p> <p>Methodology: This method is proposed with the concept centred on key expansion in a dual-stage design. The dual-stage scheme has been used with the aim of finding out the effect of the number of round blocks on power utilization. High-speed designs aim to raise throughput by using unrolling and pipelining. As our design is capable of finishing the expansion of 128-bit keys internally, the control logic to execute simultaneous encryptions adds a further round block to the power consumers because of the dynamic design. In the prevailing DOR schemes, finishing a single encryption takes eleven clock cycles, and the output data bus is used for sending the encrypted data.</p> <p>Findings: The findings present the simulation results for 128-bit input test vectors. The 128-bit input data is encrypted using a round key at each round and generates the secret code. The simulation is performed using Xilinx software and it is tested using test-bench code. The simulation result shows the conversion of 128-bit plain text to cipher text. The findings also show the decrypted output of the 128-bit cipher text, which follows the reverse process of encryption. 
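As a back-of-the-envelope illustration of how the eleven-cycle-per-block figure translates into throughput (the 100 MHz clock frequency below is an assumed example, not a figure from the paper):

```python
# Rough throughput estimate for an iterative AES core that finishes one
# 128-bit block every eleven clock cycles (clock frequency is hypothetical).
clock_hz = 100e6                 # assumed 100 MHz FPGA clock
cycles_per_block = 11            # eleven cycles per block, as stated above
bits_per_block = 128
throughput_mbps = clock_hz * bits_per_block / cycles_per_block / 1e6
print(f"{throughput_mbps:.1f} Mbit/s")   # about 1163.6 Mbit/s at 100 MHz
```

Unrolling or pipelining the rounds, as the high-speed designs do, multiplies this figure by reducing the effective cycles per block.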
At each round the round key is separated from the data, and at the last round the original plain text is recovered from the cipher text. Compared to DES, the AES algorithm increases throughput and reduces delay.</p> <p>Originality/value: In this paper, the empirical results show the basic details required to implement the AES encryption algorithm. The implementation requirements, including the primary inputs and outputs of the design, power notation and conventions, are explained. To understand the proper flow, the design's general implementation flow has been discussed.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 T. Ramya, KarthikRaju , Ravi J, Deepak Verma, Mayakannan Selvaraju https://spast.org/techrep/article/view/1301 An Android Application For An Efficient Method of Tracking and Managing Pharmacies 2021-09-27T20:07:52+00:00 Rohan Pradhan crrohan6@gmail.com Akarsh Kumar Singh thakurak9415@gmail.com Kshitij Saxena kshitijs0310@gmail.com Rohit Saxena 8182rohitsaxena@gmail.com <p><span style="font-weight: 400;">With the increase in research and development in the medical sector, the number of diseases being diagnosed among people of every age group is increasing significantly. As a result, prescribed medications for their prevention as well as cure have also increased exponentially, and therefore the number of pharmacies needed for their supply has grown drastically. There is a high probability that the required medicines or medical apparatus will not be available in a particular pharmacy. Because of this, a customer in urgent need of these medicines may end up wasting a lot of time searching for a pharmacy that has the required medicines in the required quantity. This could be very dangerous for a customer in a critical situation. 
Therefore, this paper focuses on creating an Android application in which users can search for their required medicines in the required quantity, see all the nearby pharmacies that have them in stock, and then easily navigate to the chosen pharmacy, which solves the problem mentioned above. Apart from the customers/users, pharmacists will also benefit from the app, as they can keep track of all the medicines and medical apparatus in their shop along with their quantities and can update this data regularly in the application. The application is named “PharmaSpotter” as it helps customers spot the pharmacies near them quickly.</span></p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 Rohan Pradhan, Akarsh Kumar Singh, kshitijs, rohit https://spast.org/techrep/article/view/745 Indian Monument Recognition using Deep Learning 2021-09-15T19:16:00+00:00 Manasvi Trivedi manasvitrivedi@gmail.com Saloni Agrawal sagrawal168@gmail.com Abhinav Kumar abhinavrahul9801@gmail.com Sanjeev Kumar Thakur kumarrmsanjeev101@gmail.com Neha Gautam g.neha@jainuniversity.ac.in <p>A monument is a physical structure built or created in dedication to a person, event or purpose. Monuments become relevant to a group as part of its history or culture due to their artistic, historical, political, technical, or architectural importance. Owing to the value monuments hold for the region they belong to, their preservation and documentation are important [1]. The easiest way to obtain information about something is to use the object of interest as a query. Machine learning and deep learning are advancing, spurring progress in image recognition and enabling computer vision to reach new heights. There is increased coverage of landmarks and monuments of the world, bringing about a need to connect the physical presence of a structure to its digital presence. 
Thus, the automatic identification of monuments comes into play [2].</p> <p>Monuments, being 3D objects, pose an issue in terms of differences in how monument images are perceived. There can be immense variation because of the viewpoint from which an image is taken; the same monument captured from multiple angles can appear quite different [3]. Monument recognition faces the additional problem of structural resemblance between monuments [4]. Various classifiers that have been applied to recognize landmarks have not been applied to recognize monuments. A few studies do recognize monuments; however, their performance can be further improved. The objective of this study is to classify monument images into their respective labels and attain the highest accuracy using various deep learning architectures. The performance of the deep learning architectures for automated prediction of monuments is analysed and further improved.</p> <p>Various types of deep learning architectures have been used to recognize monuments and achieve good performance, so the five best-performing architectures, namely InceptionV3 [2], MobileNet [5], ResNet50 [1], VGG16 [4] and AlexNet [6], were applied to recognize monuments in this study. Apart from classification, pre-processing is an important step that converts a dataset into a proper form so that the performance of the system can be increased. Since monuments are three-dimensional structures, there is a difference in perception in the images of the monuments, which leads to lower accuracy and efficiency in recognizing a particular structure [3]. Data augmentation is applied to reduce this variation in viewpoint [5]; it is the method used to create different views of existing images. 
Augmentation techniques in the form of Random Flip, Random Rotation, Random Translation, Random Zoom, Random Contrast, Random Hue, Random Brightness and Random Saturation are applied.</p> <p>To build any recognition system, the dataset is of prime importance. The two available standard datasets of Indian monuments are the ‘Indian Monument Recognition Dataset’ [3] and the ‘Qutub Complex Monuments' Images Dataset’ [4], which together contain 99 classes and 6151 images. The results of the applied architectures are analysed to identify the technique with the highest performance. InceptionV3, MobileNet, ResNet50, AlexNet and VGG16 were used to predict the monument and achieved 97.31%, 93.73%, 86.47%, 68.88% and 61.33% accuracies, respectively (Table 1). The highest accuracy achieved was 97.31% using InceptionV3, which was an improvement over previous studies.</p> <p>A monument recognition system is important because people belonging to different cultures, castes, and religions take pride in their culturally rich heritage in the form of monuments. Monuments represent great achievements&nbsp;present&nbsp;in the form of art and architecture, and also form the backbone of socio-economic growth for the surrounding region through tourism [3]. There is a need to digitally recognize and archive monuments as important historical and cultural heritage sites [2]. Monument images should be identified and labelled, as this will help preserve the culture of people belonging to different regions.</p> <p>Table 1. Result with Augmentation</p> <table style="height: 264px;" width="482"> <tbody> <tr> <td width="161"> <p><strong>Sl. 
No.</strong></p> </td> <td width="67"> <p><strong>Architecture</strong></p> </td> <td width="377"> <p><strong>Testing Accuracy</strong></p> </td> </tr> <tr> <td width="161"> <p><strong>1</strong></p> </td> <td width="67"> <p>InceptionV3</p> </td> <td width="377"> <p>97.31%</p> </td> </tr> <tr> <td width="161"> <p><strong>2</strong></p> </td> <td width="67"> <p>MobileNet</p> </td> <td width="377"> <p>94.51%</p> </td> </tr> <tr> <td width="161"> <p><strong>3</strong></p> </td> <td width="67"> <p>ResNet50</p> </td> <td width="377"> <p>86.47%</p> </td> </tr> <tr> <td width="161"> <p><strong>4</strong></p> </td> <td width="67"> <p>AlexNet</p> </td> <td width="377"> <p>66.88%</p> </td> </tr> <tr> <td width="161"> <p><strong>5</strong></p> </td> <td width="67"> <p>VGG16</p> </td> <td width="377"> <p>61.33%</p> </td> </tr> </tbody> </table> 2021-09-18T00:00:00+00:00 Copyright (c) 2021 Manasvi Trivedi, Saloni Agrawal, Abhinav Kumar, Sanjeev Kumar Thakur, Neha Gautam https://spast.org/techrep/article/view/84 An overview of Spoof Detection in ASV Systems 2021-07-24T12:09:55+00:00 Swathika Swathika Ravindran swathi19cs@gmail.com Geetha geethakab@gmail.com <p>In recent years, voice-based applications have been used broadly for speaker recognition. Presently, there is a wide focus on the analysis of spoofing and anti-spoofing for Automatic Speaker Verification (ASV) systems. Recent advances in ASV systems have raised interest in securing these voice biometric systems for real-world applications. This paper surveys the literature on spoofing detection, novel acoustic feature representations, deep learning, end-to-end systems, etc. 
Moreover, it also summarises previous studies of spoofing attacks, with emphasis on speech synthesis (SS), voice conversion (VC), and replay, alongside recent efforts to develop countermeasures for spoofed speech detection and speech sound disorder tasks.</p> 2021-08-22T00:00:00+00:00 Copyright (c) 2021 Swathika Swathika Ravindran, Geetha https://spast.org/techrep/article/view/1682 Effective Authentication Scheme for Healthcare systems in Wireless Sensor Networks 2021-09-30T07:40:27+00:00 Dr. V.Sireesha v.sireesha@staff.vce.ac.in J. Vinith jvinith2020@gmail.com <p>The quality of healthcare systems is improved with the use of wireless sensor networks. Even in hostile, unattended environments, these sensor networks may handle sensitive data; therefore, the design of a healthcare system must address security concerns. Among the various challenges raised in sensor networks, this work focuses on Wireless Sensor Network (WSN) security. An efficient and secure user authentication scheme for a WSN-based e-healthcare application is proposed in this paper. The security levels provided by the Elliptic Curve Cryptography (ECC) algorithm in a WSN safeguard data confidentiality. In the proposed healthcare system, a secure communication mechanism providing entity authentication and confidentiality is constructed using ECC among the intelligent body sensors, the mobile gateway and the healthcare database management system. By applying the proposed user authentication scheme on Raspberry Pi boards, the healthcare system's performance is evaluated, together with an analysis of the ECC algorithm's performance against attacks in a WSN.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Dr. V.Sireesha, J. Vinith https://spast.org/techrep/article/view/1726 Block Chain and Edge Computing Base Design Secured Framework for Tender 2021-10-08T07:54:15+00:00 Mayakannan Selvaraju kannanarchieves@gmail.com Karthic R.M. 
kannanarchieves@gmail.com S.RanjithKumar kannanarchieves@gmail.com N.Nandhini kannanarchieves@gmail.com P.L.Kaliappan kannanarchieves@gmail.com <p><strong>Purpose:</strong> The objective of this paper is to create a secure website for government tenders using blockchain technology.</p> <p><strong>Methodology:</strong> Blockchain is implemented for the login forms, tender information, agents’ information and users’ information, and it records every action performed by a user or agent, so it is difficult to hack or cheat the system. Timestamping and blockchain are applied to all histories.</p> <p><strong>Findings: </strong>The motive of the tender project is to provide a more secure framework for the government and to avoid corruption and security issues; at present the tender process is fully online, and the public has no insight into the tender or what happens inside the process. To avoid these problems, we have created a new website with five modules: admin, agent, public, blockchain, and cryptographic techniques. With all of these, we implement the government tender website according to the rules and regulations given by the government.</p> <p><strong>Originality/value: </strong>AES algorithm: to generate ciphertext, the Rijndael cipher employs a substitution-permutation (SP) network with many rounds; the number of rounds is determined by the key size. &nbsp;</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Mayakannan Selvaraju, Karthic R.M., S.RanjithKumar, N.Nandhini, P.L.Kaliappan https://spast.org/techrep/article/view/891 Sentiment Analysis on Wearing Mask during COVID19 pandemic in India: A case study on Twitter 2021-09-15T19:07:45+00:00 Kirti Bala BAHEKAR kkirti06@gmail.com <p>The ongoing pandemic due to the coronavirus disease called COVID-19 is an issue of large global attention. 
Initiated in Wuhan, China, the rapid spread of coronavirus disease (COVID-19) has caused a public health crisis regionally and internationally [1]. The Centers for Disease Control and Prevention (CDC) activated its Emergency Operations Center (EOC), and the World Health Organization (WHO) published its first report on the situation of coronavirus disease 2019 (COVID-19) on January 20, 2020 [2].</p> <p>The virus can spread from an infected person’s mouth or nose in small liquid particles when they cough, sneeze, speak, sing or breathe. These particles range from larger respiratory droplets to smaller aerosols. People may also become infected by touching surfaces that have been contaminated by the virus and then touching their eyes, nose or mouth without cleaning their hands. The World Health Organisation (WHO) [3] guides people to take precautions to prevent the disease, such as:</p> <ul> <li>Keep social distance: stay at least 1 metre away from others, even if they don’t appear to be sick, since people can carry the virus without having symptoms.</li> <li>Wear a mask:&nbsp; wear a well-fitting three-layer mask, especially when you can’t physically distance, or when you’re indoors. Clean your hands before putting on and taking off a mask.</li> <li>Avoid crowded places and poorly ventilated indoor locations, and avoid prolonged contact with others. Spend more time outdoors than indoors.</li> <li>Ventilation is important: open windows when indoors to increase the amount of outdoor air.</li> <li>Frequently clean your hands with soap and water, or an alcohol-based hand rub.&nbsp;</li> </ul> <p>A number of vaccines are now available for the prevention of COVID-19, and almost every country is vaccinating its adult citizens. There are several safe and effective vaccines that prevent people from getting seriously ill or dying from COVID-19. Even though most of the urban population is vaccinated, the WHO still instructs people to wear masks in public places. 
So, in this paper, sentiment analysis is performed to study public views on wearing masks even after getting vaccinated. Twitter is a microblogging and social networking service on which users post their views and reactions and interact on social issues [2][3]. The motive behind this study is to analyse tweets by Indian citizens about the problems of wearing masks during the COVID-19 pandemic.</p> <p>&nbsp;The data included tweets collected from Twitter over a period of time. Data analysis was conducted with Artificial Immune System (AIS) algorithms [4], a newer learning approach for text analysis, and their performance was compared with other models such as Naïve Bayes (NB), Logistic Regression (LR), Support Vector Machines (SVM), etc. Accuracy for every sentiment was separately calculated for the AIS algorithms AIRS1, AIRS2 and Immunos [5], and for the other three models. Our findings present the high prevalence of keywords and associated terms in Indian tweets during COVID-19. Further, this work clarifies public opinion on pandemics, wearing masks, and public health care. The outcomes will help lead public health authorities to better society-related decisions that affect public health policies and issues.</p> 2021-09-18T00:00:00+00:00 Copyright (c) 2021 Kirti Bala BAHEKAR https://spast.org/techrep/article/view/3035 Deep Learning approach for Radical Sound Valuation of Fetal Weight 2021-11-06T11:29:17+00:00 M. Ramkumar 1990uditmamodiya@gmail.com Ms. Ranjeeta Yadav 1990uditmamodiya@gmail.com Prof.(Dr.) Sachin Yadav 1990uditmamodiya@gmail.com S M Ramesh 1990uditmamodiya@gmail.com <p>It is a very complicated task to identify and interpret the standard output plane of the fetus in second-trimester 2D ultrasound evaluation, which requires long training. In addition to directing the probe to the correct area, it is difficult for a technician to distinguish the relevant structure in the picture. 
Automatic image processing allows the device to assist operators in solving these problems. In this study we describe a convolutional neural network-based methodology for recognising thirteen fetal standard views in freehand 2D ultrasound data and for localising fetal structures using a bounding box. A significant contribution is that, using only image-level labels, the network learns to localise the target anatomy. The design aims to work in real time while providing ideal benefits for the localisation task. We provide results for real-time scans, retrieve frames from saved images, and localise structures on a very large test data set that includes images and video recordings of complete clinical anomaly examinations. We found that the proposed method achieved 91% accuracy for frame retrieval and 82% accuracy in the localisation task.</p> 2021-11-06T00:00:00+00:00 Copyright (c) 2021 M. Ramkumar, Ms. Ranjeeta Yadav, Prof.(Dr.) Sachin Yadav, S M Ramesh https://spast.org/techrep/article/view/2400 Locking Device for Physical Protection of Electronic Devices 2021-10-11T10:21:16+00:00 Rajesh Kumar Kaushal rajesh.kaushal@chitkara.edu.in Naveen Kumar naveen.sharma@chitkara.edu.in Shilpi Singhal shilpi.singhal@chitkara.edu.in Simranjeet Singh simranjeet.singh@chitkara.edu.in Harmaninderjit Singh harmaninder.jit@chitkara.edu.in <p>The physical security of electronic devices like laptops and tablets is vital, as these devices carry confidential data which cannot be compromised. Data security can be provided in a number of innovative ways, and the first line of defence is physical security. As far as physical security is concerned, there has been very little development to date [1][2]. This research focuses on the physical security of data storage devices. 
To provide physical security to laptop devices, this work proposes a locking assembly for various electronic devices such as laptops and tablets [3][4]. The assembly includes a frame adapted to be coupled to the base of the electronic device. This locking assembly ensures the safety of devices against possible theft or unauthorized usage. The design of this locking assembly presents a cost-effective, readily available, simple-to-assemble, easy-to-manufacture and durable solution to theft problems. A detailed analysis of the proposed design has been conducted, providing insight into the efficiency and usability of the locking mechanism. The mass properties of the model have been computed after assigning the standard material (aluminium) to the various components. The mass of the locking assembly has been estimated as 0.863 kg and the volume as 0.002 cubic metres.</p> 2021-10-11T00:00:00+00:00 Copyright (c) 2021 Rajesh Kumar Kaushal, Naveen Kumar, Shilpi Singhal, Simranjeet Singh, Harmaninderjit Singh https://spast.org/techrep/article/view/924 FPGA Implementation of Secure Block Creation Algorithm for Blockchain Technology 2021-09-16T12:10:31+00:00 Vijayakumar vijayrgcet@gmail.com Anusha Kulkarni anushakulkarni30@gmail.com Prachi Thakur prachit.1998@gmail.com Rajashree R rajashree.ece@gmail.com <p>Blockchain technology is essential for the secure storage and authentication of data. In this modern day of digitization, it is necessary to protect data from misuse, falling into the wrong hands, and exploitation, where this data may range from important user credentials to bank account information to the logs of a company. Traditional methods of securing devices using cryptographic algorithms include hashing functions like SHA-0 and SHA-1, but these have limitations. 
Limitations of these methods include excessive computational time, high power requirements, collision attacks, insufficient security, poor scalability, and the possibility of backtracking to retrieve the original message. To overcome these limitations, an FPGA implementation of blocks for blockchain technology using RSA and SHA-256 together has been proposed, providing encryption of the data before authentication. The proposed idea is to build an encrypted blockchain for the safe and secure storage of data through the use of FPGAs. This is advantageous as encryption allows us to encode data and send it securely to the receiver via the use of keys. Security, resistance to side-channel attacks and collisions, larger key sizes, the infeasibility of factorizing the prime numbers to retrieve the original message, and exponential calculations are some of the advantages of RSA encoding methods. Since blockchain allows us to store data and exchange it in a peer-to-peer network without giving any third party the right to modify or alter the data, it guarantees users data storage and security, privacy, confidentiality, authenticity, and protection. Synthesis and implementation of the encrypted block have been compared and analyzed on Virtex-4, Virtex-5 and Spartan-6 FPGA boards. The encrypted data is sent to a hashing function which generates a hash value. The combination of RSA and SHA allows us to create a block on an FPGA, which when combined with other blocks establishes a blockchain. Based on the resources used, such as slice registers and LUT-FF pairs, the most efficient FPGA, Virtex-5, is chosen as it uses less memory and area in the architecture. Complete security is achieved as the hashing process is irreversible and backtracking of the data is not possible. 
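The block-creation step described here can be sketched in software; this is a minimal SHA-256 hash-chaining illustration (the RSA encryption stage and the FPGA specifics are omitted, and the field names are hypothetical):

```python
# Minimal hash-chained block sketch: each block's hash commits to its data
# and to the previous block's hash, so tampering breaks the chain.
import hashlib
import json

def make_block(data, prev_hash):
    """Build a block and compute its SHA-256 hash over data + predecessor link."""
    block = {"data": data, "prev": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block("genesis", "0" * 64)
nxt = make_block("encrypted payload", genesis["hash"])
# Any change to `genesis` would change its hash and break the link held in `nxt`.
print(nxt["prev"] == genesis["hash"])
```

Because SHA-256 is a one-way function, recomputing a consistent chain after tampering requires rehashing every subsequent block, which is what makes backtracking infeasible.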
The previously identified problems of strengthening security, backtracking, excessive memory usage and collision attacks are addressed and solved.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Vijayakumar, Anusha Kulkarni, Prachi Thakur, Rajashree R https://spast.org/techrep/article/view/2475 Eyeball Movement Cursor Control Using OpenCV 2021-10-13T12:38:41+00:00 S Kanaga Suba Raja kanagasubaraja.s@eec.srmrmp.edu.in <p><strong>Abstract:</strong></p> <p><strong>Purpose:</strong></p> <p><strong>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </strong>The objective of the system is to give control to physically disabled persons through their eyeball movement. The basic need for such a system is that it can provide the assistance that a third person would give to physically disabled people, and tracking the eyeball movement makes this possible.</p> <p><strong>Methodology:</strong></p> <p><strong>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </strong>Firstly, the input video is captured using either an IP camera or an external USB camera, and face detection then takes place on this stream. A detection window of the target size is slid across the input image, and each subdivision of the image is evaluated as a whole. The resulting discriminant value is then compared against a learned threshold that eliminates non-face objects. To obtain accurate results, the input frame is processed by the Haar cascade algorithm to detect the faces present in the given image. After detection, the exact face is bounded by a box to show the detected region. The eyes are then identified within the detected face: using facial landmarks, the eye is detected, and once the eye markings are obtained, the eye is tracked. 
Tracking is achieved by fixing points on the eye; based on the ratio between the eye landmark points, an alert is raised when it goes below the threshold value.</p> <p><strong>Findings:</strong></p> <p><strong>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </strong>The algorithm used to detect the face is the Haar cascade. Haar-like features&nbsp;are&nbsp;digital image&nbsp;features&nbsp;used in&nbsp;object recognition; they were used in the first real-time face detector. The input frame is processed by the Haar cascade algorithm to detect the faces present in the given image. The eye is detected using facial landmarks and, once the eye markings are obtained, it is tracked by fixing points on the eye. Based on the ratio between the eye landmark points, an alert is raised when it falls below the threshold value. Eye region detection is done at the initial stage of the system. As the eyeball is tracked, the mouse pointer moves accordingly when the eyeball moves left, right, up or down. By making a voluntary blink when the mouse pointer is on a desired file or folder, the click operation is achieved.</p> <p><strong>Originality/value:</strong></p> <p><strong>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </strong>In this study, the results show how the eyeball is tracked for cursor movement and how the eye aspect ratios are calculated accurately for detecting blinks using the Haar cascade. 
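The blink criterion based on the ratio between eye landmark points can be sketched with the common eye-aspect-ratio (EAR) formula over six landmarks; the coordinates and the 0.2 threshold below are illustrative assumptions, not values from the paper:

```python
# Eye-aspect-ratio blink check over six eye landmarks p1..p6 (hypothetical points).
from math import dist

def eye_aspect_ratio(eye):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops sharply when the eye closes."""
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2 * dist(eye[0], eye[3]))

open_eye = [(0, 3), (2, 5), (4, 5), (6, 3), (4, 1), (2, 1)]   # illustrative open eye
ear = eye_aspect_ratio(open_eye)
print(ear < 0.2)   # True would indicate a blink at this assumed threshold
```

In a live system the landmark coordinates would come from the facial-landmark detector each frame, and a blink is registered when the EAR stays below the threshold for a few consecutive frames.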
To promote future research, a user interface is to be developed for easy access to the most-used applications.</p> 2021-10-13T00:00:00+00:00 Copyright (c) 2021 S Kanaga Suba Raja https://spast.org/techrep/article/view/2588 ANALYSIS OF SEED QUALITY USING DEEP LEARNING IN RASPBERRY PI 2021-10-15T03:01:32+00:00 Uma Maheswari S umamaheswari.s@eec.srmrmp.edu.in Saathvika R kannanarchieves@gmail.com Yahvi L kannanarchieves@gmail.com Yamni R kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p><strong>Purpose:</strong> The objective of this paper is to examine seeds and separate good-quality seed from foreign particles.</p> <p><strong>Methodology:</strong> An automated system is framed to determine seed quality using deep learning. For seed-quality assessment, a CNN algorithm based on image processing is implemented in Python on a Raspberry Pi.</p> <p><strong>Findings: </strong>Wheat is high in vegetable protein content compared to other cereals such as corn, barley and rice, and healthy seeds are the foundation of a stable nation. As a result, determining wheat quality is critical in agriculture. With a manual workforce, determining wheat quality and separating wheat from foreign particles is difficult and time-consuming; it also leads to poor seed-quality selection and the wastage of a significant quantity of high-quality seeds.</p> <p><strong>Originality/value: </strong>In this study, the empirical results show that the developed system separates good-quality seed from foreign material and low-quality seed with 97% accuracy. 
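As a library-free illustration of the operation at the heart of such a CNN (this sketch and its kernel and patch values are ours, not the authors' Raspberry Pi implementation), a single convolution-plus-ReLU stage over a grayscale patch looks like:

```python
def conv2d(image, kernel):
    """Valid 2D cross-correlation of a grayscale image (list of rows) with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

def relu(feature_map):
    # The nonlinearity applied after each convolution layer.
    return [[max(0.0, v) for v in row] for row in feature_map]

# A vertical-edge kernel responds where dark background meets a bright seed.
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]
patch = [[0, 0, 9, 9, 9],
         [0, 0, 9, 9, 9],
         [0, 0, 9, 9, 9]]
fmap = relu(conv2d(patch, edge_kernel))
```

A trained network stacks many such stages and learns the kernel values from labelled seed images rather than hand-picking them.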
The image-processing-based deep learning CNN algorithm is implemented to improve the accuracy further.&nbsp;</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Uma Maheswari S, Saathvika R, Yahvi L, Yamni R, Mayakannan Selvaraju https://spast.org/techrep/article/view/2047 Quantitative Analysis and Interpretation of Pore-scale Images 2021-09-30T19:10:44+00:00 Niranjan Bhore bhore.niranjan@gmail.com Samarth Patwardhan samarth.patwardhan@mitwpu.edu.in Stefan Iglauer s.iglauer@ecu.edu.au Hisham Khaled Ben Mahmud hisham@curtin.edu.my <p>The quantification of transport mechanisms in porous media is critical to understanding oil recovery, contaminant transport and carbon sequestration (Kumar et al., 2005). Micro-morphology and chemical heterogeneity add complexity to the structural diversity of subsurface rocks (Donaldson and Tiab, 2004). Performance and structure of subsurface rocks are intimately related; hence micro-computed tomography, sub-micron resolution of in-situ 3D images and FIB-SEM sectioning are trending techniques to capture the rock geometry more effectively. The advent of computers with sufficient memory storage capable of handling large file sizes has led to the establishment of image analysis of both 2D sections and 3D volumetric data (Taiwo et al., 2016).</p> <p>&nbsp;</p> <p>Retrieving, de-noising and segmenting are vital tools used for image analysis in the industry today for generating quantitative information in porous media research. The 3D images need to be meshed, along with the routine de-noising and segmentation, to capture the exact rock geometry. The scope of this work is to present a comparative study of two tools used for quantitative analysis of porous media images, by applying a series of basic image analysis functions with appropriate arguments and adjustments. These tools, which contain several image processing libraries offering the full suite of necessary functionalities (Gouillart, Nunez-Iglesias, &amp; Walt, 2016, Walt et al. 
(2014), Gomez, 2021) are assessed in this work from an accuracy perspective. Further, this work presents a brief review of techniques that can be successfully applied for the analysis and manipulation of images, followed by future directions that can be explored for a better understanding of fluid flow through porous media.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Niranjan Bhore, Samarth Patwardhan, Stefan Iglauer, Hisham Khaled Ben Mahmud https://spast.org/techrep/article/view/1208 Interactive Virtual Reality (IVR) an aid for Self-Paced Learning 2021-09-25T08:17:55+00:00 Pathanjali C pathu.chowdaiah@gmail.com <p>Technical communication and training are becoming increasingly important in business, school, and careers as technologies continue to evolve. Practically, it is difficult, expensive, and time-consuming for learners to attend a course at an institution effectively, owing to the need to handle various work-related schedules as well as the current pandemic conditions. As a result, self-paced learning is becoming increasingly popular. Immersive technologies have a key role in making this type of learning effective, easy, and related to skill acquisition. Virtual reality (VR) training approaches have the potential to help learners transfer their abilities to the real world better than traditional training methods, including textual, video, and live instruction [4]. This study looks into VR training approaches to save time, minimize error rates, and improve the VR user experience. 
This article includes an overview of the learning environment for self-paced learning, as well as a summary and comparison necessary for efficient use of this immersive technology.</p> 2021-09-28T00:00:00+00:00 Copyright (c) 2021 Pathanjali C https://spast.org/techrep/article/view/3458 The Performance Analysis and Security Aspects of MANET 2021-11-16T15:45:23+00:00 Kranthi Kumar Singamaneni kkranthicse@gmail.com Abdullah Shawan Alotaibi dr.sivaram@su.edu.sa K. Sri Vijaya srivijayak@gmail.com Purnendu Shekhar Pandey purnendu.pandey@kluniversity.in <p>A Mobile Ad hoc NETwork (MANET) is a network built by connecting a number of mobile devices through temporary wireless connections; its nodes are self-configurable, and information is distributed to all nodes by broadcasting. The security aspect of MANET is a major challenge, and there is a great deal of research being done in this area. The availability of energy is a critical criterion for a decentralised network. The AODV and DSR routing protocols, as well as the security impact of a black hole node, are investigated in this study. Protocols used during the route discovery process are particularly vulnerable to attack by the black hole. The goal of this survey, as a result, is to thoroughly investigate black hole attacks while also evaluating the performance of AODV and DSR under black hole attack scenarios. The work is carried out with Network Simulator 3 (NS-3) by simulating both protocols under normal operation as well as under a black hole attack. According to the simulation results, AODV is more vulnerable to a black hole attack than DSR. 
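As a toy illustration of the effect being measured (far simpler than the NS-3 experiments, with node names and topology invented by us), a node that silently drops every packet on its route degrades the packet delivery ratio:

```python
def packet_delivery_ratio(routes, black_hole=None, packets_per_route=10):
    """routes: list of node-name lists from source to destination.
    A black hole node on a route drops every packet that reaches it."""
    delivered = total = 0
    for route in routes:
        for _ in range(packets_per_route):
            total += 1
            if black_hole not in route:  # a packet survives only if its route avoids the attacker
                delivered += 1
    return delivered / total

# Three flows from S to D; node "M" is the (hypothetical) malicious node.
routes = [["S", "A", "B", "D"], ["S", "M", "D"], ["S", "C", "D"]]
normal = packet_delivery_ratio(routes)                    # no attacker present
attacked = packet_delivery_ratio(routes, black_hole="M")  # "M" drops everything
```

In the real protocols the damage is worse, because the black hole also advertises attractive routes during route discovery and thereby pulls extra traffic toward itself.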
The simulation results also illustrate how MANET attacks are carried out with the assistance of a "black hole" node.&nbsp;&nbsp;</p> 2021-11-18T00:00:00+00:00 Copyright (c) 2021 Kranthi Kumar Singamaneni, Abdullah Shawan Alotaibi, K. Sri Vijaya, Purnendu Shekhar Pandey https://spast.org/techrep/article/view/1429 A Prefatory Analysis of Brain Computer Interfacing Based on EEG 2021-09-29T09:38:50+00:00 Ayonija Pathre ayo.pathre@gmail.com <p><strong><em>Brain-computer interface technology defines a rapidly developing area of applied systems science. In health fields, its contributions range from treatment to synaptic healing for severe injuries.</em></strong> <strong><em>Mind reading and remote interaction have distinctive applications in several areas, such as education, self-regulation, manufacturing, marketing, security, and entertainment &amp; games, and they build shared trust between consumers &amp; the systems around them.</em></strong> <strong><em>Deep learning has already received mainstream recognition and has been used in numerous applications, like natural language processing (NLP), computer vision &amp; voice. For MI EEG signal classification, however, deep learning has seldom been used.</em></strong> <strong><em>This paper highlights the fields of application that could benefit from brain signals in promoting or attaining their objectives. We also address major usability &amp; technical problems facing the use of brain signals in different BCI device components, and various solutions aimed at minimizing or reducing their effects have also been studied.</em></strong> <strong><em>The widely used common spatial pattern (CSP) approach is applied to extract variance-based CSP features, which are then fed to a DNN for classification. 
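The CSP step can be sketched with plain numpy as the generic two-class recipe (whiten the composite covariance, diagonalize one class's whitened covariance, take log-variance features); this is a textbook formulation applied to invented synthetic data, not the paper's exact pipeline:

```python
import numpy as np

def csp_filters(trials_a, trials_b):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns spatial filters (rows), ordered from most class-A to most class-B variance."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]  # trace-normalized covariances
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = np.linalg.eigh(Ca + Cb)
    P = vecs @ np.diag(vals ** -0.5) @ vecs.T        # whitening matrix for Ca + Cb
    vals2, vecs2 = np.linalg.eigh(P @ Ca @ P.T)      # diagonalize whitened class-A covariance
    order = np.argsort(vals2)[::-1]                  # largest class-A variance first
    return vecs2[:, order].T @ P

def log_variance_features(W, trial, k=1):
    """Variance-based CSP features: log relative variance along the k extreme filters."""
    z = W @ trial
    var = z.var(axis=1)
    picked = np.r_[var[:k], var[-k:]]
    return np.log(picked / var.sum())

# Synthetic demo: class A is energetic in channel 0, class B in channel 1.
rng = np.random.default_rng(0)
a = rng.normal(size=(20, 2, 100)) * np.array([[3.0], [1.0]])
b = rng.normal(size=(20, 2, 100)) * np.array([[1.0], [3.0]])
W = csp_filters(a, b)
fa = log_variance_features(W, a[0])
fb = log_variance_features(W, b[0])
```

The feature vectors (`fa`, `fb`) are what would then be fed to the DNN classifier.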
DNN training has been thoroughly studied for MI-BCI classification &amp; the best framework found has been explored.</em></strong></p> <p>&nbsp;</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Ayonija Pathre https://spast.org/techrep/article/view/1244 A Hybrid Algorithm for Face Recognition System Based Smart Attendance System 2021-09-27T19:09:56+00:00 sathya A sathya.a@rajalakshmi.edu.in <p>Face recognition aims to detect, track, identify or verify a person from video or images using a recognition system [3]. Its applications range from security and authentication to social media. Despite advances in technology, accuracy remains a limitation. The challenges involved in face recognition are variations in facial expression, diverging light, facial changes such as facial hair, pose, and so on [1-2]. This research work presents a new hybrid approach which improves the accuracy of identifying a person. The accuracy of the proposed work clearly shows that it is well suited to real-time environments for tracking persons, and it can be used for applications such as authentication and security. The proposed work is validated by tracking student presence: it captures an image of the student, detects the face by pre-processing, extracts the features, and recognizes the person. For recognition it uses the Local Binary Pattern Histogram (LBPH) algorithm and the Haar cascade classifier. The LBP algorithm uses histograms, which improve the accuracy of face recognition. It is integrated with the Haar classifier, which uses the AdaBoost machine learning algorithm [4-8] to select a small set of features from a large set, improving the efficiency of recognition.</p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 sathya A https://spast.org/techrep/article/view/614 Sentiment Analysis on COVID 19 related Social Distancing across the globe using Twitter data 2021-09-16T14:12:12+00:00 Jyothsna R jyothsna.r@res.christuniversity.in Dr. 
Rohini V rohini.v@christuniversity.in Dr. Joy Paulose joy.paulose@christuniversity.in <p>The ongoing COVID 19 pandemic has affected the lives of millions of people across the globe. The pandemic has not only disrupted the natural way of life but also caused several other mental health issues, like depression and anxiety disorder. A number of preventive measures were adopted to stop the spread of COVID 19. Social distancing, also known as physical distancing, was one such measure undertaken to stop the spread of the coronavirus. Social distancing may trigger sadness, anxiety and a sense of solitude among people, and this uncertain pandemic situation may adversely affect patients already suffering from clinical depression while in quarantine or self-isolation. Social media networking sites like Facebook and Twitter have become popular, allowing people to express their feelings freely, and these sites played a great role in corporate social responsibility during the COVID pandemic. Sentiment analysis, also called emotional artificial intelligence, deals with analyzing whether a piece of text conveys a positive, negative or neutral opinion; it can also be performed on social media data like Tweets and Facebook messages.</p> <p>The objective of this research is to perform sentiment analysis on Twitter data concerning social distancing, considering people across the globe. The proposed research work involves the following steps: collection of data (Tweets), data pre-processing, feature extraction, and sentiment classification using machine learning techniques. Data for the proposed work can be obtained from IEEE DataPort, which contains a huge number of Tweet IDs that can be hydrated to obtain the Tweet data. 
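One step of the pipeline above, TF-IDF feature extraction, can be sketched library-free; this toy variant weights each term by tf multiplied by log(N/df), which is one common formulation and not scikit-learn's exact weighting. The tweets are invented:

```python
import math
from collections import Counter

def tf_idf(corpus):
    """corpus: list of token lists. Returns one {term: weight} dict per document,
    using tf(t, d) * log(N / df(t))."""
    n_docs = len(corpus)
    df = Counter()                       # document frequency of each term
    for doc in corpus:
        df.update(set(doc))
    vectors = []
    for doc in corpus:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n_docs / df[t]) for t in tf})
    return vectors

tweets = [["stay", "home", "stay", "safe"],
          ["miss", "my", "friends"],
          ["home", "alone", "again"]]
vecs = tf_idf(tweets)
```

Terms that recur in one document but are rare across the corpus ("stay") outweigh terms spread across documents ("home"), which is exactly the signal the classifier exploits.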
Stop-word removal, removal of punctuation, stemming, lemmatization, and normalization of the tweet data are involved in the pre-processing. Data pre-processing is the major step in sentiment analysis, since the removal of noise is extremely important for constructing the machine learning based models. The proposed method for feature extraction uses Bag of Words, Term Frequency-Inverse Document Frequency (TF-IDF), and Word2Vec. Bag of Words is one of the simplest methods for converting text to numbers. TF-IDF signifies the importance of words in a document relative to a corpus and is one of the most popular techniques for information retrieval and knowledge discovery. Word2Vec comprises a two-layer neural network and processes the data by vectorizing it. The machine learning algorithms Support Vector Machine (SVM) and Logistic Regression (LR) are considered for classification of sentiments. SVM and LR are supervised machine learning algorithms, which involve two phases, namely training and testing. These trained supervised algorithms will be able to predict the sentiment of a tweet as positive, negative, or neutral.&nbsp; The coding language adopted is Python. This work analyzes tweets from across the globe concerning social distancing, as not much research has been carried out on global tweet data. The proposed research analyzes the extent to which people have been suffering in isolation due to physical distancing, which was adopted as a measure to stop the spread of COVID-19.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Jyothsna R, Dr. Rohini V, Dr. 
Joy Paulose https://spast.org/techrep/article/view/1282 A Machine Learning Approach towards predicting formation permeability using real-time data 2021-09-27T15:59:39+00:00 Soumitra Nande soumitra.nande@mitwpu.edu.in Samarth Patwardhan samarth.patwardhan@mitwpu.edu.in <p>Reservoirs are hydrocarbon (oil and gas) bearing subsurface structures (formations) into which wells are drilled to produce the fluids to the surface. Well testing is a method of studying pressures and their corresponding rates from an individual well to analyze the various characteristics of a reservoir, which helps in optimal management of production operations.</p> <p>With machine learning (ML) applicable to a variety of engineering domains, the oil and gas industry is no exception. The areas where ML is applied in the industry range from drilling engineering (predicting rate of penetration) [1], petrophysics (predicting water saturation) [2], production engineering (predicting lifespan of submersible pumps, decline curves, hydraulic fracturing) [3] [4] [5], and reservoir engineering (predicting relative-permeability curves) [6], to geology (predicting seismic facies) [7] and geophysics (predicting locations of hydrocarbon deposits) [8].</p> <p>In this paper, we have applied one of the most popular supervised learning algorithms, known as Artificial Neural Networks (ANN), to predict the permeability (conductivity) of the formation. The well testing data, consisting of wellhead pressure, downhole gauge pressure, flow rates for oil and water, P<sup>*</sup>, P<sub>1hour</sub>, etc., were used as input features for training the model. A 4-layer dense ANN architecture, consisting of one input and one output layer and two hidden layers, was built for training and testing the model. The data was split in an 80:20 ratio, with 80% of the data used for training and the remaining 20% used for testing the algorithm. 
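The split described here can be sketched as follows; the 10% validation fraction is an illustrative assumption of ours, since the abstract gives only the 80:20 train/test ratio:

```python
def train_val_test_split(data, test_frac=0.2, val_frac=0.1):
    """Hold out test_frac of the data for testing, then carve val_frac of the
    remaining training portion out as a validation set.
    Assumes the data is already shuffled and the fractions yield >= 1 sample each."""
    n_test = int(len(data) * test_frac)
    train_full, test = data[:-n_test], data[-n_test:]
    n_val = int(len(train_full) * val_frac)
    train, val = train_full[:-n_val], train_full[-n_val:]
    return train, val, test

samples = list(range(100))          # stand-in for 100 well-test records
train, val, test = train_val_test_split(samples)
```

The validation set is what lets the ANN's hyperparameters be tuned without ever touching the held-out 20% used for the final accuracy figures.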
The training data was further split into training and validation data.</p> <p>The ANN gave reasonable training and testing accuracy, which means that the algorithm could predict the formation permeability within acceptable error for unseen data. This model is useful in automated and real-time testing of wells, wherein data from the SCADA servers can be fed to the model, which updates itself based on the new data it receives. In this way, the application of ML models significantly reduces the subjectivity in the analysis of well testing data, thereby achieving a more reliable and objective determination of the critical reservoir parameters.</p> 2021-10-01T00:00:00+00:00 Copyright (c) 2021 Soumitra Nande, Samarth Patwardhan https://spast.org/techrep/article/view/2861 A Survey of Network Intrusion Detection System 2021-10-19T05:35:53+00:00 Anirudh Tiwari ictsgs1@gmail.com Bhavana Narain ictsgs1@gmail.com <p>Over the past few decades, computer and network security has become a major issue, especially with the increasing number of intruders and hackers; systems were therefore designed to detect and forestall intruders. The analysis of IDS is further characterized by the misuse and anomaly detection approaches. Nowadays networks are deployed in all places, such as offices, schools and banks, and most individuals participate in social networks and media. Intrusion detection has attracted many researchers and industries, yet the community still faces the problem of building reliable and efficient NIDS. In this paper, I present a literature survey on network intrusion detection systems. 
First, the various mechanisms of intrusion detection systems are presented, followed by a detailed account of the types of intrusion detection systems.</p> 2021-10-19T00:00:00+00:00 Copyright (c) 2021 Anirudh Tiwari, Bhavana Narain https://spast.org/techrep/article/view/2232 DR Quality Check based on fault Detection using Benchmark Image Processing Algorithms 2021-10-02T12:27:43+00:00 Arunachalam U arunachalem_u@yahoo.com Vairamuthu J vairamuthuj@yahoo.com Parisa Beham M hodece@sethu.ac.in <p>Throughout the production of components such as bolts and nuts, various stages are involved. All stages of production are manually inspected, and quality verification depends on humans. During this inspection, some components are not identified properly, as it is a manual check; the manual process is also time-consuming, and errors occur in identifying faulty components. Fault identification in industry is therefore a needed area of research. Although many algorithms have been developed, a fully automatic and widely accepted algorithm remains a challenge. In this work an automatic method for fault detection, and in turn quality checking, is proposed. Digital images of bolts and nuts are collected from a camera positioned in the production zone, and a separate database is created containing both normal and defective components (bolts and nuts). The images are pre-processed and enhanced for better detection. Segmentation algorithms are used for detecting the Region of Interest (RoI). Geometric features such as area, diameter and thickness are measured as salient features to discriminate between normal and defective components; in addition, texture features are also estimated. The proposed method gives a classification accuracy of 98%. 
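The classification over these geometric features (k-NN, per Table 1) can be sketched library-free; the feature values below are invented for illustration and are not measurements from the paper:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs; query: a feature vector.
    Majority vote among the k nearest neighbours by Euclidean distance."""
    nearest = sorted(train, key=lambda sample: math.dist(sample[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# (area, diameter, thickness) measured from each segmented RoI -- made-up values.
train = [((50, 8, 3), "normal"), ((52, 8, 3), "normal"), ((51, 9, 3), "normal"),
         ((30, 5, 2), "defect"), ((28, 5, 2), "defect"), ((31, 6, 2), "defect")]
label = knn_classify(train, (29, 5, 2), k=3)
```

Varying `k` is exactly the experiment Table 1 reports, with accuracy peaking at k = 4 on the authors' dataset.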
Identifying defects and correcting them at an early stage increases production quality.</p> <p>&nbsp;</p> <p>Figure 1: Overall Flow diagram of the Proposed Method</p> <p>Table 1. Classification Accuracy based on the KNN algorithm</p> <p>&nbsp;</p> <table> <tbody> <tr> <td width="85"> <p><strong>S.NO</strong></p> </td> <td width="217"> <p><strong>K Value</strong></p> </td> <td width="208"> <p><strong>Accuracy (%)</strong></p> </td> </tr> <tr> <td width="85"> <p>1</p> </td> <td width="217"> <p>2</p> </td> <td width="208"> <p>92</p> </td> </tr> <tr> <td width="85"> <p>2</p> </td> <td width="217"> <p>3</p> </td> <td width="208"> <p>93</p> </td> </tr> <tr> <td width="85"> <p>3</p> </td> <td width="217"> <p>4</p> </td> <td width="208"> <p>98</p> </td> </tr> </tbody> </table> <p>&nbsp;</p> 2021-10-03T00:00:00+00:00 Copyright (c) 2021 Arunachalam U, Vaira, Parisa Beham M https://spast.org/techrep/article/view/2897 The Applications of Deep Neural Network for Human Activity Recognition 2021-10-21T06:14:58+00:00 SUNANDA DAS das.sunanda2012@gmail.com <p>Human activity recognition (HAR) using deep learning models on smartphone and smartwatch sensor data is currently applied in various fields where valuable information about a person's functional ability and lifestyle is needed. HAR aims to recognize activities from a series of observations of the actions of subjects and the environmental conditions; it refers to the automated detection of activities performed by human beings in their daily lives. A HAR system recognizes the activities performed by a person and provides informative feedback for intervention. HAR is growing in popularity due to its wide-ranging applications in patient rehabilitation and movement disorders. 
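HAR pipelines on smartphone and smartwatch data typically segment the raw sensor stream into fixed-length, overlapping windows before any model sees it; a minimal sketch, where the window length and 50% overlap are illustrative choices of ours rather than values from the abstract:

```python
def sliding_windows(samples, width=128, step=64):
    """Split a 1-D sensor stream into fixed-width windows with 50% overlap
    (step = width // 2). Trailing samples that cannot fill a window are dropped."""
    return [samples[i:i + width] for i in range(0, len(samples) - width + 1, step)]

stream = list(range(500))          # stand-in for one accelerometer axis
windows = sliding_windows(stream)  # each window becomes one training example
```

Each window is then labelled with the activity performed during it and fed to the deep network, so the windowing choice directly sets the model's temporal resolution.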
HAR systems usually begin by collecting sensor data for the activities of interest and then developing algorithms using the dataset. Activity recognition aims to understand the actions and goals of one or more agents from a series of observations of the agents' movements and the environmental conditions. Deep learning is a type of machine learning and artificial intelligence (AI) that imitates the way humans gain certain kinds of knowledge; while traditional machine learning algorithms are linear, deep learning algorithms form a hierarchy of increasing complexity and abstraction. Human activities are the various actions done by people for recreation, living, or necessity, including leisure, entertainment, industry, war, and exercise. Machine learning is an application of AI that gives systems the ability to learn and improve without being explicitly programmed; it focuses on the development of programs that can access data and use it to learn for themselves. Humans impact the environment in many ways: overpopulation, pollution, the burning of fossil fuels, and deforestation. Changes like these have triggered climate change, soil erosion, poor air quality, undrinkable water, and more. Impacts from human activity on land and in the water can influence ecosystems; climate change, ocean acidification, permafrost melting, habitat loss, storm-water runoff, air pollution, contaminants, and invasive species are among the many problems facing ecosystems. 
Human activity also affects the environment by contributing to air pollution, the emission of harmful substances into the air.</p> 2021-10-21T00:00:00+00:00 Copyright (c) 2021 SUNANDA DAS https://spast.org/techrep/article/view/873 Augmenting Employability Skills by integrating English as Communicative Language with Special Reference to COVID-19 by using ANOVA 2021-09-15T19:13:51+00:00 Vinod Bhatt chetanmthakar8855@gmail.com <p>English is widely considered the language of global commerce, and the role and outcomes of English language provision in English-medium higher education institutions in India remain central to any discussion of graduate profiles and the employability of graduates in the global marketplace. The present study examines the learning of the English language after completion of graduation and post-graduation during the crucial times of COVID-19, when most educational and training institutes were running in online mode. Many job aspirants used to learn English at private English learning centres, each with its own methodology for teaching the language, but this scenario has now changed to an online mode of training. Using a mixed methods approach, data was gathered through telephone interviews, student workplace simulations and employer focus groups. The findings of the study focus on the increase in employability skills during the pandemic period, channelled through English as a second or additional language: becoming confident, gaining knowledge, etc.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Vinod Bhatt https://spast.org/techrep/article/view/2380 An Algorithm for efficient real-time airplane landing scheduling 2021-10-10T08:39:17+00:00 Keerthana keerthana.velilani@gmail.com <p><span style="font-weight: 400;">The Air Traffic Controller sequences and schedules the flow of aircraft around defined airspace. 
One of the important operations performed by the ATC is the landing procedure. Due to increased air traffic, the job of air traffic controller officers has become stressful, and the possibility of accidents has also increased. Multiple theoretical approaches have been proposed to deal with the issue. This paper suggests a simple solution for the landing procedure based on the landing patterns and conditions of the given airport, thereby reducing the manual dependency on the ATC.</span></p> 2021-10-10T00:00:00+00:00 Copyright (c) 2021 Keerthana https://spast.org/techrep/article/view/272 HYBRID DEEP LEARNING BASED MUSIC RECOMMENDATION SYSTEM 2021-09-11T07:55:02+00:00 Sunitha Reddy Mallannagari sashu2006@gmail.com Dr.Adilakshmi Thondepu t_adilakshmi@staff.vce.ac.in <p><em>There has been an astounding increase in digitally available music. The fundamental objective of music recommendation is to propose songs that are appropriate to the tastes of the user. Collaborative filtering and content-based approaches are currently the ones most used by streaming music systems, but these systems fail on the cold-start problem. This paper provides user-based hybrid algorithms for music recommendation systems that address the cold-start problem by providing context-aware and tailored music recommendations to new and existing users depending on their own context. We have developed, implemented and analyzed music recommendation systems with several algorithms in this project. Music recommendation is a highly complicated subject, since it is necessary to structure music so that favorite songs can be recommended to users whose preferences are not yet defined. 
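One common way for a hybrid recommender to survive cold start is to fall back from collaborative scores to a popularity ranking whenever a user has no listening history; a minimal sketch with invented data, not the paper's actual algorithms:

```python
def recommend(user, ratings, popularity, k=2):
    """Hybrid fallback: naive collaborative scores for known users,
    a global popularity ranking for cold-start (new) users."""
    if user not in ratings:                       # cold start: no history yet
        return sorted(popularity, key=popularity.get, reverse=True)[:k]
    seen = ratings[user]
    # Naive collaborative step: score each unseen song by other users' ratings.
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        for song, r in theirs.items():
            if song not in seen:
                scores[song] = scores.get(song, 0) + r
    return sorted(scores, key=scores.get, reverse=True)[:k]

ratings = {"ana": {"s1": 5, "s2": 3}, "ben": {"s2": 4, "s3": 5}}
popularity = {"s1": 10, "s2": 25, "s3": 7}
new_user_recs = recommend("newuser", ratings, popularity)
known_user_recs = recommend("ana", ratings, popularity, k=1)
```

A context-aware system would additionally re-rank both branches by the user's current context (time, activity, device), which is the extension the abstract describes.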
Practical tests with real users assessed the proposed algorithms and framework satisfactorily.</em></p> 2021-09-11T00:00:00+00:00 Copyright (c) 2021 Sunitha Reddy Mallannagari, Dr.Adilakshmi Thondepu https://spast.org/techrep/article/view/2416 A systematic review on prognosis of Autism using Machine Learning Techniques 2021-10-11T18:09:47+00:00 meenakshi Malviya minaldk25@gmail.com Dr J Chandra chandra.j@christuniversity.in <p>Quality of life (QoL) and QoL predictors have become crucial in the pandemic, and neurological anomalies are among the greatest threats to QoL. The world is becoming more stressful, and mental health has become a primary goal for a healthy life. Autism is a multisystem disorder that causes behavioral, neurological, cognitive, and physical differences in Autistic people; these levels are interrelated and influence each other at distinct ages. Recent studies state that neurological disorders can be a dysfunction of the brain or of the whole nervous system, which may cause other symptoms of Autism. Autism is a heterogeneous neurodevelopmental disorder with diversity in symptoms, risk factors, severity level, and response to treatment [1]. The findings exhibit a significant change in brain regions at the occurrence of Autism [2]. The study of brain Magnetic Resonance Imaging (MRI) provides detailed knowledge of brain structure that helps in studying the minor to significant changes inside the brain that emerge due to a disorder. The brain MRI of Autistic subjects exhibits a large brain volume and increased head circumference, which are within the expected range at the time of birth but show a significant increase starting at the age of 12 to 18 months [3]. This paper focuses on reviewing various Machine Learning (ML) techniques used for diagnosing Autism at an early age with the help of brain MRI images. 
Early diagnosis helps Autistic subjects lead a healthy life, provided they receive treatment and training, if required, on time. "Early diagnosis of Autism Spectrum Disorder" is an objective and one of the main goals of health organizations worldwide. This work supports that goal and contributes to the betterment of the quality of life of Autism patients.</p> <p>It is essential to detect the disorder at the earliest stage. Autism occurs early but is challenging to detect because symptoms and severity vary from subject to subject. Diagnosis using brain MRI and ML improves the accuracy of the diagnosis results. A lot of work has been done to date using structural MRI (sMRI) and functional MRI (fMRI) with Artificial Intelligence and ML algorithms. This work explores the ML techniques used to diagnose the anomaly and to increase the accuracy rate of diagnosis using MRI.&nbsp;&nbsp;</p> <p>Detection of Autism using MRI at an early stage is difficult, as the disorder displays multiple symptoms: behavioral, cognitive, neurological, sensory-perceptual, and regulatory differences, etc. [4]. The severity of Autism ranges from near-typical to noteworthy. A delay in diagnosis causes many to live an abnormal and challenging life without any special assistance. The objective of the proposed work is to study brain MRI to identify the brain regions that are affected and to find the changes in specific brain regions in order to estimate the severity level of the symptoms of Autism. Much significant research has been carried out on the detection, classification, pattern recognition, and prediction of Autism. Table 1 describes multiple machine learning classification accuracies on Autism.</p> <p>Table 1. 
Performance comparison of various Machine Learning Techniques</p> <table> <tbody> <tr> <td width="77"> <p><strong>References</strong></p> </td> <td width="171"> <p><strong>Type</strong></p> </td> <td width="222"> <p><strong>Methods</strong></p> </td> <td width="147"> <p><strong>Accuracy (Highest)</strong></p> </td> </tr> <tr> <td width="77"> <p><strong>[5-12]</strong></p> </td> <td width="171"> <p>AQ-10 screening tool</p> </td> <td width="222"> <p>SVM, RF, RML, CNN, DL, LDA, ADTree, LR</p> </td> <td width="147"> <p>100% using ADTree, LR and RF</p> </td> </tr> <tr> <td width="77"> <p><strong>[13-20]</strong></p> </td> <td width="171"> <p>ADOS, ADI-R (Video based)</p> </td> <td width="222"> <p>Gradient Boosted DT, Alternating DT, RF, LR, SVM</p> </td> <td width="147"> <p>100% using Alternating DT</p> </td> </tr> <tr> <td width="77"> <p><strong>[21-28]</strong></p> </td> <td width="171"> <p>Other biomarkers like kinetic features, cognitive response, behavioral data, eye movement</p> </td> <td width="222"> <p>ANN, SVM, RF, KNN, LDA, DT</p> </td> <td width="147"> <p>100% using DT with behavioral data</p> </td> </tr> <tr> <td width="77"> <p><strong>[29-38]</strong></p> </td> <td width="171"> <p>EEG, MRI</p> <p>(fMRI, sMRI)</p> </td> <td width="222"> <p>ANN, GLCM, LDA, RF, SVM, MLP, DT, KNN, RNN, NN, LR, Gradient Boosted DT, Alternating DT.</p> </td> <td width="147"> <p>82% using MLP,</p> <p>75.54% using ANN</p> </td> </tr> </tbody> </table> <p>The first row describes the methods of ML and DL applied to the AQ-10 (Autism Spectrum Quotient) tool, which is a set of 10 questions used to detect the disorder; using AQ-10, 100% accuracy was achieved by ADTree (Alternating Decision Tree), LR (Logistic Regression), and RF (Random Forest). The second row describes the methods used on ADOS (Autism Diagnostic Observation Schedule) and ADI-R (Autism Diagnostic Interview-Revised); this is video-based data, with 100% accuracy achieved using ADTree. 
The third row explains the data associated with cognitive responses, behavioral changes, eye movement, and kinetic features. Here also the detection accuracy is 100%, using DT with the behavioral response data. The fourth row is about EEG and brain MRI (sMRI and fMRI) datasets applied to Artificial Neural Network (ANN), Gray-level co-occurrence matrix (GLCM), Linear Discriminant Analysis (LDA), RF, Support Vector Machine (SVM), Multi-layer Perceptron (MLP), DT, K-nearest neighbor (KNN), Recurrent NN (RNN), NN, ADTree. Different data types were considered as features for ML, DL, and AI algorithms, several of which achieved 100% accuracy. Autism is a neurological disorder, and hence studying the brain and exploring its biomarkers is essential. This work aims to achieve higher accuracy of Autism detection using brain imaging. Autistic subjects are treated differently not only by society but also by themselves, and often feel neglected when facing the everyday environment. This work therefore aims at higher detection accuracy at an early age, so that autistic individuals can get early treatment and lead an independent and healthy life.</p> 2021-10-11T00:00:00+00:00 Copyright (c) 2021 meenakshi Malviya, Dr J Chandra https://spast.org/techrep/article/view/1015 Detection and Classification of Thoracic Diseases in Medical Images using Artificial Intelligence Techniques 2021-09-20T05:23:39+00:00 Shubhra Prakash shubhra.prakash@res.christuniversity.in Ramamurthy B ramamurthy.b@christuniversity.in <p>Background: Artificial Intelligence is at the leading edge of innovation and is developing<br>very fast. In recent studies, it has played a progressive and vital role in Computer-Aided<br>Diagnosis (CAD). Some studies of deep learning, a subset of artificial intelligence applied to<br>lesion/nodule detection or classification, have reported higher performance than<br>conventional techniques or even better than radiologists in some tasks.
However, these<br>approaches have targeted a single disease or abnormality and so have limited value in general<br>clinical practice. The interpretation of medical images requires assessing the various diseases<br>and abnormalities associated with the body part. The chest is one of the largest parts of<br>the human anatomy and contains several vital organs inside the thoracic cavity.<br>Furthermore, chest radiographs are the most commonly ordered examinations and are used globally by<br>physicians for diagnosis. Automated, fast, and reliable detection of diseases based on<br>chest radiography can be a critical step in the radiology workflow [1]-[9].<br>Objective: This study presents the conduct and results of a systematic review that aims to<br>investigate Artificial Intelligence Techniques to identify Thoracic Diseases in Medical Images.<br>Methods: The systematic review was carried out according to PRISMA (Preferred Reporting<br>Items for Systematic Reviews and Meta-Analyses) guidelines. Science Direct, IEEE Xplore, and<br>PubMed were used as the scientific databases to search for research articles published in<br>English, which were filtered based on defined inclusion and exclusion criteria. At the time of<br>writing (August 1, 2021), 348 studies were retrieved, out of which 182 were included. 52<br>additional papers were also included based on specific body parts in the thoracic region.<br>Results and Conclusion: This review provides a systematic analysis of the current status of<br>Artificial Intelligence Techniques in the Detection and Classification of Thoracic Diseases. On<br>the whole, transfer learning is the most commonly used approach, and it displays good<br>performance with the available datasets.
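The transfer-learning pattern the review identifies (reusing a pretrained network as a frozen feature extractor and training only a small task-specific head) can be sketched in outline. The sketch below is purely illustrative and is not from the reviewed studies: a fixed random projection stands in for the pretrained backbone, and a logistic-regression head is trained on its frozen outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: a frozen feature extractor
# whose weights are never updated during training.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    return np.tanh(x @ W_frozen)

# Toy binary task: two Gaussian blobs standing in for two image classes.
X = np.vstack([rng.normal(-1.0, 0.5, (100, 64)), rng.normal(1.0, 0.5, (100, 64))])
y = np.array([0] * 100 + [1] * 100)

# Train only the new classification head (logistic regression) on frozen features.
F = extract_features(X)
w, b = np.zeros(16), 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid predictions
    w -= 0.5 * (F.T @ (p - y)) / len(y)      # gradient step on head weights only
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean(((1.0 / (1.0 + np.exp(-(F @ w + b)))) > 0.5) == y)
print(f"head-only training accuracy: {accuracy:.2f}")
```

Freezing the backbone keeps the number of trainable parameters small, which is one reason the approach works well with the limited labelled datasets the review describes.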
However, it is difficult to compare different<br>classification methods due to the lack of a standard dataset, which also makes reproducibility difficult.</p> 2021-09-20T00:00:00+00:00 Copyright (c) 2021 Shubhra Prakash, Ramamurthy B https://spast.org/techrep/article/view/1894 Design of Machine Learning Based Model to Predict Students Academic Performance 2021-10-08T14:19:58+00:00 Myla M. Arcinas abhishek14482@gmail.com <p><strong>Abstract</strong></p> <p>In order to assess the level of student performance, several scholars and educational institutions are interested in forecasting students' success. Although the educational sector employs a variety of methods for gathering useful information about students and the steps they should take to improve their performance, a student performance assessment model must be created to help both students and faculty members reach their full potential [1][2].</p> <p>Machine learning is one of the most active research topics in artificial intelligence today, involving the study and development of computational models of learning processes. A lot of intriguing work has recently been done in the field of implementing machine learning algorithms. Machine learning is the most basic method of making a machine intelligent [3][4].</p> <p>The goal of machine learning is to acquire new knowledge or skills, arrange knowledge structures, and gradually enhance performance. Machine learning is a critical component of artificial intelligence [5]. Learning and intellect are inextricably intertwined. Learning is always about self-improvement of future behaviour based on previous experiences.
In circumstances where we cannot immediately write computer code to answer a given problem, but instead require example data or experience, we require learning.</p> <p>Machine learning is a highly interdisciplinary field that draws and expands on ideas from statistics, computer science, engineering, cognitive psychology, optimization theory, and many other scientific and mathematical disciplines. We may build a learning model using example data or past experiences by merging all of these fields. This model could be predictive in order to make future forecasts, descriptive in order to gather knowledge from data, or both. Machine learning based on data is a critical component of modern intelligent techniques; it primarily studies how to obtain rules that cannot be obtained through theoretical analysis from observed samples, and then how to apply these rules to recognise objects and predict future data or unobserved data. In a nutshell, machine learning is an efficient method for recognising new samples by learning from previous samples [6][7].</p> <p>This article discusses how machine learning techniques can be used to develop a model to predict student academic performance.</p> <p>&nbsp;</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Myla M. Arcinas https://spast.org/techrep/article/view/458 A Comparative Study and Impact Analysis of Different Oversampling Techniques for CIP 2021-09-14T10:17:59+00:00 Dibyajyoti Bora udit.mamodiya@poornima.org <p><span style="font-weight: 400;">In the area of data science, machine learning based classification techniques are the first choice for an accurate analysis of a huge amount of data. Most machine learning and data mining algorithms make the assumption that the data is equally distributed. However, most datasets and applications are imbalanced, and such data are biased towards the majority class.
Class Imbalance can be defined as a situation where the observations/units in the training data belonging to one class substantially outnumber the observations in the other classes, e.g., insurance claims, forest cover types, fraud detection, rare medical disease diagnosis or rare variety classification [1]. It is observed when analyzing real-world datasets in text and video mining, detection of oil spills in satellite radar images, activity recognition, detection of fraudulent telephone calls, and so on [2][3]. Basically, imbalanced datasets deal with rare classes; such data can be called skewed data, and </span><span style="font-weight: 400;">a skewed dataset can reduce the performance of the classification algorithms.</span> <span style="font-weight: 400;">Class imbalance influences the performance achieved by existing learning systems, and the learning systems may have difficulties learning the concept related to the minority class. The Machine-Learning community appears to concur on the idea that the major hypothesis in inducing classifiers in imbalanced domains is class imbalance. Imbalanced problems can be considered of two types, that is, between-class and within-class imbalances. In between-class imbalances, the imbalance exists between the samples of the two classes, and in within-class imbalances, the majority samples outnumber the minority samples within a class. The first requirement while developing such classification techniques is the robustness needed for an accurate and efficient classification. However, it is seen that many times these algorithms suffer from the “class-imbalance problem”, shortly CIP. Besides class imbalance, the degree of data overlapping among the classes is another factor that leads to a decrease in the performance of learning algorithms. Due to CIP, many difficulties arise during the learning process, which as a whole results in a poor classification process.
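For concreteness, the degree of imbalance is commonly summarized as the ratio of majority-class to minority-class observations; a minimal sketch, using the yeast class counts reported in Table 1 of this abstract, might look like:

```python
from collections import Counter

def imbalance_ratio(labels):
    """Majority-class count divided by minority-class count."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

# Yeast subset from Table 1: 51 positive vs 463 negative observations.
labels = [1] * 51 + [0] * 463
print(round(imbalance_ratio(labels), 9))  # 9.078431373
```

The result matches the imbalance ratio the study reports for the yeast dataset.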
Resampling the data set is one common technique for dealing with CIP, where in general the rare class is oversampled. Several oversampling techniques are available in the literature; SMOTE, ADASYN, and Random Oversampling are the noted ones. In this paper, an effort is made to compare these different techniques as well as their impact on classification performance.</span></p> <p><span style="font-weight: 400;">For our comparative study, we have used 7 datasets from the UCI repository: yeast, pima diabetes, ionosphere, abalone, yeastme1, yeastme2 and yeastexc. The imbalance ratio of each dataset, along with the number of observations and the numbers of positive and negative samples, is shown in Table 1.</span></p> <p><span style="font-weight: 400;">Table 1: Characteristics and Imbalance Ratio of the datasets</span></p> <table> <tbody> <tr> <td> <p><strong>Dataset</strong></p> </td> <td> <p><strong>Number of Observations</strong></p> </td> <td> <p><strong>Number of Positives</strong></p> </td> <td> <p><strong>Number of Negatives</strong></p> </td> <td> <p><strong>Imbalance Ratio</strong></p> </td> </tr> <tr> <td> <p><strong>Yeast</strong></p> </td> <td> <p><span style="font-weight: 400;">514</span></p> </td> <td> <p><span style="font-weight: 400;">51</span></p> </td> <td> <p><span style="font-weight: 400;">463</span></p> </td> <td> <p><span style="font-weight: 400;">9.078431373</span></p> </td> </tr> <tr> <td> <p><strong>PIMA</strong></p> </td> <td> <p><span style="font-weight: 400;">768</span></p> </td> <td> <p><span style="font-weight: 400;">268</span></p> </td> <td> <p><span style="font-weight: 400;">500</span></p> </td> <td> <p><span style="font-weight: 400;">1.865671642</span></p> </td> </tr> <tr> <td> <p><strong>Ionosphere</strong></p> </td> <td> <p><span style="font-weight: 400;">351</span></p> </td> <td> <p><span style="font-weight: 400;">126</span></p> </td> <td> <p><span style="font-weight: 400;">225</span></p> </td> <td> <p><span
style="font-weight: 400;">1.785714286</span></p> </td> </tr> <tr> <td> <p><strong>Abalone</strong></p> </td> <td> <p><span style="font-weight: 400;">4177</span></p> </td> <td> <p><span style="font-weight: 400;">62</span></p> </td> <td> <p><span style="font-weight: 400;">4115</span></p> </td> <td> <p><span style="font-weight: 400;">66.37096774</span></p> </td> </tr> <tr> <td> <p><strong>YeastME1</strong></p> </td> <td> <p><span style="font-weight: 400;">1484</span></p> </td> <td> <p><span style="font-weight: 400;">44</span></p> </td> <td> <p><span style="font-weight: 400;">1440</span></p> </td> <td> <p><span style="font-weight: 400;">32.72727273</span></p> </td> </tr> <tr> <td> <p><strong>YeastME2</strong></p> </td> <td> <p><span style="font-weight: 400;">1494</span></p> </td> <td> <p><span style="font-weight: 400;">51</span></p> </td> <td> <p><span style="font-weight: 400;">1443</span></p> </td> <td> <p><span style="font-weight: 400;">28.29411765</span></p> </td> </tr> <tr> <td> <p><strong>YeastEXC</strong></p> </td> <td> <p><span style="font-weight: 400;">1484</span></p> </td> <td> <p><span style="font-weight: 400;">35</span></p> </td> <td> <p><span style="font-weight: 400;">1449</span></p> </td> <td> <p><span style="font-weight: 400;">41.4</span></p> </td> </tr> </tbody> </table> <p>&nbsp;</p> 2021-09-14T00:00:00+00:00 Copyright (c) 2021 Udit Mamodiya https://spast.org/techrep/article/view/2594 HUMAN HAPPINESS AT HARD TIMES - COVID-19 PANDEMIC 2021-10-15T04:22:47+00:00 Robert Ramesh Babu Pushparaj babujisdb@gmail.com Sigamani Panneer robertrb19@students.cutn.ac.in Antonio Dellagiulia kannanarchieves@gmail.com Komali Kantamaneni kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p>Purpose: COVID-19 brought in a lot of physical health problems and mental health issues. The main aim of this study is to show that even at hard times (COVID-19) it is possible to find happiness, and to present various ways of achieving it.
The study is a systematic review that could help everyone to understand the need for and importance of happiness at hard times and ways to find it.</p> <p>Methodology: The study follows the method of systematic review. The researcher selected articles based on the keywords from Science Direct, Google Scholar, Scopus, Google, WHO documents, Government websites, Elsevier journals and Springer journals.&nbsp; The researcher formulated the protocol of inclusion and exclusion criteria. Based on the criteria, papers were selected, analyzed and synthesized. Figure 1 explains the methodology and the results of the systematic review.</p> <p>Findings: Factors like fear, anxiety, worry, depression, panic, social withdrawal, difficulty in concentrating, insomnia, excessive exposure to media, psychological distress, feeling of helplessness, confusion about the symptoms, hyper-vigilance to health, anger, grief, loneliness and paranoia affect human happiness during the hard times of COVID-19. Table 1 presents the findings with the sources of information.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Robert Ramesh Babu Pushparaj, Sigamani Panneer, Antonio Dellagiulia, Komali Kantamaneni, Mayakannan Selvaraju https://spast.org/techrep/article/view/479 Internet of Things- Security Vulnerabilities and Countermeasures 2021-09-15T12:11:31+00:00 Abhishek Raghuvanshi abhishek14482@gmail.com Umesh Kumar Singh abhishek14482@gmail.com Thanwamas Kassanuk abhishek14482@gmail.com Khongdet Phasinam abhishek14482@gmail.com <p><strong>Abstract</strong></p> <p>Weaving conventional systems, sensors, clouds, mobile apps and Web-based controls into the Internet of Things (IoT) touches every part of people's lives and has the potential to change the world. With the proliferation of heterogeneous devices and data processing, security issues are on the rise. In addition, it is well-known that most IoT apps and devices are not entirely secure and are susceptible to certain types of attacks.
[1-3]</p> <p>In addition to their influence on a device's availability, different threats have varying effects on its security or quality. Organizations face a roadblock in figuring out which dangers their information assets face and how to address them. A taxonomy based on the application domain and the design of the architecture is established in this article to better identify IoT security concerns [4][5].</p> <p>A taxonomy of attacks against the Internet of Things may be seen in Figure 1. This taxonomy was created on the basis of a three-layer structure; attacks occur at the perception, network and application layers [6][7].</p> <p>Fig.1. Taxonomy of Security Attacks in Internet of Things</p> <p><strong>&nbsp;</strong></p> <p>IoT development boards and sensors as well as cloud subscriptions are used in this study to build up an experimental setup. Raw data about IoT apps and devices is collected using network host scanning and vulnerability scanning technologies. Also, the Shodan scanning tool is used to effectively discover vulnerabilities in IoT devices and to perform penetration testing on those devices.</p> <p>&nbsp;</p> <p>Server computers, client PCs, IoT development boards, sensors, and cloud subscriptions are used in this experimental configuration at the Mahakal Institute of Technology in Ujjain (India). Raw data on IoT-based smart cities is collected via network host scanners and vulnerability scanners [8].</p> <p>&nbsp;</p> <p>There are a number of security vulnerabilities in IoT networks that will be investigated with Shodan [9], a worldwide search engine for Internet of Things devices launched in 2013.
Finally, this paper provides a plan to mitigate various vulnerabilities in the Internet of Things.</p> 2021-09-15T00:00:00+00:00 Copyright (c) 2021 Abhishek Raghuvanshi, Umesh Kumar Singh, Thanwamas Kassanuk, Khongdet Phasinam https://spast.org/techrep/article/view/2614 RENEWABLE ENERGY ENHANCED SMART ENERGY MANAGEMENT SYSTEM USING INTERNET OF THINGS 2021-10-17T11:18:30+00:00 G. Ramya arthir2@srmist.edu.in P. Suresh ramyag@srmist.edu.in R. Arthi kannanarchieves@gmail.com K. Murugesan kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p>Purpose: The paper presents a renewable energy enhanced smart energy management system using the Internet of Things (IoT).</p> <h1>Methodology:</h1> <p>As we are living in a world where climate change is unpredictable, we chose the combination of solar and wind energy as renewable sources of energy, which have grown rapidly in the last decade. The renewable sources of energy will be utilized alternately when there is insufficient grid power; this is achieved using an industrial microcontroller. The power generated by the renewable sources is sensed by the sensors and the data is transmitted to the microcontroller, which chooses which source should supply the load based on availability. This enables users to monitor the renewable energy, and the system can be controlled by switching from grid power to any one of the renewable sources. The transmitted data can be controlled and monitored remotely using an IoT platform.</p> <h1>Findings:</h1> <p>Unpredictable weather and climate are the most important drawbacks of wind- and solar-based renewable energy systems. A proper combination of the two resources can partially overcome this drawback, as the strength of each source overcomes the drawbacks of the other. Battery capacity is maintained well when the batteries are kept fully charged or recharged quickly after deep and partial discharges.
PV modules do not protect the batteries against deep discharges during rainy seasons or periods of no sunshine. The batteries will be protected from deep discharge with the aid of a dynamic source of energy from the wind turbine, which extends the life of the batteries. The generated wind and solar energies are maintained at a common voltage of 12 Volts to 16 Volts using boost converters before being supplied to the load. The block diagram of the system is presented.</p> <h1>Originality/value:</h1> <p>The hybrid energy management system can generate electrical energy for apartments, private houses, educational institutions, small companies, and so on.&nbsp; The proposed method incorporates solar and wind energy sources for the sustainable generation of renewable energies. The area required is small, as the system is built as a single module combining wind and solar energies. The main objective is to produce green energy with the best efficiency by proper coordination of algorithms between the solar and wind energy sources. Solar and wind energy resources are the local renewable resources best suited to a hybrid renewable energy system. The outputs of solar and wind are complementary to each other during some seasons, and the combination of solar and wind performs better than either solar or wind energy alone. Further, the PIC16F877A automatically determines the best available energy at any given time to be provided to the load. Since the proposed model provides more energy output per unit area, it can be used on many house rooftops as a reserve energy supply unit, reducing the customer's dependency on the main EB supply and thereby reducing the overall power consumption cost.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 G. Ramya, P. Suresh, R. Arthi, K.
Murugesan, Mayakannan Selvaraju https://spast.org/techrep/article/view/2174 Mammogram Based Breast Cancer Detection Using Deep Learning Technique 2021-10-01T16:38:47+00:00 bharathasreeja bharathasreejaece@rmkcet.ac.in <p>Breast cancer is a common disease in today’s world. An analysis and investigation of suitable image processing techniques for breast cancer detection in mammogram images are proposed in this paper. Preprocessing is the first step, removing the noise in breast cancer mammogram images using different types of filters. It is possible to characterize the various noises and find out which filter method is best for removing noise in mammogram images. The main goal of the preprocessing is to improve the mammogram image quality and make it ready for segmentation and feature extraction.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 bharathasreeja https://spast.org/techrep/article/view/2210 A Predictive Model for Student Employability Using Deep Learning Techniques 2021-10-01T13:31:39+00:00 Biku Abraham biku.abraham@saintgits.org Ambili P.S ambili.ps@reva.edu.in <p>Education in the present scenario is Outcome Based and focuses&nbsp;mainly on the skill sets a student acquires on completion of the studies. As higher education systems grow and diversify, this sector has now been identified as one of the promising areas for private and foreign investments. Society is increasingly concerned about the quality of programs, international rankings and placement statistics of HEIs (Higher Education Institutions). The need to enhance the vocational skills of graduates is a challenge to institutions in this circumstance.
The major aim of this study is to support institutions and curriculum designers in assessing students’ overall qualitative and quantitative growth during the course of study, so as to facilitate timely educational interventions and equip students for suitable employment.</p> <p>The goal of this study is to provide tools/guidance to educators and assessment developers on how to predict student placement possibilities using performance-based assessments.&nbsp;The work is intended to benefit institutions, students, curriculum designers and faculty by improving their understanding of how much the students are learning and how likely they are to gain employment. This study concentrates on how demographic data, scholastic and co-scholastic abilities of students, faculty characteristics and teaching practices contribute to student learning.&nbsp;</p> 2021-10-03T00:00:00+00:00 Copyright (c) 2021 Biku Abraham, Ambili P.S https://spast.org/techrep/article/view/82 A Systematic Level Review on usage of Internet of Things (IoT) technologies in mushroom cultivation 2021-07-21T10:02:14+00:00 Nisha Aggarwal me.nisha.aggarwal@gmail.com Dinesh Singh dineshsingh.cse@dcrustm.org <p>With the advent of new technologies, IoT came into existence and created its niche in the technological world with its outstanding functionalities. Without human intervention, things can communicate with each other and perform their defined actions. Integration of different sensors helps in collecting real-time data in different applications. Smart farming is described as the application of modern technologies to farming practices in order to achieve continuous improvement in farming procedures, resulting in increased productivity. The Internet of Things (IoT) is blending with modern agriculture because it enables farmers to track their farms in real time and access all of the information they need from any place at any time. Mushroom cultivation has also experienced similar trends.
Improved production and quality of crops can be obtained by controlling the climate for mushroom cultivation, as the ideal environmental conditions such as temperature, carbon dioxide, humidity level, sunlight, nutrients, and pH can be monitored and regulated using modern IoT-enabled techniques.</p> <p>This research article presents a systematic literature review (2007-2020) of the technologies currently used in mushroom cultivation using IoT with sensors. A review of IoT technologies such as gateways, sensor types, communication systems, nature of experiments and user interfaces is presented. The advantages and disadvantages of using these modern technologies in mushroom cultivation are also discussed. A SWOT analysis of the Indian mushroom industry has also been presented to support better, cost-effective mushroom cultivation methods, which will help in further analysis. It is found that wireless sensor networking is helpful in maintaining and controlling optimum environmental parameters such as humidity, temperature and carbon dioxide level. Automated systems outperform traditional methods.</p> 2021-07-21T00:00:00+00:00 Copyright (c) 2021 Dinesh Singh https://spast.org/techrep/article/view/2586 REAL TIME FACE DETECTION AND RECOGNITION FROM VIDEO USING DEEPFACE CONVOLUTIONAL NEURAL NETWORK 2021-10-15T02:44:55+00:00 Poorni R poorniram21@gmail.com Amritha B kannanarchieves@gmail.com Bhavyashree P kannanarchieves@gmail.com Charulatha S kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p>Purpose: To implement a secured and contactless in-store order pickup system based on real-time face recognition, to ensure the authenticity of the consumer and also to reduce the risk to frontline workers who are vulnerable to the prevailing COVID-19.</p> <p>Methodology: An online shopping website is developed using HTML/CSS. A QR code is generated for the corresponding order ID.
In this system, face detection and recognition are done using a Haar Cascade Classifier and a Convolutional Neural Network algorithm. 2D convolution is used to train the model. Three layers of convolution are used to obtain a testing accuracy of 98.99% and a validation accuracy of 94.76%. ReLU and Softmax activation functions are used in this system. The Structural Similarity Index is used to compare faces and get the desired output.</p> <p>Findings: The input layer of a CNN contains the image data, represented as a three-dimensional matrix, which is reshaped into a single column; if there are “p” training examples, the dimension of the input will be (625, p). The convolutional layer is sometimes called the feature extractor layer, since the features of the image are extracted within this layer. First, a patch of the image is connected to the convolutional layer to perform the convolution operation, calculating the dot product between the receptive field (a local region of the input image that has the same size as the filter) and the filter. The pooling layer is used to reduce the spatial volume of the input image after convolution; it is used between two convolution layers. The Euclidean distance, or Euclidean metric, is the ordinary distance between two points that one would measure with a ruler, and is given by the Pythagorean formula.</p> <p>Originality/value: The training accuracy acquired during training is 98.99 percent, and the validation accuracy is 94.76 percent.
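The convolution, ReLU, pooling and softmax operations described in the findings can be illustrated with a minimal NumPy sketch (our own toy example, not the authors' implementation; the input and kernel values are arbitrary):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: dot product of each receptive field with the filter."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(0.0, x)

def max_pool(x, size=2):
    """Reduce the spatial volume by taking the max over non-overlapping windows."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

image = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
kernel = np.array([[-1.0, 0.0], [0.0, 1.0]])       # arbitrary 2x2 filter
features = max_pool(relu(conv2d(image, kernel)))   # pooled feature map, shape (2, 2)
probs = softmax(features.flatten()[:3])            # toy 3-class output head
print(features.shape, float(probs.sum()))
```

For this particular input, each convolution output is `image[i+1, j+1] - image[i, j] = 7`, so ReLU passes everything through unchanged and pooling leaves a 2x2 map; the softmax output always sums to 1, as a class-probability vector must.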
Thus, the CNN model may be utilized to detect and recognize faces accurately from any given video.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Poorni R, Amritha B, Bhavyashree P, Charulatha S, Mayakannan Selvaraju https://spast.org/techrep/article/view/2045 Classification of burn images using state-of-art deep learning techniques 2021-10-02T12:15:57+00:00 JEEVA JB jbjeeva@vit.ac.in Rohit Volety rohit.volety2018@vitstudent.ac.in <p><span style="font-weight: 400;">An important step towards the successful healing of any burn is to first identify the correct course of treatment. Presently the most popular method of classifying burn images is manual inspection, where the specialist looks at the burned skin and then suggests treatment. Events like burning buildings, forest fires, etc. can lead to a high number of burn victims. In situations like a pandemic, affected people cannot visit hospitals due to restrictions. Therefore, the objective of this work is to propose a system that can classify burns without any bias, reduce the workload on doctors so that they are not overworked in situations of mass casualty, and also help people living in rural areas. Hence an automatic system is proposed which can classify a burn wound into 4 broad categories: 1st-, 2nd-, and 3rd-degree burns, and not-a-burn images. 514 images were collected from different web resources like medical journals, medical magazines and medical books, consisting of 128 images in the no-burn category and 127 images for each of the 1st, 2nd and 3rd degree burns. Figure 1 shows the methodology that was followed for the experimentation. The methodology consists of web-scraping images for the dataset, then choosing the images for the dataset, preprocessing them and augmenting them to make a large dataset, after which it is split into train and test datasets for model training and model evaluation respectively.
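The augment-then-split step of the pipeline described above can be sketched as follows; the arrays, helper names and augmentation choices (flips and rotations) are our illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(images):
    """Enlarge a small dataset with horizontal flips and 90-degree rotations."""
    flipped = [np.fliplr(im) for im in images]
    rotated = [np.rot90(im) for im in images]
    return images + flipped + rotated

def train_test_split(samples, labels, test_fraction=0.2):
    """Shuffle, then hold out a fraction of the data for evaluation."""
    idx = rng.permutation(len(samples))
    n_test = int(len(samples) * test_fraction)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return ([samples[i] for i in train_idx], [labels[i] for i in train_idx],
            [samples[i] for i in test_idx], [labels[i] for i in test_idx])

# Toy stand-ins for the collected burn images: random 64x64 grayscale arrays.
images = [rng.random((64, 64)) for _ in range(10)]
labels = list(range(10))

aug_images = augment(images)
aug_labels = labels * 3  # one label copy per augmented variant
X_train, y_train, X_test, y_test = train_test_split(aug_images, aug_labels)
print(len(aug_images), len(X_train), len(X_test))  # 30 24 6
```

One caveat worth noting: splitting after augmentation, in the order the abstract gives, means variants of the same photograph can land in both sets; in practice the split is often done first to avoid that leakage.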
This dataset was then used to compare the performances of several state-of-the-art deep learning architectures like VGG16, VGG19 [1], ResNet [2], DenseNet [3], InceptionNet [4] [5] and EfficientNet [6]. In this study a new architecture is proposed that uses the above-mentioned architectures as base models, applying the concept of transfer learning by freezing all their layers and connecting them to new layers. Then this data was annotated and fed to the proposed architecture. The software used for training is Google Colaboratory. After training, the model was evaluated on a test dataset. Metrics like accuracy, recall, precision and F1 score are used to evaluate and compare the architectures. The performances of several optimizers like Adam, Adagrad and RMSProp are compared to get the best results. This can be seen from the results shown in Table 1. ResNet-101 gives the best accuracy, 95%, with the Adagrad optimizer. It also achieves the best average recall, average F1 score and average precision of 94%. EfficientNet follows closely in terms of model performance, with an accuracy of 92% with the RMSProp optimizer. The average precision, average recall and average F1 score were obtained for each model, and the ResNet model performed well in all aspects with scores of 0.94, 0.94 and 0.94 respectively, followed by the EfficientNet model. The lowest average precision, average recall and average F1 score are seen in the Inception model, followed by VGG16 and then VGG19. The impact of an automated burn classifier on society can be huge, as these types of systems do not discriminate against patients on the basis of ethnicity, age, gender, etc. This ensures a fair diagnosis for every patient who is affected. This system can also be easily deployed as a web app, which can make it much more accessible.
</span></p> <p>&nbsp;</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 JEEVA JB, Rohit Volety https://spast.org/techrep/article/view/3454 IOT ENABLED SMART IRRIGATION AND CULTIVATION RECOMMENDATION SYSTEM FOR PRECISION AGRICULTURE 2021-11-18T06:20:17+00:00 V. Elizabeth Jesi jesiv@srmist.edu.in Anil Kumar akdwivedinutra25@gmail.com Bappa Hosen hosenbappa@gmail.com D. Stalin David sdstalindavid707@gmail.com <p>Agriculture is a separate economic sector. In agriculture, India ranks second. Agriculturists stress the need for fertilization and crop rotation. Without technological advancements, today's farmers are unable to produce the maximum amount of food. The major focus should be on using and collaborating with emerging technology in agriculture to boost output. The Internet of Things (IoT) helps estimate crop yields and other factors that contribute to high productivity. Soil temperature, pH, and water level all play a role in delivering optimal crops and increasing productivity. This study proposes "ACRIS: Agriculture Cultivation Recommender and Smart Irrigation System" to help farmers who use IoT in Precision Agriculture to increase crop output. The ACRIS system has three modules. The first ACRIS module is "Accurate Farming Recommendations Using an Agriculture Factor-based Relevance Vector Analysis Model"; this model offers a more favourable situation based on relevance vector analysis. The second module is the "AISM System: Advanced Irrigation Planner for Precision Farmers". This module forecasts soil moisture and organises irrigation for farmers using precision agriculture to decrease water usage and boost productivity. The third module is the "AMOP System: ACRIS Multiparameter Optimization Systems for Precision Agriculture". This module compares water content at different phases of plant development and integrates IoT technologies into agriculture to ensure optimal crop growth and water stability.
Agricultural production is enormous, and it is vital to farmers' income. The goal of "ACRIS: Agriculture Cultivation Recommender and Smart Irrigation System" is to optimize water use in precision farming by combining IoT and machine learning. The proposed strategy works well for large agricultural fields. This technology aids in anticipating irrigation planning based on irrigation needs using multiple sensor metrics. Soil moisture, temperature, and humidity are predicted. The experimental evidence demonstrates smart irrigation with high crop yields using less water.</p> 2021-11-18T00:00:00+00:00 Copyright (c) 2021 V. Elizabeth Jesi, Anil Kumar, Bappa Hosen, D. Stalin David https://spast.org/techrep/article/view/2782 Role of Internet of Things (IoT) Increasing Quality Implementation in Oman Hospitals During Covid-19 2021-10-17T18:05:10+00:00 Malik Mustafa abhishek14482@gmail.com Ali Al-Badi ieeemtech@gmail.com <p><strong>Abstract</strong></p> <p>Over the last decade, the rapid development of the Internet of Things (IoT) has dramatically affected the health information industry, enhancing the health care delivery system through improved efficiency and reduced time and cost. Nevertheless, the applicability of McLean and DeLone's Information Systems (I.S.) success framework to health-related IoT has not yet been proven [1][2]. Thus, this research aims to establish the importance of information system (I.S.) application and its affiliation with client fulfilment, the net benefits of IoT, and user intention in five health care providers in developing nations such as Oman [3]. Moreover, this research focuses on infrastructural and technological factors, because these are necessary to improve the efficiency of the social healthcare process for residents. 
It is hoped that these findings will motivate hospital management to concentrate on the critical factors that influence the use of the Internet of Things (IoT) in health systems. This study's main objective is to evaluate the effects of the Internet of Things (IoT) on the performance of human services in Oman from clinical medical practitioners' perspective. A further aim of this research is to analyze the factors that influence the successful implementation of IoT in health care centres in Oman. A total of 750 survey questionnaires were circulated to five hospitals in Oman, and 430 of the 750 were returned, a response rate of 57%. The present study also examines the infrastructural and technological characteristics in the McLean and DeLone success framework model and the impact of these variables on citizen satisfaction and user intention towards IoT as part of health care services. PLS-SEM was utilized for hypothesis testing covering technological factors, infrastructure factors, system quality, information quality, and service quality. The findings support infrastructural factors, technological factors, service quality, system quality, and information quality as significant factors that can affect the successful implementation of IoT in healthcare centres in Oman. This research extends McLean and DeLone's model by analyzing the impact of infrastructural facilities and technological factors, which are the most vital factors in Arab countries. [4][5]</p> 2021-10-19T00:00:00+00:00 Copyright (c) 2021 Malik Mustafa, Ali Al-Badi https://spast.org/techrep/article/view/2155 Effective Heart Disease detection based on convolution neural network 2021-10-01T17:27:40+00:00 bharathasreeja bharathasreejaece@rmkcet.ac.in <p>Heart disease is a major cause of mortality all over the world. 
Rapid urbanization and the lifestyle changes of the past two decades have led to the growth of coronary risk factors such as diabetes, hypertension, atherogenic dyslipidemia, obesity and physical inactivity. A major problem with heart disease is that it shows no visible symptoms or changes. It is often noticed only when it strikes, and in the worst cases it goes unnoticed until the patient's last breath. To overcome this, we have proposed a model that can predict the possibility of heart disease, so that patients can be treated at the right time and immediate action taken. This model uses a predefined dataset to predict the possibility of heart disease, consisting of records of patients who tested positive for heart disease and of patients who tested negative.</p> <p>&nbsp;</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 bharathasreeja https://spast.org/techrep/article/view/1502 Reconstructing Images using Super-Resolution Generative Adversarial Networks - SRGAN 2021-09-30T19:48:33+00:00 NATHIYA S nathiya.s@vit.ac.in <p>In most research communities, Machine Learning (ML) and Deep Learning (DL) play a vital role. Many digital image applications need high-resolution images to perform their tasks. Single-image super-resolution (SISR) is a broadly used technique for solving Computer Vision related problems. Super-Resolution is performed by generating single or multiple high-resolution images from single or multiple low-resolution images. Single-Image Super-Resolution (SISR) is the concept of obtaining good-quality (high-resolution) images from lower-resolution images [3]. This process is relatively difficult, and issues remain with the given low-resolution images. 
There are many applications related to this, such as image analysis, image/video compression, text-to-image generation, interactive image generation, medical image processing, etc. The objective of SISR is to generate high-resolution or super-resolution images from given low-resolution images, so the high-resolution images are available only during the training process. Super-Resolution algorithms are classified into two groups: model-based and learning-based algorithms. Model-based algorithms work with noisy images and sub-sampled versions of the high-resolution images that are converted into low-resolution images [5]; reconstruction of the HR images from LR images therefore contends with additive noise and blurring. In learning-based algorithms, reconstruction generally occurs by learning representations of patches and reconstructing the image patch by patch, which may lead to computational overhead.&nbsp; These problems can be overcome using Deep Learning (Generative Adversarial Networks) to achieve a better outcome in Single-Image Super-Resolution. Generative Adversarial Networks (GANs) associated with Deep Learning play an important role in Computer Vision and have driven innovative ideas in recent research [1]. GANs are used in many Computer Vision related problems; a Generative Adversarial Network (GAN) belongs to the category of Machine Learning (ML) frameworks. SR algorithms may differ in many aspects such as architectures, models, functions, principles, and strategies. Recent advances have surveyed SR-related problems in a systematic and comprehensive manner. Deep Learning can also overcome problems such as enhancing the quality, contrast and resolution of images from low-resolution inputs [2]. 
Neural networks have performed well in the past, and generative adversarial networks are now expected to achieve more than traditional deep learning methods. This approach is used in Super-Resolution techniques in many fields of research such as medicine, object generation, image processing, texture transfer, face detection, etc. Super-Resolution is the task of creating high-resolution (HR) images from low-resolution images, where the finer details must be recovered even at larger up-scaling factors. Combining GANs with Super-Resolution yields the Super-Resolution Generative Adversarial Network (SRGAN) for single-image super-resolution (SISR) [4]. The major intention is to acquire high-resolution images from low-resolution images through image enhancement, which can be achieved in various ways such as up-scaling images, noise reduction, and color adjustments. By up-scaling the low-resolution images, super-resolution or high-resolution images are reconstructed without losing their texture features. Deep learning also provides a better route to optimized results. 
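Reconstruction quality in super-resolution work is commonly scored with PSNR; a minimal sketch of the computation (illustrative only, using random arrays in place of real reference and reconstructed images):

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)          # stand-in "ground truth"
noisy = np.clip(ref.astype(int) + rng.integers(-10, 11, size=ref.shape), 0, 255)
score = psnr(ref, noisy)  # higher is better; identical images give infinity
```

SSIM, the other metric mentioned, is more involved (local means, variances and covariances); libraries such as scikit-image provide it ready-made.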
Therefore, with respect to quality measurements such as Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM), this paper shows that SRGAN provides finer details and higher quality.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 NATHIYA S https://spast.org/techrep/article/view/2895 G GCL And ILM Layer Extraction From OCT Images For Glaucoma Detection 2021-10-21T06:15:58+00:00 Gangadevi Bedke gangabedke@gmail.com Mukti Jadhav muktijadhav@gmail.com Promodini Punde pramodinidange@gmail.com Swapnil Dongaonkar dseh2018@gmail.com <p>Glaucoma is the second leading cause of blindness worldwide. It can be caused by an increase in intra-ocular pressure and loss of retinal nerve fiber layers, and it can cause total vision loss if not treated early, so there is a need for early detection of glaucoma. For the early detection of glaucoma, we used Optical Coherence Tomography (OCT) images; we collected 87 normal and 112 glaucomatous images. In this research study, we analyzed OCT images and extracted the GCL and ILM layers using image processing techniques; after extraction of the GCL and ILM layers, we measured the thickness of the nerve fiber layers (NFL). To classify images into normal and glaucomatous groups, we use a Support Vector Machine classifier.</p> 2021-10-21T00:00:00+00:00 Copyright (c) 2021 Gangadevi Bedke, Mukti Jadhav, Promodini Punde, Swapnil Dongaonkar https://spast.org/techrep/article/view/2378 e Medical image denoising and classification based on machine learning- A review 2021-10-09T13:48:24+00:00 Saumya Chaturvedi saumyanmishra5@gmail.com T. 
Aditya Sai Srinivas taditya1033@gmail.com R.Karthikeyan karthikhonda77@gmail.com M.Vijayaraj vijayarajmsec@gmail.com A Nirmal Kumar sa.nirmalkumar@gmail.com M.sangeetha sangeetha.m@reva.edu.in Mayakannan Selvaraju kannanarchieves@gmail.com <p>Advances in medical imaging technology continue to create new possibilities for the collection of medical data that are important for timely and accurate diagnosis, for monitoring progress, for the treatment of various diseases, and for medical research. The capabilities of the new techniques arise mainly from technologies that depict the in vivo interior of the human body. Thus the study of the morphology and function of the various organs, and the detection of any pathologies, is achieved in a very direct way. The "source imaging data" they provide is important information, but its volume is constantly growing, and its nature creates the need for further processing with the help of computers. The primary purpose of image processing is denoising, which involves eliminating noise caused by technical errors while preserving features. Following noise reduction, image segmentation, i.e. locating the areas of interest in an image, is the central objective of the process. In addition, the complexity of data in large volumes and charts usually requires a lot of time to study and a lot of experience to interpret correctly. Therefore, in many cases machine learning is used to automate the partitioning process and also to categorize images, i.e. to classify an image or parts of an image into specific categories. In most applications, machine learning performance is better than conventional techniques.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Saumya Chaturvedi, T. 
Aditya Sai Srinivas, R.Karthikeyan, M.Vijayaraj, A Nirmal Kumar, M.sangeetha, Mayakannan Selvaraju https://spast.org/techrep/article/view/1013 Detection and Classification of Thoracic Diseases in Medical Images using Artificial Intelligence Techniques 2021-09-20T08:32:36+00:00 Shubhra Prakash shubhra.prakash@res.christuniversity.in Ramamurthy B ramamurthy.b@christuniversity.in <p>Background: Artificial Intelligence is at the leading edge of innovation and is developing very fast. In recent studies, it has played a progressive and vital role in Computer-Aided Diagnosis (CAD). Some studies of deep learning, a subset of artificial intelligence applied to lesion/nodule detection or classification, have reported higher performance than conventional techniques, or even better than radiologists in some tasks. However, these approaches have targeted a single disease or abnormality, limiting their value in general clinical practice. The interpretation of medical images requires assessing the various diseases and abnormalities associated with the body part. The chest is one of the largest parts of the human anatomy and contains several vital organs inside the thoracic cavity. Furthermore, chest radiographs are the most commonly ordered examinations and are used globally by physicians for diagnosis. Automated, fast, and reliable detection of diseases based on chest radiography can be a critical step in the radiology workflow. 
For this research work, we propose to develop a framework for automatic detection of thoracic diseases, to help alert radiologists and clinicians to potential abnormal findings as a means of worklist triaging and reporting prioritization [1]-[9].<br>Objective: This study presents the conduct and results of a systematic review that aims to investigate Artificial Intelligence techniques to identify thoracic diseases in medical images.<br>Methods: The systematic review was carried out according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Science Direct, IEEE Xplore, and PubMed were used as the scientific databases to search for research articles published in English, which were filtered based on defined inclusion and exclusion criteria. At the time of writing (August 1, 2021), 348 studies were retrieved, out of which 182 were included. 52 additional papers were also included based on specific body parts in the thoracic region.<br>Results and Conclusion: This review provides a systematic analysis of the current status of Artificial Intelligence techniques in the detection and classification of thoracic diseases. On the whole, transfer learning is the most commonly used approach, and it displays good performance with the available datasets. However, it is difficult to compare different classification methods due to the lack of a standard dataset, which makes reproducibility difficult.</p> 2021-09-20T00:00:00+00:00 Copyright (c) 2021 Shubhra Prakash, Ramamurthy B https://spast.org/techrep/article/view/385 Prediction Of Epidemic Outbreak Using Social Media Data 2021-09-14T09:00:31+00:00 Harin Mehta udit.mamodiya@poornima.org <p><span style="font-weight: 400;">Big data [1] is a term that refers to the large amounts of structured, semi-structured, and unstructured data that a company encounters on a daily basis. However, it is not the amount of information that is important. 
What matters is what organizations do with their data. Big data can be analyzed for insights that lead to informed judgments and smart business moves. It is difficult or impossible to process using traditional methods because it is so large, fast-moving, or complex. [2] The act of obtaining and storing massive volumes of data has been around for a long time in analytics. The notion of big data gained traction in the early 2000s when market analyst Doug Laney proposed a wide perspective of big data called the five Vs (Volume, Velocity, Variety, Veracity, and Value). [3] Volume refers to the amount of data, Velocity to the speed at which data is created, and Variety to the heterogeneity and complexity of the data (e.g. multilingual text, images, videos, voice, functions, producer population, context, consumer characteristics, etc.).</span></p> <p><span style="font-weight: 400;">When all is said and done, owning and generating Big Data is pointless on its own. The research community in data mining and artificial intelligence should drive the next big step: "to provide reliable, efficient, predictable, and timely information from varied, complex, online, high-volume data".</span></p> <p><span style="font-weight: 400;">Epidemics cause serious economic, health and social consequences worldwide [4]. An epidemic occurs when the spread of an infectious disease reaches a stage where it is likely to spread across the country. Epidemics can wipe out entire populations. [5] Cholera, influenza, yellow fever, dengue fever, avian flu, and diphtheria are just a few of the well-known illnesses that have afflicted people all over the world. Infectious illness accounts for 43 percent of all deaths worldwide and causes severe health issues. [6] In both current and historical eras, India, like many areas of the world, has seen epidemics. 
Identification and management of disease outbreaks in the community is an essential and challenging responsibility in maintaining a healthy environment.</span></p> <p><span style="font-weight: 400;">Over the past few years, data collected from social media have offered an unparalleled opportunity to explore this challenge. Most studies use social media platforms such as Twitter, which is a valuable asset for tracking trends for a number of reasons. The high frequency of posted signals aids minute-by-minute examination. Twitter posts are open to the public and very informative, as opposed to search engine logs. In addition, the required tweet details can be easily retrieved with Twitter APIs. The purpose of this study is to analyse the latest outbreak-prediction methods, tools and frameworks.</span></p> 2021-09-15T00:00:00+00:00 Copyright (c) 2021 Udit Mamodiya https://spast.org/techrep/article/view/1891 INTELLIGENT TRACKING AND NAVIGATING MOVING OBJECTS IN A SMART ENVIRONMENT USING IOT NETWORK 2021-10-08T14:17:07+00:00 Mayakannan Selvaraju kannanarchieves@gmail.com Dr.R.Parvathi 2005.parvathi@gmail.com T. Ch. Anil Kumar tcak_mech@vignan.ac.in A.Nandhakumar nandhakumar3107@gmail.com Haqqani Arshad arshadh@rcyci.edu.sa Ajith.B.Singh ajith.b.singh@gmail.com <p>The Internet of Things (IoT) combines sensors, controllers, actuators, and connectivity. Tracking and navigating moving objects are crucial tasks for security as well as for a secure and smart environment. Using sensory data, the location of objects can be identified; the data are then transferred to controllers for further processing. One pivotal role of the system is to identify the trajectory and location of the object using speed, velocity, acceleration, maps, etc., which can be done by machine learning programs. 
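The trajectory estimation mentioned above — projecting an object's next location from its current position, velocity and acceleration — can be sketched with simple kinematics (an illustrative sketch under assumed units, not the paper's actual model):

```python
def predict_position(pos, vel, acc, dt):
    """Dead-reckoning estimate per axis: x' = x + v*dt + 0.5*a*dt^2."""
    return tuple(p + v * dt + 0.5 * a * dt * dt for p, v, a in zip(pos, vel, acc))

# Hypothetical object moving east at 2 m/s while accelerating 0.5 m/s^2 north,
# projected 2 seconds ahead
next_pos = predict_position(pos=(0.0, 0.0), vel=(2.0, 0.0), acc=(0.0, 0.5), dt=2.0)
# → (4.0, 1.0)
```

In practice a learned model would refine such physics-based estimates with map constraints and sensor noise handling.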
By analysing the required service, predicting the trajectory, and using real-world knowledge of maps and locations, service request parameters are adjusted to dynamically discover relevant virtual objects and overcome loop-carried dependency. This paper proposes an architecture that supports tracking and allocating relevant virtual objects based on self-aware environment technology in IoT to track moving objects.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Mayakannan Selvaraju, Dr.R.Parvathi, T. Ch. Anil Kumar, A.Nandhakumar, Haqqani Arshad, Ajith.B.Singh https://spast.org/techrep/article/view/2590 ANALYSIS OF EMPLOYABILITY SKILLS AMONG RURAL GRADUATES 2021-10-15T03:09:43+00:00 Sagayaraj K.L kasisagay@gmail.com Dr.Nisha Ashokan kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p>Purpose: The purpose of education is to bring out the good qualities that are hidden in students. As employability skills become more important for graduates to secure good placements, it is the need of the hour to explore new ways of imparting these skills to graduates.</p> <p>Methodology: This study makes a critical evaluation of the existing literature on various aspects of employability skills among rural graduates.</p> <p>Findings: This study aimed at exploring the existing literature on employability skills among graduates.&nbsp; An analysis of the various dimensions, categories, and frameworks of employability skills among graduate students is enumerated. This article identifies the gap that exists between academic institutions and employers. 
This study also focused on the important skills that employers expect in the 21<sup>st</sup> century.</p> <p>Originality/value: This study has identified the key employability skills that are emerging in the 21<sup>st</sup> century and how those skills are to be imparted to graduates.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Sagayaraj K.L, Dr.Nisha Ashokan, Mayakannan Selvaraju https://spast.org/techrep/article/view/2025 Machine learning for Precision Agriculture: applications, issues and challenges 2021-09-30T16:35:41+00:00 Dr.M.A.Jabbar jabbar.meerja@gmail.com <p>Agriculture is very important for economic growth in any country. With the rapid increase in population, changing climatic conditions, and fewer resources, it is going to be very difficult to provide for the basic needs of the population. In previous days we used traditional agriculture; now we have new technologies to improve crop yield, but we cannot increase the land available. Precision agriculture is an innovative technology to overcome the present situation in agriculture, making efficient and effective farming possible. For smart farming, Machine Learning together with IoT (Internet of Things) is a revolution, the next step in agriculture. Machine learning with computer vision can analyse crop images to monitor crop quality and assess yield. Precision agriculture is technology-enabled farm management. Emerging technologies such as IoT, cloud computing, AI and ML, and blockchain are the backbone of precision agriculture, and many researchers are integrating IoT, WSN, and ML for effective farming. Machine learning techniques have been widely used for harvesting, drip irrigation, crop yield prediction, soil prediction, and livestock management. 
This paper explores v</p> 2021-10-05T00:00:00+00:00 Copyright (c) 2021 Dr.M.A.Jabbar https://spast.org/techrep/article/view/552 Enhancement of images in foggy weather conditions 2021-09-14T19:41:13+00:00 Rajashree Das rajashreed08@gmail.com Tridipjit Rajkonwar udit.mamodiya@poornima.org Dibya Jyoti Bora udit.mamodiya@poornima.org <p><span style="font-weight: 400;">The aim of this experimental work is to improve the quality and texture of an image, as images taken in foggy weather conditions are not clearly visible: the fine water droplets suspended in fog block and scatter light, reducing visibility. Different algorithms and techniques that enhance image quality were studied.</span></p> 2021-09-15T00:00:00+00:00 Copyright (c) 2021 Udit Mamodiya https://spast.org/techrep/article/view/1408 ARTIFICIAL INTELLIGENCE BASED BREAST CANCER ANALYSIS TECHNIQUE 2021-09-29T07:03:31+00:00 Sivakumar Rajagopal rsivakumar@vit.ac.in Piyush Srichandan piyush.srichandan2019@vitstudent.ac.in Ayushi Tiwari ayushi.tiwari2019@vitstudent.ac.in <p>Breast cancer can be simply understood as an abnormal growth of cells that possess invasive capabilities. Some of these cells can leave the tumor location and invade the lymph nodes or the nearest underlying muscle tissues, and that is when breast cancer becomes life-threatening. Breast cancer can appear in different parts of the breast; it is quite complicated and presents itself in very unique ways in different people. It is estimated that one out of 22 women in India will get breast cancer in her lifetime. Each year more than 2 million people are diagnosed with breast cancer globally. A woman is at a significantly higher risk than a man of getting breast cancer; women have a lifetime risk of about 10 percent of getting breast cancer [1].</p> <p>&nbsp;An early diagnosis makes all the difference. 
Treating breast cancer early provides the best chance of preventing the disease from returning and potentially reaching an incurable stage. There are countless benefits to an early diagnosis. At an early stage, breast cancer is a highly curable and survivable disease, and early detection gives the patient many options to consider down the road instead of jumping to the last viable solution of mastectomy. Usually, when a patient experiences unusual symptoms such as nipple discharge or a lump, they should make sure to book an appointment with their doctor. Any reason for a woman to think that something has changed in her breasts is an indication. The first step of diagnosis is targeted imaging with the help of mammograms, tomosynthesis imaging or, in some cases, ultrasound. But usually the first set of scans does not impart complete information, and hence additional scans every consecutive month are advised by doctors. In some cases, a biopsy can be carried out to determine whether the tumor is benign or malignant. An accurate prognosis can be quite difficult because of the biological heterogeneity of breast cancer. Mammography has been shown to provide accurate results for early breast cancer screening, but in some specific situations, such as a patient with dense breasts [2], uncommon architectural distortions, or significant extensive scarring from prior biopsies, the results were quite unreliable [3]. Conventional imaging techniques cannot precisely detect the involvement of axillary lymph nodes or even the presence of distant metastases, which adversely affects the further prognosis of the patient [4].</p> <p>Hence, we are proposing the idea of using computed tomography imaging techniques over conventional imaging techniques. Computed tomography is an x-ray technique used to diagnose diseases and injuries. 
Tomos = slice; graphein = imaging of an object by analyzing its slices.&nbsp; If a patient has a large breast tumor, a CT scan may be ordered to assess how far the cancer has spread into the chest wall (Figure 1). This helps determine whether the cancer can be removed with mastectomy. CT scans are also used to examine other parts of the body to which breast cancer can spread, such as the lymph nodes, lungs, liver, brain, or spine.&nbsp; If symptoms or other findings suggest that the cancer has spread extensively, CT scans of the head, chest, and abdomen are also needed [5].</p> <p>We are going to carry out this task through “ARTIFICIAL INTELLIGENCE”. Since breast imaging is facing exponential growth in imaging requests, adopting AI can take the edge off these pressures by improving workflow efficiency and patient outcomes as well.</p> 2021-10-06T00:00:00+00:00 Copyright (c) 2021 Sivakumar Rajagopal, Piyush Srichandan, Ayushi Tiwari https://spast.org/techrep/article/view/2801 The Impact of Work-From-Home and Sustainability Concerns on Residential Electricity Consumptions During COVID-19 2021-10-17T13:56:07+00:00 Padma Priya R padmapriya.r@vit.ac.in Rishabh Jain rishabhjain1997@gmail.com Rekha D rekha.d@vit.ac.in <p>Around the world during the COVID-19 pandemic, a common but indeed milestone transition occurred in employee working practices. It was a period in which employees of most businesses adopted and continued working in a new style known as Work-From-Home (WFH) [1-2], as opposed to commuting to their office premises. With the growing adoption of WFH patterns, the energy usage of smart devices such as laptops, monitors, desktop CPUs, and mobile phones has grown rapidly. 
At the same time, countries around the world are planning to harness more electricity generation through <em>renewables</em>-based establishments, to become more responsible towards their sustainability goals. Before the WFH era, electricity load prediction for residential homes in the literature mainly considered electrical devices such as washing machines, saunas, air conditioners, dishwashers and TVs; the usage of smart devices such as laptops, monitors, desktop CPUs, and mobile phones was largely ignored. With the recent rise in WFH working patterns, there is a pressing necessity to consider the electricity consumed by devices that are connected to the Internet. In this paper we predict the total household energy consumption load and identify anomalous behaviours in the predicted load patterns of households from the perspective of internet-based smart devices. In our proposed architecture, we consider a federated learning architecture instead of a centralized learning model. The proposed federated learning model consists of two phases: 1) a clustering phase and 2) a federated learning and network-device electricity load prediction phase. In the <em>first phase</em>, smart meters are clustered based on the energy consumed by network-connected residential devices, particularly mobile phones, home office devices (monitors, laptops, tablets, etc.) and security (surveillance, etc.) devices. The features collected from each smart meter (representing an individual home) are transferred to an RNN-based regression model running on fog devices (gateway devices in the apartment building). During the <em>second phase</em>, these residential houses are aggregated separately for each cluster to create cluster-specific models. 
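The cluster-wise aggregation described above follows the federated-averaging pattern: each home trains locally and only model parameters are pooled per cluster. A minimal numpy sketch (illustrative only — the paper's model is an RNN, whereas here the weights are plain vectors and the meter data are made up):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """One FedAvg aggregation step: average client weights,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Hypothetical weight vectors from three smart meters in one cluster
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]  # local training samples per meter
cluster_model = federated_average(weights, sizes)  # → array([3.5, 4.5])
```

The same weighted average applied per cluster yields the cluster-specific models; raw consumption data never leaves the home, only the weights do.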
Further, the proposed RNN-based regression model predicts internet-oriented energy consumption on the clustered smart meters. Also, to the best of our knowledge, this paper is the first to identify the energy deficit that may be incurred when renewables-based generation (both solar and wind turbines) supplies power while certain network-based devices are powered for prolonged periods, on top of other household devices in residential buildings, and the first to consider residential network-device-based energy consumption. The aim is to understand whether a given level of renewables-based electricity production established in residential communities will satisfy load demands during a rise in the WFH working pattern, and, if there is a deficit in meeting the load through renewables, how much more energy must be supplied. We also envision a distributed fog network paradigm in which smart meters, acting as edge devices, communicate with the fog network where the federation phase takes place. Thus, in this paper we aim to give countries insight into the increased residential power demands to be considered before planning and constructing renewable energy generation systems, should WFH continue to be a plausible working style in the future.</p> <p>In this paper, the “HUE: The Hourly Usage of Energy Dataset for Buildings in British Columbia” [3] dataset has been used to train the model and predict energy consumption patterns. The dataset contains hourly energy usage data along with housing attributes for twenty-two households in British Columbia, Canada. We have also used the meteorological data for British Columbia from the National Solar Radiation Database (NSRDB) [4]. Figure 1 shows our system architecture for the two phases of the proposed work. 
RQ1, RQ2 and RQ3 in Figure 1 represent the Research Questions that will be answered in this work: network-connected device load prediction, anomaly detection, and green-energy deficit detection, respectively.</p> <p><img src="https://spast.org/public/site/images/padmapriya_r/mceclip0.png"></p> <p><em>Figure 1: Conceptual Architecture Diagram</em></p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Padma Priya R, Rishabh Jain, Rekha D https://spast.org/techrep/article/view/1520 Authentication and Authorization issues in IoT implementation: A Comprehensive Study 2021-09-30T15:20:42+00:00 Mansi Mehta mansimehta.dcs@indusuni.ac.in Kalyani Patel drkalyaniapatel@yahoo.co.in <p>The Internet currently connects people with one another (P2P), in what is now called Internet Phase-1. The next phase of the Internet has begun: it keeps track of routine activities conducted by people through devices (M2P), and it also allows one device to share its data with another and vice versa (M2M). Unfortunately, these networked appliances of all kinds are vulnerable to cyber-attacks, advanced threats, and hacking. These attacks and threats bring new challenges related to authentication, access control, privacy-aware management of personal data, lack of encryption, application vulnerabilities, inadequate physical security, and many other such risks for IoT devices. This paper discusses the need for security measures like authentication and authorization for controlled access, as IoT is being adopted in almost all walks of life. 
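The green-energy deficit question (RQ3) reduces to comparing household load against renewable generation hour by hour. A minimal illustration of that arithmetic, with invented hourly figures:

```python
# Hypothetical hourly figures in kWh: household load vs. combined solar + wind output.
load       = [1.2, 1.5, 2.0, 1.8]
renewables = [0.9, 1.6, 1.1, 1.8]

# Hourly shortfall: energy that must come from elsewhere when renewables fall short.
# Surplus hours contribute zero (no banking of excess generation in this toy model).
deficit = [max(0.0, l - r) for l, r in zip(load, renewables)]
total_deficit = sum(deficit)  # how much extra renewable capacity is needed
```

Summing the positive shortfalls, rather than the raw difference of totals, matters: an hour of surplus does not cancel an hour of deficit unless storage is assumed.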
The paper will help researchers understand the need for better authentication and authorization techniques for protecting the personal or secondary data of a user, in order to encourage widespread acceptance &amp; implementation of Internet of Things (IoT) based solutions in different sectors like Logistics, Healthcare, Agriculture, Urban/Rural development, Manufacturing and many more.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Mansi Mehta, Dr. Kalyani A. Patel https://spast.org/techrep/article/view/2839 Blockchain and Corporate Banking 2021-10-18T05:25:11+00:00 Eugin Prakash Pathrose eugin.pathrose@skylineuniversity.ac.ae Nileena Saroja kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p>Blockchain technology has grown in prominence since the introduction of cryptocurrencies, whose appeal rests on a legitimacy that blockchain technology ensures. Blockchain technology owes its popularity to three main characteristics. The first is decentralization: there is no central authority or single point of failure that can bring the system down, and each node is self-contained. The second is immutability, which prevents false entries. Third, since any entry must be checked by all nodes in the scheme, the system is transparent; if no consensus is reached, the submission is rejected. The use of blockchain technologies has expanded across a wide range of sectors. Banking is one sector that will be greatly affected: it involves the transfer of large sums of money and faces significant security risks, two problems that blockchain technology is well placed to address. Blockchain technologies can be used to automate several operations in the banking industry. 
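The immutability property mentioned above can be illustrated with a toy hash chain: each block stores a hash computed over its payload and its predecessor's hash, so altering any earlier entry invalidates every later link. This is a deliberately simplified sketch, not the consensus machinery of a real blockchain:

```python
import hashlib

GENESIS = "0" * 64  # placeholder predecessor hash for the first block

def block_hash(prev_hash, payload):
    """Hash a block's payload together with its predecessor's hash."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(entries):
    """Link entries into a chain of (payload, hash) pairs."""
    chain, prev = [], GENESIS
    for payload in entries:
        h = block_hash(prev, payload)
        chain.append((payload, h))
        prev = h
    return chain

def verify(chain):
    """Recompute every link; any tampered payload breaks the chain."""
    prev = GENESIS
    for payload, h in chain:
        if block_hash(prev, payload) != h:
            return False
        prev = h
    return True

chain = build_chain(["pay A->B 100", "pay B->C 40"])
assert verify(chain)
# Rewriting an old transaction without recomputing hashes is detected:
tampered = [("pay A->B 900", chain[0][1])] + chain[1:]
assert not verify(tampered)
```

In a real system every node runs this verification independently, which is what makes rewriting history infeasible without controlling consensus.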
This paper aims to describe such processes and demonstrate how blockchain technology can be used in various areas of the banking industry.</p> 2021-10-19T00:00:00+00:00 Copyright (c) 2021 Eugin Prakash Pathrose, Nileena Saroja, Mayakannan Selvaraju https://spast.org/techrep/article/view/247 INTERNET OF THINGS (IoT) – BASED SMART IRRIGATION SYSTEM 2021-09-10T07:55:42+00:00 Shyam Mohan S jsshyammohan@kanchiuniv.ac.in <p>Agriculture is the science and art of cultivating crops. It is an important sector of the Indian economy, and not only work but also a way of life for many people. Farmers face many challenges, such as irrigation problems: crops may be harmed by over-irrigation or by a lack of irrigation. To avoid this problem, this chapter proposes a smart irrigation system on an IoT platform. The proposed method captures and sends data over a wireless network without any human intervention, using smart wireless sensors to provide an effective management framework. A smart irrigation system based on the Internet of Things is proposed to monitor soil moisture, temperature, and humidity. The control unit is implemented using the microcontroller on the NodeMCU platform. Soil, temperature and humidity sensors are deployed, and the exact moisture levels are monitored via the Blynk mobile app. 
The IoT device keeps track of the soil's moisture content, and this information specifies the required amount of water to be used, avoiding overwatering.</p> 2021-09-10T00:00:00+00:00 Copyright (c) 2021 Shyam Mohan S https://spast.org/techrep/article/view/1757 Smart Attendance 2021-10-08T10:06:40+00:00 Angayarkanni Annamalai S angayarkanni.s.a@gmail.com <p>In recent days, face recognition has found numerous applications in various fields; it is widely used in unlocking phones, keyless cars, etc. Face-X is an attendance system based on face recognition that uses Deep Learning and Python. When an individual's face is shown in front of the webcam, the system detects the face, matches it with the dataset provided, and marks attendance for that person. The system can thus mark attendance for an institution or organization.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Angayarkanni Annamalai S https://spast.org/techrep/article/view/2431 Energy-Efficient Routing in WSN 2021-10-12T09:12:46+00:00 SUMAN JONNALAGADDA sumojon@gmail.com <p>Research in WSN has been increasing tremendously over the years. Because of its reliability, flexibility, accuracy, and easy deployment, WSN has a wide range of applications in fields like Transportation, Industry, Military, Irrigation, Home Automation, Tracking, Security [2], Traffic Control [4-6], Diagnosing Failures in Machines [7], Offices, and Health and Medical [3]. Technological advancements in wireless communications have paved the way for low-cost, small, and powerful multifunctional WSN nodes powered by non-rechargeable batteries. These tiny sensors are deployed in three ways. First, they can be arranged regularly in a fixed manner, with information sent along a predefined path; this is used in medical, home, and industrial areas. 
Second, the nodes can be scattered at random over a finite area, as in rescue operations and in places where environmental or habitat monitoring is required. Lastly, the nodes can be mobile, moved by air, water, or automobiles to overcome deployment shortcomings; this is used on battlegrounds and in emergencies such as fires, volcanoes, or tsunamis.<br>The critical constraint on the sensors is energy. They sense the environment, generate information about the surroundings, and then transfer this data to the Base Station (BS) through their neighboring nodes. In addition, unproductive states such as idle listening, overhearing, interference, and collisions consume additional energy in the network. Energy is also expended when the sensors communicate with each other and with the BS while sending and receiving data, and redundant data generated by the nodes consumes still more. Due to this depletion, sensor nodes die early and become unavailable for communication, which in turn leads to the death of the entire network. Replenishing energy is very expensive, so conserving it to increase network lifetime is one of the most critical problems in WSN. The lifetime of the network depends on how the nodes' energy is used, and to extend it we need better routing methods [10-12].<br>Recent advances in WSN have provided the platform to develop many routing protocols in which energy awareness is an important consideration. The efficiency of the network, which depends on the lifetime of the sensor nodes, can be improved through clustering: the sensor nodes (SNs) in the field are divided into clusters, each with a Cluster Head (CH) that communicates with the Base Station (BS) or the Sink [13]. 
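A common energy-aware refinement in such clustered protocols is to rotate the cluster-head role to whichever node has the most residual energy, since the CH pays the highest per-round cost. A minimal sketch of that rotation; node IDs, energy values, and the per-round costs are illustrative, not taken from any specific protocol:

```python
def elect_cluster_head(cluster):
    """Pick the node with the highest residual energy as cluster head (CH)."""
    return max(cluster, key=lambda node: node["energy"])

def spend_round(cluster, ch, ch_cost=2.0, member_cost=0.5):
    """Debit per-round energy: the CH aggregates and relays, so it pays more."""
    for node in cluster:
        node["energy"] -= ch_cost if node is ch else member_cost

cluster = [{"id": 1, "energy": 9.0},
           {"id": 2, "energy": 7.5},
           {"id": 3, "energy": 8.2}]

ch = elect_cluster_head(cluster)   # node 1 serves first
spend_round(cluster, ch)           # node 1's reserve drops fastest
ch = elect_cluster_head(cluster)   # the role rotates away from node 1
```

Rotating the role spreads the expensive aggregation work across the cluster, which is exactly the lifetime-extension argument made for hierarchical protocols.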
A detailed review of energy-efficient routing protocols in WSN is presented in this paper, with the main focus on Hierarchical Routing protocols. Routing, energy conservation, limited resources, interoperability, security, and scalability are some of the challenging tasks that need to be addressed in the field of WSN. This work is a part of the research work carried out in the Computer Science and Engineering Department, University College of Engineering, Osmania University, and is funded by DST- SERB- (EEQ/2018/000623).</p> 2021-10-12T00:00:00+00:00 Copyright (c) 2021 SUMAN JONNALAGADDA https://spast.org/techrep/article/view/324 FUTURE PREDICTION OF HEART DISEASE THROUGH EXPLORATORY ANALYSIS OF DATA 2021-09-12T17:08:39+00:00 Dr T LALITHA t.lalitha@jainuniversity.ac.in RITIKA DIDWANIA ritikdidwania2802@gmail.com <p>This research paper gives an in-depth analysis of the healthcare field and of data analysis related to healthcare. The healthcare industry generates enormous amounts of data. These data are used for decision-making, so they must be very accurate. To identify errors in healthcare data, Exploratory Data Analysis (EDA) is proposed in this research: EDA tries to detect mistakes, find clean data, check for errors, and determine correlations. Analytical techniques and tools are heavily relied upon for improving healthcare performance in the areas of operations, decision-making, prediction of disease, and more. In most situations, a complicated combination of pathological and clinical evidence is used to diagnose cardiac disease. Because of this complexity, clinical practitioners and scientists are keen to learn how to anticipate cardiac disease efficiently and accurately. Using the K-means algorithm, the factors that cause heart-related disorders and problems are considered and forecasted in this study. 
The research is based on publicly available medical information about heart disease. There are 208 entries in this dataset, each with eight characteristics: the patient's age, type of chest discomfort, blood glucose level, blood pressure, heart rate, ECG, and so on. The K-means clustering technique, together with visualisation and analytics tools, is utilised to forecast cardiac disease. According to the results, the proposed model's predictions are more accurate than those of the other model.</p> 2021-09-14T00:00:00+00:00 Copyright (c) 2021 Dr T LALITHA, RITIKA https://spast.org/techrep/article/view/1031 BI HISTOGRAM EQUALIZATION BASED IMAGE ENHANCEMENT WITH BICUBIC INTERPOLATION 2021-09-20T09:01:58+00:00 Bhaskara rao Jana janabhaskar@gmail.com Mr.A Siva Kumar sivakumar.ece@anits.edu.in Dr.K.V.G.Srinivas srinivas.ece@anits.edu.in Prof Beatrice Seventline.J seventline.joseph@gitam.edu <p>In image processing, histogram equalization is a widely used technique for contrast enhancement. However, it tends to change the brightness of the image. Here, the contrast and resolution of the image are enhanced using the proposed Bi Histogram Equalization Based Image Enhancement with Bicubic Interpolation (BHBI) technique: bi-histogram equalization is taken for contrast enhancement, and bicubic interpolation for resolution enhancement. Bi-histogram equalization separates the input image's histogram into two parts at the input mean before equalizing them independently. Bicubic interpolation can generate a bigger, high-resolution image from one or more low-resolution, smaller images. The performance of the BHBI method is compared on some typical images (Cameraman, Lena) against existing enhancement methods like AGCWD [18], ASAUMF [33], AVGHEQ [30] and MMSICHE [11]. These techniques are evaluated subjectively, in terms of visual observation, and quantitatively, using DE, IQI, NCC, CII and AMBE. 
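The split-at-the-mean step described above can be sketched directly: the lower sub-histogram is equalized onto [0, mean] and the upper onto [mean + 1, 255], so each half stretches independently without dragging the overall brightness far from the input mean. A simplified sketch on a flat list of 8-bit pixels (the toy image values are invented, and details such as rounding policy would differ in the paper's implementation):

```python
def bi_histogram_equalize(pixels, levels=256):
    """Brightness-preserving bi-histogram equalization on a flat 8-bit pixel list."""
    m = sum(pixels) // len(pixels)                 # split point: the input mean
    lower = [p for p in pixels if p <= m]
    upper = [p for p in pixels if p > m]

    def equalize(sub, lo, hi):
        """Map sub-image gray levels onto [lo, hi] via the sub-histogram CDF."""
        hist = [0] * levels
        for p in sub:
            hist[p] += 1
        cdf, total, n = [], 0, len(sub)
        for count in hist:
            total += count
            cdf.append(total / n if n else 0.0)
        return {g: round(lo + (hi - lo) * cdf[g]) for g in range(levels)}

    lo_map = equalize(lower, 0, m)                 # lower half stays below the mean
    hi_map = equalize(upper, m + 1, levels - 1)    # upper half stays above it
    return [lo_map[p] if p <= m else hi_map[p] for p in pixels]

# A low-contrast toy image clustered around mid-gray:
img = [100, 100, 110, 120, 130, 140, 150, 150]
out = bi_histogram_equalize(img)
```

The output spans a much wider gray range than the input while pixels below and above the original mean stay on their own side of it, which is the brightness-preserving property motivating the bi-histogram split.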
The results obtained from the BHBI technique are better than those of the various existing techniques.</p> 2021-09-20T00:00:00+00:00 Copyright (c) 2021 Bhaskara rao Jana, Mr.A Siva Kumar, Dr.K.V.G.Srinivas, Prof Beatrice Seventline.J https://spast.org/techrep/article/view/439 A study of MRI data synthesis with Convolutional Networks, KNN regression and Generative Adversarial Network (GAN) 2021-09-15T11:14:11+00:00 SUNANDA DAS das.sunanda2012@gmail.com <p>Magnetic resonance imaging (MRI) is a technology mainly used for disease prediction and treatment. Doctors sometimes advise computed tomography (CT) for diagnosis or therapy, but the ionizing radiation used in CT can damage DNA, which makes repeated use risky. Since MRI does not use this kind of radiation, it is quite good and safe for clinical testing. In practice, due to the poor quality of MRI images, it is sometimes advised to repeat the scan, which causes unavoidable situations and increased costs. Therefore, only improving the quality of MRI images can relieve us of these unnecessary problems. Some studies [1-4] have shown that signal-to-noise ratio, image resolution, contrast sensitivity and artifacts are the main key factors in image quality. So, we need an automated supervised machine learning algorithm to generate high-resolution data without extra effort.</p> <p>The exponential growth in the use of MRI data [5] since the 1970s has created a big platform for researchers analysing medical images. Applications of machine learning algorithms in the medical field have proven tremendously successful in diagnosing medical images. Most importantly, machine learning algorithms for medical image analysis are able to find significant relationships in the data in a short time and with accurate results. 
The aim of this study is a comparative analysis of MRI images for the betterment of image quality, in support of developing treatment procedures in the medical field. In this paper, computational techniques like convolutional networks, KNN regression and generative adversarial networks (GANs) are applied to the MRI images to obtain high-resolution MRI images. The methodology covers medical image localization, detection, segmentation and classification. Validation results on real MRI data establish the approach's usefulness and demonstrate its effectiveness compared to state-of-the-art super-resolution techniques.</p> 2021-09-15T00:00:00+00:00 Copyright (c) 2021 SUNANDA DAS https://spast.org/techrep/article/view/1132 EASND: Energy Adaptive Secure Neighbor Discovery Scheme for Wireless Sensor Networks 2021-09-21T17:38:58+00:00 Sagar Mekala arjunnannahi5@hotmail.com <p>A Wireless Sensor Network (WSN) is a distributed networking system built from a set of resource-constrained sensors, which attempts to provide a large set of capabilities and connectivity. After deployment, nodes in the network must automatically adapt to the heterogeneity of the framework and carry out design steps, including obtaining knowledge of neighbor nodes for relaying information. The primary goal of the neighbor discovery process is to reduce power consumption and enhance the lifespan of sensor devices. The sensor devices incorporate advanced multi-purpose protocols, and specifically communication models, serving the pre-eminent objectives of WSN applications. This paper introduces power- and security-aware neighbor discovery for WSNs in symmetric and asymmetric scenarios. We use different neighbor discovery protocols and security models to make the network a realistic, application-dependent model. 
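One standard way duty-cycled nodes guarantee discovery without coordination is a quorum schedule: if each node wakes during one row and one column of an n × n slot grid, any two nodes' active slots are guaranteed to intersect, because every row crosses every column. A small sketch of that guarantee; the grid size and schedules are illustrative, and EASND's actual slot design may differ:

```python
def quorum_slots(row, col, n):
    """Active slots for a node that wakes in one row and one column of an n x n frame."""
    frame = [[r * n + c for c in range(n)] for r in range(n)]
    return set(frame[row]) | {frame[r][col] for r in range(n)}

n = 5
a = quorum_slots(row=1, col=4, n=n)   # node A's wake-up schedule (9 of 25 slots)
b = quorum_slots(row=3, col=0, n=n)   # node B's, chosen independently

# A's row meets B's column (and vice versa), so the schedules always overlap:
overlap = a & b
```

The appeal for energy-constrained nodes is that each stays awake for only 2n - 1 of n² slots per frame, yet discovery is deterministic rather than probabilistic.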
Finally, we conduct simulations to analyse the performance of the proposed EASND in terms of energy efficiency, collisions and security. Per-node channel utilization is exceptionally high, and the energy consumed in discovering neighbor nodes is significantly reduced. Experimental results show that the proposed model performs as intended.</p> 2021-09-23T00:00:00+00:00 Copyright (c) 2021 Sagar Mekala https://spast.org/techrep/article/view/495 Automatic Evaluation And Deception Of Fake News Employing Proposed Machine Learning Approach 2021-09-14T10:14:04+00:00 Prajakta Khot udit.mamodiya@poornima.org <p><span style="font-weight: 400;">Due to the rapid adoption of the internet and social networking websites like Facebook, Twitter, and Instagram, the speed at which a piece of news or information is disseminated has increased tremendously. With the increased use of social networking websites, creators are generating and sharing more information than ever before, some of it confusing and unverified against reality. Fully automated classification of subject matter as deception is a tough and demanding task; even a specialist in a particular domain has to examine various characteristics before giving any decision on the truth of an article.</span></p> <p><span style="font-weight: 400;">Although social networking websites have many advantages, the standard of stories on them is lower than at a conventional news organization [6]. Many times, the media manipulates knowledge in various ways solely for its own benefit, and many networking sites produce predominantly fallacious articles [7]. The primary aim of such content is, in some circumstances, to influence general community sentiment. This problem of fake news has become a worldwide challenge. 
Many researchers have concluded that this issue can be addressed using machine learning and AI [8]. So in this paper, the authors focus on distinct Machine Learning algorithms such as multinomial naïve Bayes using a count vectorizer and a TF-IDF vectorizer, LSTM, logistic regression, decision tree classification, and a gradient boosting classifier. They ran all these algorithms in a Jupyter notebook, compared the results on the basis of a confusion matrix, and then determined the most accurate configuration, which, in this case, is the TF-IDF vectorizer. After this, the dataset is trained using Visual Studio Code, written in Python, and implemented using the TF-IDF vectorizer. On running this implementation, the software generates a link based on the Flask micro web framework, and this Flask application is used to determine whether the news in the dataset is real or fake. Section 2 gives a brief description of the different ML algorithms, including the count vectorizer, TF-IDF vectorizer, LSTM, logistic regression, decision tree, gradient boosting and random forest classifier. Section 3 covers the implementation of the proposed algorithm, including a flow chart and a Jupyter notebook. 
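The TF-IDF weighting the authors settle on can be sketched in a few lines: a term's frequency within a document is scaled by an inverse-document-frequency penalty for words common across the corpus. This is a bare-bones version with toy documents; scikit-learn's TfidfVectorizer adds smoothing and normalization details omitted here:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF weights for a list of tokenized documents."""
    n = len(docs)
    # Document frequency: in how many documents does each word appear?
    df = Counter(word for doc in docs for word in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({w: (c / len(doc)) * math.log(n / df[w])
                        for w, c in tf.items()})
    return weights

docs = [["fake", "news"], ["real", "news"], ["fake", "claim", "news"]]
w = tfidf(docs)
# "news" occurs in every document, so idf = log(1) and its weight is 0;
# rarer words like "fake" and "real" get positive, discriminative weights.
```

This is why TF-IDF tends to beat a raw count vectorizer on this task: ubiquitous filler words are down-weighted automatically, leaving the classifier to focus on distinctive vocabulary.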
Section 4 presents the results: a comparison of the algorithms and their representation on the Flask micro web framework. Section 5 is the conclusion, which summarizes the achievements of the paper.</span></p> 2021-09-15T00:00:00+00:00 Copyright (c) 2021 Udit Mamodiya https://spast.org/techrep/article/view/1462 Analysis of Contrast and Correlation between Deep Learning Algorithms for diagnosis of COVID 19 from Lung Ultrasonography 2021-09-29T11:58:03+00:00 Sivakumar Rajagopal rsivakumar@vit.ac.in Radha Debal Goswami radhadebal.goswami2019@vitstudent.ac.in Rajat Tiwari armac134@gmail.com Eshan Sabhapandit eshan.sabhapandit2019@vitstudent.ac.in Rahul Soangra soangra@chapman.edu <p>Since late 2019 and early 2020, the world has faced a life-altering virus that few anticipated would change the lives of everyone on this planet. The virus is known to affect people of all age groups and to create complications in the lungs of human beings. Coronavirus disease 2019 (COVID-19) has affected a rapidly growing patient population worldwide, and to effectively manage the disease, physicians need tests or methods [1]. The symptoms and effects escalate considerably if the person carrying the virus has comorbidities. Even though the symptoms are similar to those of other lung diseases such as pneumonia, the effects are far more deadly. Though there has been a significant amount of research on methodologies to test for and identify the COVID-19 virus and its symptoms since the virus emerged, the aim here is to compare them statistically and via various other scientific techniques in order to provide a solid conclusion. A study analyzing US mortality in March-July 2020 reported a 20% increase in excess deaths, only partly explained by COVID-19. 
Surges in excess deaths varied in timing and duration across states and were accompanied by increased mortality from non–COVID-19 causes [2].</p> <p>Identification of COVID-19 from chest CT scans [3] has been the most prevalent approach, but it exposes the patient to X-ray radiation and is not a suitable approach for frequent monitoring. Computer analysis of pulmonary ultrasound images is a relatively recent approach that has shown large potential to diagnose pulmonary states, and it is a cheaper and safer alternative to CT scans. Deep learning techniques for computerized analysis of lung ultrasound images offer promising opportunities for screening and diagnosing COVID-19 (Figure 1). They cannot replace the role of radiologists but can provide them with an automated computerized opinion on the condition, highlighting the specific region of interest in the images. In this paper, the developments made in the classification of lung ultrasound images for COVID-19 identification are critically analyzed and reviewed. Lung ultrasound provides data either as images or as videos from which relevant frames are extracted. The general process of classifying an image involves extracting features from the image pixel matrix and using them to train a model which can later be used to classify images; such an approach is an example of supervised machine learning. Major improvements in classification accuracy have come from extending this approach to neural networks, which form the basis for deep learning; for image classification in particular, the Convolutional Neural Network (CNN) has shown great success. Various researchers, universities, and organizations such as Google and Oxford University have developed state-of-the-art CNN frameworks, many of which have been used for lung ultrasound image classification, such as VGG19, InceptionV3, Xception, and ResNet50. 
Special-purpose frameworks like POCOVID-net [4] were developed for point-of-care ultrasound (POCUS) devices. The performance of such sophisticated frameworks on COVID-19-infected lung ultrasound image datasets is analyzed and compared using performance metrics such as the confusion matrix, precision, recall/sensitivity, specificity, F1-score, and AUC.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Sivakumar Rajagopal, Radha Debal Goswami, Rajat Tiwari, Eshan Sabhapandit, Rahul Soangra https://spast.org/techrep/article/view/1697 ROLE OF IOT IN SMART AGRICULTURE 2021-09-30T10:06:31+00:00 Jemarani Jaypuria jemarani8502@gmail.com <p><span class="fontstyle0">In this era of technology, every sector is using technology to improve productivity and efficiency. By including technology in our day-to-day activities we can reduce our efforts; similarly, in various industries and power plants, application of the latest technology can boost production. This is also applicable in the agricultural field, where time and cost can be brought down by the application of the Internet of Things (IoT). The Internet of Things is the network of devices that exchange information among themselves without the involvement of manpower. By implementing the Internet of Things in the agriculture sector, we can obtain smart farming. We can implement the Internet of Things in any type of agriculture, be it nomadic herding, livestock ranching, shifting cultivation, intensive subsistence farming, commercial plantations, Mediterranean agriculture, or commercial grain farming. IoT devices and communication techniques are associated with wireless sensors. While implementing IoT we have to attach sensors for specific agricultural applications, like soil preparation, crop status, irrigation, and insect and pest detection. 
Smart farming can help growers throughout the crop stages, from sowing to harvesting, packing, and transportation. In this current technological period of smart cities and digitalization of livelihoods, the traditional method of farming is slowly diminishing. As data-centred smart farming rises, people are moving towards scientific, high-technology procedures for intensive farming. Looking at agricultural evolution worldwide, developed countries like Israel, Australia, the United States and most European countries are implementing IoT in the field of agriculture.</span> </p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Jemarani Jaypuria https://spast.org/techrep/article/view/2376 Twitter Data Sentiment Analysis of COVID-19 Vaccine using Different Machine Learning Model 2021-10-09T12:18:20+00:00 ANJANA MISHRA anjanamishra2184@gmail.com <p>Sentiment Analysis, also referred to as Opinion Mining, is a text-analysis technique that uses Natural Language Processing (NLP) to interpret and classify emotions in subjective social data. Social media is generating a vast amount of sentiment-rich data in the form of tweets, status updates, blog posts, etc., and sentiment analysis of this user-generated data is very useful in knowing the opinion of the crowd. Twitter is a social medium that is gaining popularity these days and offers a fast and effective way to analyse people's perspectives on COVID-19. There are numerous ways and algorithms to extract sentiment from a piece of text, and one can be superior to another based on the context of analysis. This paper reports on comparing the efficiency of the Naive Bayes algorithm using two different Python libraries, viz. NLTK and SK-Learn, and also compares a neural network model built using TensorFlow Keras with these algorithms. 
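The multinomial Naive Bayes classifier being compared here can be written from scratch in a few lines: per-class word counts with add-one smoothing, scored in log space. A didactic sketch with invented vaccine-tweet tokens; the NLTK and scikit-learn implementations add feature handling beyond this:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, docs, labels):
        self.counts = defaultdict(Counter)   # per-class word counts
        self.priors = Counter(labels)        # class frequencies
        for doc, y in zip(docs, labels):
            self.counts[y].update(doc)
        self.vocab = {w for c in self.counts.values() for w in c}
        return self

    def predict(self, doc):
        def log_score(y):
            total = sum(self.counts[y].values())
            s = math.log(self.priors[y] / sum(self.priors.values()))
            for w in doc:
                # Laplace smoothing keeps unseen words from zeroing the score.
                s += math.log((self.counts[y][w] + 1) / (total + len(self.vocab)))
            return s
        return max(self.priors, key=log_score)

# Hypothetical tokenized tweets about vaccines:
train = [["vaccine", "works", "great"], ["grateful", "for", "vaccine"],
         ["vaccine", "side", "effects", "scary"], ["refuse", "the", "jab"]]
labels = ["pos", "pos", "neg", "neg"]
clf = NaiveBayes().fit(train, labels)
```

Working in log space avoids floating-point underflow when tweets contain many tokens, which is why library implementations score the same way.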
Twitter data on COVID-19 vaccines from around the globe is then fed into the models, which can provide us with insight into how people are reacting to these vaccines. The models classify users' tweets into positive or negative sentiment, through which we can gather data on how many people want to take the vaccines and how many are against them.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 ANJANA MISHRA https://spast.org/techrep/article/view/2449 Securing the Confidentiality and Integrity of Cloud Computing Data 2021-10-12T13:55:39+00:00 Bramah Hazela bramahhazela77@gmail.com SHASHI KANT GUPTA raj2008enator@gmail.com Nupur Soni nupur.rajsoni@gmail.com Ch Naga Saranya nagasaranya@gmail.com <p>Cloud Service Providers (CSPs) are becoming increasingly popular as the amount of available digital data continues to grow at an exponential rate. Cloud technologies make possible a new set of benefits and economies in bandwidth, computing, storage, and transmission costs, which all reduce a company's overall data-storage costs while providing customers with convenient and efficient storage services. Several different security models are available for protecting Cloud Client (CC) data in the cloud. To verify that CSPs and the CC comply with these safety criteria, a Third-Party Auditor (TPA) has the responsibility of thoroughly assessing evidence of compliance between the CC and the CSP. Cloud user data are vulnerable to loss of confidentiality and integrity, as the data are stored off-premises and maintained by the CSP. As a result, to ensure the confidentiality and integrity of stored data, the concept of auditing has been introduced into the Cloud Computing Environment. 
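The integrity-auditing idea can be illustrated with a keyed-hash spot check: the client computes per-block HMAC tags before uploading, and an auditor later challenges the stored copy to reproduce the tag for a chosen block. This is a toy protocol with invented data; real cloud-audit schemes use homomorphic authenticators so the auditor verifies integrity without holding the client's key or seeing the data:

```python
import hmac
import hashlib

KEY = b"client-secret"  # hypothetical key held by the cloud client

def tag(block):
    """Per-block integrity tag the client computes before uploading."""
    return hmac.new(KEY, block, hashlib.sha256).hexdigest()

# Client side: tag each data block, keep the tags, hand the blocks to the CSP.
blocks = [b"invoice-2021-01", b"invoice-2021-02", b"invoice-2021-03"]
tags = [tag(b) for b in blocks]

def audit(stored_blocks, expected_tags, index):
    """Spot check: recompute the tag for one challenged block."""
    return hmac.compare_digest(tag(stored_blocks[index]), expected_tags[index])

assert audit(blocks, tags, 1)                    # intact storage passes
corrupted = [blocks[0], b"tampered!", blocks[2]]
assert not audit(corrupted, tags, 1)             # silent modification is caught
```

Challenging random block indices keeps each audit cheap while still giving the client probabilistic assurance over the whole dataset.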
Cloud users can seek the assistance of a Third-Party Auditor who has extensive experience in conducting cloud audits in order to assess the risk associated with their services.</p> 2021-10-12T00:00:00+00:00 Copyright (c) 2021 Bramah Hazela, SHASHI KANT GUPTA, Nupur Soni, Ch Naga Saranya https://spast.org/techrep/article/view/1115 SHORT TEXT CLUSTERING APPROACHES IN SOCIAL MEDIA 2021-09-21T10:35:42+00:00 Vinoth Dakshnamoorthy vinoth.d2019@vitstudent.ac.in Prabhavathy P pprabhavathy@vit.ac.in <p>In general, handling bag-of-words or TF-IDF representations is an arduous task in short text clustering, as they yield sparse vector representations for short texts. This paper provides an in-depth study of learning predictive features through an autoencoder model and sentence embedding techniques. Cluster assignments produced by the clustering step serve as a form of supervision to update the encoder's weights. Short-text datasets are used to validate and measure the effectiveness of methods and algorithms on unsupervised short-text data, since some short texts from social media are difficult to cluster. This paper discusses the challenging issues in, and the methods for, clustering short texts in social media, including clustering approaches such as the K-means algorithm combined with a Convolutional Neural Network. 
This paper provides a comprehensive review of short-text prediction using clustering algorithms, and explores the research challenges and open issues in this area.</p> 2021-09-21T00:00:00+00:00 Copyright (c) 2021 Vinoth Dakshnamoorthy, Prabhavathy P https://spast.org/techrep/article/view/76 The Online Retail Market Analysis for Social Development with Machine Learning 2021-07-16T19:30:05+00:00 Manjushree Nayak nayaksai.sairam@gmail.com Bhavana Narain narainbhawna@gmail.com <p>The present era is a digital era in which retail marketing &amp; online marketing play an important role in people's lifestyles. Filling the gap between customer &amp; market is a technological responsibility of technocrats. In our work we have collected online data and retail data from the last 5 years. These data were collected from two major organizations that deal in online marketing and retail marketing. Unsupervised learning techniques were implemented to analyze the collected data, and the knowledge gained from this analysis is used for marketing upliftment &amp; social development. A New Modified K-Means Clustering Algorithm (NMKMCA) is used for the data analysis. The accuracy results for retail marketing &amp; online marketing are compared in our work, taking I/O time and computational time as working parameters; the results for these parameters are analyzed &amp; discussed. In the last section of our work we find that NMKMCA takes less time in computing over very large datasets.</p> 2021-07-18T00:00:00+00:00 Copyright (c) 2021 Office SPAST https://spast.org/techrep/article/view/2245 Breast Cancer Analysis using Data Mining Techniques 2021-10-07T13:00:55+00:00 sathya A sathya.a@rajalakshmi.edu.in <p>Cancer is an abnormal growth of cells in the body; these abnormal cells grow to form a swelling called a tumor. Breast cancer is the cancer that most affects women, and it occurs rarely in men. It is extra tissue growth in the breast, starting from a single abnormal cell that leads to the growth of a tumor [1-2]. 
The most effective way to lessen the rate of death caused by breast cancer is early detection. To help physicians improve the correctness of diagnostic results, computer-supported analysis frameworks are now widely used for disease recognition and investigation. Data mining [3-6] is a logical process for finding patterns in enormous datasets; the end goal of this process is to mine information from the dataset, which is then converted into an understandable format for further use. This analysis was done using data mining techniques such as Bi-Clustering, AdaBoost and MapReduce on the patients' dataset [7-8]. The MapReduce algorithm performs mapping and reducing steps to obtain consolidated data from the enormous raw data. Bi-Clustering is then performed on the dataset; it discovers patterns based on similarities, called biclusters, which are used in the further analysis process. Nutrients that patients should and should not take during the treatment period are also suggested. The effective way to reduce cancer mortality and improve the healthy life of affected patients is to analyze the disease and raise awareness about it at an early stage. The proposed concept offers an easy and cost-effective way to analyze the curability of cancer, and the result is provided much faster, which helps patients gain confidence and satisfaction in overcoming their disease.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 sathya A https://spast.org/techrep/article/view/1674 An in-depth study of Smart Agriculture based on Internet of Things and Wireless Sensor Networks 2021-10-08T06:19:33+00:00 Thanga prasath S prasadrec@gmail.com Dr. C Navaneethan navaneethan.c@vit.ac.in <p>In recent years, the world has been under immense pressure to find ways to restore farmers' ability to do their regular job, which is to produce grain.
The indiscriminate decline in both the number of farmers and the share of agricultural land is the most problematic issue that all nations currently confront. The loss of agricultural land in almost every country on earth has become a massive threat to human survival. Governments are now awakening, trying to stop the conversion of farmland to commercial land and setting strict rules to prevent it. Natural factors such as farm capability, climate, available resources, pests, agricultural labour shortages, unpredictable market prices, and consumer supply and demand drastically reduce crop yields. Almost every country's government closely monitors and prioritizes agriculture. Many countries have announced subsidies on agricultural raw materials and sanctioned loans to farmers to increase food production. They are encouraging their farmers to use cutting-edge agrarian technology in order to increase grain quality and improve farming.</p> <p>Wireless Sensor Networks and the Internet of Things have a key role to play in smart farming [1-2]. These technologies make it possible to monitor farmland without human effort and to manage farms anywhere and anytime [3]. The Internet of Things (IoT) helps agricultural industries transform to the next progressive stage in order to meet the nation's needs. IoT dramatically reduces human intervention in agriculture by converting manual processes into automated ones. The IoT architecture is determined by the application and the environment in which it is used [4]. The Internet of Things uses a wide range of sensor nodes positioned on fields using global positioning technology to acquire information relevant to crop growth [5]. These sensor nodes collect information on soil nutrients, temperature, water level, and humidity. Each sensor node has a limited embedded energy supply and storage [6], which provides enough power to its units [7].
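As a rough illustration of the sensor-node data flow described above (not taken from the paper — the field names, units, and threshold are assumptions for the sketch), a node's reading can be packaged for the gateway and checked server-side like this:

```python
import json

# Hypothetical field-sensor reading; field names, units and the irrigation
# threshold are illustrative only, not from the surveyed systems.
def package_reading(node_id, soil_n, temperature, water_level, humidity):
    """Bundle one sensor sample into a JSON payload for the wireless gateway."""
    return json.dumps({
        "node": node_id,
        "soil_nutrients": soil_n,      # e.g. mg/kg
        "temperature_c": temperature,
        "water_level_cm": water_level,
        "humidity_pct": humidity,
    })

def needs_irrigation(payload, water_threshold_cm=5.0):
    """Server-side check: flag the field if the water level falls below a threshold."""
    reading = json.loads(payload)
    return reading["water_level_cm"] < water_threshold_cm

msg = package_reading("node-17", soil_n=42.0, temperature=31.5,
                      water_level=3.2, humidity=58.0)
print(needs_irrigation(msg))  # a low water level triggers an irrigation alert
```

A real deployment would transmit such payloads over the wireless gateway to the remote server mentioned above, where rules like `needs_irrigation` drive farmer notifications.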
Nowadays, IoT technologies are commonly used in agro-industries to track farmland and notify farmers of necessary changes from anywhere in the world, allowing them to make timely crop-growth decisions [8]. It is not possible for farmers to physically monitor the farmland all day, so we need to identify and use cutting-edge technology in the agricultural field in order to ensure high productivity and profits. Wireless Sensor Networks (WSNs) are a trusted technology used in agro-industries to boost production [9]. The recent surge in emerging technologies such as information and communications technologies (ICT), information retrieval technology, and the geographical positioning system (GPS) enables environment-aware farming [10-11]. Wireless technology collects parameters from the sensors and sends them to a remote server via a wireless gateway [12]. Researchers and scientists have been trying hard to find the finest modern technology to enhance farmers' production, quantity, and quality. In this survey, we specifically examine the influence of new technology and sensors used in smart farming to secure the food supply, and how those devices are changing agricultural sectors to satisfy the food demands of an ever-growing population.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Thanga prasath S, Dr. C Navaneethan https://spast.org/techrep/article/view/2503 DESIGN OF LOAD BALANCING TECHNIQUE FOR CLOUD COMPUTING ENVIRONMENT 2021-10-14T05:43:21+00:00 Shikha Shivaliya abhishek14482@gmail.com Dr.Vijay Anand ieeemtech@gmail.com <p><strong>Abstract</strong></p> <p>Cloud computing delivers services with reduced data ownership, improved scalability, business agility, infrastructure cost reduction, and just-in-time availability of resources.
Broadly, cloud computing is not a single technology but a combination of several technologies that enables a new way for IT growth [1].</p> <p>&nbsp;</p> <p>In a scenario with a limited number of servers available at a data center, if the requests submitted exceed the capacity of the data center, its overall performance degrades. In such cases a load balancer is used to improve the performance of the data center. Load balancing is a technique to distribute load among multiple entities such as CPUs, disk drives, servers or any other type of device. The goal of load balancing is primarily to obtain much greater utilization of resources. Load balancing [2-4] can be provided either through hardware or software. It can be provided through specialized devices such as a multilayer switch that routes packets to the destination or the cluster. Hardware-based load balancing is complex in configuration &amp; maintenance, and not suitable for hosted environments. Load balancing can also be achieved through software, either using the operating system or as an add-on application. Software-based load balancing is simple to deploy and has performance similar to that of hardware-based load balancing; examples include load balancers bundled with Microsoft Azure or Linux, and add-ons such as PM proxy. The load balancer manages the traffic flow between various servers. It is placed between the server and the client and distributes the load among the available servers depending on its algorithm. The load balancer not only improves the response time of cloud applications but also ensures optimum utilization of the resources.
[5-10]</p> <p>This article proposes a dynamic load balancing technique that combines advanced features of existing load balancing techniques such as throttled, round-robin and active load balancing.</p> 2021-10-16T00:00:00+00:00 Copyright (c) 2021 Shikha Shivaliya, Dr.Vijay Anand https://spast.org/techrep/article/view/399 IoT ENABLED REAL TIME UNDERGROUND CABLE FAULT DETECTION AND LOCATION IDENTIFICATION SYSTEM 2021-09-14T09:02:20+00:00 Eldhose K A udit.mamodiya@poornima.org <p><span style="font-weight: 400;">The use of underground power cables is expanding nowadays due to safety considerations and enhanced reliability in distribution and transmission systems. Despite these benefits, underground cables are susceptible to various faults, including open circuits, short circuits, and earth faults, which are very difficult to locate; often the entire cable must be pulled out of the ground to verify and repair the faults. The objective of our project is to design and fabricate an underground power cable fault detection and location identification system that distinguishes and locates a wide variety of faults in underground power transmission cables from the base station in real time by employing the Internet of Things (IoT).</span></p> <p><span style="font-weight: 400;">&nbsp;The proposed system performs real-time monitoring of the entire underground power system at different power distribution points, each treated as a node from which current and voltage data sets are fed into a web server for the analysis of the entire system, as shown in Figure 1 [1]. A prototype is modelled with a set of resistors symbolizing cable length in meters (1 ohm/m), and fault creation is done manually with a set of switches in series and parallel to demonstrate different faults at different locations (Figure 2).
This prototype uses the basic theory of Ohm's law, which states that when a voltage is applied at the supply end, the current will vary depending on the location of the fault in the cable [2].&nbsp;</span></p> <p><span style="font-weight: 400;">The present difficulty of detecting the exact location of an open-circuit fault is overcome by using the capacitive current in the cable lines, employing parallel capacitors. The proposed model mainly uses current sensors, an ATmega328P Arduino Uno microcontroller and an ESP8266 NodeMCU Wi-Fi module [1-3]. The Internet of Things is incorporated to send the cable line data within a fraction of a second to the webpage using the HTTP protocol. When there is a fault, the current changes accordingly, and its corresponding voltage value&nbsp;is transmitted to an ADC, which generates precise digital data that the programmed Arduino displays on an LCD. This data is fed to the Wi-Fi module of the ESP8266 NodeMCU. The system continuously delivers the condition of the transmission cable to a webpage and then to the officials in the electricity control board. Any deviation from the normal values is considered a fault, and automatic monitoring, analysing and recording can be done efficiently through this method. The microcontroller computes the approximate location of the fault, while the IP address at each node indicates that the fault has occurred at that locality [3]. Thus, this proposed methodology reduces the time needed to locate a fault so that the electricity system can be repaired as soon as possible.</span></p> <p><span style="font-weight: 400;">The simulation of the proposed system is developed using Proteus software. The simulation for any number of nodes is a replication of the simulation done here for a single node, as shown in Figure 3.
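The Ohm's-law idea above can be sketched numerically, assuming the prototype's 1 ohm-per-metre cable model (the function name and values are illustrative, not from the paper): the current measured under a known supply voltage gives the resistance up to the fault, and hence its distance.

```python
# Minimal sketch of Ohm's-law fault location, assuming the prototype's
# 1 ohm-per-metre resistor model described above. Names are illustrative.
OHMS_PER_METRE = 1.0

def fault_distance_m(supply_voltage, measured_current, ohms_per_metre=OHMS_PER_METRE):
    """Distance to a short-circuit fault: V = I * R, with R = distance * ohms/m."""
    if measured_current <= 0:
        raise ValueError("no current flow: possible open-circuit fault")
    resistance_to_fault = supply_voltage / measured_current  # Ohm's law
    return resistance_to_fault / ohms_per_metre

# A 12 V supply driving 3 A implies 4 ohms to the fault -> 4 m down the cable.
print(fault_distance_m(12.0, 3.0))  # 4.0
```

In the actual prototype this computation runs on the microcontroller from the ADC reading, and an open circuit (no current) is the case handled separately via capacitive current.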
This project is a humble effort to reduce the time needed to detect and locate a fault in the field and to provide a quick fix to reactivate the power system, thus increasing performance and reducing operating expenses. The proposed method can easily be utilized for wide-area control of transmission lines. The project idea can be further enhanced by checking pre-fault current conditions, maintaining a record of previous faults in a database, and identifying fault conditions in advance.</span></p> <p><img src="https://spast.org/public/site/images/uditm/mceclip0.png"></p> <p><strong>Fig.1. </strong><span style="font-weight: 400;">Block diagram</span></p> <p><img src="https://spast.org/public/site/images/uditm/mceclip1.png"><br><br></p> <p><strong>Fig.2. </strong><span style="font-weight: 400;">Cable Representation</span></p> <p><img src="https://spast.org/public/site/images/uditm/mceclip2.png"></p> <p><strong>Fig.3. </strong><span style="font-weight: 400;">Simulation of proposed system</span></p> 2021-09-15T00:00:00+00:00 Copyright (c) 2021 Udit Mamodiya https://spast.org/techrep/article/view/1905 e-Health Assisted Smart Supermarket 2021-10-08T10:37:53+00:00 Mayakannan Selvaraju kannanarchieves@gmail.com Jose Anand joseanandme@yahoo.co.in Nitin Mishra drnitinmishra10@gmail.com S S P M Sharma B sharma.vitam@gmail.com Bharat Mukundrai Joshi j28bharat54@gmail.com M.Sangeetha sangeetha.m@reva.edu.in <p>Worldwide, improvement in technology can be noticed at a fast pace in every application and development. At the same time, the adoption of technology in certain sectors is very low. People usually buy various items from shops without knowing the ingredients present in them, and some of those ingredients may not be good for individuals with particular health conditions.
The intake of such items creates health issues in various ways, and consumers regret purchasing them later. So far there is no pre-indication system to warn the purchaser about the health issues associated with items bought from a shop. In this paper, we propose a smart trolley that, with the support of a mobile app on the customer's smartphone, alerts the customer to the health issues related to a product he/she picks up to purchase.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Mayakannan Selvaraju, Jose Anand, Nitin Mishra, S S P M Sharma B, Bharat Mukundrai Joshi, M.Sangeetha https://spast.org/techrep/article/view/1165 A Survey on Applications of Multi-Attribute Decision Making Algorithms in Cloud Computing 2021-09-23T09:55:19+00:00 Sanjaya Kumar Panda sanjayauce@gmail.com Munmun Saha munmunsas@gmail.com Suvasini Panigrahi suvasini26@gmail.com <p>Cloud computing is growing tremendously owing to its on-demand services, massive pool of distributed resources, rapid provisioning of resources and much more. It empowers many organizations/customers to build on-demand applications without investing large capital in hardware infrastructure. These organizations encounter numerous challenges in obtaining full-fledged services from cloud service providers (CSPs). One such challenge is identifying and deciding upon a suitable CSP that can fulfill the organization's quality of service (QoS) requirements. Moreover, the services offered by CSPs are interrelated, and both beneficial and non-beneficial in nature. As a result, it is difficult for organizations to suitably evaluate the services rendered by the CSPs. Therefore, multi-attribute decision making (MADM) algorithms are applied in the literature to overcome this challenge of uncertainty. In this paper, we survey applications of such algorithms from the perspective of cloud computing.
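As a minimal illustration of the MADM setting (a simple weighted-sum model, not any specific algorithm from the survey; the attribute names, weights and scores are made up): beneficial attributes are rewarded after normalization, while non-beneficial ones, such as cost, are inverted.

```python
# Minimal weighted-sum MADM sketch for ranking CSPs. Attributes, weights and
# scores are illustrative; real surveys cover methods such as TOPSIS and AHP.
def rank_providers(providers, weights, non_beneficial):
    """providers: {name: {attribute: value}}. Higher total score = better choice."""
    names = list(providers)
    scores = {}
    for attr, w in weights.items():
        values = [providers[n][attr] for n in names]
        lo, hi = min(values), max(values)
        for n in names:
            # min-max normalize to [0, 1]
            x = (providers[n][attr] - lo) / (hi - lo) if hi > lo else 0.0
            if attr in non_beneficial:   # e.g. cost: lower is better
                x = 1.0 - x
            scores[n] = scores.get(n, 0.0) + w * x
    return sorted(names, key=lambda n: scores[n], reverse=True)

csps = {
    "CSP-A": {"availability": 0.999, "cost": 120},
    "CSP-B": {"availability": 0.990, "cost": 80},
}
print(rank_providers(csps, {"availability": 0.7, "cost": 0.3}, {"cost"}))
# ['CSP-A', 'CSP-B']
```

Here availability (beneficial, weight 0.7) outweighs CSP-B's lower cost, so CSP-A ranks first; the interrelation of beneficial and non-beneficial attributes is exactly what the surveyed MADM methods handle more rigorously.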
The survey covers both traditional and recent algorithms with their objectives, processes, pros, cons, and implementations. We also present the upcoming challenges and open issues, followed by the performance metrics and tools for their possible implementations. Finally, we conclude by summarizing the survey with some notable remarks.</p> 2021-09-24T00:00:00+00:00 Copyright (c) 2021 Sanjaya Kumar Panda, Munmun Saha, Suvasini Panigrahi https://spast.org/techrep/article/view/1421 IoT Security on smart grid: Threats and Mitigation Frameworks 2021-09-29T10:22:49+00:00 RANJIT KUMAR ranjitpes@gmail.com <p>Number of smart grid IoTs is expected to grow exponentially in the near future, as predicted by the analyst. The improvement of smart grid IoT Security system increases due to communication technologies used in traditional power systems. Smart grid IoTs include critical devices due to its complex architectures. Smart grid IoTs that are connected and controlled remotely through the Internet are becoming more universal, and as a result, homes and businesses have ever increasing attack surfaces on their networks. It can lead to security arrears, large-scale economic damage when the confidentiality, integrity of the communication is broken down. These huge systems may be endangered to threats. Consequently, there is a lot of research effort to enhance smart grid security in government, industry and academia. We present a broad survey supported by a thorough review of earlier work. Moreover, recent advances and corrective measures are presented on smart grid IoT security. In this paper, the threats and mitigation framework of the smart grid IoT are analyzed. This paper reviews the existing literature on the smart grid IoTs in energy systems. We focus on threat types and provide an in-depth of the threat state of the smart grid. Explicitly, we focus on the discussion and scrutiny of network vulnerabilities, attack countermeasures, and security requirements. 
We aim to supply a deep understanding of threat vulnerabilities and mitigation frameworks, and to chart future research directions for threats in smart grid IoTs.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 RANJIT KUMAR https://spast.org/techrep/article/view/1497 Odia Text Classification Using Naïve Bayes Algorithm: An Empirical Study 2021-09-30T19:26:15+00:00 Rekhanjali Sahoo Rekha rekhanjalisahoo23@gmail.com <p>Text classification is recognized as one of the key techniques used for classifying text into different classes, including positive, negative and neutral. This paper illustrates the Odia text classification process using the Naïve Bayes algorithm, within a very fast-growing discipline of computer science. This classification algorithm is suitable for binary and multiclass classification in machine learning, belongs to the supervised classification category, and classifies unseen objects by assigning class labels to instances using conditional probability. In this paper, an auxiliary feature method for Odia text is proposed. It determines features by an existing feature selection method for the Naïve Bayes algorithm, selects an auxiliary feature that can classify the text given the selected features, and then uses the resulting conditional probability to achieve high classification accuracy. Illustrative examples show that the proposed method increases the performance of the Naïve Bayes classifier. Around one thousand sentences are considered for both training and testing in the empirical evaluation of the proposed work. Accuracy depends mainly on the size of the training corpus, which was designed by the authors, and may be increased to some extent by tuning the parameters and further enlarging the corpus. The results show that the Naïve Bayes technique significantly outperforms many other techniques like HMM (Hidden Markov Model), CRF (Conditional Random Field) and KNN (K Nearest Neighbour).
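The conditional-probability classification described above can be sketched with a generic multinomial Naïve Bayes on toy English tokens (not the paper's Odia corpus or its auxiliary-feature method; Laplace smoothing is assumed):

```python
import math
from collections import Counter, defaultdict

# Generic multinomial Naive Bayes with Laplace smoothing; toy data only,
# not the paper's Odia corpus or its auxiliary-feature selection.
def train(samples):
    """samples: list of (tokens, label). Returns priors, likelihood counts, vocab."""
    class_counts = Counter(label for _, label in samples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in samples:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def predict(tokens, class_counts, word_counts, vocab):
    """Pick the label maximizing log P(label) + sum log P(token | label)."""
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label, count in class_counts.items():
        lp = math.log(count / total)                             # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:
            lp += math.log((word_counts[label][t] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

data = [(["good", "movie"], "positive"), (["great", "film"], "positive"),
        (["bad", "movie"], "negative"), (["awful", "film"], "negative")]
model = train(data)
print(predict(["good", "film"], *model))  # positive
```

The same mechanism applies to Odia tokens; the paper's contribution lies in how the features feeding these conditional probabilities are selected.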
Text classification plays an important role in sentiment analysis, information extraction, text summarization, text retrieval, and question answering.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Rekhanjali Sahoo Rekha https://spast.org/techrep/article/view/2890 GAMAN – GENETIC ALGORITHM IN MOBILE ADHOC NETWORKS FOR CREATING SYSTEMATIC QoS 2021-10-21T06:18:37+00:00 Tapas Bapu B R tapasbapusaec@gmail.com R Anitha anithar@saec.ac.in S Soundararajan kannanarchieves@gmail.com Nagaraju V nagaraju.sse@saveetha.com Partheeban Nagappan n.partheeban@galgotiasuniversity.edu.in A.Daniel danielarockiam@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p>A MANET is a mobile ad-hoc network made up of several mobile nodes that can communicate in a multi-hop fashion without any fixed infrastructure. Due to its special features, such as self-organization and easy deployment, it is preferred for many military and civil applications. MANETs have also gained popularity in the multimedia field. A MANET has certain requirements, such as QoS (Quality of Service), jitter, energy, bandwidth, and end-to-end delay. One of a MANET's basic requirements is QoS, and it should have efficient routing to support other applications. In this research paper, a Genetic Algorithm (GA) based routing scheme for mobile ad-hoc networks is designed and termed GAMAN. The proposed model uses two QoS parameters for routing.
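As a rough, generic illustration of GA-based route selection (not GAMAN itself — the routes, the two assumed QoS parameters of delay and jitter, and the fitness function are all made up for the sketch):

```python
import random

random.seed(7)  # deterministic demo

# Toy GA over candidate routes; fitness combines two QoS metrics (assumed:
# delay and jitter, lower is better). This is NOT the GAMAN algorithm itself.
ROUTES = {
    "A-B-D":   {"delay": 12.0, "jitter": 2.0},
    "A-C-D":   {"delay": 9.0,  "jitter": 4.0},
    "A-B-C-D": {"delay": 15.0, "jitter": 1.0},
}

def fitness(route):
    q = ROUTES[route]
    return 1.0 / (q["delay"] + q["jitter"])  # higher fitness = better QoS

def evolve(generations=10, pop_size=6):
    """Selection keeps the fitter half; mutation occasionally reroutes."""
    pop = [random.choice(list(ROUTES)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # selection
        children = [random.choice(survivors) for _ in survivors]
        for i in range(len(children)):              # mutation: random reroute
            if random.random() < 0.2:
                children[i] = random.choice(list(ROUTES))
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())
```

A real GA for MANET routing would encode routes as chromosomes over the actual topology and apply crossover between partial paths; the sketch only conveys the select-and-mutate loop driven by a QoS fitness.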
The outcome of this paper shows that the GAMAN method is a significant one for providing QoS in MANETs.</p> 2021-10-21T00:00:00+00:00 Copyright (c) 2021 Tapas Bapu B R, R Anitha, S Soundararajan, Nagaraju V, Partheeban Nagappan, A.Daniel, Mayakannan Selvaraju https://spast.org/techrep/article/view/99 Hybrid Support Vector Machine and Distance Classifier in Breast Tumor Detection 2021-08-08T11:25:00+00:00 Usha Sharma usha28383@gmail.com Bhavana Narain narainbhawna@gmail.com Vaibhav Nohria vaibhavnohria36@gmail.com <p>It is time to look back and balance our lifestyle, as cancer is affecting every stage of life. Several research studies are under way to make the cancer detection process painless, and technology is playing an important role in this. We are pursuing our work in support of easy detection of cancerous tumors by applying technology: artificial intelligence is used to analyze MRI images and help in decision making. In our work we have proposed two hybrid models. The first model is a combination of a Support Vector Machine and a Modified Back Propagation Neural Network. The second model is a combination of a distance classifier and Modified Back Propagation. We have collected more than five thousand MRI images related to breast cancer; these images were preprocessed and applied to the hybrid models. In the first section of our work we give an introduction to the Support Vector Machine. In the second and third sections the hybrid models based on the Support Vector Machine and the distance classifier are discussed. In the results and discussion we present a sample of the statistical data and output. Our model is 97% accurate in detecting tumors in the breast.</p> 2021-08-08T00:00:00+00:00 Copyright (c) 2021 Usha Sharma, Bhavana Narain, Vaibhav Nohria https://spast.org/techrep/article/view/2926 Enhancement of IOT to connecting People and Applicances using IOE 2021-10-26T14:46:07+00:00 S. Hemamalini pithemalatha@gmail.com S. Hemalatha pithemalatha@gmail.com P. Perumal perumalp@srec.ac.in S.
Lakshmi elzie.moses@gmail.com <p>The term IoE stands for the Internet of Everything (IoE), a phrase that refers to internet-connected computers and consumer electronics equipped with advanced digital applications. It is the concept that the future of technology will be made up of a variety of appliances, gadgets, and devices all linked to the internet. The Internet of Everything is predicated on the notion that the internet will become more ubiquitous in the future. Connectivity will no longer be confined, as in the past, to laptops, desktop computers, and a handful of other devices. Instead, machines will grow smarter as a result of greater data access and networking possibilities, drawing on technologies such as fog computing and cyber security. This article describes IoE technology that connects people and their home appliances through an application on a portable cell phone, and illustrates the idea with facial recognition as an example: when a person enters the home, the camera captures the face, which may be identified and matched for recognition. The Global Positioning System (GPS) built into the application may also be used for tracking in the event of theft.</p> 2021-10-27T00:00:00+00:00 Copyright (c) 2021 S. Hemamalini, S. Hemalatha, P. Perumal, S. Lakshmi https://spast.org/techrep/article/view/2374 SMART ASSISTANT FOR ENHANCING AGRONOMICS 2021-10-09T05:30:56+00:00 sathya A sathya.a@rajalakshmi.edu.in k.Poornimathi poornimathi.k@rajalakshmi.edu.in k.Poornimathi poornimathi.k@rajalakshmi.edu.in Priya L priya.l@rajalakshmi.edu.in J Anitha anitha.j@rajalakshmi.edu.in <p>Agriculture has been considered the backbone of India, since it has been the primary source of livelihood for the majority of the Indian population. But the growth of agriculture has declined in recent years, which has led to a significant fall in the overall Gross Domestic Product (GDP) of India.
The Indian agricultural sector is on the brink of failure due to several factors, such as crop failures and farmer suicides; a key reason is that the proper information is not communicated to the farmers. Other reasons include lack of access to inputs and credit, inability to bear risks, language problems, and insufficient funds. Information technology [1,7] plays a vital role in this digital era. Even though there are many help centres, welfare schemes and modern technologies that the government provides, very few farmers have adequate knowledge about them and so fail to utilize them. Another major contributor is the information and skills gap that constrains the adoption of available technologies and management practices, or reduces their technical efficiency when adopted. Public extension programs are often underfunded, suffer from weak agricultural research, and lack adequate contact with farmers. Farmers are not ready to adopt new methodologies [2] and machinery that are crucial given the current climatic changes, soil conditions and several other factors. They can no longer rely on the older methodologies of cultivation, which would not be appropriate given recent changes in the environment. Thus, educating them in modern agriculture and equipping them with the necessary modern machinery in a proper way is significantly crucial [3-5].</p> <p>This system makes the farmer aware of all the welfare schemes and subsidies provided by the government in case of any loss due to natural disaster or other relevant events, and helps farmers utilize them for the welfare of the crop. E-services for crops and government schemes aim to provide information about the services or schemes available to farmers. Farmers can also apply, through the application, for the schemes they are eligible for. It provides broad knowledge about several crops.
Farmers can also buy and rent machinery from the government by availing the discounts for which they are eligible. Thus, the overall system acts as an aide for farmers and helps them efficiently cultivate crops and maximize productivity.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 sathya A, k.Poornimathi, k.Poornimathi, Priya L, J Anitha https://spast.org/techrep/article/view/452 Improving trust levels in wireless networks using blockchain powered Dempster Shaffer route optimization 2021-09-15T11:33:27+00:00 Ashutosh choudhary akchoudhary@rpr.amity.edu <p>Improved node and network trust levels in wireless networks help the network achieve better security and quality of service (QoS) performance. Security performance is measured in terms of the number of attacks detected and mitigated by the network, while QoS performance is measured in terms of end-to-end delay, communication throughput, energy requirement, and packet delivery ratio. In order to achieve good trust levels in the network, a wide variety of methods, including reputation evaluation, historical analysis, temporal network &amp; node analysis, etc., have been proposed by network researchers [1]. Implementation of these algorithms increases the computational complexity of the network, thereby increasing end-to-end delay during communication. During routing, repeated calculation of trust in the network further increases the number of computational steps required for route selection. These added calculations improve network security but reduce overall QoS at the node and network level. Conversely, if these calculations are simplified to reduce computational complexity, QoS improves but network security is compromised. In order to maintain high network QoS with a sufficient level of security, this paper proposes a novel blockchain-powered routing model that uses Dempster-Shafer (DS) route optimization.
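For intuition only, a textbook two-source Dempster combination on the binary frame {trust, distrust} can be sketched as follows (this is the generic combination rule, not the paper's specific formulation; the evidence sources and mass values are assumptions):

```python
# Textbook Dempster's rule on the frame {T, D} plus the ignorance set "TD";
# two evidence sources about a node's trustworthiness. Illustrative only.
def combine(m1, m2):
    """Combine two mass functions over {'T', 'D', 'TD'} (TD = uncertain)."""
    def meet(a, b):
        if a == "TD":
            return b
        if b == "TD":
            return a
        return a if a == b else None  # conflicting singletons
    combined = {"T": 0.0, "D": 0.0, "TD": 0.0}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = meet(a, b)
            if inter is None:
                conflict += pa * pb        # mass assigned to the empty set
            else:
                combined[inter] += pa * pb
    # normalize away the conflict mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

link_evidence = {"T": 0.6, "D": 0.1, "TD": 0.3}    # e.g. packet-forwarding history
energy_evidence = {"T": 0.5, "D": 0.2, "TD": 0.3}  # e.g. energy behaviour
fused = combine(link_evidence, energy_evidence)
print(round(fused["T"], 3))  # 0.759
```

Fusing two weakly trusting sources yields a stronger belief in trustworthiness than either alone, which is the property a DS-based route optimizer exploits when weighing distance, energy, and forwarding evidence.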
The model utilizes node-to-node distance, energy efficiency, and temporal packet-forwarding capabilities in order to select the most efficient routing strategy. This strategy is combined with a proof-of-work (PoW) based blockchain solution that allows the network to maintain high trust levels. Due to decentralized processing, the network is able to achieve good QoS levels along with high security. Results of the proposed model were compared with existing models; it is observed that the underlying architecture is 10% more efficient in terms of end-to-end delay, consumes 12% less energy, and can detect &amp; mitigate masquerading, Sybil, and flooding attacks with high efficiency.</p> 2021-10-17T15:16:52+00:00 Kunal Singh vtu11767@veltech.edu.in Dr. Shailendra Kumar Mishra shailendra@veltech.edu.in Praveen Kumar vtu14350@veltech.edu.in Raushan Kumar vtu14349@veltech.edu.in <p>A brain tumour is a cancerous or non-cancerous growth of abnormal brain cells, described as benign (adenomas) or malignant (pernicious). A benign tumour does not contain active cells, whereas active cells are present in malignant cancer. These tumours are also classified as primary and metastatic brain cancer. In a primary brain tumour, the cell of origin is a brain cell; in metastatic brain tumours, tumour cells have spread from other parts of the body. A critical type of cancer is glioma, which occurs in different grades, classified into high-grade (HG) and low-grade (LG) tumours, also called glioblastoma multiforme and oligodendrogliomas or astrocytomas. A brain tumour is a type of cancer that cannot easily be detected by a doctor in the early stages. Generally, the shape and size of the tumour are unknown.
Brain tumour classification is performed by serologic analysis and is not usually conducted before conclusive brain surgery. Normally a brain tumour is diagnosed from Magnetic Resonance Imaging (MRI) images; however, this is time-consuming and costly. Nowadays many data sets are available for identifying the several types of brain tumours, such as glioma, meningioma and pituitary tumours, to train machine learning models. Conventional ML models such as logistic regression, SVM, CNN and RNN can predict the location of tumours present in the brain and are also able to create a tumour pattern mask. Existing models mainly deal with 2D image data sets. The optimal contrast model takes original images and reference images and provides a more visual image, while non-linear stretching boosts the textural information and compresses the level of local brightness in the images. One dataset used consists of 3200 images of size 512*512 and yields 96% accuracy; GoogLeNet classifies the different tumour categories with 99.57%, 99.78% and 99.56% accuracy for meningioma, glioma and pituitary tumours respectively [2]. The challenging task is to extract the precise tumour structure present in 3D MRI images; a 3D model on the BraTS 2018 data set achieves 80% accuracy [3], which is quite low. In this paper, an effective machine learning (ML) model (3D U-net) has been developed that can generate a tumour pattern mask for any type of tumour present in the brain. The overall procedure is that first the BraTS dataset, which consists of 3D MRI images, is fed to the 3D U-net neural network, which then generates a brain tumour mask. Finally, the model predicts the survival days of people who are affected by a brain tumour. The architecture of the 3D U-net is similar to that of the U-net: in the 3D U-net, the analysis path is on the left side and the synthesis path is on the right side.
In the analysis path, the 3D U-net consists of 3*3*3 convolutions with ReLU activations and 2*2*2 max pooling. As per our calculations, the proposed model provides better accuracy than the conventional method; simulation results achieve 86% accuracy.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Kunal Singh, Dr. Shailendra Kumar Mishra, Praveen Kumar, Raushan Kumar https://spast.org/techrep/article/view/1404 Simple and Effective Decision making system for Angiography Analysis 2021-09-29T07:03:09+00:00 Sivakumar Rajagopal rsivakumar@vit.ac.in Fatima Mohammad Amin fatimamohammad.amin2020@vitstudent.ac.in <p>Angiography is the X-ray imaging of blood flow in the body. An angiogram can show doctors what is wrong with a patient's blood vessels. It can show how many of the coronary arteries are blocked or narrowed by fatty plaques. This information helps doctors determine the best treatment for the patient and how much danger the patient's heart condition poses to their health [1]. The total number of heart attacks occurring in the United States is around 1.5 million (mostly in older age groups), and the number of deaths is around half a million [2]. Cardiovascular disease has seriously affected the lives of modern people [3]. One of the most commonly used imaging methods for diagnosing cardiovascular disease is angiography [3] (Figure 1-2).</p> <p>The need for a Matlab-based decision-making system arises in angiography to analyze various parts of the body more quickly and easily. It reduces the manual effort put in by the doctor, with the advantage of giving out results in a matter of seconds.
A machine learning algorithm will also be implemented for the analysis, which will make the process less labour-intensive and overcome the limitations of the Matlab-based system [4].</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Sivakumar Rajagopal, Fatima Mohammad Amin https://spast.org/techrep/article/view/586 A AUTOMATED SECURED UAV COMMUNICATION USING ECOFRIENDLY 2021-09-17T13:58:02+00:00 shriram s sriram142000@gmail.com Saathwick V saathwick104@gmail.com Kavitha R kavitha_r@cse.sastra.ac.in <p>Unmanned aerial vehicles are employed in numerous applications such as traffic monitoring, capturing illicit datasets, gathering crucial information in seemingly dangerous environments, surveillance, etc. As sensitivity and criticality increase, the need for security poses a crucial challenge. Currently, however, these data can be captured due to certain security vulnerabilities of UAVs and due to malicious attacks such as man-in-the-middle attacks, eavesdropping and de-auth attacks, owing to the broadcasting nature of the wireless medium. Kaspersky released a report on attacks deployed against IoT devices in 2019, in which more than 100 million attacks on IoT devices were detected during the first half of 2019, and the global average cost of a data breach in 2020 is estimated at $3.86M. One solution to prevent these attacks is Blockchain technology.
In this paper, we propose a biodegradable, automated UAV security module to protect the crucial data sent over the wireless medium, implemented using Matlab, Cisco Packet Tracer and the Ethereum blockchain.</p> 2021-09-19T00:00:00+00:00 Copyright (c) 2021 shriram s, Saathwick V, Kavitha R https://spast.org/techrep/article/view/1476 Crime Data Analysis Using Time Series Approach 2021-09-29T12:18:55+00:00 Hareesh BK hareesh.bk@msds.christuniversity.in Deepa V Jose deepa.v.jose@christuniversity.in Vijalakshmi A vijalakshmi.nair@christuniversity.in <p>Crime, an unlawful act, is considered one of the key factors that adversely affect the growth of any nation. Crime data analysis is an effective mechanism for identifying patterns of crime, which can be used to prevent their recurrence. This paper investigates the relationship between different types of crime and presents a state-wise analysis of crime rates in India. The study is based on secondary data collected from the National Crime Records Bureau (NCRB). The paper also predicts murder rates from 2016 to 2026 using time series analysis in the Indian scenario.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Hareesh BK, Deepa V Jose, Vijalakshmi A https://spast.org/techrep/article/view/2202 Electroencephalography and Blood Cells Analysis for Cerebral Malaria Detection using Deep Learning and Neural Networks 2021-10-01T13:35:49+00:00 Sivakumar Rajagopal rsivakumar@vit.ac.in AUHONA GHOSH auhona.ghosh2018@vitstudent.ac.in Nikita Mohanty nikita.mohanty2018@vitstudent.ac.in Kartikey Mishra kartikey.mishra2018@vitstudent.ac.in <p>Cerebral malaria is a clinical syndrome caused by the asexual parasitic form of ‘Plasmodium falciparum’. It has been a major health issue contributing to a huge number of deaths around the world, and is especially widespread in the suburban regions of Africa.
Its mortality rate stands at 20% in adults and 15% in children. However, if diagnosed at an early stage, patients can receive proper treatment and recover quickly, thus avoiding fatal and long-lasting neurological outcomes such as severe psychosis, metabolic acidosis and hypoglycemia. The symptoms associated with cerebral malaria often fall into a common bracket, including fever and body ache, which makes diagnosis difficult and possibly inaccurate. Therefore, a disease-specific model is required to confirm the presence of the disease.</p> <p>To achieve this, a bipartite framework is considered. The developed model aims to determine the contribution of this unicellular protozoan parasite and its impact on blood cells and neurological dysfunction. The design is divided into the following functions: the recognition of potential seizures in the patient and the identification of parasitic blood cells. The association of the results helps hypothesize and confirm cerebral malaria. In the first half, the deep-learning algorithm consists of a neural-network Sequential model compiled with the ‘adam’ optimizer, ‘SparseCategoricalCrossentropy’ loss and accuracy-based metrics, in order to grasp the discriminative electroencephalogram (EEG) features of epileptic seizures recorded for each patient for 23.6 seconds [1]. In particular, it works with seizure-activity recordings to detect the various representations of the differing EEG patterns and reveal the correlation between successive data samples, which is then utilised for training and classification with 97.22% accuracy. The latter work focuses on inspecting decompressed blood-cell images that are fed into a deep convolutional neural network; different lossy image-compression methods are examined, as contrasting compression ratios impact the classification accuracy in distinguishing parasitized cells from healthy cells [2,3].
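The ‘SparseCategoricalCrossentropy’ objective named above reduces to the mean negative log-likelihood of integer class labels under the predicted class probabilities; a minimal pure-Python illustration (a sketch of the mathematics, not the Keras implementation):

```python
import math

def sparse_categorical_crossentropy(y_true, y_prob):
    """Mean negative log-likelihood of true class indices under the
    predicted per-class probability rows."""
    return -sum(math.log(probs[label])
                for label, probs in zip(y_true, y_prob)) / len(y_true)

# Two samples, two classes; confident correct predictions give a small loss.
loss = sparse_categorical_crossentropy([0, 1], [[0.9, 0.1], [0.2, 0.8]])
print(round(loss, 4))  # -> 0.1643
```

Minimizing this quantity during training pushes the predicted probability of each sample's true class toward 1.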
This uses a sequential neural-network model like the previous one, but compiled with ‘binary_crossentropy’ loss instead [4]. With this, the transfer of medical findings is made effortless, since the images are compressed without loss of valuable information and augmented with the help of the ‘ImageDataGenerator’ library. As shown in figure 1, this enables precise diagnosis with 94.7% accuracy.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Sivakumar Rajagopal, AUHONA GHOSH, Nikita Mohanty, Kartikey Mishra https://spast.org/techrep/article/view/1550 Intensify Cloud Security and Privacy Against Phishing Attacks 2021-10-02T09:17:51+00:00 Debabrata Dansana debabratadansana07@gmail.com Vivek Kumar Prasad vivek.prasad@nirmauni.ac.in Madhuri Bhavsar madhuri.bhavsar@nirmauni.ac.in Brojo Kishore Mishra brojomishra@gmail.com <p>The world of computation has shifted from centralized (client-server, not web-based) to distributed systems during the last three decades. We are now reverting to virtual centralization, i.e., Cloud Computing (CC). The location of data and processes makes all the difference. On the one hand, a person has complete control over the data and operations on their own computer. Cloud computing, by contrast, involves a vendor providing service and data upkeep while the client or customer is unaware of where the processes are running; as a result, the client neither influences them nor has the right to do so. The internet is used as the communication medium for CC. When it comes to data security in cloud computing, the vendor must guarantee service level agreements (SLAs) to persuade the client. Organizations that utilize CC as a service architecture are keen to look into security and confidentiality concerns for their mission-critical and non-sensitive applications.
However, because they provide various services such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), ensuring the security of business data in the "Cloud" is challenging.</p> <p>Each service has its own set of security concerns. As a result, the SLA must define several degrees of security and their complexity, depending on the benefits to the client, so that the client comprehends the security rules in place. Phishing is a social engineering attack frequently used to obtain user information, such as login passwords and credit card details. It happens when an attacker poses as a trustworthy entity and tricks the victim into opening an email, instant message, or text message. In this research paper, a methodology that tries to identify phishing attacks in the Cloud ecosystem is explored, in order to minimize attacks and increase the Cloud trust level. The approach used here classifies malicious and non-malicious URLs with an accuracy of 92.89%. The experimental setup and its outcome prove suitable for identifying phishing attacks in the cloud ecosystem.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Debabrata Dansana, Vivek Kumar Prasad, Madhuri Bhavsar, Brojo Kishore Mishra https://spast.org/techrep/article/view/2870 To Establish the Significance of relationship between Marketing 2021-10-19T13:23:49+00:00 Ronobir Chandra Sarker faisalsyedmtp@gmail.com <p>Marketing and human resources have similar goals aimed at distinct audiences. Marketing is in charge of the company's branding and conveying it to customers. Strategic human resource management (SHRM) is in charge of employment branding, ensuring that both internal workers and external prospects appropriately view the organisation.
HR and marketing collaborate to identify the best talent to promote and grow the brand, while marketing produces and communicates the brand message to workers. However, it has been observed that, with the help of automation techniques such as Machine Learning (ML), SHRM and Marketing have transformed their operations and functions in many ways. Technology has always played a vital role in attaining business goals by increasing efficiency and optimising processes. Accordingly, the paper attempts to analyse the significance of the relationship between Marketing Management, Machine Learning and Strategic human resource management.</p> 2021-10-19T00:00:00+00:00 Copyright (c) 2021 Ronobir Chandra Sarker https://spast.org/techrep/article/view/73 Crime Prediction and Intrusion Detection with IoT and Machine Learning 2021-07-08T12:30:19+00:00 Anirudh Kumar Tiwari tiwarianirudh646@gmail.com Prof.(Dr.) Bhavana Narain narainbhawna@gmail.com <p>In this era of digitalization, crime investigation and prediction are a foremost necessity. An act or omission that constitutes an offence punishable by law is called a crime. It can be committed by an individual or a group, against the government or the private sector, and it may harm the victim's reputation or cause physical or mental harm, whether direct or indirect.<br>The purpose of our work is to design a prototype that helps the police in detecting crime locations. We consider a scenario in which a person who witnesses an accident takes a photo of it, which is then automatically sent to the nearest police station. For this, it is necessary to have our application installed on both the sender's and the receiver's devices. This directly connects the police with the crime location so that they can reach it easily.
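Routing an accident photo to the nearest police station can be sketched as a great-circle distance lookup over a station registry; the station names and coordinates below are hypothetical stand-ins, not data from the paper:

```python
import math

# Hypothetical station registry: name -> (latitude, longitude) in degrees.
STATIONS = {"Station A": (21.25, 81.63), "Station B": (21.19, 81.28)}

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def nearest_station(photo_gps):
    """Pick the station closest to the GPS tag of the accident photo."""
    return min(STATIONS, key=lambda name: haversine_km(photo_gps, STATIONS[name]))

print(nearest_station((21.24, 81.60)))  # -> Station A
```

In the prototype, the photo's GPS tag would come from the IoT device's receiver and the result would determine the delivery address of the alert.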
GPS will be used for location detection. In our work, we have collected a dataset with the help of a digital camera attached to an IoT device. In the first part of our paper, we discuss the grounds of our work under the introduction of crime, digital image processing, GPS and IoT. In the second part, we discuss our methodology, where the sensor board and GPS settings are described along with the dataset. There are a number of data-collection technologies in the IoT; the most widely used is the wireless sensor network (WSN), which uses multi-hopping and self-organization to maintain control over the communication nodes.</p> 2021-07-21T00:00:00+00:00 Copyright (c) 2021 Anirudh Kumar Tiwari, Prof.(Dr.) Bhavana Narain https://spast.org/techrep/article/view/2242 Cust-kproto: Customized Clustering Algorithm for Clinical Data Analysis Based on k-prototype Algorithm 2021-10-07T13:26:48+00:00 Pradnya Bhambre pradnyabhambre@gmail.com Nusrat Khan nusrat.khan@sinhgad.edu <p>For mixed data containing numerical and categorical attributes, the k-prototype algorithm is used for clustering. However, the k-prototype algorithm has the disadvantage of selecting the initial centroids randomly. Because of this random selection, the clustering results are determined by the quality of the initially selected centroids, i.e. the algorithm is sensitive to centroid selection, and there is no easy, universally applicable method to select the centroids [1]. The k-prototype algorithm needs different execution times and space in successive runs to find the same number of clusters. Selecting the initial centroids is also problematic because the variables are of numerical as well as categorical type, so no single mathematical formula can be adopted to find the central value of a particular data field.
Therefore, an attempt is made here to find an algorithm that takes account of both numerical and categorical data fields. This algorithm uses medians, which helps in obtaining centroids positioned near the centre of the data points, so it requires fewer iterations than the k-prototype algorithm. In this paper a customized k-prototype algorithm (cust-kproto) is proposed, which is used to determine the initial centroids. The authors have conducted experiments on a Thyroid dataset containing numerical and categorical attributes. The cust-kproto clustering algorithm and the traditional k-prototype algorithm were run many times, and the average time and space required for clustering were compared. Experimental results show that the execution time and space utilized by the cust-kproto algorithm are better than those of the traditional k-prototype clustering algorithm, and that it produces accurate, consistent and high-quality clusters. It can be used for incomplete datasets, where values of all the data fields could not be obtained for all the variables. It requires fewer iterations than the k-prototype algorithm, as it converges in a minimum of steps. It can be applied to datasets containing numerical, Boolean and categorical data fields. This algorithm is designed mainly for the analysis of clinical data, where accuracy and speed are of utmost importance; also, because of the large volume of clinical data, the space utilized should be minimal. Therefore, the cust-kproto algorithm will be very helpful for cluster analysis in the healthcare industry, allowing clinical data to be analyzed more efficiently and better clustering results to be obtained. Such clustering could investigate hidden relationships between different variables, find trends in the data, and help formulate various decision strategies for disease prevention, prediction, diagnosis and cure.
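The median-based centroid computation described above can be sketched for mixed-type records using the standard library alone (median for numerical columns, mode for categorical ones); the column layout and values below are hypothetical, not drawn from the Thyroid dataset:

```python
from statistics import median
from collections import Counter

def mixed_centroid(rows, numeric_cols):
    """Centroid of mixed-type rows: median for numerical columns,
    mode (most frequent value) for categorical ones."""
    centroid = []
    for j in range(len(rows[0])):
        column = [row[j] for row in rows]
        if j in numeric_cols:
            centroid.append(median(column))
        else:
            centroid.append(Counter(column).most_common(1)[0][0])
    return centroid

# Column 0 is numerical (e.g. a hormone level), column 1 categorical (e.g. sex).
rows = [(1.2, "F"), (9.0, "F"), (1.8, "M")]
print(mixed_centroid(rows, numeric_cols={0}))  # -> [1.8, 'F']
```

Because the median ignores extreme values (9.0 above), the resulting centroid sits near the centre of the data points, which is the property the proposed initialization relies on.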
The results can be helpful for clinicians, healthcare professionals, healthcare organizations, etc. The cust-kproto algorithm is simple and easy to understand, and its improvement over the traditional k-prototype algorithm is demonstrated by experimental results on a real dataset. Using the cust-kproto algorithm, useful, quick and efficient clustering results can be obtained.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Pradnya Bhambre, Nusrat Khan https://spast.org/techrep/article/view/167 MACHINE LEARNING ALGORITHM: WINE QUALITY PREDICTION 2021-09-02T06:17:23+00:00 Prateek Singhal prateeksinghal2031@gmail.com Pawan Singh pawansingh51279@gmail.com Bramah Hazela bhazela@lko.amity.edu Vineet Singh vsingh@lko.amity.edu Vikrant Singh singhvikrant.rv@gmail.com <div class="page" title="Page 1"> <div class="layoutArea"> <div class="column"> <p>Wine classification may be a difficult task, since taste is the least understood of the human senses. A good wine-quality prediction can be very useful in the certification phase, since currently the sensory analysis is performed by human tasters and is clearly a subjective approach. An automatic predictive system can be integrated into a decision network, helping the speed and quality of the performance. Furthermore, a feature-selection process can help to analyse the impact of the analytical tests.
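One simple form of such feature analysis is ranking the physicochemical inputs by their correlation with the quality score; a stdlib-only sketch with made-up toy values (the feature names and numbers are illustrative assumptions, not the paper's data):

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation between a physicochemical feature and quality."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# Toy data: alcohol tracks quality, volatile acidity opposes it.
quality = [5, 6, 7, 8]
alcohol = [9.0, 10.0, 11.5, 12.0]
acidity = [0.7, 0.6, 0.4, 0.3]
print(round(pearson(alcohol, quality), 2))  # -> 0.98
```

Features with correlations near zero would be candidates for removal, while strongly correlated, controllable variables are the ones worth adjusting in the production process.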
If it is concluded that several input variables are highly relevant to predicting wine quality, then, since some variables can be controlled within the production process, this information can be used to improve the wine quality.</p> </div> </div> </div> 2021-09-08T00:00:00+00:00 Copyright (c) 2021 Prateek Singhal, Pawan Singh, Bramah Hazela, Vineet Singh, Vikrant Singh https://spast.org/techrep/article/view/882 Review paper - Implementation of Smart Pet Care Applications in an IoT Based Environment 2021-09-15T19:02:05+00:00 Poornima Lankani poornima_2019@kln.ac.lk WLSV Liyanage sayurivliyanage@gmail.com <p>The idea of interconnection between information technology, machines and humans has become a rising demand. This concept has had a negative impact on human lives and well-being, and because of this, people tend to adopt pets for emotional support. Pets require extra care, which is not as easy as it used to be with today's busy lifestyle. As a result, one of the significant challenges has been figuring out how to raise pets in a simple manner. The best solution for this kind of problem is to use new, innovative technologies; in this case, an IoT-based solution. The question that led to this research was, "How to implement a Smart Pet care Application within a proper IoT based Environment?". Implementing a smart pet-care application that satisfies every requirement of petting would ensure greater comfort and peace of mind for pet owners. This paper discusses the characteristics and technologies of the latest smart pet-care applications and proposes solutions that satisfy the current requirements of pet owners. Before implementing this smart pet-care application, a study was performed to identify the features and facilities of existing pet-care applications using related research papers.
This research explores the impact of the IoT concept on the potential of smart pet-care applications across modern technologies to facilitate human contact with pets. The outcome is an IoT-based mobile application that satisfies users' requirements by analyzing these data.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Poornima Lankani, WLSV Liyanage https://spast.org/techrep/article/view/2391 Feasibility Study on Amalgamation of Multiple Measures to Detect the Driver Drowsiness 2021-10-10T14:50:23+00:00 Jaspreet Singh Bajaj jaspreet.bajaj@chitkara.edu.in Naveen Kumar naveen.sharma@chitkara.edu.in Rajesh Kumar Kaushal rajesh.kaushal@chitkara.edu.in <p>Driver drowsiness is one of the major causes of road accidents, leading to fatal and non-fatal injuries, sudden deaths and substantial monetary losses. According to an NSF (National Sleep Foundation) report, 54% of adult drivers have driven a four-wheeler while feeling drowsy and 28% have fallen asleep while driving [1]. According to the report Road Accidents in India 2019, published by the Transport Research Wing under the Ministry of Road Transport and Highways, Govt. of India, 1,51,113 deaths were reported in India in 2019, of which 23.5% were attributed to causes other than human error, such as drowsiness and bad weather conditions. This is a major concern; hence a smart, intelligent system that detects driver drowsiness at an early stage is required, which in turn would prevent crashes. With advancements in technologies such as Artificial Intelligence (AI) and its subsets, including Machine Learning (ML), various approaches have been developed to detect driver drowsiness, helping to save precious human lives and reduce monetary losses. Many researchers have proposed different techniques and methods to detect driver drowsiness. The most common measures are subjective, vehicle-based, physiological and behavioural.
In subjective measures, drowsiness is detected by collecting the driver's own observations while driving the four-wheeler. The most common subjective methods are the SSS (Stanford Sleepiness Scale) and the KSS (Karolinska Sleepiness Scale) [2]. The second category is vehicle-based measures, in which a few sensors are mounted on various vehicle components, i.e. the steering wheel, driver seat and accelerator pedal. The most commonly used vehicle-based measures for detecting driver drowsiness are SDLP (Standard Deviation of Lane Position) and SWM (Steering Wheel Movement) [3]. The third category is physiological measures, which help to detect driver drowsiness at an early stage. Most studies consider EEG (Electroencephalogram), ECG (Electrocardiogram), EMG (Electromyogram) and EOG (Electro-Oculogram) as physiological parameters for detecting driver drowsiness [4]. In behavioural measures, three major signs, i.e. eyelid-closure movement, head/body movement and frequent yawning, are captured using a camera and further analysed by a machine-learning algorithm to detect the drowsiness condition and alert the driver [5]. Subjective measures can be used in simulated environments only; moreover, they cannot capture sudden changes that occur while driving, and drivers can only register responses after an interval of a few minutes. The other three measures, i.e. vehicle-based, physiological and behavioural, are reviewed in detail, and various pros and cons have been found. The comparative analysis of these measures concluded that none of them provides sufficient accuracy alone. In addition, every measure has its drawbacks under different conditions and fails to detect the driver's drowsy state accurately. A hybrid solution that amalgamates multiple effective measures is needed for early detection of driver drowsiness.
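A weighted fusion of normalized indicators is one minimal way to amalgamate such measures; the indicator names and weights below are illustrative assumptions, not values from the cited studies:

```python
def fused_drowsiness_score(measures, weights):
    """Weighted average of drowsiness indicators, each normalized to [0, 1];
    higher scores suggest a drowsier driver."""
    total = sum(weights.values())
    return sum(weights[name] * measures[name] for name in weights) / total

# Hypothetical normalized readings from the three measure categories.
measures = {"sdlp": 0.6, "eyelid_closure": 0.8, "heart_rate_var": 0.4}
weights = {"sdlp": 1.0, "eyelid_closure": 2.0, "heart_rate_var": 1.0}
print(round(fused_drowsiness_score(measures, weights), 2))  # -> 0.65
```

A real hybrid system would learn the weights (or a full classifier) from labelled driving data and trigger an alert when the fused score crosses a threshold.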
Many researchers have also concluded that a driver-drowsiness detection system using hybrid measures would be more efficient, and such systems are highly recommended. The main contribution of this paper is to evaluate and identify effective measures for detecting driver drowsiness and to choose the best measures to combine, helping to detect driver drowsiness early and more efficiently and thereby avoid crashes on the roads.</p> 2021-10-11T00:00:00+00:00 Copyright (c) 2021 Jaspreet Singh Bajaj, Naveen Kumar, Rajesh Kumar Kaushal https://spast.org/techrep/article/view/2580 DETECTION OF RICE LEAF DISEASES USING CONVOLUTIONAL NEURAL NETWORK 2021-10-14T20:29:53+00:00 Poorni R poorniram21@gmail.com Poorni R poorniram21@gmail.com Preethi Kalaiselvan kannanarchieves@gmail.com Nikhil Thomas kannanarchieves@gmail.com Srinivasan T kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p>Purpose: The objective of this paper is to provide a system that helps farmers identify disease in a rice leaf from its image.</p> <p>Methodology: The dataset is first assembled by gathering pictures of unhealthy rice leaves. The data is then preprocessed and expanded through a number of data-augmentation techniques, and split into two sections: training and testing datasets. It is then classified by the CNN model. Here, transfer learning is applied using a pre-trained model named Inception v3. Finally, the name of the disease and its remedy are displayed to the user.</p> <p>Findings: For image classification, a deep learning model is trained with labelled images in order to learn how to identify and classify them according to visual patterns. We used an open-source CNN implementation, Inception v3, which is provided as part of the Keras module and has been validated on ImageNet. For each input image, the extracted feature vector has size 2048.
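A transfer-learning classifier of this kind places a dense softmax head on top of the extracted feature vector; a pure-Python sketch of that head, with tiny, made-up weights standing in for the 2048-dimensional features and learned parameters:

```python
import math

def softmax_head(features, weight_rows):
    """Class probabilities from a feature vector and one weight row per class,
    as in a dense softmax layer over frozen backbone features."""
    logits = [sum(w * f for w, f in zip(row, features)) for row in weight_rows]
    exps = [math.exp(l - max(logits)) for l in logits]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

# Toy 3-dim "features" and two hypothetical disease classes.
probs = softmax_head([1.0, 0.5, -0.5], [[0.2, 0.1, 0.0], [1.0, 0.4, 0.3]])
print(probs.index(max(probs)))  # -> 1
```

During fine-tuning, only these head weights (and optionally the top backbone layers) are updated, which is why transfer learning works with comparatively small datasets.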
For this module, the size of the input image is fixed at height x width = 299 x 299 pixels. Once convergence was achieved after a few iterations, the batch size was increased to 64 images and the number of epochs to ten. The Adam optimizer was utilized with a 0.001 learning rate and categorical cross-entropy loss. Then, for the identification of diseases in the rice leaf, transfer learning is applied using the pre-trained Inception v3 model. For diagnosing infection in the rice leaf, an accuracy of 94.48 percent was reached. The name of the disease is displayed for the given input image, along with the solutions recommended by the developed model.</p> <p>Originality/value: The training accuracy acquired during training is 96.34 percent, and the validation accuracy is 94.48 percent. Thus, the fine-tuned Inception v3 model may be utilized as a diagnostic technique to identify disease in the rice leaf and give the necessary expert recommendations to treat the diseases.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Poorni R, Poorni R, Preethi Kalaiselvan, Nikhil Thomas, Srinivasan T, Mayakannan Selvaraju https://spast.org/techrep/article/view/1940 CONVOLUTIONAL NEURAL NETWORK BASED PLANT NUTRIENT DEFICIENCY DETECTION 2021-10-09T12:37:47+00:00 Shivvani P N, kannanarchieves@gmail.com Sowmiya M kannanselva1986@gmail.com Deepika P amudhavaish@gmail.com ShwarrnamalyaPriyanka kannan.maya1986@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p>Purpose: The objective of this paper is to detect nutrient deficiency, a condition in which a plant lacks particular nutrients that are essential for its healthy state.
If this situation keeps prevailing, it will have an adverse effect on the growth of the plant.</p> <p>Methodology: An automated nutrient-deficiency detection system is proposed using a Convolutional Neural Network: after pre-processing, the input dataset images are processed through various CNN layers and the deficient nutrient is detected.</p> <p>Findings: The manual, conventional method of detecting nutrient deficiency is a tedious process and does not detect individual deficiency classes. The existing system, which uses a conventional pre-trained Inception CNN model to detect nutrient deficiency in the okra plant, likewise does not detect individual deficiency classes and has a relatively low accuracy of about 86%. In this paper, an automated nutrient-deficiency detection system is proposed using a Convolutional Neural Network, which captures the spatial information in an image. Given the input dataset images, following the training process the images are processed through various CNN layers and the deficient nutrient is detected. The proposed system can identify whether the nutrients nitrogen, phosphorus and potassium are present in proper amounts, and achieves an accuracy as high as 95%. The final output is a user interface made simple and reliable for farmers to understand: just by providing the respective input image, the farmer learns which nutrient deficiency the plant is suffering from, as well as the remedial measures that can be taken to overcome it. The system also displays the threshold percentage for each nutrient deficiency, i.e. the percentage below which the plant starts suffering from that particular deficiency. This system aims to serve as an effective tool for nutrient-deficiency detection.</p> <p>Sufficient intake of water, sunlight and nutrients is the most essential part of an agricultural system.
The required quantities of macronutrients and micronutrients vary from plant to plant, with macronutrients needed in larger amounts than micronutrients for the development of tissues and cells. Nitrogen (N), phosphorus (P) and potassium (K) are macronutrients. To diagnose the condition of the plants from an image, a deep convolutional neural network is used. The process starts by resizing and preprocessing the input image, after which the dataset is trained. The dataset represents the colour variation in the plant leaves, which helps to identify the deficient nutrient. Further CNN layers are created, and thus the deficient nutrient is classified.</p> <p>Originality/value: In this study, the proposed system can identify whether the nutrients nitrogen, phosphorus and potassium are present in proper amounts or are deficient, and achieves an accuracy as high as 95%.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Shivvani P N,, Sowmiya M, Deepika P, ShwarrnamalyaPriyanka, Mayakannan Selvaraju https://spast.org/techrep/article/view/1271 Architecture for Evaluating Customer Retention Strategies 2021-09-27T15:53:37+00:00 LOKESHKUMAR R RAMASAMY lokeshkumar.r@vit.ac.in <p>The modernizing world, with increasing technological developments and the everyday generation of new innovative strategies, has created a competitive era. The advancement of technology and the implementation of advanced solutions and strategies attract many customers, which results in changing markets and businesses. With this Digital Darwinism, organizations face a major pitfall known as customer attrition. Addressing this problem is the need of the hour for every organization aiming to increase customer loyalty and revenue.
Understanding the business ecosystem and devising a strategy that strengthens the roots of the firm and increases customer lifetime is one approach, but the question arises of how good a strategy really is when compared with competitors in the market [8]. The architecture proposed in this paper evaluates business strategy using customer segmentation and customer-lifetime-value prediction, churn prediction, uplift modeling, and survival analysis.</p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 LOKESHKUMAR R RAMASAMY https://spast.org/techrep/article/view/96 An OPTIMIZATION OF BACK PROPAGATION NEURALNETWORK FOR RAIN FORCASTING 2021-08-07T13:45:56+00:00 Vertika Shrivastava mail2vertika@gmail.com Sanjeev Karmakar sanjeev.karmakar@bitdurg.ac.in <p>Deep learning has recently emerged as a viable method for addressing difficult problems and analyzing massive amounts of data. The Mahanadi river basin, at the appropriate scale, is generally the most logical geographical unit of stream-flow analysis and water-resources management. We created a rainfall-forecasting model by analyzing rainfall data from India and predicting future rainfall using optimized neural networks. We predict weather-data time series, especially long-range rainfall, over the Mahanadi river basin. The purpose of this research is to provide a thorough overview of current scientific studies on short-term region-, month- and temperature-based rainfall forecasting at a geographical scale. This article offers a thorough examination and comparison of several neural-network topologies utilized by experts for rainfall prediction. The article also addresses the difficulties encountered while using various computational models for yearly/monthly rainfall forecasts.
Furthermore, the article presents several accuracy metrics used by experts to evaluate the performance of ANNs.</p> 2021-08-07T00:00:00+00:00 Copyright (c) 2021 Vertika Shrivastava, Sanjeev Karmakar https://spast.org/techrep/article/view/865 Empty Parking Space Detection using Mask R-CNN and Computer Vision 2021-09-15T19:14:59+00:00 Mounika padala mounika1999@gmail.com Shashi Shirupa shashishirupa00@gmail.com Sai Prashanth Mallellu saiprashanth08@ieee.org <p>In the current scenario, finding an empty parking space has become a tedious job due to continuous traffic flow in urban areas. This paper presents a highly efficient approach to detecting empty parking spaces in a parking lot in real time. It uses Mask Regional Convolutional Neural Networks (Mask R-CNN) and the computer-vision library OpenCV. Computer vision is used for processing video frames and detecting empty spaces in real time, whereas Mask R-CNN is used to detect cars in the video. As soon as a car leaves a parking space, the OpenCV pipeline detects it and marks the space as empty. Our method works accurately irrespective of the presence of daylight. Drivers can use it to locate an empty parking space beforehand instead of searching for one.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Mounika padala, Shashi Shirupa, Sai Prashanth Mallellu https://spast.org/techrep/article/view/224 Prediction of User Overall Gratification in Indian Tourism Domain on Hotel Classes and Trip-Types 2021-09-08T16:36:22+00:00 Venkata daya sagar Ketaraju sagar.tadepalli@kluniversity.in <p>In recent years, the country's revenue and economy have depended significantly on tourism, in which the hotel sector plays an even more prominent role. Users' tour plans and decisions can be guided by recommendations produced through the collaboration of e-commerce and hotel management. The proportion of the population that travels has been shrinking over the months due to the severe impact of COVID-19.
Thus not just tourism but the hotel sector too is suffering in terms of revenue. Users' past experiences and opinions help boost their satisfaction levels by enabling recommendations and retaining them. The present scenario and statistics show that the selection of hotels relies enormously on user reviews. This research article analyses the various aspects that contribute most to the gratification levels of users in hotels of India's top tourism cities as listed by the Master and VISA Inc survey. The study applies item-item collaborative filtering and regression techniques to recent TripAdvisor reviews. Once these dimensions are known, they can be improved, thereby enhancing the ratings of hotel management across the Asian continent. This study shows that online travel platforms help obtain reviews from users to maintain travel recommender systems.</p> 2021-09-08T00:00:00+00:00 Copyright (c) 2021 Venkata daya sagar Ketaraju https://spast.org/techrep/article/view/2372 State-of-Art of Face Shields Manufactured Through Additive Manufacturing, Issues and Mitigation Approaches 2021-10-08T14:54:47+00:00 Rajesh Kumar Kaushal rajesh.kaushal@chitkara.edu.in Naveen Kumar naveen.sharma@chitkara.edu.in Simranjeet Singh simranjeet.singh@chitkara.edu.in Akhilendra Khare akhilendra.khare@chitkara.edu.in Harmaninderjit Singh harmaninder.jit@chitkara.edu.in <p>The coronavirus has recently caused great damage to the entire human race without discrimination; almost every country has been affected by this invisible enemy. Many frontline health workers are helping communities and countries protect themselves from this deadly disease. Owing to the pandemic's spread, face shields are recommended as a safety measure; the WHO (World Health Organization) has likewise recommended their use.
Due to this worldwide emergency, there have been many fears of shortages of protective equipment such as masks, face shields, and other medical devices. In these circumstances, additive manufacturing became a supplementary manufacturing process to meet urgent needs and support wellbeing around the world. Manufacturing objects through 3D printing is also known as additive manufacturing or rapid prototyping [1].</p> <p>This study primarily discloses the present state of the art regarding the role of additive manufacturing in producing face shields in response to the coronavirus. It covers the various types of materials used for additive manufacturing, the methods used for their sanitation, and their application areas [2–4]. The study found that face shields manufactured through additive manufacturing lacked technology intervention. Moreover, anthropometric dimensions were ignored while manufacturing such face shields. The majority of these face shields were manufactured for radiologists, anaesthetists, maxillofacial surgeons, and frontline health professionals; only a few studies manufactured face shields for patients.
The unique contribution of this study is to propose a technologically advanced face shield that incorporates electronic sensors and is designed around the anthropometric head dimensions of Indian males and females [5].</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Rajesh Kumar Kaushal, Naveen Kumar, Simranjeet Singh, Akhilendra Khare, Harmaninderjit Singh https://spast.org/techrep/article/view/1770 A Survey on Precision agriculture using Machine Learning 2021-09-30T10:28:07+00:00 Mayakannan Selvaraju kannanarchieves@gmail.com D.Prabhu prabhumecc@gmail.com Golda Dilip goldadilip@gmail.com <p><strong>Purpose: </strong>The objective of this paper is a crop recommendation system that helps farmers choose the correct crop to plant in their fields.</p> <p><strong>Methodology: </strong>Machine learning techniques provide an effective framework for decision-making from records collected at various times.</p> <p><strong>Findings: </strong>Precision Agriculture (PA) allows the precise application of inputs such as water, pesticides, seed, and fertilizers at the right time to the crop, increasing productivity, quality, and yield. By deploying sensors and mapping fields, farmers can understand their land better, safeguard the resources being used, and reduce adverse effects on the environment. The vast majority of Indian farmers practise traditional farming styles to decide which crop to grow. However, farmers often do not realise that crop yield is related to soil characteristics and climatic conditions. This paper therefore proposes a crop recommendation system that helps farmers choose the correct crop to plant in their fields. Machine learning techniques provide an effective framework for such decision-making from records collected over time.
This paper presents a survey of machine learning methods that help farmers decide on a suitable crop to grow based on the observable characteristics of their land.</p> <p><strong>Originality/value: </strong>In this study, the results show that precision agriculture relates positively to certain crop parameters. To promote future research and practical applications, a framework has been developed for crop prediction and yield improvement that takes into account crop rotation, soil characteristics, rainfall, land condition, and uncontrollable elements such as weather.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Mayakannan Selvaraju, D.Prabhu, Golda Dilip https://spast.org/techrep/article/view/300 Introducing Multi Level Security in Web Applications 2021-09-11T18:59:24+00:00 Subhranshu Mohanty smohanty@aimt.ac.in <p>Today almost all organizations use web applications for their daily activities and need protection from security breaches. Many conventional security methods are available in the market, but adding security with them is challenging and leaves applications exposed to attackers. We have devised a method that provides multiple layers of security at the client side. This technique blocks bots and scripts, preventing extra load on the server and the application. We have also observed that conventional methods follow the same pattern for accessing organizational resources, and attackers, intentionally or unintentionally, prepare scripts or bot programs to slow down the application and sometimes to steal organizational data. Data is the weapon of today's era, and every organization protects its data using various methods and techniques.
Our method not only protects the application from unwanted requests to the server, it also protects data by adding one more layer of security to the application at the server side.</p> <p><strong>Keywords: </strong>Web applications, Security breaches, Scripts or bot programs</p> 2021-09-14T00:00:00+00:00 Copyright (c) 2021 Subhranshu Mohanty https://spast.org/techrep/article/view/2604 Augmented Reality Sudoku Solver 2021-10-17T14:21:08+00:00 Ananya G M ananyagm.is18@rvce.du.in <p>The role of computers in the puzzle industry is becoming increasingly significant, as they have replaced not just riddle designers but also problem solvers [1]. The majority of riddles are designed for entertainment, and Sudoku is one such puzzle. The goal is to fill in all the blank squares with the numbers 1 to 9 while respecting certain constraints. The criterion is simple: once the Sudoku is finished, each row and column must include every number from 1 to 9 exactly once; for instance, no row or column may contain two fives. Furthermore, each 3-by-3 subgrid must include all digits from 1 to 9 exactly once. Playing Sudoku regularly, across endlessly varied stages, can enhance players' attention, patience, and logical thinking.</p> <p>The pipeline begins with detection of the sudoku puzzle in the image frame, followed by identification of the digits, solving of the recognized sudoku, and finally replacement of the empty cells of the detected puzzle with the solution mask.</p> <p>As the first step in the pipeline, the captured video frame undergoes adaptive thresholding, which separates the foreground from the background noise. The algorithm determines the threshold value for a pixel from a small region around it, which results in different thresholds for different regions of the same image, giving better results for images with varying illumination.
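The per-region thresholding just described can be sketched without OpenCV. The function below is an illustrative numpy-only stand-in for what `cv2.adaptiveThreshold` does (the `block` and `c` parameters are our own hypothetical choices, not values from the paper):

```python
import numpy as np

def adaptive_threshold(img, block=15, c=10):
    """Mark a pixel as foreground when it is darker than the mean of its
    block x block neighbourhood minus c. The local mean comes from an
    integral image, so every region gets its own threshold -- which is
    what makes the method robust to uneven illumination."""
    h, w = img.shape
    pad = block // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    # Integral image: S[i, j] holds the sum of padded[:i, :j].
    S = np.zeros((h + 2 * pad + 1, w + 2 * pad + 1))
    S[1:, 1:] = padded.cumsum(axis=0).cumsum(axis=1)
    i = np.arange(h)[:, None]
    j = np.arange(w)[None, :]
    window_sum = (S[i + block, j + block] - S[i, j + block]
                  - S[i + block, j] + S[i, j])
    local_mean = window_sum / (block * block)
    return np.where(img < local_mean - c, 255, 0).astype(np.uint8)
```

In the actual pipeline OpenCV's `cv2.adaptiveThreshold` performs this in one call; the sketch merely makes the per-region threshold explicit.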
This is followed by identification of contours, focusing on the region of interest using a reverse perspective transform, and erasing the gridlines.</p> <p>The next phase deals with the recognition of digits within the grid. A ConvNet [2-3] trained on printed numerals with varying fonts, weights, rotations, and translations achieves this with 99.6% accuracy.</p> <p>The sudoku puzzle is then solved using a Rust library called “sudoku”, a basic sudoku generator and a prototype solver using human strategies. It is based on jczsolve [4], the world’s fastest sudoku solving algorithm, and is capable of applying strategies like naked and hidden singles, locked candidates, hidden subsets, and basic fish. It is a utility built for classical 9×9 sudoku puzzles. Additional changes were made to improve the solving speed further.</p> <p>The solution mask is merged with the original frame in the final stage to give an illusion of Augmented Reality. The correct digits from the solution are transferred to another canvas already shaped like the sudoku grid, the dimensions of the canvas being 180 x 180. OpenCV’s getPerspectiveTransform and warpPerspective methods are used to project this back onto the original image, but this time with the source and destination coordinates inverted. Finally, this image is merged with the original frame to show the solution; Augmented Reality is used to overlay the solution mask onto the real-time image.</p> <p>The solver is implemented as a web-based real-time application and shows promising results. More work could be carried out in areas where the solver can perform better, such as motion blur and inadequate lighting. Despite these shortcomings, it still serves as one of the fastest augmented-reality sudoku solvers, and with technology advancing at such a rapid pace, there is no doubt that these issues will be resolved in the near future.
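The inverted projection above rests on a 4-point perspective transform; as a rough numpy sketch of what getPerspectiveTransform computes (the corner coordinates used in any call are arbitrary examples, not values from the paper):

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve for the 3x3 homography H mapping 4 src corners to 4 dst
    corners -- the matrix OpenCV's getPerspectiveTransform returns.
    Normalises H[2, 2] = 1 and solves the resulting 8x8 linear system."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, dtype=np.float64),
                        np.array(b, dtype=np.float64))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Apply the homography to one point (the per-pixel core of
    warpPerspective)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

Swapping the `src` and `dst` arguments yields the inverse mapping, which is exactly the "source and destination coordinates inverted" step used to project the 180 x 180 solution canvas back onto the camera frame.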
For the time being, people can enjoy playing sudoku without interruption and without having to worry about the solution.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Ananya G M https://spast.org/techrep/article/view/506 Evolution of Artificial Intelligence in Revolutionising Web-Based and Online Intelligent Educational Systems 2021-09-15T12:12:42+00:00 Ashraf Alam ashraf_alam@kgpian.iitkgp.ac.in Shamsher Alam shamsher.alam@gov.in <p><strong>Artificial intelligence (AI):</strong> Today AI is a significant technology worldwide, and among the fastest growing [1]. Recent advancements have enabled the incorporation of traditional computer configurations into low-cost smart gadgets, bringing AI to the masses. These devices incorporate powerful built-in capabilities for complex computational operations (edge computing). The ability to connect rapidly to networks, to collaborate on problems using cloud-based services, and to access both public and private data sources has made AI technology even more significant [2-4]. Because of rapid advancements and discoveries in computing and robotics, AI technology has produced intelligent computers, robots, and other artefacts that can mimic human skills.</p> <p><strong>AI in Education:</strong> How might artificial intelligence assist the process of learning? The impact of technology on the way learning and teaching are carried out is evident in new modes of delivery. Owing to rapid advancement in AI technology over the last decade, its application in educational settings is rapidly becoming the standard. Historically, AI technologies have made many contributions to education, enabling robotic instruction, automated scoring systems, and other ways of assisting teachers and students [5-7]. Artificial intelligence has been warmly welcomed and extensively embraced by educational institutions in a number of ways.
Artificial intelligence began with computers and computer-related technologies, progressed to online intelligent and web-based educational systems, and eventually moved to web-based chatbots and humanoid robots fulfilling the responsibilities and functions of teachers [8-9]. Instructor-specific apps assist instructors in grading and marking assignments, streamlining instructional processes, and cutting down time spent on administrative chores. Because systems are now created using machine learning, they are highly adaptable, enabling learning materials and courses to be customised for each learner [10-11]. As a consequence, it is feasible to raise student performance, enrolment ratios, and course completion rates. AI is critical for effectively educating future generations. AI contributes to the shaping of education in two distinct ways: (1) by determining the kind of education that is needed, and (2) by improving the educational process through assisting and enhancing instructors' ability to educate pupils. When considering schooling, it is necessary to bear in mind that AI and comparable technologies will eventually disrupt a large number of occupations, including education [12-13]. Significant changes in professions, such as the need to update educational materials, may result in the development of many new job roles. AI will eventually change work environments by shaping occupations and acting as a kind of instructor, reforming and aiding educational procedures [14].</p> <p><strong>Aims of this study:</strong> This study investigated how artificial intelligence affects educational opportunities, and is limited to the use of AI technology in educational administration, pedagogy, curriculum transaction, and learning processes.
In this paper, the researchers have also analysed the challenges and opportunities arising from the adoption of AI technology in education.</p> <p><strong>Methods and Methodology:</strong> The research approach used in this study combined a literature review with qualitative research methods, an approach critical to the study's success. We conducted a comprehensive review of current advances in AI technology applied to the education sector in order to demonstrate the value of AI in teaching and student assessment.</p> <p><strong>Structure of the research paper:</strong> This article first outlines AI’s applications in the domain of education, such as virtual classrooms, teacher evaluation, and adaptive learning. The paper then examines AI’s impact and benefits for both professors and students, in helping professors develop their teaching abilities and helping students master new information. Finally, it puts forward the challenging issues AI may encounter in aiding various school reforms, as well as its long-term effect on education.</p> <p><strong>Findings and Conclusion:</strong> Our study establishes that all NLP-enabled intelligent education systems include AI. These AI-enabled methodologies enhance learners’ and teachers’ capacity for reflection, answering inquiries, resolving conflicting assertions, generating new queries, and decision-making.
Nonetheless, when it comes to professional work life, it is difficult to resist conflating artificial intelligence with other technological advances.</p> 2021-09-15T00:00:00+00:00 Copyright (c) 2021 Ashraf Alam, Shamsher Alam https://spast.org/techrep/article/view/545 Using Classification Data Mining for Predicting Student Performance 2021-09-16T11:19:29+00:00 Guna Sekhar Sajja abhishek14482@gmail.com Harikumar Pallathadka abhishek14482@gmail.com Khongdet Phasinam abhishek14482@gmail.com Samrat Ray abhishek14482@gmail.com <p>In today's competitive environment, an institute must be able to predict student performance, categorize individuals based on their skills, and strive to improve their performance in future examinations. To increase academic performance, students should be told well in advance to focus their efforts on particular topics; analyses of this kind help institutes reduce their failure rates. This study predicts a student's performance in a course based on their past achievement in similar courses. Uncovering hidden patterns inside huge data sets involves a variety of data mining techniques, and such patterns can be very useful for analysis and forecasting. Educational data mining refers to the set of data mining applications developed for education; in these applications, students' and teachers' data is analyzed to provide useful information, and the analysis may be used for categorization or forecasting. The machine learning techniques used include Random Forest, ID3, C4.5, and SVM. The student data set from the UCI Machine Learning Repository is used in the experimental study.</p> <p>There are several types of educational data mining [1] used to examine educational data. To keep track of students, teachers, and courses, educational institutions maintain a large amount of data.
This data includes students' personal and academic information, teachers' personal and academic information, syllabuses, and other materials such as question papers and circulars. In order to improve the lives of their students and teachers, a number of institutions and non-profit organizations have begun to employ educational data mining.</p> <p>Student performance is one of the most important concerns for every institute. It is possible to predict the performance of pupils [2] from their previous academic achievement, and students' talents and interests appear to be related to their performance. In this way, educators may give greater attention to the students who need it most.</p> <p>A framework for predicting student performance is presented. Students' performance data is used as input to this system. The student data set is preprocessed to eliminate noise and to ensure that the input data is consistent before use. The input data set is then subjected to a variety of machine learning techniques, including Random Forest, ID3, C4.5, and SVM, and the classification results of the several methods are compared [3][4].</p> <p>The experimental study relies on the UCI student performance data set [5], which contains 649 instances, each described by 33 attributes. The University of Minho, Portugal, provided this data set as a gift to the community. Fig. 1 shows the classification accuracy of the several machine learning methods when the student data is fed into them.</p> <p>Fig. 1.
Classification of Student Data</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Guna Sekhar Sajja, Harikumar Pallathadka, Khongdet Phasinam, Samrat Ray https://spast.org/techrep/article/view/2715 Forward and Backward Engineering in Inventory Management System 2021-10-15T14:56:01+00:00 Gurjapna gurjapna.kaur@gmail.com <p>This research carries out forward engineering and backward engineering [1] for an application that manages inventory across different product categories. The inventory management system manages prices such as the retail price, wholesale price, and e-commerce price, along with the actual cost, of products falling under any category.</p> <p>An inventory can also be created for any new category. In this paper, the traditional software engineering process, or forward engineering, is carried out, and a software re-engineering process, or backward engineering, is proposed for better and faster end results. The approach used here is a product-based approach.</p> <p>Forward engineering is the last milestone in the inventory management application. Backward engineering [2] is also performed because forward engineering took an ample amount of time to reach the expected end result.</p> <p>Forward engineering follows the classical software engineering phases of analysis, design, and implementation. In the product-based approach used here, the analysis phase maps to identifying the product and the broad category it falls under, while the design and implementation phases map to determining the product’s actual cost, retail price, wholesale price, and e-commerce price.
Besides forward engineering, this study also stresses the backward or reverse engineering process: first assessing the market trend to determine which product is most liked or purchased in recent times, and only later finalizing the different prices such as the retail, wholesale, and e-commerce price.</p> <p>Backward engineering [3] follows implementation recovery, design recovery, and analysis recovery. As this model follows the product-based approach, analyzing the top-rated products on online e-commerce platforms, or the products most often purchased and liked by customers there, is the first step of reverse engineering. After this, the research focuses on analyzing the product’s price: its actual cost, retail price, wholesale price, and e-commerce price.</p> <p>In this way, implementation recovery maps to analyzing the market trend to identify the product and decide its category, while design recovery and analysis recovery map to setting the products’ prices along with their actual cost. The existing application is written in Java, and the new application will be rebuilt using a PHP framework, so the application spans two different programming languages. The analysis observes the behavior of the application under forward engineering and backward engineering [4] on a few parameters, such as which process consumes more time when a larger number of products is considered.</p> <p>The analysis also compares the programming languages used, for example in which language the computation is much faster, and determines in which software engineering process, forward or backward engineering [5], the product-based approach is more fruitful.
The parameters considered are the processing time for identifying and analyzing the product, and the amount of time consumed to reach the final end product.</p> 2021-10-21T00:00:00+00:00 Copyright (c) 2021 Gurjapna https://spast.org/techrep/article/view/1474 A Machine Learning Based Framework for Pre-processing and Classification of Medical Images 2021-09-29T12:29:09+00:00 Mr. Shehab Mohamed Beram Mohamed Beram abhishek14482@gmail.com Harikumar Pallathadka ieeemtech@gmail.com Indrajit Patra ieeemtech@gmail.com Dr. P Prabhu ieeemtech@gmail.com <p>Medical imaging plays an essential role in disease diagnosis and treatment. Image processing has been applied in many scientific domains, such as medicine and biology, where researchers use textural features to represent distinct types of cells or analyse photos to discern between living and dead cells [1][2].</p> <p>Because the standard methods of microscopic image analysis of the damaged area depend entirely on a labor-intensive approach with a limited number of bone samples, predictability and consistency are difficult to achieve. As a result, digital image processing was employed to create a new scheme that includes numerous phases such as pre-processing, object representation, feature extraction, classification, and image interpretation [3][4].</p> <p>The first and most fundamental step of image processing is to remove noise without interfering with diagnostic information. Earlier processes remove noise but add blur to the image; to obtain a precise result, we implemented soft and hard thresholds with varying coefficients [5].</p> <p>Wavelet denoising was found to be a powerful image enhancement tool. In the future, we plan to use the threshold function for medical images, and surface images can be denoised using an improved execution parameter.
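As a rough illustration of this kind of wavelet denoising (a numpy-only, single-level Haar sketch under our own assumptions, not the authors' actual tooling; the threshold `t` is a made-up parameter), one decomposes into subbands, soft-thresholds the details, and reconstructs:

```python
import numpy as np

def haar2(img):
    """One-level 2D Haar transform: approximation (LL) plus the three
    detail subbands (LH, HL, HH) of an even-sized image."""
    a = np.asarray(img, dtype=np.float64)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # low-pass along rows
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # high-pass along rows
    ll = (lo[0::2] + lo[1::2]) / 2.0       # then along columns
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    h, w = ll.shape
    lo = np.empty((2 * h, w)); hi = np.empty((2 * h, w))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((2 * h, 2 * w))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def soft(x, t):
    """Soft threshold: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(img, t=10.0):
    """Threshold only the detail subbands; keep the approximation intact."""
    ll, lh, hl, hh = haar2(img)
    return ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))
```

A hard threshold would instead zero out coefficients below `t` while leaving the rest unchanged; the soft variant shown here additionally shrinks the survivors, trading a little bias for smoother output.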
In the second stage of preprocessing, the wavelet provides a compact representation of the image, dividing it into subbands such as the approximation, horizontal, vertical, and diagonal estimates [6][7].</p> <p>Our proposed work deals with preprocessing to eliminate noise and obtain smooth images. This procedure helps improve image quality and remove false segments. KNN, SVM, and ANN classification algorithms are then used to classify the image datasets.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Mr. Shehab Mohamed Beram Mohamed Beram, Harikumar Pallathadka, Indrajit Patra, Dr. P Prabhu https://spast.org/techrep/article/view/69 ALGORITHMS AND EFFICIENCY OF GREEN COMPUTING 2021-10-09T05:40:44+00:00 Prof.(Dr.) Bhavana Narain narainbhawna@gmail.com Dr. Manjushree Nayak nayaksai.sairam@gmail.com Usha Sharma usha28383@gmail.com <p>The energy crisis brings green computing, and green computing needs algorithms and mechanisms to be redesigned for energy efficiency. In computer science, the analysis of algorithms is the determination of the resources (such as time and storage) necessary to execute them. Most algorithms are designed to work with inputs of arbitrary length. Usually the efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity). The analysis of energy consumption in green computing considers various efficient algorithms. Computing with green algorithms can enable more energy-efficient use of computing power. In this paper we consider various algorithms for computing energy consumption in green computing.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Prof.(Dr.) Bhavana Narain, Dr.
Manjushree Nayak, Usha Sharma https://spast.org/techrep/article/view/1716 DigiVoter: Blockchain Secured Digital Voting Platform with Aadhaar ID Verification 2021-10-08T07:54:45+00:00 Navamani T M navamani.tm@vit.ac.in Tajinder Singh Sondhi sstajinder1@gmail.com Shivam Ghildiyal shivam.ghildiyal97@gmail.com <p>The Election Commission of India has described India's Electronic Voting Machines (EVMs) as dependable and impeccable, yet comparable electronic voting machines used around the world have been shown to suffer from genuine security issues. This research aims to build a digital voting platform secured with the revolutionary concept of blockchain. The model secures the blocks using 512-bit SHA-2 (Secure Hash Algorithm 2). For user authentication, the proposed model incorporates biometric data, which is mapped to a hashed form of the voter's unique Aadhaar card number. Thus, a unique platform is proposed to make the voting procedure easier, secure, and fault-tolerant using blockchain.</p> <p>Blockchain has diverse potential in developing countries, where the primary focus is on the element of trust. In 2014, Denmark's Liberal Alliance party declared that it wanted to use blockchain innovation for secure electronic voting (e-voting) at its yearly gathering. That was a great initiative for progress in this field, involving technology that aids us daily in various ways, at larger scale and with more productivity, and it motivated us to do something relevant in India too.</p> <p>The current political situation of India greatly motivates us to put our engineering minds into the revolution of digital India. This undertaking intends to create a Digital Voting Platform (DigiVoter) secured with the progressive and recent idea of blockchain technology.</p> <p>The system currently in use lacks transparency.
Without transparency, there is always a question regarding validity. Since many things are not known to the public, manipulating information becomes an easy task for criminals, who seek to tamper with information according to their needs. Also, since the current system is centralized, it has a single point of failure and is thus prone to various types of cyber-attacks, mainly classified as software attacks (using code to manipulate information) and hardware attacks (using an electronic hardware device to physically overwrite information).</p> <p>In blockchains, records are maintained as a continuous list and are secured using cryptography. Each record, denoted a block, contains an index, a timestamp, a hash linking to the preceding block, and transaction data. This design makes blockchains secure. In the proposed approach, the Aadhaar number is hashed using the 512-bit SHA-2 algorithm. Along with this, user authentication is performed: the user's fingerprint is scanned and mapped to his/her unique Aadhaar card number, whose hash is then calculated again in order to achieve privacy and an extra level of security. This hash is also used to cross-verify whether the user is voting multiple times, and a separate radix tree (a space-optimized trie) is maintained to drastically speed up this check.</p> <p>The innovative concept applied here is that a voter's unique fingerprint, mapped to the unique Aadhaar card number in its cryptic form (the Aadhaar hash), together with the vote details (the vote hash), is used to generate the hash of the current block.
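A minimal sketch of this hashing scheme follows; the field names, the `|`-separated layout, and the dummy Aadhaar string are our own illustrative choices, not the authors' specification:

```python
import hashlib

def sha512_hex(data: str) -> str:
    """SHA-512 (a member of the SHA-2 family), as a hex digest."""
    return hashlib.sha512(data.encode("utf-8")).hexdigest()

def make_block(index, prev_hash, aadhaar_number, vote, timestamp):
    """Only the SHA-512 digest of the Aadhaar number is stored, never the
    raw number; the block hash then chains the index, timestamp, previous
    hash, Aadhaar hash, and vote hash together."""
    aadhaar_hash = sha512_hex(aadhaar_number)
    vote_hash = sha512_hex(vote)
    block_hash = sha512_hex(
        f"{index}|{timestamp}|{prev_hash}|{aadhaar_hash}|{vote_hash}")
    return {"index": index, "timestamp": timestamp, "prev_hash": prev_hash,
            "aadhaar_hash": aadhaar_hash, "vote_hash": vote_hash,
            "hash": block_hash}
```

Because the block hash covers both the Aadhaar hash and the vote hash, changing either the voter identity or the vote yields a different block hash, which is what makes every block unique.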
This multi-layered security model of blockchain will not only provide data security by keeping the votes decentralized and the chain immutable, but also create an extra level of authentication, so that each and every block is unique, both in terms of its hash and the vote it contains.</p> <p>The proposed system is secure by design since it saves the hash of a user’s Aadhaar instead of the raw number. Along with this, double security is ensured since the vote itself is protected in a blockchain.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Navamani T M, Tajinder Singh Sondhi, Shivam Ghildiyal https://spast.org/techrep/article/view/2389 TOWARDS INVESTIGATION OF VARIOUS LOAD BALANCING TECHNIQUES IN CLOUD COMPUTING 2021-10-10T13:12:41+00:00 Zeba Quereshi abhishek14482@gmail.com Abhay Jain ieeemtech1@gmail.com <p>Abstract<br>Cloud computing is an on-demand service where customers may access common IT resources, information, software and other equipment at any time. It is a web-based development that offers virtual resources over the Internet as a service. The higher the cloud use, the higher the charge. Allocating load across processing elements is a challenging problem: in a multi-node system, there is a very good possibility that some nodes will be idle while others are overloaded. The objective of load balancing algorithms is to keep the load on each processing element balanced.<br>Cloud computing [1] gives ubiquitous access to shared pools of configurable system resources and higher-level services which may be delivered quickly, often through the Internet, with minimal management effort. Like a public utility, cloud computing relies on the pooling of resources to create coherence and economies of scale.<br>Figure 1 depicts a generic cloud computing model.
Third-party clouds allow firms to focus on their core competencies rather than on computer infrastructure and upkeep. Cloud computing, according to proponents, allows businesses to avoid or reduce upfront IT infrastructure expenditures. Cloud computing [2] supporters also say that it enables companies to get their applications up and running faster, with greater manageability and less maintenance, and that it enables IT staff to adjust resources more quickly to meet changing and unexpected demand.</p> <p>Figure 1: Cloud Computing Model<br>Cloud providers usually employ a "pay-as-you-go" approach, which might result in unanticipated operational costs if administrators are unfamiliar with cloud pricing methods.<br>Cloud load balancing [3] [4] [5] is the technique used to distribute requests among different computing resources. Each job should be scheduled correctly in order to balance the load so that each user gets service in the shortest time. Round robin, ant-colony optimization, particle swarm optimization, max-min, min-min, and others are all load balancing methods.<br>A technique for random load balancing in cloud computing is devised, in which incoming user requests are instantly assigned to resources at random. This random strategy aims to decrease the requests' waiting time. The proposed load balancing technique is simulated in the Cloud Analyst tool, and a performance comparison between the proposed random method and other available load balancing methods is conducted.
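The random assignment described above, contrasted with a classic round-robin scheduler, can be sketched as follows. This is a toy model only, not the Cloud Analyst simulation; the function names and node labels are hypothetical.

```python
import random

def random_balance(requests, nodes, seed=None):
    """Assign each incoming request to a node chosen uniformly at random."""
    rng = random.Random(seed)
    assignment = {node: [] for node in nodes}
    for req in requests:
        assignment[rng.choice(nodes)].append(req)
    return assignment

def round_robin_balance(requests, nodes):
    """Assign requests to nodes cyclically, for comparison."""
    assignment = {node: [] for node in nodes}
    for i, req in enumerate(requests):
        assignment[nodes[i % len(nodes)]].append(req)
    return assignment
```

Round robin yields a near-perfectly even split by construction, while the random policy is only even in expectation; comparing response times between the two is exactly the kind of study the abstract reports.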
It is found that the proposed random method performs better in terms of response time.</p> 2021-10-11T00:00:00+00:00 Copyright (c) 2021 Zeba Quereshi, Abhay Jain https://spast.org/techrep/article/view/280 The ELLIPTIC CURVE CRYPTOGRAPHY APPLIED FOR (k,n) THRESHOLD SECRET SHARING SCHEME 2021-09-11T14:31:50+00:00 SUNEETHA CH schivuku@gitam.edu Neelima CH schivuku@gitam.edu <p>The invention of the secret sharing scheme by Adi Shamir, along with the prevalent advancements, offers strong protection of the secret key in a communication network. Shamir’s scheme is established using the Lagrange interpolation polynomial. The group manager or dealer splits the secret S to be communicated into n pieces and allots the n pieces to n participants. A subgroup of k or more participants of the group can come together to reconstruct the secret key. Later, cryptanalysis of secret sharing schemes turned towards cheater detection, where a cheater's motivation is to fool the honest participants. The present paper aims to describe a modification to the (k,n) threshold secret sharing scheme using elliptic curve cryptography to thwart dishonest shareholders and faked shares. In this scheme the group manager or dealer distributes the shares among the participants as affine points on the elliptic curve, so that share modification by the participants or faked shares can be easily detected.</p> 2021-09-11T00:00:00+00:00 Copyright (c) 2021 SUNEETHA CH, Ms https://spast.org/techrep/article/view/1025 The Role of Deep Learning and Deep Neural Networks in Predicting and Measurement of Quality of Orange Fruits 2021-09-20T09:10:55+00:00 Pravin Ghatode pravin.cse@orientaluniversity.in Sanjay Kumar Sharma sanjaysharmaemail@gmail.com <p>Non-destructive measurement of natural product quality is essential for life sustenance and rural agriculture. Fruits on the market should accommodate customers' needs.
Orange fruit is generally analysed by visual inspection, with a scale used as the consistency metric. This study demonstrates the prediction and calculation of orange quality using features extracted from the fruit's outer shell. Researchers have studied this area using various methods such as machine learning, SVM classifiers, ANNs, computer vision, image processing, and others, which have proven ineffective in predicting and measuring the quality of orange fruits. The proposed research works on the problem of orange quality using deep neural network methods and a multi-layer approach to obtain more reliable results, up to 95%, with a non-destructive procedure.</p> <p>Manual fruit quality assessment by visual inspection is a laborious and time-consuming job. It also suffers from inconsistency of opinion among different assessors. For manual quality assessment and grading, farmers, buyers and retailers incur extra cost, which adds to the price of the fruit. There is a need for a real-time automatic fruit quality prediction system with classification of variety and grading. However, it is very difficult to predict the quality and variety classification of fruits using non-destructive technology in real time. Many factors need to be considered in fruit grading and quality prediction. Visual features including size, weight, volume, shape, colour, external defects and, more specifically, the outer texture of fruits are very important factors for fruit quality grading.
Internal flavour factors such as sweetness, bitterness, acidity, saltiness and moisture, and fruit texture attributes such as hardness, crispness and nutrients, also seriously affect fruit grading.</p> <p>Sweetness and flavour are desirable attributes used for quality control and assurance of citrus fruit, and are largely determined by total soluble solids (TSS), titratable acidity (TA) and the TSS:TA ratio.</p> <p>As per inputs received from the senior executive Mr. Karale of the ICAR-Central Citrus Research Institute of India, based in Nagpur, Maharashtra, and our own research study, no prior work exists on predictive analysis of the quality of oranges without using any hardware or machine.</p> <p>Considering the orange fruit quality problem, it is proposed to work on orange quality prediction by applying deep learning algorithms to real-time images of the outer shell texture of oranges.</p> <p>Figure 1 Different citrus fruits</p> <p>Most of the work on grading and extracting fruit features is based on Near Infrared (NIR) analysis technology, which works for single-seed feature prediction and not for groups of seeds [1] - [6]. This technology may also cause harm to human beings due to the use of infrared.</p> <p>Researchers have used a multi-class SVM with K-means clustering to classify diseases and fuzzy logic to determine the magnitude of the disease [7] - [13].</p> <p>Research based on image processing to predict fruit surface features has potential for predicting the colour of oranges in an objective, non-contact way; this line of work uses the Gray Level Co-occurrence Matrix (GLCM) method to sort and grade fruits and is mostly applied in fruit sorting techniques.
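The GLCM texture feature mentioned above can be sketched in a few lines of NumPy. This is a minimal sketch, assuming horizontal pixel adjacency and a small number of gray levels; the function names are illustrative, not from the cited works.

```python
import numpy as np

def glcm(image: np.ndarray, levels: int) -> np.ndarray:
    """Count co-occurrences of gray levels for horizontally adjacent pixels."""
    m = np.zeros((levels, levels), dtype=np.int64)
    left = image[:, :-1].ravel()   # each pixel paired with ...
    right = image[:, 1:].ravel()   # ... its right-hand neighbour
    np.add.at(m, (left, right), 1)
    return m

def contrast(m: np.ndarray) -> float:
    """Texture contrast: (i - j)^2 weighted by the normalized co-occurrence."""
    p = m / m.sum()
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * p).sum())
```

Statistics such as contrast, computed from the co-occurrence matrix of a fruit-surface image, are the texture descriptors typically fed to a sorting or grading classifier.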
This technique requires a hardware setup and does not work in a real-time environment [14] - [21].</p> <p>Studies also use machine vision and artificial neural network technology, which applies supervised learning to detect defects on food and fruit surfaces, with an accuracy rate of up to 80% [22] - [24].</p> <p>The fuzzy logic (FL) method models how humans make decisions involving all intermediate possibilities between the digital values NO and YES (0 and 1). Research work on detecting fruit quality has used the fuzzy logic method to determine the maturity level of a fruit [25].</p> <p>Figure 2 Green Colored Orange with more juice and sweetness</p> <p>Figure 3 Yellowish Orange with less juice and more Sweetness</p> <p>Figure 4 Green and Yellowish Orange with more juice and more sweetness</p> <p>The above analysis strongly suggests that existing work operates on either a single seed or a single fruit picture, which cannot provide a real-time prediction of fruit quality and is also less effective at extracting the inner features of fruits.</p> <p>Considering real-time prediction and feature extraction, the proposed system will be able to predict the internal quality of orange fruits using deep learning techniques. It is intended to provide an automated, non-destructive solution, with or without hardware, to the ICAR-Central Citrus Research Institute of India based in Nagpur, Maharashtra. By considering varieties of oranges from different regions for system training, the results will be more accurate. The new approach using DNNs (Deep Neural Networks) can be used for multi-layer feature extraction from the fruit surface to predict the quality and grading of oranges.</p> 2021-09-20T00:00:00+00:00 Copyright (c) 2021 Pravin Ghatode, Dr. https://spast.org/techrep/article/view/2578 CLIENT SERVER APPLICATION TO AID THE NECESSITIES OF THE NEEDY 2021-10-14T20:17:08+00:00 P.
Bini Palas, binipalas16@gmail.com Hariohm Varush, kannanarchieves@gmail.com V. Chandru kannanarchieves@gmail.com K. Hariharan kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p>Purpose: The objective of the paper is to introduce an application that allows donating funds and necessities to orphanages through a webpage which is incredibly safe and user friendly.</p> <p>Methodology: A system is developed which employs online servers with the help of socket communication and client-server communication using a Content Delivery Network (CDN). The system addresses the issues found in existing systems by maintaining a wide list of orphanages, including their vital information and donation information. It also has an authentication system which requires information only privy to the orphanage managers, such as their licence number, FCRA certificate and 80G certificate, and is extremely hard to crack. This boosts confidence and trust among the donors. The first stage of the proposed project is to create a medium between the server and all of its clients. This is done using socket communication. An Application Programming Interface (API) serves as an interface for the application. Client-server communication is carried over the Content Delivery Network. Authentication is an important part of the project for secure and safe transactions.</p> <p>Findings: The findings indicate that orphanages will be able to receive donations in an authentic manner. The project uses sockets for communication between the different clients connected to the server. The system is then able to send data between the clients and the server at the transport layer using sockets, so that connected clients do not have to refresh the connection to see changes. The REST API is a major component of the system. Representational State Transfer (REST) is an architectural style for applications that make use of web services over HTTP.
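The socket-based push between client and server described above can be illustrated with a minimal exchange over a connected socket pair. This is a didactic sketch only, assuming Python's standard `socket` module and a `socketpair` standing in for a real network connection; it is not the paper's implementation.

```python
import socket

def demo_exchange(message: bytes) -> bytes:
    """Send a message from a 'client' socket to a 'server' socket and
    echo an acknowledgement back, illustrating transport-layer delivery
    without the client polling or refreshing the connection."""
    server, client = socket.socketpair()
    client.sendall(message)             # client pushes an update
    data = server.recv(1024)            # server receives it immediately
    server.sendall(b"ack:" + data)      # server pushes the change back
    reply = client.recv(1024)
    server.close()
    client.close()
    return reply
```

In the real system the server would keep one such socket open per connected client and write updates to all of them as donations arrive.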
It consists of a set of rules that developers abide by to create their own API. At its core, a CDN is a network of servers linked together to deliver content as cheaply, swiftly, reliably and securely as possible. A CDN stations servers at the exchange points between different networks for faster delivery. These points, called Internet exchange points, are the primary locations where different Internet providers exchange traffic with each other. Registration of orphanages requires the orphanage owner to share their licence number, FCRA registration and 80G certificate. The orphanages have to submit these mandatory documents to get registered. To authenticate the clients/users, JSON Web Token (JWT) authentication is used. JWT authentication uses JSON Web Tokens generated for an application. It works by creating a token that authenticates or proves the user’s identity, which is then transferred to the client. The client then sends this token back to the server with each subsequent request, so that the server knows which identity a request comes from.</p> <p>Originality/value: In this study, the developed webpage provides an authentication system which prevents fraud and legitimizes the orphanages, creating trust between the orphanages and the donors that the orphanage will make good use of the funds.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 P. Bini Palas, Hariohm Varush, V. Chandru, K. Hariharan, Mayakannan Selvaraju https://spast.org/techrep/article/view/1957 Detection of DDoS attack in SDN using SVM 2021-10-05T12:56:18+00:00 Neethu S neethus@rvce.edu.in <p>Software Defined Networking (SDN) facilitates dynamic network configuration based on program logic in order to improve performance. The control plane and the data plane make up an SDN. DDoS attacks can lead to financial losses or worse if they disrupt network services.
A Support Vector Machine classifier is used to distinguish attack traffic from normal traffic in the network. The Support Vector Machine (SVM) is a supervised machine learning classification method [1]. Data is represented as points in an n-dimensional space, with n being the number of features. The next stage is finding the hyperplane that separates the two classes; choosing the correct hyperplane is the key to designing an efficient system. The cases that define the hyperplane are referred to as support vectors. The hyperplane produced by an ideal SVM should clearly split the instances into two non-overlapping groups. In practice, the classification makes some mistakes, so the SVM seeks to maximize the margin by selecting an optimal hyperplane. To distinguish the attack traffic from the normal traffic, we utilize an SVM classifier [3]. Implementing an SVM or any other classifier on an SDN architecture to categorize network traffic is still experimental, and conclusive results could signal a tectonic shift in how networks are secured against service interruption threats. The focus is on analyzing changes in traffic characteristic values and determining the feasibility of the proposed solution. In the deployed SDN experimental environment, the experiment's detection accuracy rate is high.
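The margin-maximizing idea above can be illustrated with a toy linear SVM trained by sub-gradient descent on the regularized hinge loss. This is a teaching sketch on made-up data, not the paper's traffic classifier or its feature set.

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Sub-gradient descent on the hinge loss with L2 regularization.
    Labels y must be in {-1, +1}; returns weights w and bias b."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                # point inside the margin (or misclassified): pull it out
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                # only the regularizer acts on well-separated points
                w += lr * (-lam * w)
    return w, b

def predict(X, w, b):
    """Class = side of the learned hyperplane (+1 attack, -1 normal, say)."""
    return np.where(X @ w + b >= 0, 1, -1)
```

With traffic-characteristic vectors as `X` and attack/normal labels as `y`, the learned hyperplane plays exactly the separating role the abstract describes.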
This paper presents a method for detecting and mitigating attacks with less resource consumption by utilizing SDN's central control.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Neethu S https://spast.org/techrep/article/view/488 IoT Data Analysis Using Different Clustering Algorithms - A Survey 2021-09-14T10:15:13+00:00 Prabhat Das udit.mamodiya@poornima.org <p><span style="font-weight: 400;">The Internet of Things (IoT) is a network of electronic devices integrated with sensors, software, and other technologies that utilise the Internet to connect and exchange data with other devices and systems [1]. The term Internet of Things was first coined by Kevin Ashton of Procter and Gamble in 1999 [2]. Devices have shared data with each other since the 1970s; for instance, the famous Coca-Cola vending machine at Carnegie Mellon University is considered the first interconnected device to share data [3]. Since then we have come a long way, to where now even devices like toothbrushes are connected. The Internet of Things is essentially a massive collection of interconnected devices, along with various sensors, sharing data with each other through the network. As per techjury.net, 2.5 quintillion bytes of data are generated per day [4]. This data, generated every second, accumulates into a large body of heterogeneous data. With the increasing usage of these devices, there has been a major rise in data creation and consumption, driven by the quick development of contemporary technology and the advent of the Internet of Things. Though IoT technology has solved many problems since its inception, it has also introduced many challenges. One such challenge is data management, with the goal of extracting useful information and patterns from the vast amount of unstructured IoT data.
However, due to some limitations, we still encounter issues and challenges in obtaining patterns and meaningful information, and thus we formulate a research agenda to get a sense of the challenges that one might encounter while conducting research on clustering this massive amount of unstructured IoT data.</span> <span style="font-weight: 400;">This data can hence be used to extract meaningful information and to enrich our lives. IoT data is an important source of abstract and contextual data that exhibits the five V's of big data. Considering all these aspects of IoT, our aim is to come up with a solution for clustering the data. Data analysis for homogeneous data is a relatively simple process, but data analysis for heterogeneous data is more difficult since it comprises diverse sorts of data, with various datatypes and multiple difficulties. Similarly, data created by IoT devices and sensors may have a wide range of variances and problems, and in order to collect the data quickly, we recommend grouping devices together based on their commonalities, so that data clustering becomes easier and more efficient afterwards. In this survey paper, we looked at a number of studies that discussed data clustering and network characteristics. Because IoT devices generate a large quantity of data, several algorithms may be employed depending on how they operate. The motive of this survey is to analyze the clustering algorithms frequently applied to this task and to identify their pros and cons, so that an efficient clustering method can be chosen to overcome the challenge posed by IoT data.</span></p> 2021-09-15T00:00:00+00:00 Copyright (c) 2021 Udit Mamodiya https://spast.org/techrep/article/view/2660 IoT and Blockchain rostrum - 2021 Implementation, challenges and Future Phenomenal.
2021-10-18T06:33:52+00:00 Aafrin Julaya aafrinbanujulaya.20.rs@indusuni.ac.in Akshara Dave aksharadave.mca@indusuni.ac.in <p>Both IoT and blockchain are areas that focus on the security of data, and much research has already been carried out in this domain. Still, there are many opportunities in IoT that need to be addressed. For example, the number of devices is increasing while memory space is decreasing for the network-layer communications protocol that relays datagrams, whose routing function enables internetworking and essentially establishes the Internet (IPv4 &amp; IPv6); there are also security issues such as vulnerable access control mechanisms.<br>This paper’s main objective is to focus on the advantages of blockchain and implement it on IoT. This paper also discusses how the combination of blockchain and IoT can create wonders in the field of data security for smart devices. Moreover, based on past data, some challenges of IoT are also addressed. If asked, a typical user will say that blockchain technology generally focuses on cryptocurrency only, so the other use cases of blockchain are also discussed.<br>Blockchain allows devices to be connected with each other via a common link in a sequential manner (one after another). IoT can use the blockchain as a ledger that monitors and records how devices interact, coordinate and communicate, which state they are in, and how they serve other IoT devices. In IoT, blockchain can also support the implementation of cloud computing and artificial intelligence. Earlier researchers focused on cross-domain commissioning and authority control by trusted third parties; hence, if attacked, the third parties are at huge risk too.
Blockchain IoT implementation is required in healthcare and fitness applications and in sensor devices for which data confidentiality is a key requirement.<br>Using the various IoT devices available around us, we can obtain structured data in Electronic Health Records (EHRs). The initial idea behind this research is to provide distributed, tightly secured and authorized access to these sensitive data with the help of blockchain technology.<br>This includes a web and mobile application allowing the patient as well as the medical and paramedical staff to have secure access to health information. Use cases for blockchain will start with small projects that reduce duplicative work but can eventually shift to a system where patients control access rights to their data.<br>As future work in 2021, an industrial system using the EHRs can be implemented. The system must support a wider range of sensors that can be implemented on a wearable device. Additionally, a second layer of security can be implemented by encrypting the data before storing it in the blockchain.<br>Keywords: IoT, Blockchain, Healthcare, Security, Electronic Health Records, IP, Confidentiality, Encryption, Authentication, Artificial Intelligence and Machine Learning.</p> 2021-10-19T00:00:00+00:00 Copyright (c) 2021 Aafrin Julaya, Akshara Dave https://spast.org/techrep/article/view/1385 Similarity Learning-Based Supervised Discrete Hash Signature Scheme for Authentication of Smart Home Scenario 2021-09-28T16:14:24+00:00 Swapna Sudha K swapanak41@gmail.com Jeyanthi N njeyanthi@vit.ac.in <p class="Abstract"><span lang="EN-US">Smart home technology is one of the significant emerging applications of the Internet of Things that enables the user to control home devices remotely.
In this context, investigating and addressing IoT security issues is highly challenging, as the operating strategies of IoT applications differ due to their heterogeneous characteristics. At this juncture, an anonymous and efficient authentication mechanism is necessary for facilitating secure communication in the smart home scenario, since the user and home communication channels are generally highly insecure. In this paper, a Similarity Learning-Based Supervised Discrete Hash Signature Scheme (SLSDHS) is proposed for achieving secure user authentication in the smart home. It leverages the mutual association between possible semantic labels in order to learn more stable hash codes, a notable improvement over traditional hash code approaches. The communication overhead and computation overhead of the proposed SLSDHS are found to be considerably lower than those of the benchmarked schemes used for comparison. The security analysis of the proposed SLSDHS scheme, evaluated using informal analysis, formal analysis and AVISPA tool-based model checks, confirmed its strength with respect to automated testing of internet security protocols.</span></p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 Swapna Sudha K, Jeyanthi N https://spast.org/techrep/article/view/2072 Detection of Stress Using Machine Learning Approach 2021-09-30T18:17:51+00:00 Siddartha S S siddarthass.ec18@rvce.edu.in Shwetha Baliga shwethaprabhun@rvce.edu.in Prithvi V Patil prithvivpatil.ec18@rvce.edu.in Rajath Rao T N rajathraotn.ec18@rvce.edu.in Ruturaj B Jadhav ruturajbjadhav.ec18@rvce.edu.in <p>Stress is a common problem in modern life. It is defined as the brain’s reaction to any physical or psychological demand. It is important to identify stress and eliminate it. Prolonged stress can lead to physical and physiological problems.
A study from the Delhi-based TCHO showed that 74% of Indians suffer from stress and 88% suffer from anxiety. In this direction, detection of stress in individuals becomes very important. With the help of technologies like machine learning and deep learning, the detection of stress using Electroencephalography (EEG) signals can be achieved in less time with far better accuracy. This project aims at developing tools, with the help of machine learning, for detecting stress using EEG signals. It also aims at developing a machine learning model that can learn from individual data rather than generalizing over the entire dataset. The aim is to obtain a model with better accuracy that functions with less error. Python provides various open-source machine learning libraries such as TensorFlow and Keras. With the help of these libraries, artificial neural networks are used to build the model. The model is trained and tested using an open-source dataset. The pre-processed EEG signals are taken from the dataset and features are extracted. The input features are used to train and test the model. The deep learning model is built using neural networks and a Gated Recurrent Unit (GRU). It is designed to give a classification output for three classes: high stress, low stress and no stress. The accuracy vs. epoch graph is shown in fig. 1. After trials and experimentation, the final model achieved a training accuracy of 99.33% and a validation accuracy of 95.15%.
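A single forward step of the GRU at the heart of the model can be written out explicitly. This is a bias-free sketch of the standard GRU equations in NumPy for illustration; the actual model is built with Keras/TensorFlow layers, and these weight names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One step of a Gated Recurrent Unit (standard formulation, no biases).
    x: input features at this timestep, h: previous hidden state."""
    z = sigmoid(Wz @ x + Uz @ h)            # update gate: how much new state to take
    r = sigmoid(Wr @ x + Ur @ h)            # reset gate: how much old state to use
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate hidden state
    return (1 - z) * h + z * h_tilde        # blend old state and candidate
```

Stacking this step over the EEG feature sequence, then feeding the final hidden state to a 3-way softmax, yields the high/low/no-stress classifier the abstract describes.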
Precision, recall and F1-scores are shown in table 1.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Siddartha S S, Shwetha Baliga, Prithvi V Patil, Rajath Rao T N, Ruturaj B Jadhav https://spast.org/techrep/article/view/1233 A Novel Transfer Learning Technique for Detecting Breast Cancer Mammograms using VGG16 Bottleneck Feature 2021-09-26T06:49:23+00:00 SASHIKANTA PRUSTY sashi.prusty79@gmail.com <p>In general, detecting cancer manually in whole-slide images requires significant time and effort in a laborious process. Breast cancer represents the highest percentage of cancers and is the second most common cancer overall affecting women, with approximately 87,090 deaths in India as reported by the ICMR (Indian Council of Medical Research) in 2018 [1]. Breast tumours are classified in two ways: a) benign tumours, which are not very harmful and do not cause breast cancer, as they are just formations of tissue that do not spread over the breast; and b) malignant tumours, which are extremely dangerous and form abnormal cells that can develop all over the breast. Unrestrained cell growth can affect the lymph system and destroy healthy tissue. Although many technologies and tools are available, detecting cancerous cells at an early stage remains a big challenge for both doctors and patients. At present, deep learning (DL) is a kind of Artificial Intelligence (AI) technique that replicates how the human brain processes information. DL applications can detect, recognize, and evaluate malignant tumours from images using neural networks that can learn from data without supervision. In this work, a deep learning based transfer learning model has been implemented; it is a pre-trained model which uses knowledge from a previous task to boost generalization on a new one. Transfer learning is widely popular in deep learning right now since it can train deep neural networks with a small amount of data.
Because most real-world diseases do not have millions of labelled data points to train such complicated models, this is particularly valuable in the healthcare profession. Here, a transfer learning model called VGG16 (Visual Geometry Group 16) has been implemented. As the name suggests, VGG16 uses 13 convolutional layers and 3 fully connected layers to train on the dataset. Instead of having a huge number of hyper-parameters, VGG16 concentrates on 3x3 convolution filters with stride 1, always using the same padding, and max-pool layers with a 2x2 filter and stride 2. ImageDataGenerator from keras.preprocessing has been used to import data with labels into the model. The performance of this model and of depth-wise Convolutional Neural Networks in medical imaging is examined in this paper, with a focus on breast cancer classification using mammography images. After training, the VGG16 classifier is able to predict whether a breast image contains any type of cancerous cells. The overall process involves detecting the mass and segmenting it on mammography images of the DDSM (Digital Database for Screening Mammography) data, followed by performance evaluation. This research looks at a smaller dataset, MIAS, and attempts to classify normal vs. cancerous images. VGG16 bottleneck features and a dense layer with heavy regularization are used for feature extraction and to train the model. ReLU activation has been applied to the dense layer of 256 units to prevent negative values from being forwarded through the network. A final dense layer of two units with softmax activation determines whether tumours are malignant or benign. Based on the model's confidence in which class an image belongs to, the softmax layer outputs a value between 0 and 1.
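The softmax output described above is easy to state concretely: the two logits from the final dense layer are exponentiated and normalized so the class scores lie in (0, 1) and sum to 1. A minimal, numerically stable sketch (illustrative only, not the Keras internals):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Stable softmax: shift by the max logit, exponentiate, normalize."""
    z = logits - logits.max()   # shift avoids overflow in exp
    e = np.exp(z)
    return e / e.sum()
```

For a two-unit output layer, the larger of the two resulting probabilities gives the malignant-vs-benign decision, and its value is the model's confidence.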
The basic idea is to apply the pre-trained VGG16 model to breast cancer mammogram images in order to classify cancerous tissues and to segment the mass present in the images, if available, by considering both malignant and benign tumours. Finally, this VGG16 model has been successfully implemented and produces a network test score of 87.99 percent, which could be applied to breast cancer prediction.</p> <p>Fig. 1 depicts the keywords taken from Scopus journals regarding breast cancer, analysed using RStudio software. The image classification pipeline of the VGG16 model is shown in fig. 2; it uses multiple layers such as convolutional, pooling and dense layers to classify breast mammogram images of size 224x224x3 pixels into a 1x1x1000 output. Here, 32 breast images were randomly picked, containing both left and right views, as shown in fig. 3 (figsize=(16, 16), hspace=0.2, wspace=0.001). The MIAS dataset was balanced by applying the balanceViaSmote() method with image size (224, 224). Fig. 4 displays the 206 breast images equally distributed into normal and abnormal (malignant) classes by applying the SMOTE function. Fig. 5 depicts the VGG16 model containing 16 layers, among which are 13 convolution layers (the blue rectangles), 5 max-pool layers (the red rectangles), and 3 fully connected layers, used to classify mammography breast images. Fig. 6 shows that the VGG16 model gives a network test score of 87.99 percent on the MIAS dataset after 75 epochs of training.</p> 2021-09-28T00:00:00+00:00 Copyright (c) 2021 SASHIKANTA PRUSTY https://spast.org/techrep/article/view/603 Performance evaluation of Convolution Neural network for handwritten Digit Recognition 2021-09-16T14:04:12+00:00 Manjula T R tr.manjula@jainuniversity.ac.in <p>Image classification problems are very well addressed by computer vision methods.
However, these methods are not devoid of manual feature extraction. Recent advancements in artificial neural networks, and particularly convolution neural networks, have proven to outperform the conventional methods. The CNN, a deep learning technique, is capable of addressing a large number of classification and recognition problems. However, there is no one unique model that works for all cases, and a CNN exhibits a high degree of flexibility in the selection of model parameters such as filter count, kernel size, number of layers, pooling size and optimiser. The performance of CNN is evaluated for handwritten digit recognition on the MNIST database. The kernel size and the type of optimiser have the greatest contribution to accuracy. A single-layer CNN model with a filter count of 32, a kernel size of 9x9, a pooling size of 2x2 and adam as the optimiser has achieved a recognition accuracy of 99.13%.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Manjula T R https://spast.org/techrep/article/view/1491 An Analytic Study on Streaming Landscape Transformation Preferences of Consumers towards Shift to OTT Platforms during Pandemic Covid-19 2021-09-30T19:26:42+00:00 Seema Garg sgarg3@amity.edu Navita Mahajan navitamahajan07@gmail.com Pranav Gupta rahulguptapranav@gmail.com <p><strong>Abstract</strong></p> <p>The present research deals with the transformation of streaming landscape preferences of consumers across various OTT (Over the Top) platforms. The emphasis has been on understanding the key criteria that encourage users to adopt any particular OTT platform. The study is based on an analysis of their viewing habits, including the time spent by customers on such platforms, the release period of the content, and the type of content preferred by them. Apart from this, the impact of the COVID-19 pandemic has also been captured in this study, along with the dynamics of regional content. Profile analysis and multilinear regression have been used to analyse the results. 
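The multilinear regression mentioned above can be sketched as an ordinary least-squares fit. The predictor columns and viewing-minute targets below are invented for illustration; they are not the study's survey data.

```python
import numpy as np

# Hypothetical predictor matrix: columns = [exclusive-content score, international-content score]
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 5.0]])
y = np.array([30.0, 28.0, 52.0, 50.0, 70.0])   # daily viewing minutes (made up)

# Add an intercept column and solve the least-squares problem X_aug @ beta ~= y
X_aug = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
predicted = X_aug @ beta
```

The fitted coefficients in `beta` then indicate how strongly each factor is associated with viewing time, which is the kind of positive-influence finding the abstract reports.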
The results of the study point to a shift in favour of OTT consumption going forward: even low-intensity users are consuming over forty minutes of content on OTT platforms on a daily basis. Factors such as the latest exclusive and international content were identified during the study as having a positive influence. The increased content consumption during the pandemic has also positively contributed to sign-ups for OTT services, and the transformation is not temporary. The implication of the study is that OTT platforms should capitalize on the current pandemic scenario to experiment with pay-per-view movie releases. An increased focus on regional content can be beneficial in tapping unexplored segments.</p> <p>&nbsp;</p> <p>Keywords: OTT platforms, Transformation, Consumers, Covid -19 pandemic</p> <p>&nbsp;</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Seema Garg, Dr, Pranav Gupta https://spast.org/techrep/article/view/2884 SECURE BC-HRQLRL POL PROTOCOL FOR THE SMART HOME GATEWAY TO AVOID DATA FORGERY 2021-10-19T14:40:53+00:00 Pradeep Kumar Tiwari pradeeptiwari.mca@gmail.com Arjun Singh vitarjun@gmail.com <p>The world is made smarter, and more responsive, by the rapid growth of the Internet of Things (IoT). This transformation is exploited by the Smart Home (SH), which appears to be the future wave. However, data security and privacy problems for centralized gateway data and data sharing have also increased with the broadening adoption of IoT. Using Blockchain (BC) technology, this paper proposes an approach for data privacy and security in an SH to solve these challenges. The work generates a solution to overcome the reported security drawbacks of commonly used permissioned SH gateways. 
To avoid data forgery, the proposed work offers a secure BC-based Highly Regularized Q-Learning reinforcement-learning-based proof-of-learning (HRQLRL PoL) protocol for the SH gateway. Initially, the work registers the user details and saves their personal details under the security provision of the developed Cipolla’s Extended Euclidean Distance Algorithm Based Lattice Cryptosystem (CEED-LC) method. The proposed cryptosystem offers small Key Generation (KG) with high security, and the data’s confidentiality and integrity are maintained. Then, the blocks are produced with a unique gateway enrolment ID under the agreement of the HRQLRL PoL protocol, which secures data with low computation time and memory. The chances of majority attacks, such as adversarial attacks, are avoided, and decentralized distribution of the data is offered. Experimental results illustrate that, by attaining a higher throughput along with a higher Packet Delivery Ratio (PDR), the proposed framework is very secure against vulnerabilities. It also remains very scalable when compared to existing state-of-the-art methods.</p> 2021-10-21T00:00:00+00:00 Copyright (c) 2021 Pradeep Kumar Tiwari, Arjun Singh https://spast.org/techrep/article/view/222 ELLIPTIC CURVE CRYPTOGRAPHY APPLIED FOR (k,n) THRESHOLD SECRET SHARING SCHEME 2021-09-08T16:20:07+00:00 SUNEETHA CH schivuku@gitam.edu Neelima CH schivuku@gitam.edu <p>The invention of the Secret Sharing Scheme by Adi Shamir, along with subsequent advancements, offers strong protection of the secret key in a communication network. Shamir’s scheme is established using the Lagrange interpolation polynomial. The group manager or dealer of the group splits the secret S to be communicated into n pieces and allots the n pieces to n participants. A subgroup of k or more participants of the group can come together to reconstruct the secret key. 
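As background, Shamir's (k,n) scheme can be sketched in a minimal form: a random polynomial of degree k-1 hides the secret in its constant term, and Lagrange interpolation at x = 0 recovers it from any k shares. This sketch works over an ordinary prime field, not the elliptic-curve variant the paper proposes.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; all arithmetic is done in this field

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):          # Horner evaluation of the degree-(k-1) polynomial
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term, i.e. the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, k=3, n=5)
recovered = reconstruct(shares[:3])   # any 3 of the 5 shares suffice
```

In the paper's elliptic-curve modification the dealer distributes shares as affine curve points instead, precisely so that a forged or modified share fails a curve-membership check.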
Later, the cryptanalysis of secret sharing schemes moved in the direction of cheater detection, where a cheater’s motivation is to fool the honest participants. The present paper aims to describe a modification of the (k,n) threshold secret sharing scheme using elliptic curve cryptography to guard against dishonest shareholders and faked shares. In this scheme the group manager or dealer distributes the shares among the participants as affine points on the elliptic curve, so that share modification by the participants or faked shares can be easily detected.</p> 2021-09-08T00:00:00+00:00 Copyright (c) 2021 SUNEETHA CH, Ms https://spast.org/techrep/article/view/2406 Identification of Asymmetric DDoS Attacks at Layer 7 with Idle Hyperlink 2021-10-11T12:08:32+00:00 Mahadev Agra mahadev.agra@gmail.com Khushwant Singh erkhushwantsingh@gmail.com Yudhvir Singh dr.yudhvirs@gmail.com Dheerdhwaj Barak barakdheer410@gmail.com Kiran Sood kiransood1982@gmail.com <p>Asymmetric distributed denial-of-service (DDoS) attacks have become very complicated to deal with because of the use of several Internet Protocol (IP) addresses in these attacks. The purpose of this research is to study the different strategies used by an attacker to force the unavailability of the targeted server. Sometimes a large file is requested at a high transfer rate so that the targeted website hangs for legitimate customers. A novel mechanism, in the form of a Dynamic Honey Link (DHL), is designed to evade identification by sophisticated DDoS attacking tools, and the findings developed here are applied to the detection of asymmetric attacks. 
Parameters such as IP addresses, time of request, and the difference in time between requests from IPs are used in this mechanism and are verified by applying correlation coefficients and p-values.</p> 2021-10-11T00:00:00+00:00 Copyright (c) 2021 Mahadev Agra , Khushwant Singh , Yudhvir Singh , Dheerdhwaj Barak, Kiran Sood https://spast.org/techrep/article/view/2443 Technological change in Agriculture: From urban to rural path for Agridrones 2021-10-12T13:14:54+00:00 Pradeep Kumar Tiwari pradeeptiwari.mca@gmail.com Kusumlata Jain Kusumlata.Jain@jaipur.manipal.edu Shivaani Gupta shivani.gupta@vit.ac.in Smaranika Mohapatra smaranika.mohapatra@jaipur.manipal.edu Smaranika. Mohapatra smaranika.mohapatra@jaipur.manipal.edu <p>Two main domains of the Indian economy are agriculture and manufacturing. More than 50 percent of the total workforce depends directly or indirectly on agriculture, which contributes around 17-18 percent of the country’s GDP [1]. Uncertainty in production, changing government policies and urbanization affect this percentage, and the agricultural workforce is estimated to drop to 25.7 percent by 2050 [2]. While some of these challenges may or may not be controllable by human intervention, the use of different machines and technologies can help cope with those natural factors in one way or another. Unmanned Aerial Vehicles with sensors, synonymous here with Flying Agriculture Networks (FANET), are a new technology widely used for different chores in agriculture and have proven their efficiency in improving production. This technological transformation supports the farming communities and can become a strong key to feeding the growing population of the world. With the increasing availability of affordable systems, the use of smart systems may boost production towards rising consumer expectations and support the rapid growth of agricultural production. 
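Returning to the DDoS-detection abstract above: the correlation-coefficient check on inter-request times can be sketched as follows. The timing samples are hypothetical, and the p-value step (a t-test on r) is omitted for brevity.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical inter-request gaps (seconds) from two suspected bot IPs:
# near-identical timing patterns yield a correlation close to 1.
ip_a = [0.10, 0.11, 0.10, 0.12, 0.10, 0.11]
ip_b = [0.10, 0.11, 0.11, 0.12, 0.10, 0.11]
r = pearson_r(ip_a, ip_b)
```

A correlation near 1 between the request-timing series of two IPs is the kind of evidence that suggests coordinated (bot-driven) traffic rather than independent human users.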
India accounted for a 55 per cent market share of the US$ 200-250 billion global services sourcing business in 2019-20 [3], making it the largest sourcing destination; on the other hand, India accounted for only 8 percent of smart agriculture devices. The use of technology for agriculture in India is a big challenge due to traditional beliefs and social and political issues. This survey paper presents the Indian perspective on the use of FANETs in agriculture, their challenges, and opportunities. A detailed survey was conducted with 2000 farmers across India on the availability of technology and the challenges they face in using it. The paper also discusses government support and initiatives in the last few years, and current reports on the use of UAVs in Indian agriculture.</p> 2021-10-12T00:00:00+00:00 Copyright (c) 2021 Pradeep Kumar Tiwari, Kusumlata Jain, Shivaani Gupta, Smaranika Mohapatra, Smaranika. Mohapatra https://spast.org/techrep/article/view/1808 Enhancement of Imbalance Data Classification with Boosting Methods: An Experiment 2021-10-09T05:42:19+00:00 Smita Ghorpade smita.ghorpade@gmail.com Ratna Chaudhari kadamnehaaa@gmail.com Seema Patil sima.patil1969@gmail.com <p>In the data mining and machine learning area, the expansion of ensemble methods has received good attention from the scientific community. Scientists have proven the increased efficiency of ensemble classifiers in various real-world problems such as image analysis and classification, deep learning, speech emotion recognition, sentiment analysis, forecasting crypto-currency, and prediction of gas consumption. Ensemble methods integrate several learning algorithms, which gives better predictive performance compared to any of the basic learning algorithms alone. Combining several learning models shows better performance than single base learners. The idea of boosting emanates from the area of machine learning. 
Classification of imbalanced data sets is a broad research area in which the class distribution is skewed or biased. It is a challenging task for a machine learning algorithm to handle an imbalanced data set that lacks an appropriate distribution of data samples in each class. The class distribution can depart from a small bias to an extreme imbalance, which leads to a minority class and a majority class. The minority class is the class for which very few data samples are predicted by the model; the majority class is the class for which large numbers of data samples are predicted by the model. Standard machine learning algorithms gravitate towards the majority class samples, which results in imperfect predictive accuracy over the minority class. Several approaches have been introduced to strengthen learning algorithms towards the minority class samples. One of the most well-known approaches is the ensemble method. The ensemble method combines a collection of the best classifiers to improve classification performance. There are five popular advanced ensemble techniques: boosting, bagging, blending, voting and stacking. In ensemble learning, boosting is one of the most promising techniques, in which many weak classifiers are aggregated to construct a strong classifier. The beauty of boosting is its serialized learning nature, which aims to minimize the errors of the previously modelled classifier. The most popular boosting algorithms are AdaBoostM1, LogitBoost, Gentle AdaBoost, GradientBoost, XGBoost, LightGBM, CatBoost, SMOTEBoost, RUSBoost, MEBoost, AdaCost, AdaC1, AdaC2 and AdaC3 [15]. Boosting in ensemble learning has made prominent progress in classification tasks. In this study, initially, the problem domain is analysed for imbalanced data set classification. 
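The serialized re-weighting that boosting performs can be sketched with a minimal AdaBoost over 1-D decision stumps. The toy data below is deliberately imbalanced (8 majority vs. 2 minority examples); it is illustrative only, not the Thyroid, Glass or Ecoli3 sets used in the experiment.

```python
import math

def stump_predict(x, threshold, polarity):
    """A decision stump: predict +polarity at or above the threshold, else -polarity."""
    return polarity if x >= threshold else -polarity

def train_adaboost(xs, ys, rounds=10):
    """Serial boosting: each round fits the best stump on the re-weighted
    sample, then up-weights the examples that stump got wrong."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None
        for t in xs:                        # candidate thresholds = data points
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if stump_predict(xi, t, pol) != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = max(err, 1e-10)               # avoid log(0) for a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        # up-weight misclassified samples so the next stump focuses on them
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, t, pol))
             for xi, yi, wi in zip(xs, ys, w)]
        s = sum(w)
        w = [wi / s for wi in w]
        ensemble.append((alpha, t, pol))
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, t, p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

# Toy imbalanced set: 8 majority (-1) vs 2 minority (+1) examples
xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ys = [-1, -1, -1, -1, -1, -1, -1, -1, 1, 1]
model = train_adaboost(xs, ys)
acc = sum(predict(model, x) == y for x, y in zip(xs, ys)) / len(xs)
```

Cost-sensitive variants such as AdaCost modify exactly the weight-update line, penalising minority-class errors more heavily.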
Then the problem is formulated by framing null and alternative hypotheses. The null hypothesis is stated as “There is no significant difference between a single classifier and a classifier with the ensemble techniques AdaBoostM1 and Bagging”. The alternative hypothesis is stated as “The ensemble techniques AdaBoostM1 and Bagging work better than a single classifier”. To test the hypothesis, we have carried out an experiment. We have chosen three imbalanced data sets, named Thyroid, Glass and Ecoli3. Our main objective is to check the accuracy score of ensemble methods with the mentioned classifiers. Initially we applied four classifiers (Naïve Bayes, Multi-layer Perceptron, Locally Weighted Learning and REPTree) to these three data sets. The accuracy score of each classifier was measured. Then we applied four boosting algorithms along with these classifiers and observed the results. To examine the performance of the boosting algorithms, a comprehensive statistical test suite with evaluation metrics is used.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Smita Ghorpade, Ratna Chaudhari, Seema Patil https://spast.org/techrep/article/view/2556 A review on finding the fractal dimension using Differential Box Counting and Reticular Cell Counting Methods 2021-10-17T11:57:52+00:00 murali krishna Senapaty muralisenapaty@gmail.com SANTOSINI SAMANTARAY santosini06@gmail.com <p>Fractal dimension is a measurement of self-similarity and of the degree of complexity with which an image is covered. It is a useful parameter for calculating the self-similarity index while analyzing an image. Both 1D and 2D cases can be taken into consideration. Fractal geometry was first invented and introduced by Mandelbrot in 1983. 
Fractal-based analysis has proved to be of great interest for digital image analysis. It is used in an extremely wide range of applications, for example in finance and stock markets, medicine, food quality inspection, structural engineering, and even art. Our objective is to find the FD of grey-scale and colour images using the two most popular FD calculation methods, the Differential Box Counting (DBC) method and the Reticular Cell Counting (RCC) method. We conclude by identifying the more accurate and minimal FD value between the DBC and RCC methods. To comprehend the idea of fractal dimension we first need to elaborate on dimension itself. A line has dimension 1, a plane has dimension 2, and a 3D shape such as a cube has dimension 3. The dimension of a line is 1 because there is just a single axis, or it can be said that there is only one direction in which one can move along a line, whereas in the case of a plane there are two different axes along which to travel, one for length and another for breadth. Digital colour images are treated much like grey-level images when calculating the fractal dimension. The only difference is that in a grey-scale image we calculate the fractal dimension using the grey intensity value, whereas in a colour image we need to consider both the grey-level intensity value and the R, G, B intensity values. From these grey values, all the points contained in the image are considered for the calculation of the dimension according to the various methods. In a grey image, there is just one grey value for every pixel of the image. 
Artificial objects have flat surfaces which can be described with the help of polygons or smooth curved surfaces as required. However, natural objects often have rough, jagged, irregular edges which are very hard to represent with curves and polygons; for natural objects or images we require both curves and polygons simultaneously. Trying to draw natural objects by means of straight lines is difficult: to draw natural objects, the computer or machine must draw jagged lines. Two endpoints are given to the machine, which then tries to draw a straight line and place all the points of the object around that line. Finally, using a best-fit calculation, we need to find the two closest points to the line in order to compute an approximate estimate of the FD. Then, by applying a Euclidean distance calculation, the distance between the two points is determined. 
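The box-counting idea behind such FD estimates can be sketched as follows. This is the plain box-counting variant (not the DBC or RCC methods the review compares), applied to a toy binary grid: count the occupied s x s boxes at several scales and fit log N(s) against log(1/s).

```python
import math

def box_count_dimension(grid, sizes=(1, 2, 4, 8)):
    """Estimate fractal dimension: the least-squares slope of
    log N(s) versus log(1/s), where N(s) counts occupied s x s boxes."""
    n = len(grid)
    logs_inv_s, logs_n_boxes = [], []
    for s in sizes:
        count = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if any(grid[a][b] for a in range(i, min(i + s, n))
                                  for b in range(j, min(j + s, n))):
                    count += 1
        logs_inv_s.append(math.log(1 / s))
        logs_n_boxes.append(math.log(count))
    k = len(sizes)
    mx = sum(logs_inv_s) / k
    my = sum(logs_n_boxes) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(logs_inv_s, logs_n_boxes))
             / sum((x - mx) ** 2 for x in logs_inv_s))
    return slope

# A completely filled 16x16 image behaves like a plane: its dimension is ~2
filled = [[1] * 16 for _ in range(16)]
dim = box_count_dimension(filled)
```

DBC refines this by stacking boxes along the grey-level axis as well, so that pixel intensity, not just occupancy, contributes to the count.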
Nowadays it is possible to discover the self-similarity of a natural image with the help of the above-mentioned concept of fractals, and in this section we have given a short description of FD and fractal geometry.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 murali krishna Senapaty, SANTOSINI SAMANTARAY https://spast.org/techrep/article/view/1917 Review of Psychometric Data Analysis for Healthcare Based on Emotional Intelligence 2021-10-09T13:05:31+00:00 Madhumitha K madhumithak41@gmail.com Chenchu Lakshmi V vclakshmi12@gmail.com Kayalvizhi S kayalvizhi.s@eec.srmrmp.edu.in BhavathaRanjanni S bhavatharanjanni@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p><strong>Purpose:</strong> The main purpose of this paper is to review the various existing systems for emotion analysis in audio, video and text using machine learning algorithms.</p> <p><strong>Methodology:</strong> This review is based on data collected from more than 10 papers which describe how emotion analysis is performed on audio, video and text using machine learning algorithms. The findings and results of each paper are described in this review.</p> <p><strong>Findings:</strong> Owing to the technological advancements in the current world, and also due to the Covid-19 pandemic, there is high competition among people, resulting in locked-in syndrome, stress, and various other psychological problems such as bipolar disorders, schizophrenia and severe depression. The need for a proper psychometric analyser is therefore increasing tremendously, so that a person's emotions can be identified and necessary actions taken according to their health condition. These data prove that emotion analysers could help people overcome their issues if there is proper end-to-end communication between a psychologist and the affected person through a web or mobile application. 
Though these systems are found to have some drawbacks, the accuracy of some of them is high.</p> <p><strong>Originality: </strong>The data collected from different papers, and our observations on these papers, show the need for a proper website to help these people. So, the main objective of our review is to develop a project with the following objectives: (1) to identify the emotion of a person in video, audio and text using machine learning models; (2) to make it easy for the psychologist to identify the patient’s emotions in online consultations; (3) to identify emotion during a video call using facial expressions, during an audio phone call using the tone of the person, and in text chat using the words and their context.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Madhumitha K, Chenchu Lakshmi V, Kayalvizhi S, BhavathaRanjanni S, Mayakannan Selvaraju https://spast.org/techrep/article/view/504 Opportunities created by Digital Technology and increased Data 2021-09-14T10:13:14+00:00 Abhay Pratap Chauhan udit.mamodiya@poornima.org <p><span style="font-weight: 400;">The major goal of this paper is to raise awareness of how digital technologies are altering traditional practices in a variety of sectors. It is necessary to know about these technologies as they are making our lives easier. There are also some threats to us from these technologies, as they collect our data, our financial transactions, and our likes and dislikes. The one thing that makes these technologies unreliable is their security. Our hypothesis is that every organisation should create its own official language in which to code the algorithms securing its data, and these languages should be kept confidential to officials. 
Then, if someone does not know the syntax of that language, it will be difficult for them to steal the data.&nbsp;</span></p> <p><span style="font-weight: 400;"><img src="https://spast.org/public/site/images/uditm/mceclip0.png"></span></p> <p><strong>Fig. 1.</strong><span style="font-weight: 400;"> Big Data Analytics Flowchart[11].</span></p> <p><span style="font-weight: 400;"><img src="https://spast.org/public/site/images/uditm/mceclip1.png"></span></p> <p><strong>Fig. 2.</strong><span style="font-weight: 400;"> Accessing Cloud Storage[12].</span></p> <p>&nbsp;</p> 2021-09-15T00:00:00+00:00 Copyright (c) 2021 Udit Mamodiya https://spast.org/techrep/article/view/581 A Survey of Optical Character Recognition Techniques on Indic Script 2021-09-17T12:37:55+00:00 Ramya Ch chramya0719@gmail.com Vishnu Vardhan B mailvishnu@jntuh.ac.in <p>Optical Character Recognition (OCR) is a technique that converts printed text and images into a digitized form which can be manipulated by a machine. It has many application sectors, such as banking, finance and legal applications. Initially, researchers addressed character recognition and mapping by proposing many image processing algorithms. Most researchers focused on the Latin-script language English, as it was supported by the ASCII encoding standard. Later, OCR techniques for other languages also began gaining momentum. With the advent of technology and the Unicode revolution, native-language OCR solutions started emerging. In this paper we focus on the latest machine learning techniques applied to OCR for English, and two languages from the Indian subcontinent are presented. 
Of the two Indian languages, one is the stroke-based language Hindi, and the other is the cursive-script-based language Telugu.</p> 2021-09-19T00:00:00+00:00 Copyright (c) 2021 Ramya Ch, Dr. B. Vishnu Vardhan https://spast.org/techrep/article/view/1435 A Study of Intelligent Reflecting Surface beyond 5G Communication Systems 2021-09-29T12:31:43+00:00 Shaik Rajak rajak_shaik@srmap.edu.in <p>&nbsp;Intelligent Reflecting Surfaces (IRS), or Reconfigurable Intelligent Surfaces (RIS), are a key enabling technology that can enhance the performance of future communication systems by using passive reflecting elements [1]. The IRS model is designed with a large number of passive elements that operate at high frequencies, millimeter wave and sub-millimeter wave, to reflect the incident signal towards the receiver without any relay [2]. Spectral efficiency has become a major concern over the last decade. Recent investigations have also shown that the IRS can effectively serve users with high data-rate requirements beyond 5G communication systems. In recent years, with the rapid increase in data usage, mobile users demand more transmission power. It is difficult to satisfy all users with limited transmission power; however, the IRS has the ability to solve this power requirement problem through its reflecting elements [3-4]. Many research works also encourage the use of the IRS to improve the performance of THz communication systems without additional power consumption.</p> <p>In our study on the IRS, we explored previous research efforts and give an overview of the IRS across all directions of wireless technology. This report categorizes the work as follows: spectral efficiency as a function of the number of reflecting elements, the impact of the IRS on energy-efficient communication models, optimal transmit power with large IRS blocks, security, deep learning and machine-learning-based control, and effective utilization of reflecting surfaces. 
Later, we discuss the role of the IRS in next-generation communication systems such as massive MIMO and mmWave technologies.</p> <p>In this paper, we also analyze the EE of the IRS by varying the number of users and the transmit power. Numerical results in Fig. 1 show that the EE increases with the IRS for a smaller number of users, then decreases gradually with a large number of users. In addition, we examine the EE of the IRS by varying the transmit power in Fig. 2, where the EE increases until the transmit power reaches 7 Watts. From the above analysis, we notice that much research remains to be done on the IRS to find the optimal number of users and the transmit power that maximize the EE. We believe that this paper will help researchers and industry experts to understand recent work on the IRS, as well as its basic differences from other technologies, which will help overcome the challenges and find a suitable environment for implementing IRS models with more accurate results.&nbsp;</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Shaik Rajak https://spast.org/techrep/article/view/1249 Effective Online Proctoring System 2021-09-26T14:39:31+00:00 Gautham Sivakmar gauthammass2@gmail.com Sivakumar R rsivakumar@vit.ac.in <p>In the past 1-2 years the field of education has seen drastic changes. The Covid-19 pandemic has pushed even the most reluctant institutions to shift towards online-centric learning [1-2]. Hence the need for online proctoring has also risen, but due to their hasty development, most such software packages lack crucial functions and options or are not as user-friendly as they are supposed to be. Keeping in mind that the vast majority of the population is not so familiar with gadgets, the requirement for an easily understandable basic interface is momentous [3-4].</p> <p>This paper aims to create such an interface, with the necessary functions and options for users, while also maintaining a strict privacy policy. 
We address the major drawbacks of other existing similar software: we provide a unique exam ID to every student for fast and efficient tracking; we assign the time and date for exams (figure 1); we give students more information on what stage of evaluation their papers are at, and constantly update it, so that proctors can keep constant track and thereby maintain an efficient evaluation process; we allow users to view their submissions in the future whenever necessary; and we provide a trouble-free experience for institutions and proctors (figure 2). This project can be further improved by integrating machine learning, using AI to constantly scan incoming videos of candidates for malpractice. This system also proves useful during regular times of offline learning, owing to its efficient tracking and storage of records.</p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 Gautham Sivakmar, Sivakumar R https://spast.org/techrep/article/view/2754 A STUDY ON STATISTICAL ANALYSIS OF RISK FACTORS IN THE ASSESSMENT OF CARDIOVASCULAR EVENTS IN INDIAN POPULATION 2021-10-17T18:34:15+00:00 Sudha S sudhasubramaniam@gmail.com <p>Cardiovascular disease (CVD) has emerged as one of the major health problems all over the world. In this study, statistical analysis is performed to identify the association of diabetes, cholesterol, hypertension, obesity, sex and age with CVDs. A total of 713 patients were examined and their clinical parameters collected. The data were recorded and analysed using RStudio. The analysis suggests that age, sex, hypertension and diabetes are important risk factors for the occurrence of CVDs. The results of cases and controls were compared by the chi-square test. It shows that, at the 95% level of significance, there is an association between age, cholesterol, BMI, glucose and BP with CVD. Correlation of individual risk factors with carotid intima-media thickness (CIMT) is done by Pearson’s correlation. 
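The chi-square test of association used in the CVD study above can be sketched for a 2x2 case. The contingency counts below are invented for illustration, not the study's 713-patient data; for one degree of freedom the p-value has a closed form via the complementary error function.

```python
import math

def chi_square_2x2(table):
    """Chi-square test of association for a 2x2 contingency table
    (e.g. hypertension yes/no vs CVD case/control)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [[row1 * col1 / n, row1 * col2 / n],
                [row2 * col1 / n, row2 * col2 / n]]
    stat = sum((obs - exp) ** 2 / exp
               for row_obs, row_exp in zip(table, expected)
               for obs, exp in zip(row_obs, row_exp))
    # for 1 degree of freedom, P(X > stat) = erfc(sqrt(stat / 2))
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Hypothetical counts: rows = hypertensive / normotensive, cols = CVD / no CVD
stat, p = chi_square_2x2([[120, 80], [60, 140]])
```

A statistic above the 3.84 critical value (df = 1, 5% level) rejects independence, which is the form of the association findings the abstract reports.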
Prediction of CVD is carried out using various machine learning techniques from independent variables such as age, BMI, cholesterol, diastolic and systolic BP, glucose, gender, height, and weight. The CVD dataset is trained on seven different machine learning algorithms. The tunable parameters of each algorithm are tuned to get maximum accuracy. An increase in accuracy is observed with the random forest algorithm and the XGBoost algorithm. The XGBoost algorithm performs best, with an accuracy value of 98.85%.</p> 2021-10-21T00:00:00+00:00 Copyright (c) 2021 Sudha S https://spast.org/techrep/article/view/1472 Review of Quality of Service in Wireless Communication 2021-09-29T12:12:10+00:00 Ankit Pandit panditankit48@gmail.com <p>The growing demand for new technologies, services and content is changing the way users access the Internet. Because of the flexibility offered by wireless networks, operators must provide high-speed, high-quality transmission services to an increasing number of cellular users. This has become a reality thanks to the proliferation of wireless equipment such as laptops, smartphones, and tablets. There are two groups of access technologies: wired and wireless. In a wired network, the materials that make up the cable limit the performance that the end user can achieve. The wireless group has no physical connection and is gradually becoming the standard for remote applications; it offers a range of available bandwidth depending on the quality of the wireless network [1], characterized, for example, by noise interference and the distance of the user from the base station. 
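Returning to the CVD-prediction abstract above: the residual-fitting loop at the heart of gradient boosting, the principle behind the XGBoost algorithm it uses, can be sketched on toy 1-D data. The risk scores and targets below are invented, not the clinical dataset.

```python
def fit_stump(xs, residuals):
    """Best 1-D regression stump (threshold + two leaf means) by squared error."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x < t]
        right = [r for x, r in zip(xs, residuals) if x >= t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - (lm if x < t else rm)) ** 2 for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x < t else rm

def gradient_boost(xs, ys, rounds=20, lr=0.5):
    """Each round fits a stump to the current residuals and adds it,
    scaled by the learning rate -- the core idea behind boosted trees."""
    base = sum(ys) / len(ys)          # start from the mean prediction
    pred = [base] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: base + sum(lr * s(x) for s in stumps)

# Hypothetical 1-D risk score vs. outcome-probability-like target
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [0.1, 0.1, 0.2, 0.2, 0.7, 0.8, 0.9, 0.9]
model = gradient_boost(xs, ys)
```

XGBoost adds regularization, second-order gradients and deeper trees on top of this loop, but the additive residual-correction structure is the same.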
Currently, the costs associated with deploying cellular systems are high and, oddly enough, these networks have problems with indoor coverage.<br>In this article, we are interested in an important task of the eNodeB in the LTE network architecture [2], namely RRM (Radio Resource Management) [3], the purpose of which is to accept or reject requests for a network connection while ensuring the optimal allocation of radio resources between user equipments (UEs). It basically consists of two elements: admission control (AC) and packet scheduling (PS) [4]. In this paper, we focus on PS, which provides efficient allocation of radio resources in both directions, namely the uplink (in our case) and the downlink.<br>Various approaches and algorithms have been proposed in the literature to satisfy this need for efficient resource allocation; this variety of algorithms stems from the factors considered in order to optimally control the radio resources, in particular the type of traffic and the QoS [5] required by the UE.<br>This article explores the various scheduling algorithms proposed for LTE (uplink and downlink), and we offer our assessment and analysis.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Ankit Pandit https://spast.org/techrep/article/view/2790 Auction based Resource allocation in cloud computing using Blockchain Techniques 2021-10-17T18:03:20+00:00 N Vijayaraj vijaycseraj@gmail.com <p>Cloud computing is a new computing paradigm for IT, education, industry and researchers. Recently, cloud environments have been implemented based on virtualization, parallel computing, resource management, service-oriented architecture, distributed computing, etc. Cloud computing is mainly based on resource allocation. Under the condition of restricted local processing assets, e.g., on cell phones, it is natural for rational miners, i.e., consensus nodes, to offload the computational tasks of proof of work to cloud computing servers. 
Hence, we focus on the trading between the cloud computing service provider and the miners, and propose an auction-based market model for efficient computing resource allocation. In this model, resources are allocated between the cloud user and the cloud service provider using blockchain technology. We propose an auction mechanism that achieves optimal social welfare. Under the multi-demand bidding scheme, the social welfare maximization problem is NP-hard; we therefore design an approximation algorithm which guarantees truthfulness, individual rationality and computational efficiency.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 N Vijayaraj https://spast.org/techrep/article/view/1585 An investigation on the difference between pixel based and object based classification for Land Use/Land Cover from Landsat 8 imagery 2021-09-29T19:01:11+00:00 Rohini Selvaraj rohini.s2018@vitstudent.ac.in Suresh Kumar Nagarajan sureshkumar.n@vit.ac.in <p>Classifications of Land Use and Land Cover (LULC) have proved to be a useful tool for resource managers to understand the environmental changes that occur over time [1]. The objective of the work is to compare pixel-based supervised classification[2] with &amp; without Principal Component Analysis (PCA)[3] and object-based supervised classification[4]. In this study, the data captured from the Landsat 8 satellite by the Operational Land Imager (OLI) sensor was used for LULC classification. For comparison, six LULC classes were used: agriculture land, urban land, waterbody, dense forest, bare land, and uncultivated land. The maximum likelihood classification (MLC)[5] algorithm was employed to attain supervised pixel- and object-based classification. 
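As an illustrative aside, the maximum likelihood decision rule can be sketched as follows, assuming Gaussian class densities fitted per class (a minimal numpy sketch on hypothetical two-band data; the function names and toy values are assumptions, not the study's Landsat bands or implementation):

```python
import numpy as np

def fit_gaussians(X, y):
    """Estimate per-class mean and covariance from labelled pixel vectors."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def mlc_predict(X, params):
    """Assign each pixel vector to the class with the highest Gaussian
    log-likelihood (equal priors assumed; constant terms dropped)."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, cov = params[c]
        diff = X - mu
        inv = np.linalg.inv(cov)
        logdet = np.linalg.slogdet(cov)[1]
        # Mahalanobis distance per row plus log-determinant penalty
        ll = -0.5 * (np.einsum('ij,jk,ik->i', diff, inv, diff) + logdet)
        scores.append(ll)
    return np.array(classes)[np.argmax(scores, axis=0)]
```

In real use one Gaussian would be fitted per LULC class from training polygons drawn on the imagery; here the rule simply assigns each vector to the likelier of two toy clusters.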
The study investigated the difference between object-based and pixel-based classification for LULC mapping.</p> 2021-10-01T00:00:00+00:00 Copyright (c) 2021 Rohini Selvaraj, Suresh Kumar Nagarajan https://spast.org/techrep/article/view/200 Predictive Analysis of Flood Forecasting through Machine Learning Algorithms: Study on Cauvery Basin, India 2021-09-08T12:39:52+00:00 Shobhit Shukla shobhitshukla89@gmail.com <p>This paper explores several machine learning techniques, namely Nonlinear Autoregressive Network with Exogenous Inputs (NARX) [1], Artificial Neural Networks (ANN) [2], Tree Bagger [3], Support Vector Machine (SVM) [4], Gaussian Process Regression (GPR) and Adaptive Neuro Fuzzy Inference System (ANFIS) [4], for forecasting floods by predicting river flow in the Cauvery river basin of southern India. The techniques were applied to models constructed from combinations of antecedent river flow values from two gauging stations, and the results were compared for each technique. The paper utilizes three standard performance assessment measures, viz. Mean Squared Error (MSE), coefficient of correlation (R) and Nash-Sutcliffe coefficient (NS) [5], to assess the efficacy of the models developed. A comprehensive evaluation of the performance parameters for each model established that the Support Vector Machine model achieved the best performance among the models for flood forecasting.</p> 2021-09-08T00:00:00+00:00 Copyright (c) 2021 SHASHI KANT GUPTA, Shobhit Shukla https://spast.org/techrep/article/view/2351 EFFECTIVE DEMARCATION OF CORONAVIRUS DISEASE TRANSMISSION RATE USING RULE BASED TECHNIQUES 2021-10-07T10:21:44+00:00 Srinivasan SP srinivasan.sp@rajalakshmi.edu.in Anitha J anithaj.17@gmail.com <p>COVID-19 is a real, life-threatening disease which has locked down the whole globe. 
In spite of the rapid sharing of information and the inculcation of social responsibilities through multiple factors such as personal hygiene, social distancing and self-isolation, the spread of COVID-19 is increasing day by day. Controlling the spread of the disease at an early stage becomes the primary responsibility of the government. The judicial administration, along with health-care workers, struggles tirelessly to flatten the curve by taking measures to reduce the spread of the disease. The primary objective of this paper is to help the government in this aspect by buying time to control the spread of the virus. The World Health Organisation has noted that “COVID-19 is transmitted via droplets and fomites during close unprotected contacts between an infector and infected.” Hence the key strategy should focus on stopping the virus before transmission becomes community-wide. Locating areas with a very high likelihood of COVID-19 victims becomes the highest priority for blocking transmission. This can be done by identifying the parameters which influence the spread of the disease and applying them to the large volume of data obtained from area-wise surveys. In this paper, a decision tree is built using the Iterative Dichotomiser (ID3), and it is further compared with the Naive Bayes classification algorithm to check the accuracy of the result obtained by ID3. 
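The core of ID3 is choosing, at each node, the attribute with the highest information gain. A minimal sketch of that computation (plain Python on toy survey-style values; the variable names and data are illustrative assumptions, not the paper's actual survey parameters):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(attribute, labels):
    """Entropy reduction obtained by splitting the labels on an attribute's values."""
    n = len(labels)
    split = {}
    for a, l in zip(attribute, labels):
        split.setdefault(a, []).append(l)
    # Weighted entropy of the subsets after the split
    remainder = sum(len(sub) / n * entropy(sub) for sub in split.values())
    return entropy(labels) - remainder
```

A perfectly predictive attribute yields a gain equal to the full label entropy, while an uninformative one yields a gain of zero; ID3 recurses on the attribute with the largest gain.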
The obtained results on the test data are further analysed, and an improved optimal decision tree is built, which is used to identify and block the areas with the highest risk factor.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Srinivasan SP, Anitha J https://spast.org/techrep/article/view/1749 IOT BASED - SMART SHOPPING USING NEAR FIELD COMMUNICATION TAGS AND READERS 2021-10-08T10:32:41+00:00 Mayakannan Selvaraju kannanarchieves@gmail.com Kanaga Suba Raja.S skanagasubaraja@gmail.com R.S.Kumar kannanarchieves@gmail.com V.Balaji kannanarchieves@gmail.com Usha Kiruthika usha.kiruthika@gmail.com <p>Abstract: The ongoing Covid-19 lockdown has thrust a digital new normal upon everyone globally. The digital revolution under Industry 4.0 was estimated to take up to 10 years to gain acceptance, but Covid-19 brought about this behavioural change towards smart digital technologies within 100 days of the worldwide lockdown. Everything that used to happen normally in the past now has to integrate with a smart digital interface. Day-to-day shopping for essentials, which had very limited digital tech integration in the form of point-of-sale (POS) billing, has to drastically adopt hybrid online-cum-in-store smart tech for future survival. India has millions of mom-and-pop stores (kirana stores) that need to upgrade with smart tech. Considering their low investment and turnover, a cost-effective and phased upgradation is needed. Rapid assessment of small department stores has brought out a few low-cost entry points for smart tech adoption. Usually, the conventional barcode-based shopping system makes the customer's experience at the supermarket tough, as the customer needs to wait for a long time at the billing counter, which lowers their patience level. 
An AIoT-based intelligent shopping cart with built-in payment modules could be chosen to avoid long lines at billing desks and provide customers with a hassle-free shopping experience. Here, we propose to use NTAG215 NFC readers in the store for tracking, and the customer's smartphone running an in-store and online purchase app with a barcode reader for scanning the products. The customer gets to know the offers and discounts that are currently active on the product picked. In the next phases, smart carts and smart racks can be gradually integrated in a phased manner, without creating excessive upfront investments, making the approach suitable for Indian stores.</p> <p>Purpose: The objective of this paper is to create IoT-based smart shopping using near field communication tags and readers.</p> <p>Methodology: Rapid assessment of small department stores has brought out a few low-cost entry points for smart tech adoption. Usually, the conventional barcode-based shopping system makes the customer's experience at the supermarket tough, as the customer needs to wait for a long time at the billing counter, which lowers their patience level.</p> <p>Findings: An AIoT-based intelligent shopping cart with built-in payment modules could be chosen to avoid long lines at billing desks and provide customers with a hassle-free shopping experience. Here, we propose to use NTAG215 NFC readers in the store for tracking, and the customer's smartphone running an in-store and online purchase app with a barcode reader for scanning the products. 
The customer gets to know the offers and discounts that are currently active on the product picked.</p> <p>Originality/value: The main idea of the aforementioned proposed system is to satisfy customers with good-quality products, avoiding the disappointment customers may face with online purchases, and to reduce their waiting time at the billing counter by introducing mobile self-billing, which also lets them track whether their purchase list is going above budget before reaching the actual billing counter.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Mayakannan Selvaraju, Kanaga Suba Raja.S, R.S.Kumar, V.Balaji, Usha Kiruthika https://spast.org/techrep/article/view/911 Mr Novel Digital Filter Design for Noise Removal in Fetal ECG Signals 2021-09-16T12:16:30+00:00 S.M.Seeni Mohamed Aliar Maraikkayar seenimohamedali@sethu.ac.in Tamilselvi Rajendran tamilselvi@sethu.ac.in Parisa Beham M parisabeham@sethu.ac.in Amjath Hasan M amjathhasan.5@gmail.com <p>In maternal and fetal health-care research, it is necessary to analyse both the maternal electrocardiogram (ECG) and the fetal ECG (FECG) to assess the health status of the mother and fetus. Noise here refers to interference from the powerline, motion artifacts, the electromyogram and baseline wander during ECG measurement. In this scenario, the cardiotocography (CTG) signal plays a vital role in the measurement of the FECG, which includes Fetal Heart Rate (FHR) and Uterine Contractions (UC). In the FECG signal, it is necessary to filter out the noise present in order to classify the condition of the fetus accurately. In the recent literature, Infinite Impulse Response (IIR), Finite Impulse Response (FIR) and adaptive filters are predominantly used for noise removal in FECG signals. To date, achieving a high Signal to Noise Ratio (SNR) remains a major challenge in noise removal in biomedical signal processing. 
Motivated by the above issues, in this work a novel filter design is proposed to improve the SNR value. In the proposed design, an adaptive filter is convolved with an IIR filter; the result is again convolved with a Chebyshev filter to improve the filtering response of the system. The system performance has been evaluated based on the signal-to-noise ratio and the power of the FECG signal. The experimental results have also been compared with state-of-the-art filters, and it is observed that the proposed filter design achieves a high SNR and signal power of 39.92 dB and -100 dB respectively.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 S.M.Seeni Mohamed Aliar Maraikkayar, Tamilselvi R, Parisa Beham M, Amjath Hasan https://spast.org/techrep/article/view/1022 Implementation to Secure Iot Network Through SDN Controller with Blockchain 2021-10-22T16:41:50+00:00 Apeksha Sakhare apeksha.sakhare@raisoni.net <p>To enhance the security of SDN networks deployed within the cloud environment, software-defined networking (SDN) has evolved to change the everyday operation of existing networks. This work offers an implementation of SDN-authorized blockchain applied over the cloud. The SDN controller is used for network management and adaptation. This evaluation offers a summary of the typical security issues that arise when SDN is connected to clouds, outlines the core concepts of the recently introduced blockchain model, and argues the reasons that make blockchain a valuable protection element for solutions in which SDN &amp; cloud are associated. Because of this, a considerable rise in the quantity of user data (private, enterprise, commercial, etc.) is available over the internet, exposing it to extreme risks from harmful users [1]. Numerous security solutions have been proposed &amp; applied to safeguard users' data against such threats. 
Many of those solutions are realized using conventional networking strategies that are complicated and very hard to manage. These strategies depend on the manual configuration of devices, which leads to policy conflicts that can further compromise the security of networks. This problem can be addressed by deploying the Software Defined Networking (SDN) model, which delivers wide network visibility, centralized command, a flexible network structure &amp; simplicity of control through the separation of the control plane (network controller) &amp; data plane (forwarding devices). The actions of the forwarding devices are detected, directed &amp; commanded by the controller using the OpenFlow protocol. Herein, we advocate &amp; justify an SDN-based network-wide firewall, utilizing the capabilities of OpenFlow, as one of the protection solutions to limit corrupt traffic entering a network [2-3].</p> 2021-10-22T00:00:00+00:00 Copyright (c) 2021 Apeksha Sakhare https://spast.org/techrep/article/view/428 Deep Learning Models used to study the driver behaviour with alert system 2021-09-15T19:56:13+00:00 Ravinder Kaur rk0019@srmist.edu.in Dr. Jiendra Singh jitendrs@srmist.edu.in <p><span style="font-weight: 400;">The issue of safe driving is one that affects people all over the globe. A large number of fatal accidents occur. Driving a car is a difficult task that requires complete concentration. Distractions can be classified into three categories: visual distractions (driver's eyes are taken off the road), manual distractions (driver's hands are taken off the wheel), and cognitive distractions (driver's mind is taken off the driving task). A total of 36,750 people died in motor vehicle crashes in 2018, according to the National Highway Traffic Safety Administration (NHTSA). 
Our methodology automatically detects and notifies car owners when they are engaging in distracted driving behaviour.&nbsp;</span></p> <p><span style="font-weight: 400;">A Real-Life Drowsiness Dataset created by a research team at the University of Texas at Arlington was used to detect multi-stage sleepiness. We used the StateFarm dataset, which contained snapshots taken from a video captured by a camera mounted in the car, to create our visualisation.</span></p> <p><span style="font-weight: 400;">In the case of a classification algorithm where the forecast is a likelihood value in the range of 0 to 1, accuracy and logarithmic loss (also known as cross-entropy) are used to quantify the effectiveness of the system. Each layer serves a specific function: e.g., global average pooling, dropout layers, batch normalisation and dense layers.</span></p> <p><span style="font-weight: 400;">With weights from training on the ImageNet dataset, we used CNN, LSTM, VGG-16, ResNet-50, Xception and MobileNet models for the drowsiness and distraction datasets respectively. 
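The accuracy and logarithmic-loss metrics described above can be sketched as follows (a minimal numpy sketch on hypothetical probability forecasts, not the models' actual outputs):

```python
import numpy as np

def accuracy(y_true, y_prob, threshold=0.5):
    """Fraction of probability forecasts that round to the correct class."""
    return np.mean((y_prob >= threshold).astype(int) == y_true)

def log_loss(y_true, y_prob, eps=1e-15):
    """Binary cross-entropy; probabilities are clipped away from 0 and 1
    so the logarithm stays finite."""
    p = np.clip(y_prob, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
```

Accuracy only checks which side of the threshold a forecast falls on, while log-loss also penalises confident wrong probabilities, which is why both are reported together.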
We obtained good results from all of the architectures, and their accuracies are shown in Fig.1.&nbsp;<img src="https://spast.org/public/site/images/msrkaur3/img1.png" alt="" width="604" height="207"></span></p> <p><strong>Fig.1.</strong><span style="font-weight: 400;"> Accuracy of different algorithms per dataset</span></p> <p><span style="font-weight: 400;">This problem will be addressed by developing a recognition system that recognizes key characteristics of drowsiness and distraction, and sends out a warning when the driver becomes drowsy, before it is too late.</span></p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Ravinder Kaur, Dr. Jiendra Singh https://spast.org/techrep/article/view/2576 USE OF ALEXNET ARCHITECTURE IN THE DETECTION OF BONE MARROW WHITE BLOOD CANCER CELLS 2021-10-14T20:06:28+00:00 S.Karthigaiveni, kannanarchieves@gmail.com S.Janani, drsjananiece@gmail.com S.G.Hymlin Rose kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <h1>Abstract</h1> <p><strong>Purpose:</strong> To investigate the cancer-affected areas in white blood cell images.</p> <p><strong>Methodology:</strong> In order to train and evaluate our CNN model, we implement ten-fold cross-validation on the whole dataset, where 90% of the images are used for training and 10% for testing. The proposed system is implemented using CNN layer functions. The dataset is acquired from two different subsets of a dataset collection. The input to the model constitutes segmented cells of 227*227*3 images with zero-center normalization.</p> <p><strong>Findings:</strong> 92.86% accuracy has been achieved with 40 images using a two-layer convolutional neural network (CNN). CNN can perform with more than 10,000 images. 
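The ten-fold protocol quoted in the Methodology, where each fold holds out 10% of the images for testing and trains on the remaining 90%, amounts to the following index bookkeeping (an illustrative plain-Python sketch; the CNN training itself is out of scope):

```python
def k_fold_indices(n_samples, k=10):
    """Yield (train, test) index lists; each fold holds out roughly 1/k of the data,
    and every sample appears in exactly one test fold."""
    indices = list(range(n_samples))
    # Distribute any remainder across the first folds so sizes differ by at most 1
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size
```

With `k=10` the test portion of each fold is ~10% and the training portion ~90%, matching the split described above; the reported accuracy is then the average over the ten held-out folds.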
Based on a comparative analysis of this work with other works, CNN can produce the best results if a greater number of images is used.</p> <p><strong>Originality/value:</strong> In this study, the pre-trained convolutional neural network with multiclass models has been modified with a Support Vector Machine for classification of WBCs into different categories for leukemia detection.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 S.Karthigaiveni,, S.Janani,, S.G.Hymlin Rose, Mayakannan Selvaraju https://spast.org/techrep/article/view/1935 Enhanced Retinal Biometric System for the Diabetic Population using Modified Discrete Grey Wolf Optimization Algorithm 2021-10-09T12:36:47+00:00 SHEEJA V FRANCIS sheejavf@gmail.com Sivakamasundari J sivakamasundarij17@gmail.com <p>The biometric system is the most widely used technology for automatic recognition and authentication of an individual in today's modern world. These systems work by acquiring, analysing and matching a person's unique physiological characteristics, such as fingerprints, face patterns and iris features, against an available database. Retina-based identification is one such robust and reliable biometric solution. As the blood vascular patterns of the retina are unique to each individual, they are used as features for developing a retinal biometric system [1]. However, in diabetic persons, retinal complications such as exudates and haemorrhages may obscure these vascular patterns, causing mismatches in the authentication process. As a sedentary lifestyle has led to an alarming increase in the number of diabetic cases, it is necessary to improve the accuracy of conventional retinal-image-based authentication systems in order to cater to this growing population of society. 
This paper proposes an enhanced retinal biometric system in which the retinal vasculature is clearly extracted by an automatic segmentation technique using the Modified Discrete Grey Wolf Optimizer (MDGWO) [2]. This algorithm is a population-based meta-heuristic swarm intelligence method used to find optimal threshold values in Kapur's Multilevel Thresholding (KMLT) [3].</p> <p>Original retinal fundus images (20 normal, 50 abnormal) obtained from the publicly available databases DRIVE, STARE and HRF are used for the study. In the preprocessing step, these colour images are resized, and morphological-operations-based contrast enhancement is carried out on the blood vessels in the green channel [4]. Then, the background is removed and segmentation is performed using the MDGWO-based Kapur MLT algorithm. Finally, diabetic disease findings are removed using the morphological connected components method. In addition to statistical texture features such as energy, contrast, entropy, homogeneity, maximum probability, mean and standard deviation, a vessel feature, namely the vessel pixel count, is also obtained from the segmented vasculature [5]. Validation is carried out by comparing the segmented vessel images against their ground truth images using binary similarity measures [6]. Though all features were found to be statistically significant, the decision is made based on the vessel pixel count feature, as per expert opinion. The results obtained from one normal and one abnormal image are shown in Figs. 1 and 2. This retinal biometric system shows an improved matching accuracy of 97.5%. Hence, the MDGWO-based enhanced retinal biometric system is optimal for building robust biometric systems for the entire population, including those with diabetes-related eye complications. 
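Kapur's criterion picks the threshold that maximizes the summed Shannon entropies of the histogram segments it creates; MDGWO is a search strategy over that same objective. A minimal single-threshold sketch (numpy, toy histogram; an illustrative assumption, not the paper's multilevel implementation):

```python
import numpy as np

def kapur_entropy(hist, t):
    """Sum of Shannon entropies (nats) of the two histogram segments split at bin t."""
    p = hist / hist.sum()
    total = 0.0
    for seg in (p[:t + 1], p[t + 1:]):
        w = seg.sum()
        if w <= 0:
            continue  # empty segment contributes nothing
        q = seg[seg > 0] / w  # renormalize within the segment
        total += -np.sum(q * np.log(q))
    return total

def kapur_threshold(hist):
    """Exhaustive single-level Kapur threshold; a metaheuristic such as MDGWO
    searches this objective when several thresholds are needed at once."""
    return max(range(len(hist) - 1), key=lambda t: kapur_entropy(hist, t))
```

For multilevel thresholding the search space grows combinatorially, which is why swarm optimizers like MDGWO are used instead of the exhaustive scan shown here.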
</p> <p><strong>Fig. 1. Preprocessing and segmentation of a normal image from the HRF database: (a) original normal image, (b) morphological preprocessing, (c) background removed, (d) MDGWO-based Kapur MLT segmentation and (e) ground truth image.</strong></p> <p><strong>Fig. 2. Preprocessing and segmentation of an abnormal image from the HRF database: (a) original abnormal image, (b) morphological operations, (c) background removed, (d) MDGWO-based Kapur MLT segmentation and (e) disease conditions removed from the image.</strong></p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 SHEEJA V FRANCIS, Dr https://spast.org/techrep/article/view/1193 Digital Forensics Investigation for Attacks on Artificial Intelligence 2021-09-24T11:09:04+00:00 Manasa Sanyasi sanyasi.manasa@res.christuniversity.in Pradeep Kumar K kukatlapalli.kumar@christuniversity.in <p>Digital forensics is a branch of science that mainly focuses on the recovery and investigation of material found in digital devices. 
The term digital forensics was originally used mostly as a synonym for computer forensics. Artificial Intelligence (AI) is a family of computational models which deploy black-box techniques from the family of neural networks for classification, prediction and optimization tasks in a wide variety of applications such as computer vision, medical imaging, natural language processing, autonomous vehicles, robotics, etc. Cyber-data science deals with the class of problems in which computational models are tampered with for intentional mal-effects, such as malware classifiers, adversarial attacks and back-end attacks. Adversarial attacks (induced attacks that negatively impact the computational efficiency of AI to seek malicious outputs) limit the applications of artificial intelligence (AI) technologies in key security fields. Therefore, improving the robustness of AI systems against adversarial attacks is an essential step in the further advancement of AI for secured systems, and hence warrants the application of digital forensics to artificial intelligence attacks, which is quite a challenging task. It is therefore important that new research approaches are adopted to deal with these security threats. This research is aimed at investigating Artificial Intelligence (AI) attacks that are “malicious by design”. It also deals with conceptualization of the problem and strategies for attacks on Artificial Intelligence (AI) using digital forensic tools. A specific class of problems in adversarial attacks is the tampering of images for computational processing in applications of digital photography, computer vision and pattern recognition (facial mapping algorithms). State-of-the-art developments in forensics, such as 1. Application of an end-to-end neural network training pipeline for image rendering and provenance analysis [4], 2. Deep-fake image analysis using frequency methods, wavelet analysis &amp; tools like Amped Authenticate [1-3], 3. 
Capsule networks for detecting forged images, 4. Information transformation for feature extraction via image forensic tools such as EXIF-SC, SpliceRadar and Noiseprint, and 5. Application of generative adversarial network (GAN) based models as anti-image forensics [6], will be studied in great detail, and a new research approach will be designed incorporating these advancements for the utility of digital forensics. Fig. 1 below represents the graphical abstract for DFI on AI attacks.</p> 2021-09-24T00:00:00+00:00 Copyright (c) 2021 Manasa Sanyasi, Pradeep Kumar K https://spast.org/techrep/article/view/1489 Hybrid Action-Allied Recommender Mechanism: An Unhackneyed Attribute for E-commerce 2021-09-30T19:27:10+00:00 S GOPAL KRISHNA PATRO GOPAL sgkpatro2008@giet.edu <p><span class="fontstyle0">Users of electronic commerce (e-commerce, otherwise known as internet commerce) portals most commonly depend upon customer reviews when making purchase decisions. But it is observed that one product may have many hundreds of miscellaneous reviews, which leads to an information overload on the customer. This overload motivates the objective of developing a recommender mechanism that recommends a review subset having a high content score as well as covering various aspects of the product with associated sentiments. Accordingly, these recommendation systems (RSs) have been developed in parallel with web networks. Initially, the mechanisms of these techniques were based on content-based, collaborative filtering and demographic methods; nowadays, these techniques also incorporate social information. Furthermore, they use personal, implicit and local information from the IoT. This contribution delivers an orderly explanation of hybrid RSs along with a novel method involving a slight modification of contemporary techniques such as collaborative filtering. 
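As an illustrative aside, the item-based collaborative filtering idea mentioned above can be sketched as follows (a minimal numpy sketch on a hypothetical user-item rating matrix where 0 means unrated; an assumption for illustration, not the proposed hybrid mechanism itself):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two item rating columns (0 = unrated)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(ratings, user, k=2):
    """Score each item the user has not rated by a similarity-weighted
    average of that user's ratings on the other items, then return the top k."""
    n_items = ratings.shape[1]
    scores = {}
    for item in range(n_items):
        if ratings[user, item] > 0:
            continue  # already rated, nothing to recommend
        num = den = 0.0
        for other in range(n_items):
            if ratings[user, other] == 0 or other == item:
                continue
            s = cosine_sim(ratings[:, item], ratings[:, other])
            num += s * ratings[user, other]
            den += abs(s)
        scores[item] = num / den if den else 0.0
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

A hybrid RS of the kind discussed here would combine such scores with content-based and social signals rather than use the similarity weighting alone.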
It also describes their evolution, progression and effectiveness, and identifies various implementation areas of past, present and future importance.<br></span><span class="fontstyle0">Social media can be defined as a computer-based tool which supports the sharing of thoughts, ideas and information across communities and virtual networks. It is an Internet-based system which enables rapid electronic communication and circulation of content such as photos, videos, personal information and documents among users [1]. This circulation of data can be done via smartphone, computer or tablet, or through web-based applications or software [2]. The technique originated to let people interact with their family and friends, but this enabling feature was later introduced into business to implement good communication between users [3], [4].<br></span><span class="fontstyle0">With thriving social networking platforms and technologies, e-commerce companies commonly create their own social networking profiles. E-commerce integrates social media into e-retail websites and adds its functionality to society via social networks. In the current era, numerous companies have exploited deep learning techniques to improve the diversity as well as the performance of their own recommender mechanisms. Following a number of brilliant achievements in recommendation systems based on deep learning, research works in this field have grown exponentially, and several conferences and workshops have been organized to explore the field in greater depth [5].<br>Generally, recommendation systems assemble information about the user's preferences for a set of products such as jokes, songs, movies, books, applications, gadgets, e-learning materials, travel destinations and websites. 
Researchers have revealed that RSs use users' demographic features such as gender, nationality and age, along with accuracy, novelty, stability and diversity. In this mechanism, collaborative filtering plays a significant role, though it is also applied together with content-based, knowledge-based or social techniques [6], [7]. In this study, a hybridised recommendation system has been proposed, keeping in view the various advantages and disadvantages of traditional filtering methods. This report also summarizes the theoretical background of recommender systems.<br></span></p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 S GOPAL KRISHNA PATRO GOPAL https://spast.org/techrep/article/view/2179 A Study of Covid-19 And Its Detection Methods Using Imaging Techniques 2021-10-01T16:24:15+00:00 bharathasreeja bharathasreejaece@rmkcet.ac.in <p>The coronavirus disease 2019 (COVID-19) is spreading all over the world nowadays. Affected people have different symptoms, including <a href="https://en.wikipedia.org/wiki/Fever">fever</a>, <a href="https://en.wikipedia.org/wiki/Cough">cough</a>, <a href="https://en.wikipedia.org/wiki/Fatigue">fatigue</a> and <a href="https://en.wikipedia.org/wiki/Shortness_of_breath">breath</a>ing problems. It is difficult to control the spread of the coronavirus. Artificial intelligence plays a vital role in the detection of COVID-19. The various methodologies available for the detection of COVID-19 are discussed, and the efficiency of the different methods is analysed. We provide a summary of the different methods for detecting the coronavirus. 
The analysis shows that artificial intelligence strengthens and supports virus detection. AI-based methods can be developed for early detection of COVID-19, saving the time of medical experts.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 bharathasreeja https://spast.org/techrep/article/view/2882 AN IMPROVING PERFORMANCE DISTRIBUTED FRAMEWORK FOR DETECTION OF CROSS WEBSITE SCRIPTING ATTACK 2021-10-19T13:20:11+00:00 Balika J Chelliah kannanarchieves@gmail.com Karunya Raghavan kannanarchieves@gmail.com Ankit Prajapati kannanarchieves@gmail.com Sreenidhi G kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p><strong>Purpose:</strong> The purpose of this project is to develop an intrusion detection system that can detect XSS attacks. To detect an XSS threat, an attack signature is used.</p> <p><strong>Methodology:</strong> The framework is divided into three levels: software testing service, XSS finder, and XSS elimination. It detects XSS attacks using template matching, unit testing and taint-based analysis techniques. The XSS attack elimination process prevents the escape of untrusted data by applying industry-standard prevention rules through a sanitisation technique.</p> <p><strong>Findings:</strong> Most people now rely on the internet for their endless hours of hard work; this has increased the opportunity for criminals to corrupt data and create compromised systems. Today, a variety of attacks are being launched in cyberspace, with Cross-Site Scripting (a web application attack) being one of the most prominent. We propose an outline for an Intrusion Detection System (IDS) that defends against Cross-Site Scripting (XSS) attacks. An XSS (cross-site scripting) attack is a crucial flaw that jeopardises the security of web services. 
It is a form of security breach in which an attacker injects hazardous script into a web application, either on the client side inside the consumer’s browser or on the server side. The proposed method maintains a log of multiple breach patterns, which are variable and are mainly concerned with malicious tags and attributes.</p> <p><strong>Originality/value:</strong> This study shows how XSS attacks continue to target web application vulnerabilities in order to capture user credentials. Future analysis will focus on developing a defence concept that employs data mining and machine learning strategies to locate and prevent DOM-based XSS attacks, reducing both false negatives and false positives.</p> 2021-10-19T00:00:00+00:00 Copyright (c) 2021 Balika J Chelliah, Karunya Raghavan, Ankit Prajapati, Sreenidhi G, Mayakannan Selvaraju https://spast.org/techrep/article/view/2920 An Empirical Comparison of Attack Detection using Machine Learning Algorithm in Internet of Things Edge 2021-10-22T16:30:27+00:00 MANOKARAN manoraj3@gmail.com <p>This research work aims to perform a comparative analysis of different machine learning algorithms for attack detection at the Internet of Things (IoT) edge. Due to the rapid development of IoT, attack detection has become extremely important in network security, as it protects the IoT network from suspicious activities. The self-configuring and open nature of IoT devices makes them vulnerable to both internal and external attacks [1]. Statistical methods of attack detection are not suitable for fast and accurate detection due to the multi-dimensional nature of attacks. Machine learning based edge computing can rectify these issues through automated response and by shifting the computation physically closer to the device edge where the information is generated [2].
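As a toy illustration of ML-based detection running close to the device (hypothetical flow features and values, not the study's UNSW-NB15 setup), a classifier small enough for an edge node might look like:

```python
import math

# Toy flow records: (duration_s, bytes_sent, packets_per_s) with labels.
TRAIN = [
    ((0.2, 300, 10), "normal"),
    ((0.3, 500, 12), "normal"),
    ((5.0, 90000, 900), "attack"),
    ((4.2, 70000, 800), "attack"),
]

def classify(flow, train=TRAIN):
    """1-nearest-neighbour on Euclidean distance: tiny enough for an edge node."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(train, key=lambda rec: dist(rec[0], flow))[1]

print(classify((0.25, 400, 11)))    # a normal-looking flow
print(classify((4.8, 80000, 850)))  # a flood-like flow
```

In practice the features would be normalized and the model trained offline on a labelled dataset; this sketch only shows the shape of the edge-side decision step.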
In this paper, we compared the performances of eight machine learning algorithms to identify the one best suited for attack detection at the IoT edge. The machine learning algorithms we applied are logistic regression (LR), support vector machine (SVM), K-nearest neighbor (KNN), decision tree (DT), random forest (RF), eXtreme Gradient Boosting (XGBoost), Gradient Boosting classifier (GrBoost), and AdaBoost classifier [3]. The work was tested with UNSW-NB15, a recent network attack dataset. The experimental results show that the ensemble learning algorithms perform well compared to the classical machine learning algorithms.</p> 2021-10-22T00:00:00+00:00 Copyright (c) 2021 MANOKARAN https://spast.org/techrep/article/view/220 A Securing Data Packets in MANET using R-AODV and ECC and justifying it using MATLAB 2021-09-17T14:11:26+00:00 Fahmina Taranum ftaranum@mjcollege.ac.in <p>Securing Data Packets in MANET using R-AODV and ECC and justifying it using MATLAB</p> 2021-09-20T00:00:00+00:00 Copyright (c) 2021 Fahmina Taranum https://spast.org/techrep/article/view/3038 Machine Learning Techniques for true and fake Job posting 2021-11-06T11:09:42+00:00 Kamakshi Mehta 1990uditmamodiya@gmail.com Navaneetha Krishnan Rajagopal 1990uditmamodiya@gmail.com Mr. Sagar Balu Gaikwad 1990uditmamodiya@gmail.com Prof.(Dr.) Sachin Yadav 1990uditmamodiya@gmail.com <p>According to research, there are around 188 million unemployed people around the globe. Many job vacancies are posted on job portals and across the internet to help job seekers. India alone has more than a hundred job portals. One major issue is that job seekers cannot be sure whether an employer is real or fake. Most of these portals do not have a system that checks whether the employer posting a job is real or fake. Scammers make use of this opportunity to post fake job offers which might look genuine to the job seekers applying for them.
In this way, job seekers might lose a large amount of money and time. The best possible solution to this problem would be for the job portal itself to identify whether a posted job is real or fake. This paper suggests using a machine learning model to achieve this goal. The idea is to use natural language processing to understand and analyze the job posting and then use a machine learning model to predict whether the job posting is real or fake. The first step is to import a dataset which contains real-life real and fake job postings. In this project, the Employment Scam Aegean Dataset provided by the University of the Aegean Laboratory of Information and Communication Systems Security is used. This dataset contains 18000 samples of real-life job postings. Various text cleaning techniques like lemmatization, stop word removal, and special character and punctuation removal are applied to the data. Once the text data is processed, algorithms such as Random Forest, Linear SVC, Gradient Boosting Classifier, Gaussian naïve Bayes classifier and XGB classifier are used to test the performance of the model. The two best algorithms, in terms of the accuracy with which the models could classify real and fake job postings, were taken into consideration. Random Forest and Linear SVC gave accuracy close to 98%. Both of these algorithms were tuned using GridSearchCV, a library function which is a part of sklearn’s model selection package. After tuning, the performance of both algorithms increased, and Linear SVC gave a better accuracy score of 99%. Hence Linear SVC is used in this project for predicting real and fake job postings on a job portal.</p> 2021-11-06T00:00:00+00:00 Copyright (c) 2021 Kamakshi Mehta, Navaneetha Krishnan Rajagopal, Mr. Sagar Balu Gaikwad, Prof.(Dr.)
Sachin Yadav https://spast.org/techrep/article/view/1764 A Machine Learning Methodology for Diagnosing Chronic Kidney Disease 2021-09-30T10:19:53+00:00 Mayakannan Selvaraju kannanarchieves@gmail.com A. Abirami abirami.a@eec.srmrmp.edu.in Rithanya S kannanarchieves@gmail.com Suvetha Sri RP kannanarchieves@gmail.com Adharsya S R kannanarchieves@gmail.com <p><strong>Purpose:</strong> The objective of this paper is to diagnose chronic kidney disease using machine learning.</p> <p><strong>Methodology:</strong> The study's data was acquired from a variety of journal papers on kidney disease, together with renal disease datasets.</p> <p><strong>Findings: </strong>We propose an AI-based approach to diagnosing chronic kidney disease in this experiment. The chronic kidney disease dataset was obtained from the University of California, Irvine (UCI) machine learning repository. Because missing attributes are common in real patient records, statistical imputation is used to fill in the missing values of the incomplete records. After imputing and reasonably weighting the missing values, six models (logistic regression, naïve Bayes classifier, support vector machine, k-nearest neighbour, random forest and a neural network) were established, the best of which achieved 99.75 percent diagnostic accuracy.
We then established a combined model that integrates logistic regression and random forest by using a perceptron, which was capable of delivering even higher accuracy after repeated simulation.</p> <p><strong>Originality/value: </strong>In this study, the established models achieved 99.75% or greater diagnostic accuracy.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Mayakannan Selvaraju, A. Abirami, Rithanya S, Suvetha Sri RP, Adharsya S R https://spast.org/techrep/article/view/2441 Stock Market Prediction Techniques: A Systematic Review and Taxonomy 2021-10-12T13:00:59+00:00 Pradeep Kumar Tiwari pradeeptiwari.mca@gmail.com Ashish Kumar aishshub@gmail.com Sai Santosh Malladi saisantosh120599@gmail.com Phaneeswar Nuney n.phaneeswar@gmail.com Vivek Kumar Verma vermavivek123@gmail.com <p>Humanity has always been interested in predicting what lies ahead, and when financial benefits are involved the quest becomes quite intense and interesting. One such area is the prediction and analysis of stock market price movements. In this paper we present a review of various prediction approaches, ranging from Fundamental Analysis to modern Machine Learning and Hybrid models. As this is a very dynamic topic on which research activities are conducted around the globe, it is particularly challenging to classify a technique as belonging entirely to a certain paradigm; there is some intersection among the techniques of the various paradigms.
We consider the broad spectrum of techniques under Traditional and Millennial groups to present the review.</p> <p>&nbsp;</p> 2021-10-12T00:00:00+00:00 Copyright (c) 2021 Pradeep Kumar Tiwari, Ashish Kumar, Sai Santosh Malladi, Phaneeswar Nuney, Vivek Kumar Verma https://spast.org/techrep/article/view/2641 Deep Learning based Face Hallucination: A Survey 2021-10-17T15:20:35+00:00 Savitha S savitha.s@vit.ac.in <p>In this digital era, images, specifically face images, play a significant role in various real-world applications such as image enhancement, image compression, face recognition, modeling 3D faces, cybercrime, biometrics, surveillance monitoring [1-2] and many more. But, since the main source of public images used for security purposes is surveillance cameras, the major challenge is that, in many cases, it is difficult to get face images of the expected quality and dimension. Hence the novel concept called “<strong>face hallucination”</strong> emerged, and it remains a hot topic in the field of computer vision and pattern recognition. Face hallucination (FaceH), also referred to as face super resolution (SR), is a domain-specific technique which aims to recover a high-resolution (HighR) face image from a given single or series of low-resolution (LowR) face images. &nbsp;This method enhances the facial appearance of the LowR face image in detail, along with individual face features, and produces a HighR face image. &nbsp;FaceH plays a vital role in various real-world applications such as face recognition, face modeling, criminal detection, surveillance monitoring, security control, digital entertainment and many more. Recently, FaceH has received significant attention and has advanced with deep learning techniques.</p> <p>The main objective of this presentation is to review such FaceH techniques and present them in an organized manner. First, we will discuss the challenges that exist in hallucinating a low resolution image.
Second, we will compare various face hallucination techniques. Third, we will discuss the existing hallucination models and their functionality. Then the performance metrics, the commonly used datasets, and the future enhancements that can be carried out are presented.</p> <p>The concept of Face Hallucination was first introduced by Baker and Kanade [3] in the year 2000. They developed a model based on Gaussian pyramids and Bayesian maximum a posteriori (MAP) estimation to enhance a LowR frontal face image into a HighR image, and henceforth it served as an initiative for the development of various FaceH techniques. Later came position-patch based methods, in which the basic idea is to divide the training images into a number of small patches and use these patches to hallucinate the LowR image patches into HighR patches at the same positions in the input images [4-6]. With the rapid development of deep learning in the field of computer vision, deep learning based face hallucination came into existence, and various identity-preserving methods based on CNNs were developed [7-9].</p> <p>The major drawback of these existing methods is that they do not support non-frontal images: they fail to provide good results on global reconstruction and varied poses. These methods also ignore non-facial regions with high-frequency details on the face image, mainly complex occlusions that change the facial appearance. As a result, improvements in face hallucination can focus on first improving face frontalization and then integrating it with a cascaded CNN model for hallucination.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Savitha S https://spast.org/techrep/article/view/1469 MACHINE LEARNING TECHNIQUES IN BUSINESS FORECASTING - A PERFORMANCE EVALUATION 2021-09-29T12:26:42+00:00 Guna Sekhar Sajja abhishek14482@gmail.com Harikumar Pallathadka ieeemtech@gmail.com Khongdet Phasinam ieeemtech@gmail.com Myla M.
Arcinas ieeemtech@gmail.com <p>Business forecasting is the act of evaluating previous performance in order to use the knowledge gathered to forecast future business situations, so that business strategies can be created to achieve goals. Recent computing and technological advancements have eased the routine capture and storing of corporate data that may be utilised to support business decisions. In recent years, no corporate function has grown as quickly as forecasting. Business forecasting is a planning tool that is used to aid decision making and planning within a business. It empowers people to identify critical parameters and variables that may be controlled in advance to provide management-oriented results in the future. In layman's terms, business forecasting is the practice of estimating or projecting future patterns based on company data. Forecasting is becoming increasingly crucial in today's business world as companies strive to increase customer happiness while lowering the cost of providing goods and services. When a person takes on the job of operating a business, they automatically take on the burden of attempting to anticipate the future, and their success or failure is heavily reliant on the ability to properly forecast the future course of events. Forecasting tries to reduce the uncertainty that surrounds management decision making in terms of cost, profit, sales, production, pricing, and so on [1][2].</p> <p>Machine learning is one of the most active research topics in artificial intelligence today, involving the study and development of computational models of learning processes. A lot of intriguing work has recently been done in the field of implementing machine learning algorithms. Machine learning is the most basic method of making a machine intelligent [3][4].</p> <p>The goal of machine learning is to acquire new knowledge or skills, arrange knowledge structures, and gradually enhance its own performance.
Machine learning is a critical component of artificial intelligence [5]. Learning and intellect are inextricably intertwined. Learning is always about self-improvement of future behaviour based on previous experiences. In circumstances where we cannot directly write computer code to solve a given problem, but instead require example data or experience, we require learning.</p> <p>Machine learning is a highly interdisciplinary field that draws and expands on ideas from statistics, computer science, engineering, cognitive psychology, optimization theory, and many other scientific and mathematical disciplines. We may build a learning model using example data or past experiences by merging all of these fields. This model could be predictive in order to make future forecasts, descriptive in order to gather knowledge from data, or both. Machine learning based on data is a critical component of modern intelligent techniques; it primarily studies how to obtain, from observed samples, rules that cannot be obtained through theoretical analysis, and then how to apply these rules to recognise objects and predict future or unobserved data. In a nutshell, machine learning is an efficient method for recognising new samples by learning from previous samples [6][7].</p> <p>This article discusses how machine learning techniques can be used to forecast business outcomes.</p> <p>&nbsp;</p> <p>&nbsp;</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Guna Sekhar Sajja, Harikumar Pallathadka, Khongdet Phasinam, Myla M. Arcinas https://spast.org/techrep/article/view/727 A Comprehensive Survey on Lightweight Asymmetric Key Cryptographic algorithm for Resource Constrained Devices 2021-09-15T15:43:44+00:00 Rajashree R rajashree.r2019@vitstudent.ac.in ananiah Durai ananiahdurai.s@vit.ac.in <p><strong>Abstract </strong></p> <p>Security algorithms designed for specific requirements, such as users, availability and resource constraints, are in demand.
Lightweight cryptographic algorithms have become popular among researchers for resource-constrained devices such as IoT devices, sensors, PDAs, mobile devices, wearable devices, RFID and portable devices. Public key cryptography, or asymmetric key cryptography, plays a vital role in developing many such lightweight cryptographic algorithms. In this paper, many lightweight asymmetric key cryptographic algorithms, such as Rivest Shamir Adleman (RSA), the Elliptic Curve based Elgamal cryptosystem, the Elliptic Curve Digital Signature Algorithm (ECDSA), the Elliptic Curve Diffie Hellman Key Exchange Algorithm (ECDHE) and the Elliptic Curve Cryptosystem (ECC), are comprehensively reviewed with their characteristics and preferred applications. In addition, a few related works are analyzed and suggestions for suitable target applications are provided. Hardware and software requirements for efficient implementation of such algorithms are also explored. Moreover, asymmetric key cryptographic techniques such as RSA and ECC are modeled using the Vivado tool for target implementation on various FPGA devices. Further, techniques to enhance throughput, area and computation time, particularly to improve the frequency of the point multiplier module, have been on the rise in the recent past. Design strategies to overcome the bottleneck of complex computation in the multiplier block show that the latency of ECC is significantly reduced when implemented on the advanced ZYNQ board. The resource sharing techniques utilized to reduce the area also suggest that ECC is a suitable candidate for IoT device applications.</p> 2021-09-15T00:00:00+00:00 Copyright (c) 2021 Rajashree R, ananiah Durai https://spast.org/techrep/article/view/2235 Transfer Learning: A paradigm for machine assisted knowledge transfer 2021-10-08T07:20:15+00:00 Aparna Gurjar gurjara@rknec.edu <p>This paper surveys transfer learning as a knowledge transfer mechanism. Conventional machine learning algorithms require a huge amount of labeled data for supervised learning.
In the absence of such data, the models suffer from performance degradation.&nbsp; Transfer learning enables the prior knowledge gained in doing a particular task to be reused or transferred to another new task of a similar nature. This can speed up and improve the learning curve of the tasks in the new domain. The paper gives a brief overview of the transfer learning process. The literature survey highlights widely used mechanisms of Transfer Learning, such as homogeneous and heterogeneous transfer, as well as instance-based, parameter-based and relation-based implementations of transfer learning. It discusses how these mechanisms are utilized to create applications in various AI systems.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Aparna Gurjar https://spast.org/techrep/article/view/876 IOT-BASED INTELLIGENT ASSISTANT MIRROR FOR SMART LIFE & DAILY ROUTINE USING RASPBERRY PI. 2021-09-15T19:03:03+00:00 Poornima Lankani poornima_2019@kln.ac.lk WLSV Liyanage sayurivliyanage@gmail.com <p>Humans start their day by looking in the mirror at least once before leaving their homes every morning, and they spend considerable time from their busy workload in front of the mirror. To make this time more productive and useful, there ought to be a system that is readily usable, user-friendly, and smart, in line with the rapid progress of the Internet of Things. The intelligent mirror is a new addition to the smart device family: a straightforward concept in which a screen is placed behind a two-way mirror. The Intelligent Mirror turns our room or bathroom mirror into a personal assistant with artificial intelligence. The purpose is to develop a smart mirror that can automate working humans' busy daily routines and manage their tasks while they prepare in front of a mirror. To make the most of this moment, users can securely access all the relevant details of the day by looking in the mirror.
The intelligent mirror, which can be activated by a single voice command, will significantly help disabled persons and the general public [1]. A Raspberry Pi has been used to build the proposed intelligent mirror, which is linked to the digital world via the Internet. The mirror can communicate with the user through voice commands and reply appropriately [2]. Interestingly, the CNN facial expression classification model showed the highest performance in evaluation, with an accuracy of 0.721 and a validation accuracy of around 0.665 after 20 epochs. The emotion monitoring and health measuring function was able to provide a distinctive experience to the users. The mirror will reflect important elements such as home workouts, meal plans, date &amp; time, local news, a to-do list, reminders, and weather [3]. The mirror can also handle specialized functions such as automating and controlling home IoT devices.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Poornima Lankani, WLSV Liyanage https://spast.org/techrep/article/view/1782 Dr Quality Check based on Fault Detection using Image Processing Algorithms 2021-10-08T15:17:20+00:00 Arunachalam U arunachalem_u@yahoo.com Vairamuthu J vairamuthuj@yahoo.com PARISA M hodece@sethu.ac.in <p>In the mechanical industry, it is necessary to have the best quality in all the components used for better production. The production of components like bolts and nuts involves various stages. All the stages in production are manually inspected and the quality is verified with human dependency. During this inspection, some of the components are not identified properly, as it is a manual check. Moreover, the manual process is time consuming and prone to error in fault identification of the components. Hence research on fault identification is needed in the industry. Even though many algorithms have been developed, a fully automatic, widely accepted algorithm remains a challenge.
In this work, an automatic method for fault detection, and in turn quality checking, is proposed. The digital images of bolts and nuts are collected from a camera positioned in the production zone and stored in a separate database. Both normal and defective components (bolts and nuts) are collected in the database. The images are pre-processed and enhanced for better detection. Segmentation algorithms are used for detecting the Region of Interest (RoI). Geometric features like area, diameter and thickness are measured as salient features to discriminate between normal and defective components. In addition, texture features are also estimated. The proposed method gives a classification accuracy of 98%. Identifying defects and recovering from them at an early stage increases the quality of production.</p> <p>Figure 1: Overall Flow diagram of the Proposed Method</p> <p>Table 1. Classification Accuracy based on the KNN algorithm</p> <table> <tbody> <tr> <td width="85"> <p><strong>S.NO</strong></p> </td> <td width="217"> <p><strong>K Value</strong></p> </td> <td width="208"> <p><strong>Accuracy (%)</strong></p> </td> </tr> <tr> <td width="85"> <p>1</p> </td> <td width="217"> <p>2</p> </td> <td width="208"> <p>92</p> </td> </tr> <tr> <td width="85"> <p>2</p> </td> <td width="217"> <p>3</p> </td> <td width="208"> <p>93</p> </td> </tr> <tr> <td width="85"> <p>3</p> </td> <td width="217"> <p>4</p> </td> <td width="208"> <p>98</p> </td> </tr> </tbody> </table> <p>&nbsp;</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Arunachalam U, Vaira, Parisa https://spast.org/techrep/article/view/518 Biometric Steganography Using MPV Technique 2021-09-16T11:15:42+00:00 Satya Krishna Vallabhu 17wh1a0544@bvrithyderabad.edu.in <p>Biometric data is prone to attacks and threats from cyber criminals conducting identity theft, and its economic value makes it a product that can be traded in underground marketplaces such as the dark web. Securing it is the need of the hour.
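The KNN classification reported in Table 1 of the fault-detection abstract above can be sketched in miniature as follows (toy geometric-feature values, not the paper's image data):

```python
from collections import Counter
import math

# Toy (area, diameter, thickness) features for components, labelled
# "normal" or "defective"; the numbers are purely illustrative.
SAMPLES = [
    ((100.0, 10.0, 2.0), "normal"),
    ((102.0, 10.2, 2.1), "normal"),
    ((98.0, 9.9, 2.0), "normal"),
    ((60.0, 7.0, 1.2), "defective"),
    ((62.0, 7.2, 1.1), "defective"),
]

def knn_predict(features, k=3, samples=SAMPLES):
    """Classify by majority vote among the k nearest training samples."""
    nearest = sorted(samples, key=lambda s: math.dist(s[0], features))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_predict((99.0, 10.1, 2.0)))  # falls in the normal cluster
print(knn_predict((61.0, 7.1, 1.0)))   # falls in the defective cluster
```

In the actual pipeline these features would come from the segmented RoI of each component image, and k would be tuned as in Table 1.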
A steganographic approach is proposed as a solution to this: biometrics are hidden inside other biometrics for safe storage and secure transmission. Steganography is the process of hiding data in a transmission medium. The main objectives while hiding data are undetectability and robustness against image processing and other attacks, which steganography can achieve. This paper focuses on a process of hiding an image inside another image using the mid position value (MPV) technique. The Arnold transform is applied to the chosen secret biometric, resulting in a scrambled version of it. This is embedded into the cover image, resulting in a stego image. Lastly, the hidden secret biometric is decoded from the stego image, which first yields the scrambled secret biometric; the inverse Arnold transform is applied to this to finally recover the secret biometric. The paper further explains the working and processes in detail.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Satya Krishna Vallabhu https://spast.org/techrep/article/view/2656 A Modern Approach on Movie Recommendation Systems 2021-10-15T09:57:12+00:00 Kunal Khandelwal khandelwalkn@rknec.edu Gurdeep Singh singhg_1@rknec.edu Ekta Gandhi gandhiea@rknec.edu Aarushi Tiwari tiwariaa@rknec.edu Vishakha Rathi rathivv@rknec.edu Dilipkumar Borikar borikarda@rknec.edu <p><span style="font-weight: 400;">In a rapidly changing world, recommendation systems have become a part of our daily lives, since they are used on many websites and have applications in vast fields such as movies, music, social media, books and e-commerce.
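The Arnold-transform scrambling step described in the steganography abstract above can be sketched as follows (a toy n-by-n index permutation; the MPV embedding itself is not shown):

```python
def arnold_scramble(grid):
    """One round of the Arnold cat map on an n-by-n grid of pixel values:
    the pixel at (x, y) moves to ((x + y) % n, (x + 2*y) % n)."""
    n = len(grid)
    out = [[None] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = grid[x][y]
    return out

# A 4x4 "image"; for n = 4 the map has period 3, so three rounds
# restore the original arrangement.
image = [[4 * r + c for c in range(4)] for r in range(4)]
scrambled = arnold_scramble(image)
restored = arnold_scramble(arnold_scramble(scrambled))
print(scrambled != image)  # True: one round scrambles the pixels
print(restored == image)   # True: the period is reached, image recovered
```

The decoder either applies the exact inverse map or simply completes the map's period for the given image size, which is why the abstract's inverse Arnold transform recovers the secret biometric.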
Efficiently filtering and searching for the right items saves a lot of time [4]. Many people, including teenagers and adults, nowadays use recommendation systems, whether movie or music recommenders, often without realizing that they are embedded in the applications and websites they use to watch movies, listen to songs, or for other purposes. Recommendation systems are significant because they help to choose the right items and services [3].&nbsp; The present paper analyses the consequences of introducing personality into recommendation systems. To deliver more tailored movie suggestions, concepts like collaborative filtering are integrated with personality traits. Collaborative filtering &amp; content-based filtering are two common approaches in recommendation systems [1,3]. Recommendation systems vary the formula for recommending products to the user. They have sentiments, age group, time spent surfing the Internet/social media, and likes/dislikes from surveys taken on various entertainment platforms (YouTube/Spotify for videos &amp; songs respectively) as attributes in the dataset. This study investigates the operation of numerous recommendation systems with the goal of combining two or more attributes to create a new recommendation system with enhanced efficiency, providing the best way to give recommendations to users.</span></p> <p><span style="font-weight: 400;">The idea that taking personality into consideration can increase recommendation quality is tested in this study. Our data suggest that personalisation improves recommendations, despite the fact that it necessitates some additional user input upfront. The film industry is no longer only an industry or a source of entertainment; it has evolved into a global business hub. The box office success, popularity, &amp; other aspects of a film are today celebrated all over the world.
There is a wealth of information regarding the success and popularity of these films.</span></p> <p><span style="font-weight: 400;">Unlike current recommendation systems, which are based on a fixed set of attributes, this study offers the idea of combining two sets of attributes. However, these methods have several drawbacks, for example the requirement of prior details of the user’s history and habits in order to fulfil the work of recommendation. To mitigate the impact of such dependencies, a hybrid recommendation system that includes collaborative filtering, content-based filtering, and sentiment analysis of movie tweets can be implemented, where movie tweets are gathered from microblogging services in order to better understand current trends and user reactions to the film. Recommendation systems are a popular and valuable way for people to make informed automated judgments. A recommendation system is a mechanism that allows a user to find information that is useful to them from a large amount of data.</span></p> <p><span style="font-weight: 400;">When it comes to the Movie Recommendation System, recommendations are made based on user similarities (Collaborative Filtering) or by taking into account a specific user’s activity (Content-Based Filtering). Various firms, such as Facebook (which suggests friends), LinkedIn (which recommends jobs), Pandora, Netflix, and Amazon, all employ recommendation systems to enhance their profits while also benefiting customers [1,2]. The focus of the following study is on various tactics and methods which can be implemented so as to improve and enhance the quality of movie recommendations.</span></p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Kunal Khandelwal, Gurdeep, Ekta Gandhi, Aarushi Tiwari, Vishakha Rathi, Dilipkumar Borikar https://spast.org/techrep/article/view/2030 Sophisticated Face Discovery Attendance Monitoring System 2021-10-01T13:44:04+00:00 Dr.
Soma Sekhar G somasekharonline@yahoo.co.in Radha Mothukuri radha@kluniversity.in Basavaraj D braj.d@staff.vce.ac.in Durga Kalyani K durgakalyani.cse@gcet.edu.in <p>At present, the Internet of Things (IoT) has entered a period of rapid development. The Internet of Things is a concept that aims to extend the benefits of the conventional Internet, such as constant connectivity, remote control capability and data sharing, to objects in the physical world. Everyday objects are getting connected to the Internet, and this concept can be used to address security concerns in an economical way. In this paper, a framework is created to interface any entrance with the Internet, so that the entire system can be controlled from anywhere on the globe. The system runs on a Raspberry Pi. Whenever a person comes in front of the camera near the entrance of the class, the system recognizes the face; on identification of an enrolled face in the captured images, the attendance register is marked as present, otherwise absent. The complete data is stored in an Excel file, which is then forwarded to the faculty.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Dr. Soma Sekhar G, Radha, Basavaraj, Durga Kalyani https://spast.org/techrep/article/view/1190 CONGESTION AWARE MANET ROUTING USING EVOLUTIONARY GAME THEORY AND CROSS LAYER DESIGN 2021-09-24T13:00:35+00:00 Ramesh Thanappan trcsebu@gmail.com Thambidurai Perummal ptdurai58@gmail.com <p><strong>Abstract </strong></p> <p>A wireless network without infrastructure is called a MANET. In a MANET, a node can move in any direction, at any speed and at any time. There is no separate router in this network; instead, all the nodes act as routers.</p> <p>To transfer data through the network in an efficient manner, the network should be congestion free or have low congestion.
If the network is congested, its performance suffers. In the routing process, if a node is aware of its neighbors’ congestion and selects a congestion-free path, the network performs better.</p> <p>In this paper we design two approaches for improving network performance: a cross-layer protocol, and an evolutionary game theory approach to find the least congested path. The cross-layer design involves the transport layer and MAC layer of the MANET to avoid congestion. A linear rank selection method is used to find the least congested node. We implemented these techniques in the GPSR protocol.&nbsp; Based on the simulation, the proposed protocol performs better than the GPSR protocol.</p> 2021-09-24T00:00:00+00:00 Copyright (c) 2021 Ramesh Thanappan, Thambidurai Perummal https://spast.org/techrep/article/view/2769 An Enhanced Item-based Collaborative Filtering Approach for Book Recommender System Design 2021-10-17T18:32:12+00:00 Monika Verma monika04verma@gmail.com Arpana Rawal monika04verma@gmail.com <p>The massive amounts of web-sourced data have made access to precise (tailored) information a challenging task for end users. If this personalized information-filtering technology, also called a Recommender System (RS) [1], is to be automated at the machine level, this necessitates the design of an appropriate machine-assisted recommender system that satisfies both system and end-user requirements. This paper uses an innovative hybrid approach to build a prototype of a machine-assisted recommender that can be used as a tool in the physical libraries of universities, organizations, and institutions. 
The real-time utility of this system shall assist physical library users, as recommendation beneficiaries, who visit the library for their book issue/return transactions and find it cumbersome to search for appropriate books on topics of their interest or in their course-defined academics. The prescribed courseware books (also referred to as items in recommender system design terminology) shall be used as input criteria, in collaboration with demographic attributes, in order to recommend books to library users from different subject domains. Our hybrid approach uses a variant of collaborative filtering (CF) [2], a baseline technique that has achieved widespread popularity in state-of-the-art RS tools used across the web in electronic commerce.</p> <p>The paper takes as a case study the issue/return transactional database of a university-sized library, where the quantity of content and books is immense and cannot be easily navigated by users who predominantly belong to student communities. Readers find it difficult to locate exact or popular books containing the content (or topics) of their interest in the respective domains. Even when they are able to locate the popular books, it is again cumbersome to locate another similar book (item) under the mentioned subject domain if the preferred book is not available on the shelf for an issue transaction. By implementing this model we can narrow down the set of book (item) choices for students belonging to different semesters and/or disciplines.</p> <p>Of the two main sub-categories of the Collaborative Filtering technique, namely User-based CF and Item-based CF [3], the former does not scale well to large end-user counts. Hence, the idea is to use the Item-based CF method to build a high-quality recommendation tool for a non-commercial domain, as undertaken in the current case study [4], [5]. 
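The item-item similarity at the heart of item-based CF can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: it assumes a toy user-item matrix of issue counts and plain cosine similarity, whereas the paper refines similarities with demographic attributes and association rules.

```python
import numpy as np

def item_similarity(ratings):
    """Cosine similarity between item columns of a user-item matrix.

    ratings: 2-D array, rows = users (library members), cols = items (books),
    entries = implicit feedback such as issue counts.
    """
    norms = np.linalg.norm(ratings, axis=0)
    norms[norms == 0] = 1.0              # avoid division by zero for unused items
    unit = ratings / norms               # normalise each item column
    return unit.T @ unit                 # item x item cosine similarities

def recommend(ratings, user, k=2):
    """Score unseen items for `user` by similarity to items they already used."""
    sim = item_similarity(ratings)
    scores = sim @ ratings[user]         # weight similarities by user's history
    scores[ratings[user] > 0] = -np.inf  # exclude already-issued books
    return np.argsort(scores)[::-1][:k]

# Toy example: 4 members x 5 books, entries are issue counts (invented data).
R = np.array([[2, 1, 0, 0, 1],
              [1, 1, 0, 1, 0],
              [0, 0, 3, 1, 0],
              [2, 0, 0, 0, 2]], dtype=float)
print(recommend(R, user=2))
```

The same structure carries over when the entries are weighted by the demographic attributes the paper describes (issue frequency, issue span) instead of raw counts.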
The book-issue, book-return, and shelving patterns exhibited by the library users are treated as demographic attributes in the experiments henceforth.</p> <p>Another salient feature of our book recommendation system is that the demographic attributes of items (library books participating in library transactions) are considered rather than the widely used demographic features of the users themselves, such as user location, language, age, gender, etc. [6].</p> <p>The demographic attributes of RS items considered here, namely the book-tag indicator (text flag or reference flag), issue frequency, and issue span, can be used for precise calculation of item-item similarities by discovering all significant association rules in the formulated item set. This is followed by the ranking step of the RS prototype design, which implements Association Rule Mining to generate association rules on the participating library books (based on the item-item similarity results obtained in the previous step) [7].</p> <p>Apart from the above-mentioned objective, the library book (item) issue/return transaction databases can also help organizational management with the inevitable task of writing off shelved books with nil issue/return counts over longer time spans. This can be anticipated as an extension to build an advanced data analytical and estimation model in later stages for consistent library book usage monitoring and efficient utilization insights.</p> 2021-10-21T00:00:00+00:00 Copyright (c) 2021 Monika Verma, Arpana Rawal https://spast.org/techrep/article/view/667 Road Traffic Monitoring System 2021-09-16T08:40:00+00:00 Angayarkanni Annamalai S angayarkanni.s.a@gmail.com <p>Vehicles are one of the major needs for transportation in day-to-day life. 
Due to the increase in the number of vehicles, routing from a source to a destination in time has become challenging. This calls for significant investigation, analysis, and maintenance. Hence, a model using an urban traffic simulation, leveraging the SUMO tool, is proposed to help reduce traffic in highly busy areas of cities. To this end, an Intelligent Traffic Management System (ITMS) with a Deep-Neuro-Fuzzy model was proposed and implemented. A supervised Deep Neural Network is trained to formulate road weights using the implications of the fuzzy rules. Algorithms are used to select the optimum path from source to destination on the basis of the road segment weights calculated by the framework. Different built-in routing algorithms are also used to demonstrate the workability of this model. We are therefore tasked with developing sustainable transportation and infrastructure systems for current and future traffic demand. Cities that contain heterogeneous networks, like Chennai, require this simulation for better transportation. Our results demonstrate that complete modeling of a very wide area is possible at the expense of minor simplifications and reaches a very good level of approximation.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Angayarkanni Annamalai S https://spast.org/techrep/article/view/178 Application of Blockchain Technology based secured online voting system 2021-09-03T09:10:42+00:00 R Savitha savithar@rvce.edu.in Ashwini K B ashwinikb@rvce.edu.in Prashanth K prashanthk@rvce.edu.in <div> <p>Elections are a very important part of modern democracy, but many people around the world do not trust current, flawed electoral systems. There are many issues in current voting systems, such as vote manipulation, central database hacking, EVM hacking, and more. 
In this paper we address the above-mentioned issues by proposing an e-voting model that offers fundamental benefits over the current voting system. Blockchain technology is used to implement the proposed model, promising security, transparency, and cryptographic protection. There are many blockchain frameworks that offer blockchain as a service, and some are explained in the paper. The paper presents the details of the proposed system and how it is implemented using the Ethereum blockchain.</p> </div> 2021-09-08T00:00:00+00:00 Copyright (c) 2021 R Savitha, Ashwini K B, Prashanth K https://spast.org/techrep/article/view/2366 - PARKINSON’S DISEASE CLASSIFICATION USING FUZZY-BASED OPTIMIZATION APPROACH AND DEEP LEARNING CLASSIFIER. 2021-10-08T10:12:28+00:00 Sabeena B 19pheof002@avinuty.ac.in <p>ABSTRACT<br>Limited care is provided to individuals affected by PD (Parkinson’s disease) due to inadequate and irregular monitoring of symptoms, occasional care, and light involvement of clinicians, which leads to less effective decisions and sub-optimal patient health outcomes. In the early stage of PD, individuals commonly have vocal impairments; hence, vocal-problem-based diagnosis has been the foremost research direction for PD. Irrelevant and/or redundant features are eliminated in the feature selection step, and the chosen features provide the best result under the objective function. In most cases this is an NP-hard (Nondeterministic Polynomial-time hard) problem. Over the last five years, database sizes have grown, and hence feature selection is needed before performing any classification. To solve this problem, the Fuzzy Monarch Butterfly Optimization Algorithm (FMBOA) feature selection algorithm is introduced in this work. This algorithm selects the most important features from the dataset and increases the PD detection rate. Firstly, the KPCA (Kernel-based Principal Component Analysis) dimensionality reduction method is introduced for reducing the dimension of the dataset. 
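The KPCA reduction step can be sketched in a few lines. This is a generic, self-contained illustration (RBF kernel, NumPy eigendecomposition, random stand-in data), not the paper's actual pipeline or vocal dataset.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Minimal RBF kernel PCA: project X onto its top principal
    components in the RBF feature space."""
    # Pairwise squared distances and the RBF kernel matrix
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * d2)
    # Centre the kernel matrix in feature space
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition; keep the leading components
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Projected coordinates of the training points
    return vecs * np.sqrt(np.clip(vals, 0, None))

X = np.random.RandomState(0).randn(40, 8)   # stand-in for the vocal feature set
Z = kernel_pca(X, n_components=2, gamma=0.1)
print(Z.shape)
```

The reduced matrix `Z` would then feed the FMBOA feature selection and FCBi-LSTM stages the abstract describes.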
Secondly, in FMBOA-based feature selection, the weight value is the essential factor used for searching for optimal features in the PD classification. In the proposed FMBOA algorithm, the weight value is computed via the Gaussian fuzzy membership function. A new step is performed in the proposed Fuzzy Monarch Butterfly Optimization Algorithm, where the weight value of the Butterfly Optimization Algorithm is modified during the optimization process to enhance the results. The classification algorithms are applied to the varied feature sets obtained from FMBOA, and each set has a different combination of features. The FCBi-LSTM (Fuzzy Convolution Bi-Directional Long Short-Term Memory) is developed for PD classification. The introduced framework was evaluated using the UCI machine learning repository, and LOPO CV is used for performance validation. The measures considered for performance evaluation are MCC, f-measure, and accuracy.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Sabeena B https://spast.org/techrep/article/view/2401 Device For Facilitating Remote Interactive Lessons 2021-10-11T10:57:32+00:00 Naveen Kumar naveen.sharma@chitkara.edu.in Rajesh Kumar Kaushal rajesh.kaushal@chitkara.edu.in Mamta Janagal mamta.janagal@chitkara.edu.in Simranjeet Singh simranjeet.singh@chitkara.edu.in Akhilendra Khare akhilendra.khare@chitkara.edu.in <p>The COVID-19 pandemic has influenced every single area from one side of the planet to the other. On a similar note, a UNESCO (United Nations Educational, Scientific and Cultural Organization) report on education revealed that more than 157 crore students around the world were affected by this pandemic [1]. Schools and universities were closed in about 191 nations across the world. As a prompt response, educational institutions immediately adopted the online teaching/learning approach. 
This new teaching/learning approach has brought a few challenges as well as opportunities for educators and learners. In the past, blended learning has already proven to be more flexible and cost-effective than traditional classes and has been growing in popularity [2], [3]. In blended learning, some e-modules are integrated with traditional face-to-face courses. Such a change from conventional teaching to web-based/online teaching was a challenge for teachers [4].</p> <p>Very few innovations have been seen in the area of online teaching and learning. In view of the current pandemic situation (COVID-19), this area is of extreme importance yet remains unexplored. Novelty in this area can produce positive outcomes in teaching and learning experiences.&nbsp; This research work proposes a device for facilitating remote interactive communication between an educator and a learner. A portable and flexible device has been developed in order to ensure an uninterrupted and effective teaching and learning process. The objective of the study is to propose a device that enables the teacher to draw freehand drawings or write mathematical equations in a given canvas space.</p> <p>A survey was conducted before finalizing the design of the device. The survey was conducted among faculty members who taught online classes, and a total of 120 faculty members participated. It was found from the survey that faculty members are very familiar with the existing online systems but are unable to draw diagrams or equations. Therefore, the proposed solution enables them to draw or write on the canvas just like in the offline mode of teaching. Moreover, the current work also provides a cost-effective and user-friendly solution to the above-mentioned problem. 
A total of twenty samples of the prototype were provided to educators in various fields in order to get initial feedback on the prototype. Thereafter, a questionnaire was shared with the users to get feedback on the designed model. Several parameters related to effectiveness and usability were listed, and users were asked to quantitatively provide their reviews under each of them. The questionnaire was designed on a five-point Likert scale (strongly disagree=1, disagree=2, neutral=3, agree=4, strongly agree=5) to measure the responses. In terms of effective usage of the device, the mean test score (M= 4.450, SD= .60) shows that this device is effective in every facet, especially for online classes, as shown in Figure 1.</p> 2021-10-11T00:00:00+00:00 Copyright (c) 2021 Naveen Kumar , Rajesh Kumar Kaushal, Mamta Janagal, Simranjeet Singh, Akhilendra Khare https://spast.org/techrep/article/view/2510 DESIGN OF NOVEL KEY GENERATION TECHNIQUE BASED RSA ALGORITHM FOR EFFICIENT DATA ENCRYPTION AND DECRYPTION 2021-10-14T06:05:18+00:00 Anshu Joshi abhishek14482@gmail.com Dr.Vijay Anand ieeemtech@gmail.com <p><strong>Abstract</strong></p> <p>Cryptography, from the Greek for hidden writing, is the practice of disguising information. It involves the transformation of information (plaintext) into another form (ciphertext). The main purpose of cryptography is to solve problems associated with authentication, integrity, and privacy. A protocol is a sequence of actions involving two or more parties, designed to accomplish a goal. Thus, a cryptographic protocol is a protocol that deals with the use of cryptography. 
Such a protocol uses a cryptographic algorithm and intends to halt attempts at theft and intrusion [1-2].</p> <p>Network security becomes more important with the development of networking techniques. With the growth in the use of the World Wide Web, this has become even more important, as users can access tools and edit information. The global society has undergone many changes because of the digital revolution, which has also increased the number of hackers and viruses [3-4].</p> <p>With the increase in content on the web and the rise of viruses and hackers, privacy has become an important issue for many [5-10].</p> <p>In today's world, security is a major problem, especially when it comes to hiding secret information from strangers. Converting a message into a form that cannot be easily cracked is therefore the ultimate option. Due to the new and improved techniques used by hackers, sharing information on the Internet is less secure nowadays. Techniques like steganography and cryptography have evolved to overcome such problems. In such schemes, the encryption and decryption keys are different. This article presents a variant of the RSA algorithm based on a novel key generation method. 
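For context, the baseline textbook RSA key generation that such variants build on can be sketched as follows. This is the standard scheme only, not the paper's novel method, and the primes below are deliberately tiny and insecure.

```python
import math

def textbook_rsa_keygen(p, q, e=65537):
    """Baseline textbook RSA key generation from two primes p and q."""
    n = p * q
    phi = (p - 1) * (q - 1)                  # Euler's totient of n
    assert math.gcd(e, phi) == 1, "e must be coprime to phi(n)"
    d = pow(e, -1, phi)                      # modular inverse (Python 3.8+)
    return (n, e), (n, d)                    # public key, private key

# Demo with small, insecure primes; real keys use primes of 1024+ bits.
public, private = textbook_rsa_keygen(61, 53, e=17)
n, e = public
_, d = private
m = 42
c = pow(m, e, n)         # encrypt with the public key
print(pow(c, d, n))      # decrypt with the private key: prints 42
```

A "novel key generation" variant typically changes how `n`, `e`, or `d` are derived; the encryption/decryption calls themselves stay as above.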
This novel key generation method produces strong keys in less computation time.</p> 2021-10-16T00:00:00+00:00 Copyright (c) 2021 Anshu Joshi, Dr.Vijay Anand https://spast.org/techrep/article/view/2708 Detection of Diabetic Retinopathy using Convolutional Neural Networks 2021-10-17T11:02:08+00:00 Jaichandran R rjaichandran@gmail.com Varshni kannanarchieves@gmail.com Vithiavathi Sivasubramanian kannanarchieves@gmail.com Kanaga Suba Raja S usha.kiruthika@gmail.com Jayaprakash kannanarchieves@gmail.com Varshni kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p><strong>Purpose:</strong> The objective of this paper is to train machine learning algorithms to detect diabetic retinopathy and to evaluate their performance on this task.</p> <p><strong>Methodology:</strong> In the proposed methodology, diabetic retinopathy is detected using computer vision techniques that combine image processing and machine learning. The input retinal fundus images are obtained from a public repository. On the input retinal images, image processing steps such as preprocessing, segmentation, and blood vessel extraction are performed. A machine learning technique, CNN, is applied to predict normality or abnormality and to recommend medical measures. In preprocessing, image acquisition is performed, in which the obtained RGB image is transformed into grayscale, and the image is also enhanced for further processing steps. After preprocessing, optic disc segmentation is performed: the optic disc is edge-detected and segmented for analysis. This step is very important for the feature extraction process. Canny edge detection is applied to outline and extract the optic disc, and blurred edges are improved before being passed to the feature extraction step. In feature extraction, blood vessel features are extracted. 
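The edge-detection stage can be illustrated with a simplified stand-in: a Sobel gradient-magnitude map, which is the gradient step inside a Canny-style detector, run on a synthetic bright disc. The full Canny pipeline and real fundus data are beyond this short sketch.

```python
import numpy as np

def sobel_edges(gray):
    """Gradient-magnitude edge map (the first stage of a Canny-style
    detector), computed with 3x3 Sobel kernels, valid region only."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):               # plain convolution loop, no padding
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)              # gradient magnitude

# Synthetic stand-in for a fundus image: dark background, bright disc.
img = np.zeros((64, 64))
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 1.0
edges = sobel_edges(img)
print(edges.shape)   # strongest responses lie on the disc boundary
```

In practice one would call an optimized routine (e.g. OpenCV's Canny) rather than loop in Python; the sketch only shows what the operation computes.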
Feature extraction is an important step, since the accuracy of the classifier model depends on the features. The objective of the feature extraction algorithm is to extract meaningful features and objects that assist the normal/abnormal recognition process. The feature vectors consist of the measured values. The convolutional neural network takes the feature vector as input and performs the classification phase. The CNN is trained with the colour and feature vector values from the input retinal images. The model is trained until the error is minimized to an optimal level; this stage is termed the maximally trained phase. After this, diabetic retinopathy is predicted efficiently on the input test images with promising results.</p> <p><strong>Findings: </strong>We compared the accuracy of diabetic retinopathy detection from retinal fundus images between SVM and CNN using 1900 test images. Figure 2 shows diabetic retinopathy images for training the machine learning algorithms, Table 1 shows the accuracy of SVM and CNN in predicting DR, and Figure 3 shows that CNN reports better results than SVM.</p> <p><strong>Originality/value: </strong>In the experimental work, we used the Python programming language and the Google Colab tool to develop the program. Retinal images of both normal and diabetic patients were used for analysis: 3600 retinal images of non-diabetic and diabetic patients were used to train the machine learning models. Fig. 3 illustrates the retinal image labelling; pixel-level image features are extracted and matched with the deep learning classifier to identify the normal and diabetic stages. 
For classification, we used a convolutional neural network-based deep learning model.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Jaichandran R, Varshni, Vithiavathi Sivasubramanian, Kanaga Suba Raja S , Jayaprakash, Varshni, Mayakannan Selvaraju https://spast.org/techrep/article/view/1430 Identification of Drunk People using Thermography and Machine Learning 2021-09-29T09:50:56+00:00 Sivakumar Rajagopal rsivakumar@vit.ac.in Deepikaa Balaji deepikaabalaji@gmail.com Kanishka S kanishkaselvi.2001@gmail.com Vidhyalakshmi Venkatesh ayhdiv.01@gmail.com Rahul Soangra soangra@chapman.edu <p>Ever since medical imaging and medical data analysis techniques were discovered and put into practice, they have been among the most rapidly developing applications in the biomedical industry [1]. With various complex Artificial Intelligence (AI), machine learning, and deep learning models taking over, it is now possible to reconstruct or analyze complex conditions by training a model on a dynamic and realistic dataset. Infrared Thermal Imaging (IRT) is one of the widely used medical imaging techniques, applied in medical domains such as the diagnosis of breast cancer, minute tumors, diabetic neuropathy, and other peripheral vascular disorders [3]. Using recent technologies, it is now possible to reproduce thermographic images at a higher quality, which can be used to drive further innovation in real-time applications. Infrared thermal (IRT) imaging allows non-invasive and non-ionizing monitoring of the skin surface temperature distribution, providing underlying physiological information on peripheral blood flow, the autonomic nervous system, vasoconstriction/vasodilatation, inflammation, transpiration, and other processes that contribute to skin temperature. It is a non-contact method that can be used to identify the superficial temperature of any object or surface [5]. 
This paper presents a novel, safe, non-contact method to classify drunk and sober people, unlike the conventional techniques. Using a breathalyzer during the COVID-19 situation is not a sensible approach and can carry a high risk of infection. Other existing technologies rely on anti-drunk-driving systems that use the electrical impulses passing from the heart to the brain [4]. Instead, using thermal image processing, we can detect intoxication by analyzing facial features (Figure 1). Thermal cameras can identify various features that indicate alcohol consumption; they are efficient enough to distinguish a temperature difference as small as 0.12 degrees Celsius, even on a minute surface [6]. Studies have shown that the thermal response of the forehead and the area around the eyes increases markedly when one is intoxicated with alcohol, because alcohol causes motor disturbances and also increases eye temperature [1]. To begin with, we gather an image dataset that includes the faces of both sober people and people who consumed alcohol in a wide range of quantities. The images are then appropriately augmented, focusing mainly on the forehead and the eyes, and face detection is performed. When we train feature extraction using our proposed machine learning model, we can identify a significant temperature difference between a sober and a drunk person. For future work, the same prototype can be embedded into a real-time thermal gun, similar to the infrared thermometers widely used since the pandemic began. 
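The temperature-difference idea can be sketched minimally as follows. This toy uses a fixed threshold on synthetic data purely for illustration; the paper instead trains a machine learning model on real thermal images, and the box coordinates and 0.5-degree margin below are invented for the example.

```python
import numpy as np

def classify_frame(thermal, eye_box, forehead_box, delta=0.5):
    """Toy sober/drunk decision from one thermal frame (illustrative only).

    thermal: 2-D array of temperatures in deg C; boxes are (r0, r1, c0, c1).
    Flags 'drunk' when the periorbital region is warmer than the forehead
    by more than `delta` degrees, echoing the reported eye-warming effect.
    """
    eye = thermal[eye_box[0]:eye_box[1], eye_box[2]:eye_box[3]].mean()
    forehead = thermal[forehead_box[0]:forehead_box[1],
                       forehead_box[2]:forehead_box[3]].mean()
    return "drunk" if eye - forehead > delta else "sober"

frame = np.full((48, 48), 34.0)          # synthetic baseline skin temperature
frame[20:28, 10:38] += 1.2               # warmed eye band after alcohol
print(classify_frame(frame, (20, 28, 10, 38), (2, 10, 10, 38)))
```

A learned classifier replaces the fixed `delta` with a decision boundary fitted to labelled sober/drunk thermal images.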
The device can be of great use to the traffic police to catch drunk drivers, and can simultaneously retrieve the person’s identity to verify their criminal history, since we integrate both face recognition and image classification techniques here (Figure 2).</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Sivakumar Rajagopal, Deepikaa Balaji, Kanishka S, Vidhyalakshmi Venkatesh, Rahul Soangra https://spast.org/techrep/article/view/2898 Performance-Aware Management of Cloud Resources: A Taxonomy and Future Directions 2021-10-21T06:14:37+00:00 GOPAL gopalshyambabu@gmail.com <p>Cloud service providers encounter a challenge in managing remote resources due to the dynamic nature of the cloud environment. The complexity of the process is increased by the requirement of maintaining service quality in line with customer expectations, as well as by the extremely dynamic nature of cloud-hosted applications. As a result of developments in big data learning methodologies, traditional static capacity planning solutions have given way to intricate performance-aware resource management systems.</p> <p>&nbsp;</p> <p>Existing studies show that the resource adjustment decision-making process is intimately linked to the system's behaviour, including resource utilisation and application components. The most essential requirements and restrictions in cloud resource management, as well as workload and anomaly analysis approaches in the context of cloud performance management, are discussed in this paper. 
A taxonomy of related works is provided, covering the major methodologies in current studies, ranging from data analysis to resource adjustment techniques. Finally, a list of new solutions is compiled, taking into account the identified gaps in the overall direction of the tasks under consideration.</p> 2021-10-21T00:00:00+00:00 Copyright (c) 2021 GOPAL https://spast.org/techrep/article/view/2933 Deep Learning based Gender Responsive Smart Device to Combat Domestic Violence 2021-10-26T14:20:52+00:00 Deepa Jose deepa.ece@kcgcollege.com <p>Around the clock, women face harassment and violence, much of it involving harmful weapons. As this is increasing at an enormous rate, police officers are repeatedly challenged to bring the situation under control. In some rural areas, there are no laws regarding weapon prohibition, which puts the security of the citizens residing there in question. Hence, this paper aims to provide a unique solution that can prevent mishaps as well as predict crime well in advance. The basic idea is to detect harmful weapons, such as knives and pistols, as well as any suspicious activities in the surroundings. Deep learning and transfer learning have proven to produce significant results in the field of image processing. The agenda of this paper is to develop a fully automated computer-based system to detect harmful weapons, mainly pistols and knives. This is done [3] by using YOLO (You Only Look Once), a deep learning algorithm, for successful real-time detection of weapons. Although there are other algorithms for object detection, namely CNN (Convolutional Neural Networks), whose variants include R-CNN (Region-based convolutional neural networks) and Faster R-CNN, and SSD (Single Shot Multi-box Detector), YOLO is highly preferred because of its speed and accuracy, and its ability to pass an image through the neural network only once. 
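Part of what makes single-pass detectors practical is their cheap post-processing. Below is a sketch of greedy non-max suppression, the step YOLO-style detectors apply to merge overlapping raw box predictions; the boxes and scores are made up for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-max suppression: keep the highest-scoring box of each
    overlapping cluster, suppress the rest."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

# Two overlapping 'pistol' candidates plus one distinct 'knife' box.
boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))   # prints [0, 2]
```

The surviving indices are the detections that would trigger the alert message described next.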
The dataset used for object prediction consists of two classes: knives and pistols. Once any weapon or suspicious activity is detected, an alert message, together with the location coordinates and a link to live-stream video of the crime scene, is sent to the concerned pre-defined contacts. Hence, this helps in crime reconnaissance and thus mitigation.</p> 2021-10-26T00:00:00+00:00 Copyright (c) 2021 Deepa Jose https://spast.org/techrep/article/view/195 Paddy Pathogens Classification: A Comparative Analysis of Deep Learning Optimizers 2021-09-07T04:37:35+00:00 malathi velu malathi21cse@gmail.com <p>Pathogens are a key factor leading to yield losses of up to 16% globally. Pathogens are the biotic agents, such as viruses, bacteria, and fungi, that cause diseases in crops. Currently, crop disease is classified at an early stage by state-of-the-art deep learning techniques. The development of a computational approach for diagnosing crop diseases is an emerging research area in precision agriculture. This research proposes a classification of paddy leaf diseases, namely bacterial leaf blight, blast, hispa, leaf spot, and leaf folder, based on deep learning techniques. The dataset is collected directly from the agricultural field using an IoT device. The ResNet-50 architecture is utilized to develop the neural network framework, and the features are extracted using the convolutional blocks. Seven different optimizers, namely ADAM, SGD, RMSProp, Adagrad, Adamax, Nadam, and Adadelta, were analyzed; interpretation was then carried out based on processing time, classification accuracy, and error rate to recognize which model performs best. 
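The kind of optimizer comparison described here can be illustrated on a toy problem. The sketch below contrasts plain SGD with Adagrad on a badly scaled quadratic in NumPy; it is not the paper's ResNet-50 experiment, and the learning rates are arbitrary.

```python
import numpy as np

def run(update, steps=200):
    """Minimise f(w) = 0.5 * w . A w from a fixed start with a given rule."""
    A = np.diag([10.0, 1.0])          # badly scaled quadratic bowl
    w = np.array([1.0, 1.0])
    state = np.zeros_like(w)
    for _ in range(steps):
        g = A @ w                     # gradient of f at w
        w, state = update(w, g, state)
    return 0.5 * w @ A @ w            # final loss

def sgd(w, g, s, lr=0.05):
    """Vanilla SGD: one global learning rate for every parameter."""
    return w - lr * g, s

def adagrad(w, g, s, lr=0.5, eps=1e-8):
    """Adagrad: per-parameter step sizes shrink with accumulated gradients."""
    s = s + g * g
    return w - lr * g / (np.sqrt(s) + eps), s

print("SGD loss:", run(sgd))
print("Adagrad loss:", run(adagrad))
```

The same harness extends to the other five optimizers by adding their update rules, which is essentially what a framework-level comparison automates.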
Our findings show that the Adagrad optimizer produces a better accuracy of 0.96 with a lower error of 0.19; when the learning rate of Adadelta is updated to 0.0001, it achieves an accuracy of 0.97 and a loss of 0.14.</p> 2021-09-08T00:00:00+00:00 Copyright (c) 2021 malathi velu https://spast.org/techrep/article/view/1744 CLASSIFICATION OF DIFFERENT PLANT LEAF DISEASES USING MULTIPLE CONVOLUTIONAL NEURAL NETWORK AND IMAGE PREPROCESSING 2021-09-30T09:43:51+00:00 Mayakannan Selvaraju kannanarchieves@gmail.com A.T.Madhavi madhavimadan@gmail.com A.P.Suprajaa apsuprajaa134@gmail.com S.Shivali shivalisaravanan1999@gmail.com B.Sofia Farheen sofiafarheen45@gmail.com <p>Since it guarantees food stability, agriculture is a vital part of the global economy. Plants, on the other hand, have recently been found to be heavily afflicted by a variety of diseases. Detecting and tracking plant diseases used to be done manually with the help of experts in the field. However, this process was time-consuming, unreliable, and laborious. Experience and knowledge of plants are essential for making the right decision and selecting the most appropriate method for the treatment of diseases. This research paper discusses a variety of bacterial and fungal diseases [4] as well as how to recognize and classify them using image pre-processing techniques [1]. With this method, we identify the type of plant, decide whether the plant has a disease, and classify the disease type. Batch normalization, used in this approach, is a technique for avoiding network overfitting while also increasing the model's robustness. The ReLU activation function and Adam optimizer are used to improve the convergence and accuracy of the model.</p> <p><strong>Purpose:</strong> The objective of this paper is to identify the type of plant, decide whether the plant has a disease, and classify the disease type.&nbsp;</p> <p>Methodology: Diseases and disorders affect plants in a variety of ways. 
Environmental factors like temperature, humidity, nutrient surplus or deficiency, and light, as well as the most common bacterial, viral, and fungal diseases [4], are all potential causes. The system identifies leaf disease for different plants like cherry, tomato, potato, peach, and strawberry using the PlantVillage dataset as well as real-time datasets, as these plant diseases can show different characteristics on the leaves, such as changes in form, scale, and colour. Recent detectors, namely convolutional neural networks [2], together with image pre-processing, are used for the identification and classification of plant diseases. Models are trained using images from an open database containing various plants. Previously, only a single plant disease could be detected at a time, but now, using different layers of the convolutional neural network, multiple plant diseases can be detected at the same time. Image pre-processing is a technique for improving the quality of an image by eliminating noise or distortions. In the image pre-processing stage, the images collected from the dataset are resized to a default size, labelled, and collated. MaxPooling2D is used to max-pool the values from a matrix of a given window size. The Flatten layer is used to flatten the dimensions of a picture after it has been convolved. The steps involved are: getting the datasets, pre-processing, labelling the images, the augmentation phase, building the model, and validation. The required dataset is collected and used as input to the network. Data augmentation is carried out by performing various operations on the training images, such as rotation, shifting in width and height, shearing, and zooming. The model is trained using the augmented data as input. The Conv2D layer applies convolutional filters to an image, producing several feature maps. 
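The MaxPooling2D and Flatten operations mentioned above can be sketched directly in NumPy; a toy 4x4 feature map stands in for a real convolved image.

```python
import numpy as np

def max_pool2d(x, size=2):
    """2x2 max pooling as MaxPooling2D does: keep the largest value in
    each non-overlapping window."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]              # trim to a multiple of size
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def flatten(x):
    """Flatten layer: turn a feature map into a 1-D vector for the
    dense classification head."""
    return x.ravel()

fmap = np.arange(16, dtype=float).reshape(4, 4)      # toy feature map
pooled = max_pool2d(fmap)
vec = flatten(pooled)
print(pooled)        # [[ 5.  7.] [13. 15.]]
print(vec.shape)     # (4,)
```

In the actual model these operations run per channel inside the Keras layers; the sketch shows what a single channel computes.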
After convolution, the Flatten layer flattens the image dimensions, and finally the trained model is obtained. A total of 3900 sample images have been used.</p> <p>Findings: The PlantVillage dataset's sample image arrangement and meta-information distribution play an important role in the proposed model's efficient planning and operation. The performance and accuracy of the deep neural network to be generated will be directly affected by morphological features, colour, shape, and texture-based features.</p> <p>Originality/value: The focus is on how images from the given (prepared) dataset in the field and past data collections are used to predict patterns of plant diseases using a CNN model. This yields several insights into plant leaf disease prediction. The system also considers past production data, which will enable the farmer to understand the demand for and the price of different plants in the market.</p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Mayakannan Selvaraju, A.T.Madhavi, A.P.Suprajaa, S.Shivali, B.Sofia Farheen https://spast.org/techrep/article/view/2455 Bypassing confines of feature extraction in brain tumor retrieval via MR images by CBIR 2021-10-12T16:09:40+00:00 Amar Saraswat amar.amity30@gmail.com <p>To check the health of a patient, digital images are generated every single day and are used by the radiologist for extracting details and anomalies. The complicated part is to figure out the disease in those images.
And the most difficult task is to automate this process and extract the abnormalities or diseases in images with the same anatomic locations in similar images [1]. Through manual diagnosis of the images, radiologists and doctors can determine the exact nature of the abnormalities, but it is considerably more difficult for content-based image retrieval (CBIR) to extract those finer details from MR images [2]. Some brain tumours can be healed with medical treatment alone, by prescribing specific medicines, while others require surgery to eliminate the growth of extra cells. A tumour is formed by the enormous proliferation of aberrant cells in the brain or in the central spine of the patient, which can disrupt the brain's functioning. Some of these abnormal tissues grow very rapidly in the brain and are cancerous in nature. The worst part is that they lack clear borders, making invasion of the surrounding brain tissue quite likely. Others are less aggressive and grow slowly; they are not marked as cancerous and do not invade other tissues of the brain. The former are categorized as malignant, whereas the latter are referred to as benign tumours.<br><br>Figure 1(a): Benign tumour with clear borders (b) malignant tumour in brain in T1-weighted CE-MRI</p> <p>In total, there are one hundred twenty different forms of brain and central nervous system tumours. These tumours of both the brain and the spinal cord occur at various spots and start with the sprouting of distinct types of cells.
As a result, the diagnosis and treatment differ from one patient to the next. In recent years, content-based image retrieval (CBIR) approaches have grown in popularity and are now frequently employed in the automatic diagnosis of disease from MR images, mammograms [3][4], and other sources. CBIR matches a query image against an image dataset based on the visual content of the images. This CBIR technique is an advancement of text-based image retrieval (TBIR), in which we normally utilize a structured text dataset that is indexed and tagged with keywords [4]. In CBIR, similar features are extracted from each query image, and database features are generated with the use of an index. Finally, a retrieval set is created from the dataset images with the highest resemblance [5][8]. Human evaluators assess higher-level aspects, while MR images collect lower-level visual information [6]. The major limitation is that conventional feature extraction frameworks focus on either low-level traits or high-level elements, and try to reduce this semantic gap through the manual intervention of the radiologist [7]. Bridging this gap can be done with the help of the deep learning feature extraction algorithm and the Canny edge detection technique we propose, and accuracy close to the manual results of a human evaluator can be achieved to a significant extent.</p> 2021-10-13T00:00:00+00:00 Copyright (c) 2021 Amar Saraswat https://spast.org/techrep/article/view/979 Precise Management for Smart Farming in Hydroponics based on IoT using Supervised Machine Learning Approach 2021-09-17T13:11:03+00:00 Jayant Mehare jayant.mehare@raisoni.net Mahip Bartere mahip.bartere@ghru.edu.in Shraddha Utane shraddha.utane@gmail.com Shankar Amalraj shankar.amalraj@ghru.edu.in <p>This paper presents a framework for automated smart farming in hydroponics using the Internet of Things.
The difficulties addressed by this framework are the growing demand for food worldwide and the market need for new, sustainable farming techniques using the Internet of Things. The proposed architecture is organized into five components: the hydroponics farm site, the device layer, the communication layer, the fog layer and, finally, the cloud layer. Data analytics is deployed at the fog layer for effective computation over the cloud layer, and is implemented using supervised machine learning algorithms for two different scenarios, precision and intelligence, using regression and classification algorithms respectively. The framework's improved performance allows it to effectively accomplish the aim of the whole implemented system.</p> 2021-09-18T00:00:00+00:00 Copyright (c) 2021 Jayant Mehare, Mahip Bartere, Shraddha Utane, Shankar Amalraj https://spast.org/techrep/article/view/2493 Genre of a Song identification using Deep learning 2021-10-13T17:47:40+00:00 Dr. Nagaratna Hegde nagaratnaph@staff.vce.ac.in D. Kruthi dasari.kruthi@gmail.com V. Sireesha v.sireesha@staff.vce.ac.in <p>Music genre identification has applications in song recommendation systems, which are usually part of music-playing apps. The GTZAN Genre Collection dataset is well known in Music Information Retrieval (MIR). The dataset comprises 10 genres, namely Hip Hop, Disco, Blues, Classical, Country, Jazz, Metal, Pop, Reggae and Rock, and each genre comprises 100 audio files (.wav) of 30 seconds each. A Deep Neural Network is used to identify the genre of a song. The first step is to extract features and components from the audio files, which is done using the Python library librosa; Mel-frequency cepstral coefficients (MFCC) are used. MFCC values mimic human hearing and are used for detecting the music genre with a Deep Neural Network</p> 2021-10-14T00:00:00+00:00 Copyright (c) 2021 Dr. Nagaratna Hegde, D. Kruthi, Dr.
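The MFCC extraction step named in the genre-identification abstract above is normally done with librosa's `librosa.feature.mfcc`. As a rough illustration of what that call computes, here is a simplified NumPy sketch of the classic pipeline (framing and windowing, FFT power spectrum, mel filterbank, log, DCT); all parameters are illustrative and the filterbank details differ slightly from librosa's:

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    hz_to_mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel_to_hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    mels = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for j in range(l, c):
            fb[i - 1, j] = (j - l) / max(c - l, 1)   # rising slope
        for j in range(c, r):
            fb[i - 1, j] = (r - j) / max(r - c, 1)   # falling slope
    return fb

def mfcc(signal, sr=22050, n_fft=2048, hop=512, n_mels=26, n_mfcc=13):
    # frame the signal, apply a Hann window, take the FFT power spectrum
    frames = [signal[s:s + n_fft] * np.hanning(n_fft)
              for s in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2 / n_fft
    log_mel = np.log(power @ mel_filterbank(n_mels, n_fft, sr).T + 1e-10)
    # DCT-II over the mel bands yields the cepstral coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * n + 1) / (2 * n_mels)))
    return log_mel @ dct.T   # shape: (n_frames, n_mfcc)

sr = 22050
signal = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # 1 s, 440 Hz test tone
coeffs = mfcc(signal, sr=sr)                            # shape: (40, 13)
```

In practice one would feed per-frame (or time-averaged) coefficient vectors like `coeffs` into the deep neural network as the genre features.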
https://spast.org/techrep/article/view/388 A Survey on Various Approaches used in Named Entity Recognition for Indian Languages 2021-09-14T09:00:45+00:00 Rekha Vijayvergia udit.mamodiya@poornima.org <p><span style="font-weight: 400;">“Named Entity Recognition (NER)” is an application of Artificial Intelligence and “Natural Language Processing (NLP)”. In NER, various classes of named entities, such as the name of a person, an organization name, a location name, a designation, etc., are identified; this is required in many NLP activities like question-answering systems, machine translation, artificial intelligence, summarization of documents, academics, robotics, bioinformatics, etc. Most NER work has been done for foreign languages; for Indian constitutional languages, owing to challenges such as scarcity of resources, ambiguity in the languages, and their morphologically rich behaviour, NER work exists for only a few languages. In our paper, we present the challenges in NER for Indian languages and compare approaches by measuring standard evaluation metrics such as precision, recall and F-measure. As a future extension, we would develop an efficient system that is more accurate and caters for many more named-entity tags than existing Indian-language NER tools.</span></p> <table> <tbody> <tr> <td> <p><strong>S.
No.</strong></p> </td> <td> <p><strong>Language</strong></p> </td> <td> <p><strong>Method Used</strong></p> </td> <td> <p><strong>Performance Measure %</strong></p> </td> </tr> <tr> <td> <p><span style="font-weight: 400;">1</span></p> </td> <td> <p><span style="font-weight: 400;">Hindi,Marathi, Gujarati, Bengali &amp;Tamil</span></p> </td> <td> <p><span style="font-weight: 400;">Rule Based</span></p> </td> <td> <p><span style="font-weight: 400;">F-Score:45.00</span></p> </td> </tr> <tr> <td> <p><span style="font-weight: 400;">2</span></p> </td> <td> <p><span style="font-weight: 400;">Mizo</span></p> </td> <td> <p><span style="font-weight: 400;">SVM</span></p> </td> <td> <p><span style="font-weight: 400;">Recall: 93.91</span></p> <p><span style="font-weight: 400;">Precision:95.32F-Score: 94.59</span></p> </td> </tr> <tr> <td> <p><span style="font-weight: 400;">3</span></p> </td> <td> <p><span style="font-weight: 400;">Assamese</span></p> </td> <td> <p><span style="font-weight: 400;">CRF &amp; Rule Based</span></p> </td> <td> <p><span style="font-weight: 400;">F-Score: 93.22</span></p> </td> </tr> <tr> <td> <p><span style="font-weight: 400;">4</span></p> </td> <td> <p><span style="font-weight: 400;">Telugu</span></p> </td> <td> <p><span style="font-weight: 400;">Lexicon Lookup Dictionary And HMM</span></p> </td> <td> <p><span style="font-weight: 400;">Recall:84.35 Precision: 88.40</span></p> <p><span style="font-weight: 400;">F-score:&nbsp; 86.30</span></p> </td> </tr> <tr> <td> <p><span style="font-weight: 400;">5</span></p> </td> <td> <p><span style="font-weight: 400;">Manipuri</span></p> </td> <td> <p><span style="font-weight: 400;">CRFs &amp;Rules</span></p> </td> <td> <p><span style="font-weight: 400;">Recall:92.26 &nbsp; &nbsp; &nbsp; Precision:&nbsp; 94.27&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; F-score:93.30</span></p> </td> </tr> <tr> <td> <p><span style="font-weight: 400;">6</span></p> </td> <td> <p><span style="font-weight: 
400;">Assamese</span></p> </td> <td> <p><span style="font-weight: 400;">CRFs &amp;Rules</span></p> </td> <td>&nbsp;</td> </tr> <tr> <td> <p><span style="font-weight: 400;">7</span></p> </td> <td> <p><span style="font-weight: 400;">Hindi</span></p> </td> <td> <p><span style="font-weight: 400;">Bi-LSTM-CNN-CRF</span></p> </td> <td> <p><span style="font-weight: 400;">F-Score: 70.00</span></p> </td> </tr> <tr> <td> <p><span style="font-weight: 400;">8</span></p> </td> <td> <p><span style="font-weight: 400;">Hindi-English &amp; Tamil-English</span></p> </td> <td> <p><span style="font-weight: 400;">Gazetteer List</span></p> </td> <td> <p><span style="font-weight: 400;">Average values of&nbsp; Recall:&nbsp; 11 Precision: 58&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; F-Score: &nbsp; &nbsp; &nbsp; 19</span></p> </td> </tr> <tr> <td> <p><span style="font-weight: 400;">9</span></p> </td> <td> <p><span style="font-weight: 400;">Malayalam</span></p> </td> <td> <p><span style="font-weight: 400;">CRF</span></p> </td> <td> <p><span style="font-weight: 400;">F-Score: 92.30</span></p> </td> </tr> <tr> <td> <p><span style="font-weight: 400;">10</span></p> </td> <td> <p><span style="font-weight: 400;">Hindi-English</span></p> </td> <td> <p><span style="font-weight: 400;">CRFs&amp; LSTM</span></p> </td> <td> <p><span style="font-weight: 400;">F-Score: 95.00</span></p> </td> </tr> <tr> <td> <p><span style="font-weight: 400;">11</span></p> </td> <td> <p><span style="font-weight: 400;">Telugu</span></p> </td> <td> <p><span style="font-weight: 400;">Naïve Bayes Classifier</span></p> </td> <td> <p><span style="font-weight: 400;">F-Score: 88.87</span></p> </td> </tr> <tr> <td> <p><span style="font-weight: 400;">12</span></p> </td> <td> <p><span style="font-weight: 400;">Gujarati</span></p> </td> <td> <p><span style="font-weight: 400;">Rule Based</span></p> </td> <td> <p><span style="font-weight: 400;">F-Score: &nbsp; &nbsp; 70.00</span></p> </td> </tr> <tr> <td> <p><span style="font-weight: 
400;">13</span></p> </td> <td> <p><span style="font-weight: 400;">Kannada</span></p> </td> <td> <p><span style="font-weight: 400;">Bi-LSTM</span></p> </td> <td> <p><span style="font-weight: 400;">F-Score: 78.10</span></p> </td> </tr> <tr> <td> <p><span style="font-weight: 400;">14</span></p> </td> <td> <p><span style="font-weight: 400;">Sindhi</span></p> </td> <td> <p><span style="font-weight: 400;">Rule Based</span></p> </td> <td> <p><span style="font-weight: 400;">F-Score: 98.71</span></p> </td> </tr> <tr> <td> <p><span style="font-weight: 400;">15</span></p> </td> <td> <p><span style="font-weight: 400;">Hindi</span></p> </td> <td> <p><span style="font-weight: 400;">HMM</span></p> </td> <td> <p><span style="font-weight: 400;">F-Score: 97.14</span></p> </td> </tr> </tbody> </table> <p><strong>Comparative Study to Identify Named Entities In Various Indian Languages</strong><span style="font-weight: 400;">: The preceding table compares the Precision, Recall and F-Score values achieved by different authors for identifying named entities in various Indian languages.</span></p> 2021-09-17T00:00:00+00:00 Copyright (c) 2021 Udit Mamodiya https://spast.org/techrep/article/view/1326 A Survey on Hadoop Security and Comparative Analysis on Authentication frameworks in Hadoop Clusters 2021-09-28T08:34:54+00:00 Hena M henashabeebvit@gmail.com Jeyanthi N njeyanthi@vit.ac.in <p><em>Big data demands huge processing and storage capabilities for its innovative analysis to make strategic business decisions. Apache Hadoop is such a platform, which offers parallel processing and distributed storage of voluminous data. However, Hadoop comes with many security vulnerabilities in its architecture and implementations. This paper presents a survey of various security frameworks which aim to secure Hadoop clusters and the files stored there (Table 1).
The study analyses present security mechanisms and associated risks. Most of the methods have one pitfall or another. Most Hadoop platforms rely on the Kerberos authentication protocol for user authentication. It is known that Kerberos itself has vulnerabilities such as password-guessing attacks, a single point of failure, insider attacks and time-synchronisation problems. It is understood from the study that there is a need to develop efficient security protocols for Hadoop clusters to provide authentication, authorization and auditing support. A compromise can happen at any node in the cluster and can adversely affect the entire system. Also, data in transit as well as at rest needs to be secured. Researchers and IT organizations across the globe are collaborating nowadays to improve the Hadoop infrastructure. Hadoop comprises different modules from different sources with different levels of security, and integrating these modules increases the security risks. For this reason, many prominent IT companies like Intel, Hortonworks, Cloudera and IBM have developed security frameworks to integrate with Hadoop components. Thus, the compatibility issue is addressed without compromising security. </em>At present, the Hadoop components use a mix of these solutions. <em>For example, Apache Ranger integrates with Kerberos for authentication and Knox for authorization and auditing. It is seen that Hadoop lacks a single solution that can overcome all security concerns, so that the vulnerabilities caused by integrating technologies can be avoided; none of the solutions has turned out to be a complete solution that overcomes all the issues. A new authentication framework that addresses most of the identified issues and is computationally more feasible than other schemes is introduced in this paper. The proposed framework (Fig 1) is based on the Secure Remote Password (SRP) protocol, threshold cryptography and blockchain technology.
The system focuses on eradicating password-guessing attacks and the single point of failure in Kerberos-enabled Hadoop clusters. The user, instead of sharing the password or any details about it with the Key Distribution Center (KDC) of the Kerberos server, shares a salted hash of the password. The Authentication Server in the Key Distribution Center (KDC) computes a verifier using the received information and stores it in the blockchain network along with the user's identity information. The Authentication Server (AS) also sends a pre-shared key to the user to confirm the registration. When the user logs in, the Key Distribution Center (KDC) gets the mined user information from the blockchain. If the user is valid, a common secret is computed at the Key Distribution Center (KDC) and at the user side. This is hashed to compute the session key for the user to communicate with the Authentication Server (AS). This session key is further verified using a method similar to the E-OTP method described in </em>[1]<em>. The user then gets the Ticket Granting Server Ticket (TGT), and multiple Ticket Granting Servers are deployed to address the problem of a single point of failure. A comparative analysis of the proposed scheme with two recent authentication mechanisms, viz., the E-OTP based Authentication Framework (EAF) </em>[1]<em> and the </em>Distributed Authentication Framework [2]<em>, is also presented in this paper. The analysis results strengthen the claim that the proposed scheme beats the other two in terms of computational efficiency and feasibility while offering the same security features. It incurs almost the same cost as EAF but offers more security than EAF. EAF fails when the local storage or the TGS at the KDC is compromised. TAF has opted for blockchain storage to deal with storage-compromise issues and deploys multiple TGSs to handle availability or denial-of-service issues. The paper is concluded with directions for future work.
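As a toy illustration of the registration flow described above, where the user shares only a salted hash of the password and the Authentication Server stores a derived verifier, the following sketch uses plain SHA-256. It deliberately omits the zero-knowledge exchange and group arithmetic of the real SRP protocol, and all names are illustrative:

```python
import hashlib
import os

def h(*parts: bytes) -> bytes:
    """SHA-256 over the concatenation of the given byte strings."""
    return hashlib.sha256(b"".join(parts)).digest()

def register(username: str, password: str):
    """Registration: the raw password never leaves the client."""
    salt = os.urandom(16)
    x = h(salt, username.encode(), password.encode())  # salted hash sent to the AS
    verifier = h(b"verifier", x)                       # AS stores only this (e.g. on the blockchain)
    return salt, verifier

def login(username: str, password: str, salt: bytes, verifier: bytes, nonce: bytes):
    """Login: recompute the salted hash; a matching verifier yields a session key."""
    x = h(salt, username.encode(), password.encode())
    if h(b"verifier", x) != verifier:
        return None                                    # wrong password: no session key
    return h(b"session", verifier, nonce)              # hashed common secret -> session key

salt, verifier = register("alice", "s3cret")
nonce = os.urandom(16)                                 # per-login freshness
session_key = login("alice", "s3cret", salt, verifier, nonce)
```

The point of the design is that a stolen verifier is not itself a password, and the session key changes with every nonce; real SRP additionally ensures the client proves knowledge of the password without transmitting even the salted hash.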
</em></p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 Hena M, Dr. Jeyanthi https://spast.org/techrep/article/view/1448 An approach to detect DDoS attack in IoT using Machine Learning Techniques 2021-09-29T12:37:44+00:00 DEVPRIYA PANDA devpriya.panda@giet.edu Dr. Brojo Kishore Mishra bkmishra@giet.edu Dr. Kavita Sharma kavitasharma_06@yahoo.co.in <p>Internet of Things (IoT) devices have enabled ubiquitous computing, which has made everyday life simpler in a variety of ways. The IoT enables us not only to collect data on the go, but also to infer information from it. Day by day, the number of things connected to the IoT is increasing manifold, so maintaining the huge number of devices connected to the IoT network across various sectors is becoming a challenge. One of the most common attacks on IoT networks is DDoS, which can be performed in various ways, such as using botnets. Machine learning is a technology that has been supporting computing in significant ways, and it can help us design efficient models for identifying attacks. In this work we try to detect DDoS attacks performed on IoT networks using machine learning techniques. We have used recent standard datasets and applied Decision Tree, Random Forest and KNN techniques to detect DDoS attacks. We have compared them using the confusion matrix, based on true positives, false positives, true negatives and false negatives.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 DEVPRIYA PANDA, Dr. Brojo Kishore Mishra, Dr. Kavita Sharma https://spast.org/techrep/article/view/1263 Forecast and Analysis of Stock Market Volatility using Deep Learning Algorithms 2021-09-27T17:39:48+00:00 Pratham Nayak prathamnayak@outlook.com Shravya Suresh shravya995@gmail.com <p>Stock markets serve as a platform where individual and institutional investors can come together to buy and sell shares in a public venue. With the advent of digital technology, these markets or exchanges exist as electronic marketplaces.
These markets are generally very volatile, making stock market prediction a highly challenging problem.</p> <p>Predictions of stock value offer abundant arbitrage profits, which serve as a huge motivation for extensive research in this area. Identifying and predicting a stock value beforehand, even by a fraction of a second, can result in very high profits. Similarly, a near-precise prediction can be extremely profitable in the amortized case. This attractiveness of finding a solution has motivated researchers, in both industry and academia, to devise techniques despite the complications due to volatility, seasonality, time dependency, the economy and other such factors. Lately, AI/ML techniques, such as Fuzzy Logic and Support Vector Machines (SVMs), have been used to arrive at different solutions to this problem.</p> <p>Deep learning has recently received growing interest and attention and has been successfully applied to many fields. In this paper, we explore and develop an ensemble predictive system to forecast market prices using deep learning algorithms. Here we consider the fractional change in stock value and the intra-day high and low values of the stock to train and employ a neural network for obtaining the trading strategy that leads to relatively superior market returns. The focus is on the use of Regression and LSTM-based deep learning strategies to predict stock values. Factors considered are open, close, low, high and volume.</p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 Pratham Nayak, Shravya Suresh https://spast.org/techrep/article/view/1484 Detection and Transmission of Arrhythmia Symptoms Using Portable Single-Lead ECG Devices 2021-09-29T12:33:20+00:00 Bhuvaneswari Arunachalan abh.mca@psgtech.ac.in <p>Arrhythmia is an abnormality in the heartbeat rhythm that causes severe and fatal complications in personal health and well-being.
This problem occurs due to irregular activity of the heart, which typically maintains a steady heartbeat: a double “ba-bum” beat with even spacing between each. One of these beats is the heart contracting to provide oxygen to blood that has already circulated, and the other involves the heart pushing oxygenated blood around the body. Atrial fibrillation (AF) is a type of arrhythmia that occurs when there are irregular beatings in the atrial chambers, and it nearly always involves tachycardia. Instead of producing a single, strong contraction, the chamber fibrillates, or quivers, often producing a rapid heartbeat. This is the most common type of serious arrhythmia, affecting millions of people worldwide, and is associated with increased all-cause mortality, mainly in adults&nbsp;over 65 years of age. Up to 20% of patients with ischemic stroke have underlying AF, and detection allows the initiation of anticoagulation, which is associated with a significant reduction in stroke recurrence [1]. However, AF is often asymptomatic in patients with stroke. Other patients have troubling symptoms such as palpitations or dizziness, but traditional monitoring has been unable to detect AF promptly [3]. Early diagnosis of AF may have several benefits, including individualized lifestyle intervention and anticoagulation treatment, and may be associated with a reduction in complications and healthcare costs [2]. However, AF detection is difficult because it may be episodic. Therefore, in case of emergency, periodic sampling and monitoring of heart rate and rhythm can help in better diagnosis. Timely identification of AF can help provide life-saving treatment.</p> <p>A 12-lead electrocardiogram (ECG) is the most commonly used diagnostic device for identifying abnormalities in heart rhythms, especially arrhythmia. The characteristic sign of AF is the absence of a P wave in the ECG signals.
The P wave is formed when the atria (the two upper chambers of the heart) contract to pump blood into the ventricles. In the presence of AF, there will be many “fibrillation” beats instead of one P wave.&nbsp;The normal duration of a QRS complex, which is formed when the ventricles (the two lower chambers of the heart) contract to pump out blood, is between 0.08 and 0.10 seconds [4][5]. In the case of AF, the QRS complexes are “irregularly irregular”, with varying R-R intervals. This results in chaotic T waves, reflecting an irregular resting period of the ventricles. Figure 1 shows sample output ECG signals in normal and AF conditions.</p> <p>Recent advances in technology have allowed for the development of single-lead portable ECG monitoring devices. A person can measure their heart rate using their pulse at different locations on the body: the wrists, the insides of the elbows, the side of the neck, and the top of the foot. Portable ECG devices use finger contact to create a single-lead ECG trace and have a high degree of sensitivity for identifying AF [6]. The built-in memory of these devices allows for single or multiple time-point screening. These devices permit multiple 30–60 s recordings to be captured and downloaded to a computer. Most interface with a web-based cloud system where ECG rhythms can be transmitted to remote specialists, allowing rapid analysis and diagnosis [7][9]. Interpretation by a healthcare specialist or by automated machine learning algorithms has achieved high sensitivity and specificity for AF detection.
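The "irregularly irregular" R-R intervals noted above suggest a simple screening statistic that automated detectors often build on: the variability of successive R-R intervals. A small illustrative sketch, where the threshold is purely illustrative and not a clinical value:

```python
import numpy as np

def rr_irregularity(r_peak_times_s):
    """Coefficient of variation of R-R intervals; AF tends to show high variability."""
    rr = np.diff(r_peak_times_s)          # intervals between successive R peaks
    return rr.std() / rr.mean()

# Regular sinus rhythm: a beat every ~0.8 s
regular = np.arange(0, 30, 0.8)
# "Irregularly irregular": random spacing between 0.4 and 1.2 s
rng = np.random.default_rng(0)
irregular = np.cumsum(rng.uniform(0.4, 1.2, 40))

THRESHOLD = 0.1                            # illustrative cut-off only
cv_regular = rr_irregularity(regular)      # near 0
cv_irregular = rr_irregularity(irregular)  # well above the threshold
```

Real classifiers, including the CNN this abstract proposes, learn far richer features than a single statistic, but R-R variability remains a common input.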
However, with continuous data collection and in the presence of signal noise, distinguishing AF is a real challenge.</p> <p><strong>Figure 1 Sample ECG output signals: a) normal condition b) Atrial fibrillation condition</strong></p> <p>To mitigate the cognitive challenge of computing multiple aspects of the ECG signals, modern machine learning algorithms and decision support tools have been developed. These tools can assist healthcare professionals in times of need to identify AF signals and provide much-needed treatment. This paper proposes a convolutional neural network model [8] that classifies the ECG recordings from a single-channel handheld ECG device and detects four distinct categories of rhythm: normal sinus rhythm (N), atrial fibrillation (A), other rhythm (O), or too noisy to be classified (~). Samples of the AF rhythm (A) signals are collected and transmitted to the clinicians for instant diagnosis. The aim is to present a system for detecting AF signals accurately and for predicting AF measurements through real-time data transmission, enabling life-saving treatment on time.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Bhuvaneswari Arunachalan https://spast.org/techrep/article/view/702 Design and Reliability Evaluation of Switching Element for Improving the QoS of Multistage Interconnection Network 2021-09-15T20:41:17+00:00 Shilpa Gupta shilpa1_goyal@rediffmail.com <p><strong>Introduction:</strong> Supercomputer systems are commonplace in most applications where big data calculations are involved [1-2].
These applications include Smart Grids, power transmission and distribution, weather prediction, ocean sciences, nuclear weapons [3-4], genome sequencing for various viruses such as SARS-CoV-2, and analyzing complex plasmonic vesicles of nano-particles. As technology continually drives to new scales, computing power requirements are growing many-fold [1, 5-7]. Parallel/distributed processing is required to meet the challenges of the computationally intensive, high-speed applications of supercomputers. A huge amount of data has to be communicated at very high speed between these distributed processing systems and their respective memory modules. Multistage Interconnection Networks (MINs) provide a reliable and cost-effective communication path between parallel processors and memory modules for big data transfer [1-9].</p> <p><strong>Background:</strong> Switching Elements (SEs) are the basic building blocks of a MIN, connecting all input ports to all output ports in a fully cross-bar interconnection pattern. These SEs are arranged in a uniform manner in different stages to structure MINs [1-2]. Hence, the cost and reliability of MINs greatly depend upon the cost-efficient, reliable functioning of these SEs [10-13]. Much effort has been made in the literature to provide reliable MIN designs, but most of the work has been done on improving fault tolerance by providing redundancy in the network using larger SEs [4-9]. This increases the reliability of the whole network due to the improved fault tolerance, but the reliability of each single path is reduced. It is well known from the literature that the reliability and cost of an SE depend upon the number of interconnections used in it [10-13]. Hence, the larger the SE, the higher the cost of the network and the lower the reliability.
To the best of our knowledge, no work has yet been done on improving the reliability of larger SEs by reducing the interconnections used in them, and this gap remains unfilled.</p> <p><strong>Method Introduced:</strong> To fill the gap found in the literature and to improve the reliability of larger SEs, new 3 × 3 and 4 × 4 SE structures are introduced in this paper. These newly proposed SEs can be easily employed in various existing MINs to provide enhanced reliability at much reduced cost. The interconnections used to connect the input and output ports of the proposed SEs have been reduced by using 2 × 1 multiplexers (MUX) inside the SE configuration.</p> <p><strong>Results: </strong>The number of interconnections in the proposed SEs has been reduced by at least 1 connection per SE for size 3×3 and by 4 connections per SE for size 4×4. The cost of the proposed SEs is improved by 11% per 3×3 SE and 25% per 4×4 SE. The improvement in reliability achieved at the SE level is 1.01% per 3×3 SE and 4.10% per 4×4 SE. To evaluate MIN reliability with the proposed SEs, an existing MIN, the Gamma structure [3], has been evaluated for ST-reliability. The calculations are done on networks of size varying from 8×8 to 1024×1024. The reliability of the Gamma MIN has been improved from 0.22% for network size 8×8 to 8.02% for network size 1024×1024.</p> <p><strong>Applicability:</strong> The proposed SEs can replace existing SEs of the same size, without changing the network topology, to provide higher MIN reliability than the existing ones.</p> <p><strong>Conclusion and Future Scope:</strong> The proposed SE designs provide reliability and cost improvements at the switch level as well as the network level and adapt well to any type of connection pattern associated with a MIN.
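The link between interconnection count and reliability invoked above can be illustrated with the standard series-reliability model, in which a path fails if any of its connections fails; the per-connection reliability and connection counts below are hypothetical round numbers, not the paper's figures:

```python
def series_reliability(r_connection: float, n_connections: int) -> float:
    """All connections must work, so per-connection reliabilities multiply (series model)."""
    return r_connection ** n_connections

r = 0.98                                    # hypothetical per-connection reliability
conventional = series_reliability(r, 16)    # fully cross-bar 4x4 SE: 16 crosspoints
proposed = series_reliability(r, 12)        # 4 fewer interconnections per SE
gain_pct = (proposed / conventional - 1) * 100  # relative reliability gain in percent
```

Under these toy numbers, removing 4 connections raises per-SE reliability by roughly 8%; the paper's reported 4.10% gain for the 4×4 SE comes from its own, more detailed reliability model.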
As future work, it is suggested that these SEs be simulated with a hardware tool to analyze area consumption, power dissipation, delay, bandwidth availability, etc.</p> 2021-09-18T00:00:00+00:00 Copyright (c) 2021 Shilpa Gupta https://spast.org/techrep/article/view/254 Satellite internet communication: race with contemporary optical fiber network with the help of SPT Algorithm 2021-09-10T19:41:56+00:00 Shiv Preet shiv.p@inurture.co.in <p>The internet is the backbone of today's digital world. A web of fiber optics and wireless towers has been spread all over the globe for internet access in every nook and corner. The fiber network has plunged into the depths of oceans as well as onto the peaks of mountains. Though laying fiber optics across the globe is a very costly affair, major industries are still opting for fiber networks, or wireless towers connected through fiber optics, instead of satellite communication. Satellites can be placed in three different orbits for connectivity: LEO (low earth orbit), MEO (medium earth orbit) and GEO (geostationary earth orbit). Apart from these, there is also HEO (high earth orbit). Despite so much advancement in space technologies, satellite broadband is still in its infancy. There are some pioneers in this field, like Starlink, Oneweb and Hughesnet, but the viability of satellite broadband still seems like a distant dream. This paper reviews why fiber optics or wireless networks using mobile towers have been the choice of technology giants instead of satellite communications. Space Exploration Technologies (SpaceX) is test-piloting its upcoming venture Starlink, which promises to keep the cost of satellite broadband low while providing good download and upload speeds. In trials, it has been able to provide only asymmetric download and upload speeds to its end users. Optical fiber, on the other hand, provides symmetric upload and download speeds to its consumers.
Similarly, Airtel has bought a 100% stake in the Oneweb company for its India operations. Oneweb is also launching satellites to provide affordable broadband to consumers, and has already launched 36 satellites. Its plan is to launch 110 satellites into LEO by the year 2021, while a total of 648 satellites will be launched by the year 2022. Oneweb also plans to start satellite broadband operations in India in the same year (2022). There is a plethora of information available on the internet about the development of satellite broadband networks, but there is also a school of thought which advocates optical fiber over satellite broadband for the normal household internet consumer. The prime reason for advocating optical fiber networks over satellite broadband is latency. A geostationary satellite incurs a latency of 0.2 seconds (200 milliseconds) on average for the uplink and downlink of data. A low earth orbit satellite takes from 1 to 4 milliseconds, but only on paper. In reality, low earth orbit satellites give asymmetric speeds, because a moving satellite might not be able to provide uniform speed to the user at all times. Starlink satellite broadband has fluctuating latency, hovering between 18 milliseconds and 88 milliseconds. These kinds of latencies are not conducive to real-time applications. Oneweb has not yet properly started its commercial implementation, and it remains to be seen how its latency will fare compared to optical fiber broadband. This paper discusses the various difficulties occurring in the implementation of satellite networks for commercial and home use, and the implementation of the Starlink project for satellite broadband. 
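The latency figures quoted above follow directly from propagation distance. A quick back-of-the-envelope sketch (assuming ideal, straight-overhead signal paths and textbook altitudes of roughly 35,786 km for GEO and 550 km for a Starlink-style LEO shell) reproduces them:

```python
# Back-of-the-envelope check of the quoted satellite latency figures.
C_KM_S = 299_792.458  # speed of light in km/s (vacuum; a best case)

def one_hop_ms(altitude_km: float) -> float:
    """Propagation delay for user -> satellite -> ground station,
    assuming both endpoints sit directly below the satellite."""
    return 2 * altitude_km / C_KM_S * 1000

geo_ms = one_hop_ms(35_786)  # geostationary altitude
leo_ms = one_hop_ms(550)     # a typical Starlink shell altitude

print(f"GEO: {geo_ms:.0f} ms, LEO: {leo_ms:.1f} ms")  # GEO: 239 ms, LEO: 3.7 ms
```

The ~239 ms GEO figure matches the 0.2-second average cited above, and the LEO result falls in the quoted 1–4 millisecond band; real links add processing, queuing and routing delays on top of this floor.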
This paper also sheds light on how the SPT (Squeeze, Pack and Transfer) compression algorithm can help improve internet implementation using satellite communication.</p> 2021-09-11T00:00:00+00:00 Copyright (c) 2021 Shiv Preet https://spast.org/techrep/article/view/2437 Technological change in Agriculture: From urban to rural path for Agridrones 2021-10-12T12:19:35+00:00 Pradeep Kumar Tiwari pradeeptiwari.mca@gmail.com Sai Santosh Malladi pradeeptiwari.mc@gmail.com <p>Humanity has always been interested in predicting what lies ahead, and when financial benefits are involved the quest becomes quite intense and interesting. One such area is the prediction and analysis of stock market price movements. In this paper, we present a review of various prediction approaches, ranging from fundamental analysis to modern machine learning and hybrid models. As this is a very dynamic topic on which research activities are conducted around the globe, it is particularly challenging to classify a technique as belonging completely to a single paradigm; there is some intersection between the techniques of the various paradigms. We consider the broad spectrum of techniques under Traditional and Millennial groups to present the review. <strong>Fig.1.</strong> Initial experiments and results: A. experimental setup; B. results.</p> 2021-10-12T00:00:00+00:00 Copyright (c) 2021 Pradeep Kumar Tiwari, Sai Santosh Malladi https://spast.org/techrep/article/view/1875 A Comprehensive Review on Video Watermarking Security Threats, Challenges and its Applications 2021-10-09T13:49:41+00:00 IRENE JOSEPH irene.joseph@res.christuniversity.in JYOTHI MANDALA jyothi.mandala@christuniversity.in <p><span style="font-weight: 400;">Data is a crucial resource for every business, and it must be protected both during storage and transmission. Sensitive Data Exposure, or Cryptographic Failure, is now the second-highest priority vulnerability in the Open Web Application Security Project (OWASP). 
Increased technology and digitized use endanger everything associated with technology and the digital world. One efficient way of securing and transferring data is digital watermarking, where data is hidden inside a medium such as text, audio, or video. Digital watermarking is used for ownership and copyright, document and image security, and the protection of audio and video content. Video watermarking is visible or invisible data embedded in a video as a logo, text, or copyright disclaimer. A watermarking technique aims to identify the work and discourage its unauthorized use. Secure data transfer is necessary as the mishandling of data by hackers increases. The requirements of video watermarking, such as robustness, imperceptibility, and video quality, conflict with one another, making it challenging to satisfy all the requirements of a video watermarking approach. A generic approach to watermarking digital data involves three main processes [1]:</span></p> <ol> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Embedding the watermark data:</span></li> </ol> <p><span style="font-weight: 400;">Using an efficient algorithm, data is embedded into the original video in a way that is imperceptible to an outside user.</span></p> <ol> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Securely transferring the watermarked video:</span></li> </ol> <p><span style="font-weight: 400;">The watermarked video should be resistant to any form of geometric or non-geometric attack, so that it reaches the receiver side without any tampering.</span></p> <ol> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Extracting the watermarked data:</span></li> </ol> <p><span style="font-weight: 400;">The secret data hidden inside the video needs to be extracted and retrieved in its full and original form at the receiver side.</span></p> <p><span 
style="font-weight: 400;">In this paper, the goal is to analyze the characteristics of video watermarking algorithms and the different metrics used to evaluate them. It also deals with the extent to which the different requirements can be fulfilled, taking into consideration the conflicts between them and the practical challenges of video watermarking in terms of geometric and non-geometric attacks [2-3]. Recent advances in data security indicate that employing video watermarking technology to transmit private data will be an effective method of transmitting sensitive data, making it difficult for hackers to understand the hidden content even if the encrypted video is decrypted.</span></p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 IRENE JOSEPH, JYOTHI MANDALA https://spast.org/techrep/article/view/2587 MULTIFUNCTIONAL AGRICULTURAL ROBOT OPERATED THROUGH MOBILE PHONE 2021-10-15T02:53:31+00:00 S.Karthigaiveni drsjananiece@gmail.com S.Janani kannanarchieves@gmail.com S.G.Hymlin Rose kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p>Purpose: To design an AGRIBOT (Agricultural Robot) that aids farmers by performing multiple functions for agricultural purposes, such as sowing seeds, turning up the land, spraying fine droplets of liquid on crops, mowing, and identifying hindrances.</p> <p>Methodology: The AGRIBOT is an agricultural robot designed for farming. An Android phone connected to the mobile network is used to operate the robot.&nbsp; An embedded program loaded into the microcontroller controls the various operations performed by the robot. Based on the requirement, the wheels move in all directions. Through the movement of the wheels, the farmer cultivates the land with equal distance between the seeds, followed by the process of levelling the land. A relay connects the water pump, which waters the field automatically. 
During harvesting, the crops are cut. In this way, the AGRIBOT performs multiple functions to increase output.</p> <p>Findings: The agricultural robot, designed mainly for agricultural purposes, works autonomously and performs tasks such as sowing, grass cutting and pesticide spraying. This robot reduces the complicated work performed by human beings and increases the efficiency of farming. The AGRIBOT is driven by solar energy, which is renewable. By using this advanced technology, farmers can boost their harvest while decreasing time spent and workers' wages too.</p> <p>Originality/value: In this study, the benefits of the AGRIBOT for farmers are listed below.</p> <ul> <li>Prefabricate the seed drilling so that it can be handled by a single user.</li> <li>Spray fertilizer on the ploughed land.</li> <li>Flatten the ground level.</li> <li>Facilitate the system for cultivating various seeds such as maize, wheat, etc.</li> <li>Maintain equal spacing between two seeds during the sowing process.</li> <li>Reduce the total number of workers needed to plant the seeds.</li> <li>Amplify the cultivation and yield.</li> </ul> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 S.Karthigaiveni, S.Janani, S.G.Hymlin Rose, Mayakannan Selvaraju https://spast.org/techrep/article/view/1345 Impact of ICT in Sustainable & Integral Scientific Scenarios 2021-09-30T19:09:37+00:00 Zeba zebah181@gmail.com <p>Every country progresses gradually and efficiently on the strategic platforms designed by its planners and technology leads. In the current scenario, as every citizen embraces change along with futuristic technologies, those responsible for resource allocation have been forced to think twice about the challenges faced in standardised allocation. 
The manifestation of technology-led economic development has created many faces of a futuristic but sustainable technology network that has to be part of the upcoming era and of various sustainable mission developments.</p> <p>The sustainable technology scenario now plays a crucial role in framing the scientific scenarios of a country. Sustainable and green monitoring of interconnected information and communication technology systems, designed in a green manner using low-energy systems, is the current demand of the hour. Firm application of a green and efficient energy management system lowers energy demands for the system, its supportive applications and its feature-fulfilment technology.</p> <p>The imminent part of green technology lies behind supportive technology policies that are designed with a view to futuristic demands, in proportion to swift data availability and to core material sciences and their sustainable discoveries. As the current system of technology adoption becomes habitual, there is increasing demand for long-term viability from systems and technologies that are truly effortless to maintain. Effortless, here, is measured in terms of more effectual, vibrant and more independent systems that understand systematic and behavioural human intelligence. Towards such intelligent and efficient systems, artificial intelligence and robotics, with the use of deep learning technologies, are paving the way for more viable systemic technologies. 
Synchronisation of various ICT technologies with an essential regulatory framework is required for global integration, in order to sustain the expected ICT policies.</p> <p>In the current transformation of the world, where ICT has emerged as the dominant driver of sustainability, with the potential to revolutionize many technology domains, it has become necessary to bind the initial strands of technology to the strands of sustainable anchors. This can further help to reconcile economic growth with environmental protection.</p> <p>In attaining these outputs, a holistic approach is required so that a framework for systematic and green design, for better technology outcomes, can be selected in a more innovative manner. This poses a huge challenge for ICT equipment providers, solution disseminators, the ICT research community, the various professionals of the ICT market, policy influencers, and the organisations that work on everyday developmental activities.</p> <p>In incorporating green initiatives within the massively developing ICT sector, there is a dire need for cross-sectoral engagement of various technologies, so that integrative technologies can bring about a more sustainable techno-environment.</p> <p>Sustainable technology has now become a viable part of the sustainable development goals, with a view to building resilient ICT infrastructure, promoting inclusive and sustainable industrialisation, and fostering innovation in the most advantageous manner.</p> <p><img src="https://spast.org/public/site/images/zeba11/mceclip0.png"></p> 2021-10-09T00:00:00+00:00 Copyright (c) 2021 Zeba https://spast.org/techrep/article/view/3455 OPTIMIZATION OF MACHINE LEARNING AND DEEP LEARNING ALGORITHMS FOR DIAGNOSIS OF CANCER 2021-11-18T06:16:45+00:00 Hari Krishna harikrishna.dodde@bvrit.ac.in M. 
Anand manandinbox@gmail.com D. Saravanan saranmds@gmail.com K. Pushpalatha pushpalatha2987@gmail.com <p>Machine learning and artificial intelligence have recently become prominent technologies. Given their popularity and strength in pattern recognition and classification, many corporations and institutions have begun investing in healthcare research to improve illness prediction accuracy. Using these techniques, however, has several drawbacks; one of the primary issues is the lack of large data sets of medical images. This article provides an introduction to deep learning in medical image processing, from theoretical foundations to real-world applications. It examines the general appeal of deep learning (DL), a collection of computer science advances, then covers the basics of neural networks and explains the use of deep learning and CNNs, showing why deep learning is rapidly advancing in various application fields, including medical image processing. The goal of this research was to apply innovative methodologies to cancer datasets to explore the feasibility of combining machine learning and deep learning algorithms for cancer detection. This study used text and image databases to classify cancer; the datasets are the BUPA liver disorder database and brain MRI images. This article provides optimization methods that improve on the suggested approaches' accuracy. Using two alternative training methods, Levenberg-Marquardt (lm) and resilient backpropagation (rp), two classification algorithms were evaluated with different groups of neurons to classify patients as benign or malignant. Cascade correlation trained with rp outperformed feed-forward backpropagation trained with lm. The second deep neural network model presented a technique (based on a CNN) for automated brain tumour identification using MRI data. The Water Cycle Algorithm is used to optimise the CNN, and the established approach is highly accurate. 
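As an illustration of the resilient backpropagation (rp) idea named above, adapting a per-weight step size from the sign of successive gradients rather than their magnitude, here is a minimal sketch on a synthetic stand-in for BUPA-style tabular data. This is not the authors' network: the data, model, and hyperparameters are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data (stand-in for the BUPA liver dataset).
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)

def loss_and_grad(w):
    """Logistic-regression loss and gradient on the toy data."""
    p = 1 / (1 + np.exp(-(X @ w)))
    eps = 1e-9
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

def train_gd(steps=100, lr=0.5):
    """Plain gradient descent: step proportional to gradient magnitude."""
    w = np.zeros(5)
    for _ in range(steps):
        _, g = loss_and_grad(w)
        w -= lr * g
    return w

def train_rprop(steps=100, step0=0.1, up=1.2, down=0.5):
    """Resilient backpropagation: each weight keeps its own step size,
    grown when the gradient sign repeats, shrunk when it flips;
    only the gradient's sign is used for the update."""
    w = np.zeros(5)
    step = np.full(5, step0)
    prev_g = np.zeros(5)
    for _ in range(steps):
        _, g = loss_and_grad(w)
        same = g * prev_g
        step = np.where(same > 0, step * up,
                        np.where(same < 0, step * down, step))
        w -= np.sign(g) * step
        prev_g = g
    return w

def accuracy(w):
    return np.mean(((X @ w) > 0) == (y > 0.5))

print(accuracy(train_gd()), accuracy(train_rprop()))
```

Both trainers reach high accuracy on this easy toy problem; the point is only to show how rp decouples the update size from the gradient magnitude.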
The suggested framework examines innovative texture classification algorithms using the Discrete Wavelet Transform (DWT) and the Gray Level Co-occurrence Matrix (GLCM). The texture image is divided into three layers and a coefficient is computed. To improve accuracy, the matrices are merged with characteristics from a set of uniform symmetrical GLCMs computed with 90° directions. The GLCM captures the spatial relationship between pixels to classify texture, and is used to calculate contrast, correlation, energy, and homogeneity. The retrieved characteristics are fed into the CNN to segment the brain tumour MRI. The suggested strategy is more accurate, according to the data.</p> 2021-11-18T00:00:00+00:00 Copyright (c) 2021 Hari Krishna, M. Anand, D. Saravanan, K. Pushpalatha https://spast.org/techrep/article/view/1281 Design and Implementation of Cloud Messaging Using IoT Trigger Devices over MQTT Protocol 2021-09-27T09:59:49+00:00 kalyan boddula kalyan1994.boddula@gmail.com Nirisha Anagandula nirisha.anagandula@gmail.com Rasagnya Aenugu nishithareddyaenugu.ajr@gmail.com <p>One of the key components of contemporary technology is communication. It is crucial in today's world, especially with the Internet and its improvements. The internet has facilitated globalization, and the flow of information is instant and accessible to everyone. Moreover, the internet of things (IoT) has become a major technological advancement that not only connects people socially but also gives devices access to information through data transfer. Network data transfer protocols can be separated into two groups: short-range networks, including Bluetooth, Zigbee, Wi-Fi, Z-Wave and NFC; and low power wide area networks (LoWPAN), which include cellular networks. The latter also comprise third-party communication networks, signal towers, and satellites with expensive GSM modules. 
Traditional data transfer takes place across vast communication distances and requires excessive data transmission and power usage. GSM devices are used to transfer data, including Short Message Service (SMS) messages; however, the hardware product criteria show that GSM devices and third-party subscriptions are more expensive and not suitable for instant bulk messaging.</p> <p>This can be changed by the IoT through the MQTT protocol; it is a rarity for IoT systems to have a messaging protocol that supports a variety of message exchange patterns and devices. To improve and replace the existing technology, the transmission of data through the IoT messaging concept has been initiated. A design that combines a microcontroller and a Wi-Fi module into an IoT device was created to transfer data using MQTT messaging services. The trigger for this process can be input from any sensor interface, or can be an ordinary action that transmits a text message. When the trigger is received, the device works as a sending server-client controller and then sends the data to the cloud server (broker client). Next, the cloud service sends the alert message to the receiver client (end receiver) on the mobile phone, and this process does not require any GSM modules or third-party networks. Privacy can be maintained, and information on any trigger values can be transferred easily and instantly through the MQTT cloud messaging service. 
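The broker-mediated flow described above (a trigger device publishes, the broker matches topics, and the subscribed receiver gets the alert) hinges on MQTT's topic-filter matching. Here is a minimal, self-contained sketch of that matching and routing logic; the client names and topics are hypothetical, and a real deployment would use an MQTT library and broker rather than this toy dictionary:

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """MQTT-style topic filter matching: '+' matches exactly one
    level, '#' (allowed only as the final level) matches any remainder."""
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":
            return True
        if i >= len(t_parts) or (part != "+" and part != t_parts[i]):
            return False
    return len(f_parts) == len(t_parts)

# A toy broker table: each client holds a list of topic filters.
subscriptions = {
    "phone-app": ["home/+/alert"],
    "logger": ["home/#"],
}

def route(topic, payload):
    """Return the clients that should receive a message on `topic`."""
    return sorted(client for client, filters in subscriptions.items()
                  if any(topic_matches(f, topic) for f in filters))

print(route("home/door/alert", "OPEN"))  # ['logger', 'phone-app']
```

A real trigger device would publish `home/door/alert` to the broker over TCP, and the broker would perform exactly this filter comparison before forwarding the payload to each subscriber.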
The process relies on an internet service to transmit and receive data.</p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 kalyan boddula, Nirisha Anagandula, Rasagnya Aenugu https://spast.org/techrep/article/view/2783 Fog Computing and Communication Resource Management Framework (FC2RMF) for Sustainable, Adaptive, Automated Datapath Allocations in Containerized FogMicroDataCenters 2021-10-17T13:58:09+00:00 Padma Priya R padmapriya.r@vit.ac.in Rekha D rekha.d@vit.ac.in <p>Nowadays, proximity-based computing is becoming the de-facto standard for gathering insights from the data generated by IoT devices. In this data-driven world [1], processing in close proximity is valuable for achieving lower response times and lower network latencies. Fog computing [2] has recently redefined a plausible way to achieve proximity-based execution for Data Intensive (DI) and Compute Intensive (CI) applications. However, Fog network infrastructures regularly combat three important challenges: 1) identifying the best Fog server (based on load availability); 2) identifying the less congested links (in terms of effective available bandwidth) so as to reach these Fog servers in a limited time; and 3) choosing the server and the communication path with the lowest electricity consumption. Sustaining efficiency and energy thus remains a persistent conundrum in Fog MicroDataCenters (Fog MDCs) [3], miniature datacenters available on premises. In this paper we propose a Fog orchestration framework based entirely on open-source technologies, namely the Fog Computing and Communication Resource Management Framework (FC<sup>2</sup>RMF). In this proof-of-concept work we have developed FC<sup>2</sup>RMF with a) observability, b) automation and c) monitoring-based orchestration features, operated through software-defined functions so as to achieve reduced response times and lower energy consumption. 
Our monitoring facilities are mainly built to identify balanced resource loads in both the communication and computation infrastructures. Our framework FC2RMF is primarily built upon Docker containers, Software Defined Networking (SDN) (next-generation networking technology) and a few other technologies such as sFlow-RT, with smart REST-API-based, software-driven automated services. In this work we have proposed an adaptive Fog resource management and service placement algorithm known as <strong>TRA</strong>ffic routi<strong>N</strong>g with <strong>S</strong>erver lo<strong>A</strong>d and link <strong>L</strong>oad <strong>C</strong>onsidera<strong>TION</strong> (TRANSACTION). The TRANSACTION algorithm runs in the Fog controller and tries to efficiently route the continuous data flows received from IoT devices through Fog gateways, so as to reach appropriate Fog nodes in FAT-tree and simple tree topologies in the Fog MDC. The algorithm also reacts to increases in network path loads and migrates data requests to the assigned servers so as to guarantee early workload completion times. We have proposed the methodology for optimal application service execution in the Fog domain, considering the challenges posed by the dynamic characteristics of the Fog environment. In this study we have integrated into our framework modules that utilise the sFlow-RT engine to access the telemetry stream of resource usage on Fog container nodes. We have also incorporated Ryu Software Defined Network (SDN) controller based monitoring integrations for gathering traffic-flow insights in the configured and established network topology communication paths. 
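The core idea of TRANSACTION, routing a flow along the least-loaded links to the least-loaded server, can be illustrated with a simplified sketch. This is our own toy reconstruction, not the authors' algorithm: link costs stand in for measured utilisation, the server-load weight `alpha` is invented, and the topology is hypothetical.

```python
import heapq

def cheapest_path_cost(graph, src, dst):
    """Dijkstra over link costs (e.g. current link utilisation)."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")

def place_request(graph, gateway, servers, server_load, alpha=1.0):
    """Pick the fog server minimising combined path cost and server load."""
    return min(servers,
               key=lambda s: cheapest_path_cost(graph, gateway, s)
                             + alpha * server_load[s])

# Toy Fog MDC: a gateway, two switches, two servers.
graph = {"gw": {"a": 0.1, "b": 0.8}, "a": {"s1": 0.2}, "b": {"s2": 0.1}}
load = {"s1": 0.5, "s2": 0.1}
print(place_request(graph, "gw", ["s1", "s2"], load))  # s1
```

Here `s1` wins because its combined link-plus-server cost (0.8) beats `s2`'s (1.0), even though `s2` itself is less loaded; a reactive controller would rerun this placement as the monitored loads change.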
We have evaluated our proposed algorithm with real-time traces and typical datacenter topologies with gateways and core, aggregate and edge switches supporting the OpenFlow communication protocol.</p> <p>A few works in the literature [4-5] have discussed energy consumption in Fog networks and response time reductions using SDN and Fog MDCs. However, these works either are evaluated only in simulators or do not focus on dynamic, adaptive data-path rescheduling opportunities in resource allocation. Emulated PoC frameworks have also been developed in the literature [6-7], but these works either do not embrace SDN and container technology or do not deal with load-balancing-based scheduling efforts to minimize completion times.</p> <p>Fig.1a depicts the developed components of the FC2RMF framework. The flow of operations in the TRANSACTION algorithm can be viewed in fig.1b. The interaction among the components can be viewed in the sequence diagram in fig.2.</p> <p><img src="https://spast.org/public/site/images/padmapriya_r/mceclip1.png"></p> <p><strong>&nbsp;Fig.1.</strong> (a) Fog Computing and Communication Resource Management Framework component diagram. (b) Flowchart for the TRANSACTION algorithm</p> <p><img src="https://spast.org/public/site/images/padmapriya_r/mceclip0.png"></p> <p>Fig.2. Sequence diagram representing the sequence of operations between the implemented components</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Padma Priya R, Rekha D https://spast.org/techrep/article/view/1317 The Model Checking D2D and Centralised IOT authentication Protocols. 
2021-09-28T07:43:57+00:00 Pradeep R Pradeep pradeepr@sit.ac.in <p class="western" style="margin-bottom: 0.28cm; line-height: 108%;" align="justify"><span style="color: #222222;"><span style="font-family: Arial, serif;">It is very difficult to develop a perfect security protocol for communication over an IoT network, and developing a reliable authentication protocol requires a detailed understanding of cryptography. Validation is not a good choice for ensuring the reliability of IoT security protocols, because of its several disadvantages and limitations. To establish the high reliability of Cryptographic Security Protocols (CSPs) for IoT networks, the functional correctness of the security protocols must be proved mathematically. Using formal verification techniques [4], we can prove the functional correctness of IoT security protocols by providing mathematical proofs. In this work, the CoAP [1] (Constrained Application Protocol) and CHAP [2] device-to-device (D2D) authentication protocols, and the centralised IoT network authentication protocol SSH [3] (Secure Shell), used in smart city applications, are formally verified using the well-known model checking technique; we have used the Scyther [5] model checker for the verification of the security properties of the respective protocols. 
The abstract protocol models of the IoT authentication protocols were specified in the security protocol description language, and the security requirements of the authentication protocols were specified as claim events.</span></span></p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 Pradeep R Pradeep https://spast.org/techrep/article/view/2190 Blockchain-based intelligent medical IoT healthcare system 2021-10-01T16:14:00+00:00 Sachikanta Dash sachikanta_dash@rediffmail.com Rabinarayan Panda n.rabipanda2011@gmail.com Sasmita Padhy pinky.sasmita@gmail.com <p>There has been unavoidable interest in addressing the difficulties of medical services and in providing faster and more secure patient care. Using new developments in the healthcare field could provide additional options for managing patients' health records while also improving health quality. Researchers are seeking permanent and simple solutions for remotely monitoring patients' records using a patient monitoring system. One of these ways is the use of the Internet of Things (IoT), which allows remote patient monitoring by healthcare providers. However, as the number of IoT devices grows, privacy and security concerns have developed; one such privacy concern is the disclosure of patient information. According to several studies, blockchain technology provides a trustworthy network that ensures the privacy and security of patient data sent through IoT devices. Accordingly, this work examines IoT advancement in the healthcare sector as a current and advantageous research trend. 
This research aims to present a new framework for the retrieval and transfer of medical information using blockchain via Django, combining health records with patient monitoring systems that share data with various peers via smart contracts.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Sachikanta Dash, Rabinarayan Panda, Sasmita Padhy https://spast.org/techrep/article/view/1538 Monitoring Students Attention On Smart Phone for E-Learning 2021-10-02T06:36:48+00:00 Abirami A abirami.a@eec.srmrmp.edu.in Vishnupriya M vishnumanikandan.708@gmail.com Rubika A rubika2112@gmail.com Rithanya rithanyasrinivasan2002@gmail.com <p>Due to the COVID-19 pandemic, schools are closed and students take up their classes via online mode. As most parents are busy with their home and workplace activities, they may not monitor their kids attending the online classes. In face-to-face environments, teachers often depend on detecting and responding to apparent student behaviours as indications of concentration. In an online context, teachers may only be able to see a student's head and shoulders, limiting the amount of information available; they must rely on other sources of information in these situations. Several education systems transferred activities online to ensure that instruction could continue even while schools were closed. When compared to the alternative of not going to school, online learning has proven to be a valuable tool for continuing to build skills during school closures. However, there are still worries that e-learning may have been a sub-optimal substitute for face-to-face instruction, particularly given the lack of universal access to infrastructure (hardware and software) and insufficient teacher and student preparation for the specific requirements of online teaching and learning. 
Students can handle some of the potential problems created by internet education by establishing good learning habits, such as remaining focused during online classes and retaining appropriate motivation. Schools and teachers are also critical in assisting students to properly use modern information and smart communication technologies and to make the most of emerging learning technologies. Positive learning attitudes, self-regulation, and self-efficacy in studying all play a role in enhancing school achievement, but they may be especially crucial if online learning continues. In order to provide information and assistance to parents on successful strategies for boosting their children's learning and development, education systems should strive to enhance communication between parents and schools. Teachers, on the other hand, require assistance in incorporating technology appropriately into their teaching techniques and approaches, as well as in helping students overcome some of the challenges that come with this type of learning environment. To guarantee that ICT is effectively utilized, it is critical to support teachers' training in the use of digital resources for pedagogical practice and to develop teaching techniques appropriate to this context. Many tasks are allocated to the students attending online classes, but there is no way of knowing whether the activities assigned by their teachers have been completed. It is difficult for teachers to monitor students during their online exams, daily tests and online classes. During online classes, teachers may teach the topics and provide activities appropriate to the students' grade. Some students may complete them, but some may not. Parents who are working from home may also be busy with their work activities and may not attend to their kids. 
In the existing system, there are no strategies to make students complete these activities within the given time with full attention. Deep learning is an application of artificial intelligence (AI) that provides systems with the ability to automatically learn and improve from experience without being explicitly programmed; it is concerned with the creation of computer programs that can access data and learn for themselves. The system must be trained using machine learning algorithms to assess the degree of attention of the student in each case. We will track the activities of the student and supply results to their parents, helping to improve the degree of attention of every student.</p> 2021-10-07T00:00:00+00:00 Copyright (c) 2021 Abirami A, Vishnupriya M, Rubika A, Rithanya https://spast.org/techrep/article/view/764 Intruder minimization using Zombie Controller agent in Wireless Ad hoc Networks 2021-09-15T15:21:55+00:00 Shamshekhar S Patil shamshekhar.patil@gmail.com Jyoti Neeli jyoti.neeli@gmail.com <p>The field of ad-hoc wireless networks, especially MANETs, has attracted widespread attention in the research community due to its prevalent adoption in everyday and real-life applications. However, MANETs still suffer in routing performance due to decentralized patterns of communication and dynamic route establishment between different types of mobile nodes, where infrastructure support cannot be effectively realized. Despite various research and development in MANET technology, routing strategies still lack efficiency in ensuring optimal energy utilization, robust security, and quality of service (QoS). Therefore, MANET's communication process and its related components may be adversely affected if energy is utilized at a higher rate and the security protocol is not robust. 
Any loophole in the network can lead to vulnerabilities and invite intrusions and attacks that mislead routing operations and compromise nodes into depleting their energy. Therefore, an effective mechanism is required that ensures protected communication and data transmission in MANET. However, ensuring secure routing operation with high energy efficiency remains a major research challenge. In this regard, the proposed research work introduces a novel cost-efficient framework for intrusion detection and prevention. The introduced security system is modelled in such a way that any intrusion event in the network cannot hide from the proposed intrusion identification system; accordingly, a suitable prevention mechanism is applied to protect the MANET.</p> <p>The methodology envisioned in this study aims to formulate security solutions from the viewpoint of making MANET routing more defensive against any form of security vulnerability. For this purpose, the analytical modelling shown in Fig. 1 
is developed, and two different approaches are taken into consideration: i) conceptual modelling of a broker-node agent to explore hop-table updating during control data packet exchange events, and ii) incorporation of a simplified algorithm that takes a Zombie node as a controller module to mitigate any form of possible intrusion in MANET.</p> 2021-09-15T00:00:00+00:00 Copyright (c) 2021 Shamshekhar S Patil, Jyoti Neeli https://spast.org/techrep/article/view/2931 The CRIME DATA ANALYSIS USING MACHINE LEARNING 2021-10-26T13:47:13+00:00 SUNANDA DAS das.sunanda2012@gmail.com Mohammed Yamin A 18btrcr028@jainuniversity.ac.in Kishan S Rao 18btrcr021@jainuniversity.ac.in Harsha Pavan Gopal 18btrcr017@jainuniversity.ac.in Lavan Kumar Reddy 18btrcr015@jainuniversity.ac.in <p>Crime is one of the most dominant and alarming aspects of our society. Every day a huge number of crimes are committed, and these frequent crimes have made the lives of common citizens restless, so preventing crime from occurring is a vital task. In recent times, artificial intelligence has shown its importance in almost every field, and crime prediction is one of them. However, a proper database of crimes that have occurred needs to be maintained, as this information can be used for future reference. The ability to predict crimes before they occur can help law enforcement agencies prevent them, and the capability to predict a crime on the basis of time, location, and other factors can provide useful strategic information to law enforcement. However, predicting crime accurately is a challenging task because crimes are increasing at an alarming rate. Thus, crime prediction and analysis methods are very important for detecting and reducing future crimes. 
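As a hedged illustration of the kind of pipeline such crime-prediction studies use, the sketch below compares four standard scikit-learn classifiers on a synthetic stand-in dataset; the data, feature count, and split are illustrative, not the study's actual crime dataset or code.

```python
# Hedged sketch: compare standard classifiers on a synthetic stand-in
# for a tabular crime dataset; not the study's actual data or code.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic "crime" records: rows are incidents, columns numeric features.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "LR": LogisticRegression(max_iter=1000),
    "RFC": RandomForestClassifier(n_estimators=100, random_state=0),
    "GNB": GaussianNB(),
}
# Fit each model on the training split and score accuracy on the test split.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.2f}")
```

On a real dataset, the same loop would simply be pointed at the loaded feature matrix and target attribute (e.g. crime method).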
In recent times, many researchers have conducted experiments to predict crimes using various machine learning methods and particular inputs. For crime prediction, KNN, decision trees, and other algorithms are used. The main purpose is to highlight the worth and effectiveness of machine learning in predicting violent crimes occurring in a particular region, in such a way that it can be used by police to reduce crime rates in society. In this experiment, we collected crime-scenario data from the UCI machine learning repository website. The dataset, titled 'Communities and Crime', is prepared using real data: socio-economic data from the 1990 US Census, law enforcement data from the 1990 US LEMAS survey, and crime data from the 1995 FBI UCR, with features such as area of crime, type of crime, and number of victims. We then applied machine learning algorithms to the dataset to predict attributes such as criminal age, sex, race, and crime method. We used several algorithms in our research: K-Nearest Neighbors (KNN), Logistic Regression (LR), Random Forest Classifier (RFC), and Gaussian Naïve Bayes (GNB). We concluded the research by comparing and analyzing all the achieved results and visualizing them for easier reference.</p> 2021-10-26T00:00:00+00:00 Copyright (c) 2021 SUNANDA DAS, Mohammed Yamin A, Kishan S Rao, Harsha Pavan Gopal, Lavan Kumar Reddy https://spast.org/techrep/article/view/838 A Clustering and Detection of Liver Disease in Indian Patient Using Machine Learning Algorithms 2021-09-15T19:36:15+00:00 MANOJ KUMAR D P manojkumardp@gmail.com <p>At present, machine learning plays an important role in the field of disease classification and prediction for various organs such as the heart, kidney, liver, and stomach, enabling automated disease detection using various algorithms, i.e., Naïve Bayes, K-means, and Support Vector Machine. 
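A minimal sketch of the combination just named (K-means to cluster the training data, then Naïve Bayes and SVM to predict the test cases) is shown below; it uses synthetic blobs rather than an actual liver-patient dataset, so the numbers are illustrative only.

```python
# Hedged sketch on synthetic data, not the Indian liver-patient dataset.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_blobs(n_samples=300, centers=2, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Step 1: cluster the training set (unsupervised structure discovery).
clusters = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X_tr)

# Step 2: train supervised classifiers and compare accuracy on the test cases.
nb_acc = GaussianNB().fit(X_tr, y_tr).score(X_te, y_te)
svm_acc = SVC().fit(X_tr, y_tr).score(X_te, y_te)
print(f"Naive Bayes: {nb_acc:.2f}, SVM: {svm_acc:.2f}")
```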
The study concentrates on a liver-disease-related health care dataset used for comparative performance measurement of the three techniques mentioned above. K-means is used to perform clustering on the training dataset, and Naïve Bayes and Support Vector Machine are used to predict the test cases using the training dataset. The results report correct classifications, misclassifications, and accuracy metrics to compare prediction accuracy, and show that the SVM classifier provides a better accuracy of 81% than Naïve Bayes.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 MANOJ KUMAR D P https://spast.org/techrep/article/view/1742 KEY DATA EXTRACTION AND EMOTION ANALYSIS OF DIGITAL SHOPPING BASED ON BERT 2021-10-08T15:57:01+00:00 Mayakannan Selvaraju kannanarchieves@gmail.com Sarika Jay kannanarchieves@gmail.com B VA N S S Prabhakar Rao kannanarchieves@gmail.com <p>Purpose<strong>:</strong> The objective of this paper is to extract the key words about product quality and the customer experience with it in a more efficient and accurate way, by pre-training the Bidirectional Encoder Representations from Transformers (BERT) model with quality-domain knowledge and classifying the result with a deep learning technique.</p> <p>Methodology: The dataset consists of Amazon reviews, combining single-product customer reviews with reviews of several products, and is of medium to large size. This dataset is subjected to an initial process of cleaning, data wrangling, and exploratory data analysis, using pre-trained BERT along with a neural network classifier. The BERT classifier is loaded along with its tokenizer in the input modules. The BERT model is then configured and trained for fine-tuning. 
The prediction is done based on the final fine-tuned model.</p> <p>Findings: A BERT model along with a TF-IDF topic-extraction model was implemented to analyse the trend and theme of the outbreak, which eventually helped to analyse public concerns and appropriate health support. Fine-tuning a Chinese BERT model with a softmax neural network layer was used to train the model to classify text into three sentiments, which resulted in 75.65% accuracy. Higher accuracy was expected; improvements in the modelling and more datasets from different parts of the world should lead to much better accuracy with regard to public concerns. A function is generated to output a sample permutation and thus its replicate, which is a single statistic. We consider the hypothesis that the word distributions are identical, setting a minimum probability value of 5.9; when the p-value evaluates to 0.0, the null hypothesis is rejected. The baseline is a TF-IDF model with logistic regression. A prediction function along with a matrix of prediction values is generated, and the model weights and tuning are interpreted with the help of the Eli5 library. The pre-trained model is initialized, and its configuration uses encoding and pooling layers with a dimensionality of 768. With this initialization, logits for the input sequence are generated.</p> <p>Originality/value: The pre-trained BERT model tokenizes the input dataset, which is the Amazon Alexa product-review dataset. While the input is loaded, a pre-cleaning process is performed, such as balancing the negative and positive comments so that predictions can be made more easily. 
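The permutation test used here can be sketched as follows; the scores are synthetic and the function is a generic two-sample permutation test on the difference of means, not the study's exact procedure.

```python
import random

def permutation_p_value(group_a, group_b, n_perm=10000, seed=0):
    """Two-sample permutation test on the difference of means.

    Null hypothesis: both groups come from the same distribution.
    Returns the fraction of label shufflings whose mean difference is
    at least as extreme as the observed one (the p-value)."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a, extreme = len(group_a), 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:n_a], pooled[n_a:]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            extreme += 1
    return extreme / n_perm

# Illustrative scores, e.g. for positive vs negative review groups.
pos = [0.9, 0.8, 0.85, 0.95, 0.7]
neg = [0.2, 0.3, 0.25, 0.1, 0.4]
print(permutation_p_value(pos, neg))  # small p-value: the groups differ
```

A p-value near zero, as in the abstract, indicates the word distributions of the two classes are very unlikely to be identical.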
To identify how large the difference between negative and positive comments is, we implement the permutation test and from it calculate the p-value.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Mayakannan Selvaraju, Sarika Jay, B VA N S S Prabhakar Rao https://spast.org/techrep/article/view/1778 A SURVEY ON PUBLIC/PRIVATE TRANSPORTATION TRACKING AND SEATING INFORMATION SYSTEM 2021-10-08T15:20:29+00:00 Mayakannan Selvaraju kannanarchieves@gmail.com R.M.Bhavadharini rmbhavadharini@gmail.com S Sivakarthikeyan siva6jun@gmail.com S Suriya Narasimman suriyanarasimman@outlook.com MV Thirunavukarasu mvarasu1999@gmail.com <p><strong>Purpose:</strong> The objective of this paper is to examine and compare the various existing methods for tracking transport and displaying location details.</p> <p><strong>Methodology:</strong> Various methods exist for transport tracking and for obtaining the location information of vehicles. Most of the existing systems in the surveyed papers use the Global Positioning System (GPS) to get the coordinates of the buses; some systems use GSM/GPRS and smartphone applications to track the location of the buses.</p> <p><strong>Findings: </strong>The findings indicate that most of the existing vehicle tracking systems use GSM (Global System for Mobile Communication) for tracking the location of the vehicles. Using GPS alone is expensive, lacks a front-end module to access information dynamically, and therefore cannot readily be implemented. Other location tracking systems use RFID and IoT (Internet of Things) devices to track the location, but providing RFID and IoT devices to everyone raises scalability issues. Some of the existing systems use NB-IoT (Narrowband IoT) or LoRa to track the bus's location. It is found that NB-IoT outperforms the other technologies; its 95th-percentile uplink failure probability is less than 4%, and it is best in terms of coverage, but its drawback is a long time on-air. 
Machine learning / deep learning algorithms are used for tracking as well as in bus recommendation systems. In terms of seat availability, there is not much relevant work on implementing seating systems in public transportation. This study explores the various methods available to track bus locations and compares their advantages and disadvantages.</p> <p><strong>Originality/value: </strong>In this study, the findings show the advantages and disadvantages of the various existing methods available for bus location tracking and seat allocation systems. Most of the existing systems address the problem of bus location tracking, and minimal work is available that explores seat availability in public transport. Considering social distancing in accordance with the COVID-19 pandemic, accounting for seat availability when travelling in public transport is also essential.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Mayakannan Selvaraju; R.M.Bhavadharini; S Sivakarthikeyan; S Suriya Narasimman; MV Thirunavukarasu https://spast.org/techrep/article/view/2453 INSTA DEF: An Innovative Tool for Knowledge Repository and Sharing 2021-10-12T15:34:08+00:00 Goutamraj Sahu 19mca003.goutamrajsahu@giet.edu Lipsa Mishra 19mca002.lipsamishra@giet.edu Beauty Kumari 19mca010.beautykumari@giet.edu Brojo Mishra brojomishra@gmail.com <p>Due to the vast use of the Internet, the quantity of available knowledge is huge, so it is getting very complex and hard for a learner to remember each and every topic while learning, and to understand a topic the learner has to go through its basic definition. Over time, the learner has to cover more topics and more definitions, which slowly becomes complex and hard to remember. As a solution to the above problems, we have designed the INSTA DEF web application. INSTA DEF stands for “Instant Definition”. 
It is like a library of definitions. The general purpose of our web application is to make learning faster by giving users instant, basic, and understandable definitions, and users can also share their knowledge. The approach of the web application is to collect valuable information from different users in the form of definitions and serve it to other users; in this way, everyone benefits by sharing and learning.</p> <p>We are working to develop a unique web application focused on quality content so that users are satisfied. We aim to provide only topic-related information, without any complexity, so that learners can understand it properly as well as share their knowledge.</p> 2021-10-13T00:00:00+00:00 Copyright (c) 2021 Goutamraj Sahu, Lipsa Mishra, Beauty Kumari, Brojo Mishra https://spast.org/techrep/article/view/2491 A PERSONALIZED SUPERMARKET PRODUCT RECOMMENDATION SYSTEM USING AUGMENTED REALITY 2021-10-13T17:40:30+00:00 Manasa Deshpande manasadeshpande10@gmail.com Likhita B likhitab1997@gmail.com Nayana Bhat nayanabhat04@gmail.com <p>The online shopping industry has exploded in recent years, bringing many benefits, but people who stick to ordinary shopping have their reasons as well. With the advent of extensive advertising on social media, electronic platforms bombard people daily with things to buy, making suggestions based on users' purchases and market trends. This makes it easier for customers to buy products when shopping online. Online shopping does offer significant advantages, such as the convenience of browsing and ordering from various sites at any given time, but there is also a notable drawback: customers cannot directly see the products while ordering them. 
While supermarkets allow customers to explore various products before buying them, they are large and have an extensive product range, which makes it difficult for customers to find desired items straight away. Moreover, customers might not be able to easily access additional information such as discounts and offers [1]. Product recommendations based on other users' shopping patterns help users buy items easily and decrease the time taken to look for related items; this feature is currently not available in offline stores. Therefore, a Personalized Supermarket Product Recommendation system that uses Augmented Reality is proposed. This solution consists of an interactive application that brings both online and offline features into one place and can guide customers to make an informed decision regarding a product.</p> <p>To provide personalized recommendations to users, the application gives a list of products generated using the Slope-One algorithm, as shown in fig. 1. This list is created based on the buying patterns of the current user and of other users with similar shopping trends. Since the recommender system analyses users' buying patterns, customers who review and rate products can indirectly influence other potential buyers. To make the recommendations engaging, the user can click on any product in the list to view an augmented version [4-5] of the product along with additional offers, discounts, ratings, and reviews, as seen in fig. 2. Similarly, the user can also scan any product to find its augmented rendering and further details [5], as seen in fig. 3. These features make it easy for customers to obtain supplementary information [3] about different commodities in the store in a user-friendly manner; this functionality has not yet been implemented in any current application on the Android platform. Furthermore, a navigation system that provides a route to move between different sections of the store is implemented [2]. 
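The Slope-One scheme mentioned above admits a compact sketch. The ratings and item names below are hypothetical, and this is a generic weighted Slope-One predictor, not necessarily the application's exact variant.

```python
# Hedged sketch of weighted Slope One; toy ratings, hypothetical items.
def slope_one_predict(ratings, user, target):
    """Predict `user`'s rating for `target` item.

    `ratings` maps user -> {item: rating}. For each item the user has
    rated, use the average deviation (target minus item) observed across
    co-rating users, weighted by how many users co-rated the pair."""
    num, den = 0.0, 0
    for item, r in ratings[user].items():
        devs = [u[target] - u[item] for u in ratings.values()
                if target in u and item in u]
        if devs:
            num += (r + sum(devs) / len(devs)) * len(devs)
            den += len(devs)
    return num / den if den else None

ratings = {
    "alice": {"milk": 5, "bread": 3, "eggs": 2},
    "bob":   {"milk": 3, "bread": 4},
    "carol": {"bread": 2, "eggs": 5, "milk": 4},
}
print(slope_one_predict(ratings, "bob", "eggs"))  # → 3.5
```

In the application, items the user has not bought but whose predicted rating is high would form the recommendation list of fig. 1.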
When users enter their preferred destination and the section in which they are currently located, an augmented route is displayed, as seen in fig. 4. Users can follow this path to reach their target section, helping them find products sooner without having to spend much effort looking for the right aisles.</p> <p><img src="https://spast.org/public/site/images/manasa9810/mceclip2.png"></p> <p>The project has a vast scope in the future. An electronic payment feature can be implemented, giving users an option to pay their bills online from the store and reducing the hassle of standing in crowded queues. The application can also be made more useful by augmenting 3D images of the products, allowing users to view a product from different directions and understand its scale.</p> <p>Bringing online shopping aspects to offline retail stores will make shopping more connected, social, engaging, and fun, thereby increasing user satisfaction. This approach reduces the resources spent on delivering small orders to many customers, as customers could be incentivized to buy numerous products at once in a single trip to the supermarket. 
Consequently, this would improve sustainability and reduce the wastage of materials.</p> 2021-10-14T00:00:00+00:00 Copyright (c) 2021 Manasa Deshpande, Likhita B, Nayana Bhat https://spast.org/techrep/article/view/1014 Disease Detection from Audio-Visual Signals: Recent Advancements and Challenges 2021-10-01T16:46:07+00:00 Alwin Joseph alwin.joseph@res.christuniversity.in Chandra J chandra.j@christuniversity.in Bonny Banerjee bbnerjee@memphis.edu Madhavi Rangaswamy madhavi.rangaswamy@christuniversity.in <p><span style="font-weight: 400;">Technology is advancing in the area of disease detection from audio-visual signals. Ambient cameras, which are ubiquitous nowadays, capture audio and video signals from individuals. These signals can be used to detect an individual’s physiological and behavioural abnormalities, which may play a key role in assessing their current mental and physiological condition. The signals can also aid in the diagnosis of many disorders, especially psychiatric and neurological ones. The problem of disease detection from audio-visual signals is relatively new and has the potential to generate exciting scientific contributions; however, a number of challenges lie ahead. The ability to identify an individual’s mental and emotional state from audio-visual footage can be a significant step towards tackling a number of difficult societal problems.</span></p> <p><span style="font-weight: 400;">For decades, researchers have developed artificial intelligence, machine learning, computer vision, and image and signal processing algorithms to learn useful features, extract contextual information, and generate high-level knowledge from audio-visual footage. These algorithms are useful in detecting abnormalities and in classifying diseases and disorders. 
These methods can apply to any individual whose changes in physical appearance and vocal or physical sounds, in response to environmental stimuli, can be captured by cameras that are already available everywhere. Audio-visual footage allows monitoring of individuals in specific and rare situations, providing crucial health information that leads to urgent attention and care, enabling a quick recovery.</span></p> <p><span style="font-weight: 400;">This paper summarizes the algorithmic approaches, including preprocessing techniques, for different kinds of audio-visual signals, and their applications and effectiveness in disease/disorder detection. It highlights and evaluates the widely used state-of-the-art machine learning approaches, and identifies the scope for new methods and algorithms. The paper also summarizes the challenges in implementing such technologies in the real world.</span></p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Alwin Joseph, Chandra J, Bonny Banerjee, Madhavi Rangaswamy https://spast.org/techrep/article/view/1101 Relevance of psychophysiological and emotions features for the analysis of Human behavior-A Survey 2021-09-21T09:33:35+00:00 saba syed sabasyeda76@gmail.com Ajit Danti ajit.danti@christuniversity.in <p>With fresh developments in the areas of artificial intelligence and machine learning, the analysis of human physiological and psychological behavior has attracted greater attention around the world. In this paper, we provide a detailed survey of the approaches used for human behavior detection across different modalities, covering physiological behavior, psychological behavior, and emotion detection with the help of EEG, ECG, GSR, and temperature sensors. Finally, we conclude with the results of this study and present ideas for future research in the area of human behavior understanding. 
A summary and comparison of recent investigations is presented, revealing the currently existing issues, and future work is discussed.</p> 2021-09-21T00:00:00+00:00 Copyright (c) 2021 saba syed, Dr.Ajit Danti https://spast.org/techrep/article/view/2591 Extractive Text Summarization using LSTM based Encoder-Decoder Classification 2021-10-17T13:40:17+00:00 Abhijeet Thakare thakarear@rknec.edu Preeti Voditel voditelps2@rknec.edu <p>Nowadays, text summarization is one of the most important areas of focus. Fast-growing text documents (especially news articles, scientific articles, and blogs) have produced a huge volume of text data on the internet, so there is a need for automatic techniques that generate concise and meaningful information from the original, much larger documents. Text summarization reduces a longer text in such a way that the reduced text covers all the important highlights of the original document. Text summarization is categorized as extractive or abstractive summarization [1]. Extractive summarization extracts important sentences (as shown in Figure 1) from the original documents and then aggregates these sentences to generate the summary. Three steps are generally involved in extractive summarization [2]: representing the original text in an intermediate form, sentence scoring, and selecting high-scoring sentences for the summary. LSTM (Long Short-Term Memory) and Gated Recurrent Unit (GRU) networks [3-4] play a crucial role in extractive text summarization. We have proposed a novel LSTM-based encoder-decoder model which is especially useful for summarizing documents containing long sentences. Other extractive summarization techniques, viz. the Restricted Boltzmann Machine (RBM) [5], the variational autoencoder [6], and the Convolutional Neural Network (CNN) [7], do not capture dependencies in long sentences and suffer from vanishing gradients [8-9]. 
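The three classic steps just listed (intermediate representation, sentence scoring, selection) can be illustrated without any neural model. The toy below uses a simple word-frequency representation, which is not the proposed LSTM approach; the example document is invented.

```python
# Toy extractive summarizer: frequency representation, scoring, selection.
import re
from collections import Counter

def extractive_summary(text, n=2):
    """1) Build a word-frequency table (intermediate representation),
    2) score each sentence by length-normalized frequency sums,
    3) select the top-n sentences, kept in document order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(s):
        toks = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in toks) / (len(toks) or 1)
    ranked = sorted(sentences, key=score, reverse=True)[:n]
    return [s for s in sentences if s in ranked]

doc = ("Text summarization reduces a long document. "
       "Extractive methods select important sentences. "
       "The selected sentences form the summary of the document. "
       "Penguins live in Antarctica.")
summary = extractive_summary(doc, n=2)
print(summary)
```

The off-topic sentence scores lowest because its words are rare in the document; an LSTM-based scorer replaces step 2 with a learned classification, as in the proposed model.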
Our LSTM-based encoder-decoder is trained using i) the noun phrases of the sentences and ii) the relationships between noun phrases. We trained our model on the CNN news article dataset, for which extractive summaries are available. The sentences of every article are then ranked by their ROUGE (Recall-Oriented Understudy for Gisting Evaluation)-1 F1 score, keeping the top N. Our trained model is used to classify whether each document sentence belongs to the summary category or not. Various datasets are available for evaluation purposes, such as DUC 2006, DUC 2007, DUC 2002, SKE, BC3, and the EASC corpus, but we adopted the ROUGE [10] evaluation measure, which is useful for evaluating almost all approaches. We therefore evaluated our model on labeled CNN news articles using three metrics: i) ROUGE-1, ii) ROUGE-2, and iii) sentences matching the gold-standard summary, as shown in Table 1. Our model outperformed almost all approaches in the literature, as shown in Table 2.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Abhijeet Thakare, Preeti Voditel https://spast.org/techrep/article/view/2652 The Design of a MRI-BIC system applied to 3D images for practical applications 2021-10-21T14:29:53+00:00 Dheeraj Kumar Vtu13540@veltech.edu.in Shailendea Kumar Mishra shailendra@veltech.edu.in Gaurav Kumar vtu12929@veltech.edu.in Sujeet Kumar vtu11426@veltech.edu.in <p>A brain tumour is an abnormal growth of brain cells inside the skull, and it reduces the patient's chance of survival if the patient does not get appropriate treatment at an early stage. There are two types of brain tumour: benign and malignant. A benign tumour is noncancerous, whereas a malignant tumour is cancerous. Early detection followed by early removal of a malignant tumour can save a patient's life. Manual classification of brain tumours is less effective, more error-prone, and time-consuming. 
Brain tumour classification using the latest technologies, such as machine learning, deep learning, and image processing, plays an important role in clinical diagnosis and effective treatment. Brain tumour classification is usually done by a radiologist, and how accurately the type of tumour is predicted depends on the radiologist's experience; a wrong prediction can lead to the death of a patient. Improvements in machine learning and deep learning technology can help radiologists classify tumours without brain surgery. This paper proposes an effective method for brain tumour classification using deep learning and machine learning algorithms. The proposed classification model adopts the concept of deep transfer learning and uses a pre-trained EfficientNet to extract features from brain MRI images. EfficientNet uniformly scales all dimensions, such as width, depth, and resolution, using a compound coefficient, and classifies brain tumours into four tumour types with high precision. The transfer-learned network uses softmax as its classifier, but the proposed model is also tested with two additional classifiers, namely SVM and KNN; KNN performs better than softmax and SVM for this model. The developed network is simpler and less time-consuming than existing pre-trained networks. Two different MRI datasets, collected from Kaggle, are used. The first dataset contains 2D MRIs of four tumour classes, namely glioma, meningioma, pituitary, and no tumour. The second, 3D MRI dataset is converted into 2D using a machine learning algorithm and added to the first dataset to achieve better feature extraction and higher accuracy. The proposed model classifies each brain MRI into one of the four classes (glioma, meningioma, pituitary, and no tumour). The training and testing accuracy of the proposed model is 99.52% and 92.17%, respectively. 
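The compound scaling mentioned above can be made concrete with a small numeric sketch. The scaling coefficients are those reported in the EfficientNet paper (alpha = 1.2 for depth, beta = 1.1 for width, gamma = 1.15 for resolution); the base depth/width/resolution values here are hypothetical placeholders.

```python
# EfficientNet-style compound scaling; coefficients from the EfficientNet
# paper, base dimensions hypothetical.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # satisfy ALPHA * BETA**2 * GAMMA**2 ~ 2

def compound_scale(phi, base_depth=18, base_width=32, base_res=224):
    """Scale depth, width, and resolution uniformly by coefficient phi."""
    depth = round(base_depth * ALPHA ** phi)
    width = round(base_width * BETA ** phi)
    res = round(base_res * GAMMA ** phi)
    return depth, width, res

for phi in range(4):
    print(phi, compound_scale(phi))
```

Raising the single coefficient phi grows all three dimensions in a balanced way, which is what lets one backbone family cover several model sizes.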
The proposed convolutional neural network (CNN) model can be used to assist physicians and radiologists in validating the brain tumour type.</p> 2021-10-21T00:00:00+00:00 Copyright (c) 2021 Dheeraj Kumar, Shailendea Kumar Mishra, Gaurav Kumar, Sujeet Kumar https://spast.org/techrep/article/view/2101 TOWARDS APPLICABILITY OF ARTIFICIAL INTELLIGENCE IN HEALTHCARE, BANKING AND EDUCATION SECTOR 2021-09-30T18:33:06+00:00 Harikumar Pallathadka abhishekthommandrumtp@gmail.com Mutkule Prasad Raghunath abhishekthommandrumtp@gmail.com <p>Computational and artificial intelligence work together to create machine learning. As a model, they look to the human mind, from which they hope to create intelligent machines that can deal with real-world issues. The design and deployment of intelligent tools draws on neuro-computing and fuzzy logic as well as genetic computing; it also includes probabilistic reasoning, spanning genetic algorithms, belief networks, and learning theory. Classification of medical and genomic data is a major challenge in biomedical informatics. A patient's disease status can be predicted using small-sample medical data classification, in addition to dimensionality reduction, i.e., using feature extraction or feature selection methods to derive reduced feature sets and then classifying them with effective classifiers. [1] [2]</p> <p>The term "educational data mining" refers to a field of study that uses data mining, machine learning, and statistics to analyze data gathered specifically from the teaching and learning process. It is a promising area for data mining because the data is so readily available. In the teaching-learning process, scientific criteria and objective data are used to monitor and judge the overall quality of teaching and learning. Student performance can be forecast and at-risk students identified using educational data mining. 
Important concerns in the learning patterns of different groups of students can be identified, pass rates can be increased, and subject curriculum renewal can be optimized using educational data mining. In banking, for example, fraud detection, customer satisfaction improvement, and identifying a financial product's target audience are all possible uses of machine learning [3][4][5].</p> <p>This article provides an investigation of various applications of machine learning in the healthcare, education, and banking sectors.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Harikumar Pallathadka, Mutkule Prasad Raghunath https://spast.org/techrep/article/view/2764 Privacy Preserving Semantic interoperability model for healthcare Internet of Things 2021-10-17T18:32:46+00:00 Sony P spsony@gmail.com Sureshkumar N sureshkumar.n@vit.ac.in <p>Privacy is a significant concern when it comes to medical records. Protected Health Information (PHI) in medical documents should be completely protected, according to HIPAA (the Health Insurance Portability and Accountability Act). As a solution, we offer a privacy-preserving semantic interoperability solution for healthcare IoT in this work. 
The existing interoperability works [8-11] in the healthcare area do not target privacy concerns; to the best of our knowledge, this is the first work to address both privacy and interoperability issues. The electronic transmission of medical documents can be done while ensuring the privacy of the sensitive information of both the patient and the doctor.</p> 2021-10-19T00:00:00+00:00 Copyright (c) 2021 Sony P, Sureshkumar N https://spast.org/techrep/article/view/1298 Application of the MDA approach to graph-oriented NoSQL databases, a vertical transformation from PIM to PSM 2021-09-27T18:46:45+00:00 Aziz SRAI aziz.srai.dev@gmail.com <p>Relational databases, closely associated with the SQL language, intrinsically include a certain number of organizational rules - normal forms, for example, to guarantee the robustness of the relational schema - and essential security. These rules are particularly effective for common modes of data management. However, they prove to be a real obstacle to the deployment of the large-scale, redundant databases required by big data storage and analysis. We must therefore adopt another mode of data management to facilitate massive analyses, which is why we have seen the emergence of NoSQL. In this article, we show how to design and apply transformation rules to migrate from a relational SQL database to a big data solution within NoSQL. For this, we use Model Driven Architecture (MDA) and the transformation language MOF 2.0 QVT (Meta-Object Facility 2.0 Query-View-Transformation). 
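As a toy illustration of such a vertical PIM-to-PSM transformation (hypothetical mapping rules expressed in Python, not the article's MOF 2.0 QVT rules), classes from a class diagram can be mapped to node labels and associations to relationship types of a graph-oriented PSM.

```python
# Hedged sketch: map a toy PIM class model to a graph-oriented PSM,
# emitting Cypher-like CREATE statements. Mapping rules are illustrative.
def pim_to_graph_psm(classes, associations):
    """Each class becomes a node label carrying its attributes as
    properties; each association becomes a relationship type."""
    stmts = []
    for name, attrs in classes.items():
        props = ", ".join(f"{a}: ${a}" for a in attrs)
        stmts.append(f"CREATE (:{name} {{{props}}})")
    for src, rel, dst in associations:
        stmts.append(
            f"MATCH (a:{src}), (b:{dst}) CREATE (a)-[:{rel}]->(b)")
    return stmts

classes = {"Author": ["name"], "Article": ["title", "year"]}
associations = [("Author", "WROTE", "Article")]
for s in pim_to_graph_psm(classes, associations):
    print(s)
```

A QVT-based tool would express the same class-to-label and association-to-relationship rules declaratively over the PIM metamodel rather than in imperative code.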
The transformation rules defined in this article are used to generate, from the<br>class diagram, a graph-oriented NoSQL PSM database model.</p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 Aziz SRAI https://spast.org/techrep/article/view/2207 A framework for Sea breeze Front Detection from Coastal Regions of India Using Morphological Snake Algorithm 2021-10-01T13:33:20+00:00 vasavi S vasavi.movva@gmail.com <p>Sea breezes are the most common winds experienced by people living in coastal regions. In general, a sea breeze is a flow of wind that takes place particularly in coastal areas. Sea breezes cause irregular climatic conditions. The salty air carried by a sea breeze has many adverse effects, such as deterioration. The collision of two powerful sea-breeze fronts can cause severe thunderstorms across coastal regions. It is therefore important to know the location of the sea-breeze front to define the regions affected by sea breezes. To detect the sea-breeze front from satellite images, the images must first be segmented. Image segmentation helps in extracting the objects of interest and makes the image more meaningful for further processing. Then, using contour detection, the outline of the sea breeze, which is the sea-breeze front, can easily be extracted. A proper methodology with a user interface is proposed in this paper for detecting the sea-breeze front from satellite images.</p> <p>&nbsp;</p> 2021-10-03T00:00:00+00:00 Copyright (c) 2021 vasavi S https://spast.org/techrep/article/view/2875 Securing Data Packets in MANET using R-AODV and ECC and justifying it using MATLAB 2021-10-19T13:15:19+00:00 Fahmina Taranum ftaranum@mjcollege.ac.in <p><strong><em>Abstract</em></strong><strong>—MANET is a wireless communication network with a set of mobile nodes that are temporarily connected without any infrastructure.
Nodes in MANETs can serve either as a host or as a router and can travel independently in any direction because of the dynamic topology. The complex nature of MANETs makes the network vulnerable, i.e., unstable and accessible to attacks, thereby making it insecure at the node and transmission levels. Thus, seeking a safe and trustworthy end-to-end path in a MANET is a real challenge for secure and successful transmission. The R-AODV routing protocol, along with the Elliptic Curve Cryptography (ECC) algorithm, is implemented to detect and prevent blackhole attacks by securing data packet transmission in MANETs using encryption and decryption techniques. The purpose of the proposal is to increase the security of the data transmitted over the network through this encryption approach. The performance of the proposed system is simulated using the NS-2.35 network simulator. As AODV is the most widely used protocol for routing in ad hoc networks, providing security to this insecure protocol is the objective of this proposal. The simulation results show that the proposed protocol performs well on various metrics. Most recent research focuses on making AODV secure by using cryptographic techniques. The proposal in this paper is to use the reverse-AODV technique for data transmission and ECC for secured data delivery. The generated results are then tested and trained in MATLAB using machine learning classifiers to check the appropriateness of the results.</strong></p> <p>&nbsp;</p> <p><strong>&nbsp;</strong></p> 2021-10-21T00:00:00+00:00 Copyright (c) 2021 Fahmina Taranum https://spast.org/techrep/article/view/2914 Shoulder Surfing Attack Trusted Verification – A Survey 2021-10-22T16:37:37+00:00 K. Valarmathia valarmathi-1970@yahoo.co.in S. Hemalatha pithemalatha@gmail.com P. Perumal perumalp@srec.ac.in G.
Puthilibai puthilibai.che@sairam.edu.in <p>Individual behaviours, such as selecting a weak password or entering a password in an insecure manner, result in the weakest link in authentication. An attacker is able to harm the hardware, software, or information by exploiting this weakest link. As a result, our goal is to give users an elegant method to authenticate to their bank accounts. In computer and IT security, password-based authentication is used. Instead of using an alphanumeric password, the user can choose to use an image as a password. A shoulder-surfing attack can be carried out against the mobile application using a camera. Passwords can be observed by attackers using spyware or shoulder surfing. To solve this problem, the PassMatrix authentication system is proposed, which is based on graphical passwords and can withstand shoulder-surfing attacks. PassMatrix provides no hint or clue to an attacker, even during camera-based attacks, through a one-time valid login indicator and navigation bars that cover the entire range of the pass-image. As a result, the proposed system outperforms the competition in terms of shoulder-surfing resistance.</p> 2021-10-22T00:00:00+00:00 Copyright (c) 2021 K. Valarmathia, S. Hemalatha, P. Perumal, G. Puthilibai https://spast.org/techrep/article/view/1679 IOT Based Shrewd Monitoring Framework for Children and Women Safety 2021-09-30T07:37:04+00:00 REVATHI K P revamrocks@gmail.com Manikandan T manikandan.t@rajalakshmi.edu.in <p><strong>Abstract</strong></p> <p><strong>Purpose:</strong> The objective of this paper is to develop a smart system to ensure children's and women's safety in real time and to assist parents in monitoring their child's condition using an IoT framework and a mobile application.</p> <p><strong>Methodology: </strong>We provide a reliable security system for the safety of children and women.
In case of emergency, the user can invoke help by setting a circle within the app. We have designed a watch that is interconnected with the mobile application. This helps to find out the user's location relative to the circle. Additional features of the watch are a heartbeat sensor and a GPS module that report the present status of the person when they forget to create a circle or stay outside it for a long time. If the user is hurt in any case, it will send alert messages to the pre-selected contacts.</p> <p><strong>Findings: </strong>In this system, we have developed a smart watch that can be used to locate missing or lost children, to track a child's movements outside the home, and to facilitate women's safety. The user can create their own circle in the mobile app with a radius of their choice. When the person moves out of the location, that is, out of the radius, a message is immediately sent to the emergency contacts previously selected by the user in the mobile app. This process can be controlled by the end user. If the user is hurt in any case, it will send alert messages to the pre-selected contacts. GPS (Global Positioning System) is employed to obtain the position of the device in terms of latitude and longitude. Latitude and longitude values are extracted from NMEA sentences. In our system, GPS helps to send the latitude and longitude values to the list of contacts selected by the user once the user is out of the range of the circle. This can be used for children as well; in that case, the complete process is handled by their parents. The app is under parental control, and parents create the radius for their children to know their presence and location.
This device provides a solution for knowing the user's location faster and facilitates taking the necessary action immediately.</p> <p><strong>Originality/value: </strong>In this study, the empirical results show that alert messages and calls are sent to parents/caretakers to protect the victim. The test results proved positive, and hence the application is feasible and test-approved. To promote future research and practical applications, the framework can use the GPRS module inside the watch for Internet access instead of using a hotspot.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 REVATHI K P, Manikandan T https://spast.org/techrep/article/view/853 Multimodal Classification on PET/CT Image Fusion for Lung Cancer 2021-09-15T19:17:10+00:00 Kaushik Pratim Das kaushik.das@res.christuniversity.in Chandra J chandra.j@christuniversity.in <p>Lung cancer incidence and mortality are rapidly increasing worldwide. According to the American Cancer Society, the 5-year survival rates for patients in the metastasis stages are significantly lower [1], implying the need for early detection of lung cancer for effective treatment and improving the quality of life of a lung cancer patient [2]. Lung cancer detection is challenging as symptoms do not appear until the disease is at an advanced stage [3]. Moreover, recent research has highlighted that early detection is rare in almost 75% of all lung cancer cases, whereas only 15% of the 2.2 million cases detected at a late stage annually have a 5-year survival rate [4]. There are several obstacles in current lung cancer diagnosis methods due to internal organ motion and external variations, which cause multiple artifacts in the medical images [5].
Medical image fusion has become essential for accurate diagnosis because differences in medical imaging principles emphasize particular characteristics of a patient's body. In the context of structural information, an imaging modality such as Computed Tomography (CT) is used to find the details of body parts and the tissues surrounding an organ. In contrast, Positron Emission Tomography (PET) images provide functional information related to cell activity in the lung and the radioactive distribution in the organ [6]. Current lung staging procedures are conducted with the help of multimodal image fusion such as PET/CT to find anatomical and functional information about the tumor, along with additional metabolic measurements, to identify the lung cancer stage and the metastatic information of the disease [7].</p> <p>&nbsp;</p> <p>The primary use of multimodality in medical imaging is to gain insights into the prognosis of the disease and to achieve accurate tumor definition, visualization, and localization [8]. In recent years, PET/CT imaging has been increasingly used to diagnose, stage, and restage lung cancer. The success of multimodality imaging is due to combining the advantages of PET and CT imaging while minimizing their weaknesses [9]. However, several PET image artifacts are caused by the long duration of scans, patient-related motion, and attenuation artifacts [10]. Moreover, CT images have shortcomings in terms of patient-based, physics-based, scanner-based, and helical and multisection artifacts, which diminish the image quality and the spatial resolution [11]. Therefore, the mere fusion of medical images without appropriate image processing can hamper the diagnosis, as complementary information can be disrupted by various artifacts or noise.
In addition, medical image fusion involving the registration of two different modalities is time-consuming and technically challenging, which is a cause for concern in a clinical setting with multiple cancer patients [12].</p> <p>Fig. 1 below illustrates the difference between a clinically acceptable PET/CT image fusion and a PET/CT image fusion containing noise and ring artifacts that disrupt the clinical information needed for diagnosis.</p> <p><strong>&nbsp;</strong><img src="https://spast.org/public/site/images/kaushik_20/fusn201.jpg" alt="PET/CT Image Fusion with Noise and Artifact" width="197" height="205"><img src="https://spast.org/public/site/images/kaushik_20/nwfusd.png" alt="Clinically Usable PET/CT Image Fusion " width="197" height="206"></p> <p><strong>&nbsp;</strong><strong>Fig.1.</strong> Image Fusion:&nbsp; A. PET/CT Image Fusion with Noise and Artifact B. Clinically Usable PET/CT Image Fusion</p> <p>The paper's main objective is to provide a comprehensive survey of efficient medical image fusion techniques and recent advances by conducting a detailed literature review. In addition, the study will delve into the impact of deep learning techniques on image fusion and their effectiveness in automating the image fusion procedure with better image quality while preserving essential clinical information. Finally, the study will identify modern methods and algorithms for improved co-registration for a definitive clinical evaluation of the disease and treatment, along with the challenges and limitations associated with image fusion.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Kaushik Pratim Das https://spast.org/techrep/article/view/3032 Data Mining and Machine Learning Techniques for Credit Card Fraud Detection 2021-11-06T11:30:25+00:00 Dr. Satish Kumar Kalhotra 1990uditmamodiya@gmail.com Dr. Shivprasad Vaijnathrao Dongare 1990uditmamodiya@gmail.com A.
Kasthuri 1990uditmamodiya@gmail.com Daljeet Kaur 1990uditmamodiya@gmail.com <p>In the recent era, everybody deals with digital data, and individuals depend heavily on credit cards. Therefore, the demand for online transactions and the usage of e-commerce sites are rising at a rapid rate. Online payments are also a major cause of the rising crime rate. Hence, it is a big challenge for the IT sector to identify and solve such critical problems. This critical issue can be tackled with the help of machine learning. This paper mainly emphasizes various data mining algorithms, such as C4.5, CART, J48, Naïve Bayes, the EM algorithm, the Apriori algorithm, SVM, and so on, and also reports the accuracy and precision of the results. Machine learning finds genuine and non-genuine transactions using learned pattern matching and classification techniques. Machine learning also normalizes the data, identifies anomalies in transactions, and provides appropriate results.</p> 2021-11-06T00:00:00+00:00 Copyright (c) 2021 Dr. Satish Kumar Kalhotra, Dr. Shivprasad Vaijnathrao Dongare, A. Kasthuri, Daljeet Kaur https://spast.org/techrep/article/view/958 Supply Chain Innovation With IOT 2021-09-17T12:43:05+00:00 saru Dhir sarudhir@gmail.com <p>The main objective of this work is to shed light on the role of IoT in supply chain management (SCM) and the way IoT has helped in improving the full SCM process. Radio Frequency Identification (RFID) has been recognized as one of the emerging technologies in SCM. IoT in SCM helps to trace objects and is used to remotely control elements of the transport process. This also reduces wasted time. This paper also sheds light on the role of RFID-IoT in SCM, such as monitoring, identification, logistics tracking and checking product quality. In future, RFID-IoT is anticipated to be combined with other advanced technologies to find solutions to the problems of SCM.
This work specifies the implementation of IoT in various stages of SCM, such as manufacturing tracking, shipping and distribution, retail tracking and inventory tracking. A survey questionnaire was distributed among 50 businessmen to learn their opinions about how they manage their supply chain firms effectively.</p> 2021-09-18T00:00:00+00:00 Copyright (c) 2021 saru Dhir https://spast.org/techrep/article/view/2471 A Study of Digital Forensics with Machine Learning and Security Information and Event Management 2021-10-13T12:37:36+00:00 Manisankar Sannigrahi manisankar.sannigrahi2020@vitstudent.ac.in <p>In the digital world, electronic devices such as computers, smartphones and tablets are increasing on a regular basis. Cybercrimes are also increasing at the same rate as the use of the internet. Digital forensics, a branch of traditional forensic science, has become an important means of dealing with the challenges of the digital world. There are a number of techniques and tools available for digital forensics investigation. This paper discusses how Machine Learning (ML) algorithms and the Security Information and Event Management (SIEM) framework can be used in digital forensics. Machine learning can be used to analyse huge amounts of data and to recognize patterns. SIEM can be used for log collection, correlation of log data, and identifying the techniques and activities used by the criminal. Both ML and SIEM can be used to analyse historic data to predict the behaviour of the criminal.</p> 2021-10-13T00:00:00+00:00 Copyright (c) 2021 Manisankar Sannigrahi https://spast.org/techrep/article/view/2781 THE ADOPTION OF MOBILE BANKING SERVICES IN JORDANIAN BANKS AND FACTORS AFFECTING THE CUSTOMERS 2021-10-17T18:04:27+00:00 Malik Mustafa abhishek14482@gmail.com <p><strong>Abstract</strong></p> <p>This study aimed to identify the main factors that influence customers' adoption of mobile banking (m-banking) services in Jordan.
The study followed a quantitative research method and adopted the Unified Theory of Acceptance and Use of Technology (UTAUT) model. A theoretical model was formulated and tested using the Partial Least-Squares Structural Equation Modeling (PLS-SEM) technique. In addition, correlation and multiple linear regression analyses were performed to evaluate the fit of the outputs of this model with the results obtained from the survey. It was found that an individual's intention to adopt m-banking services in Jordan is significantly influenced by several factors whose effects decline in the order: personal innovativeness, facilitating conditions, social influence, effort expectancy, and performance expectancy. The results of this study are expected to contribute to an in-depth understanding of how demographic and other variables affect the adoption of m-banking services in developing countries and to play a critical part in raising the level of their adoption. [1][2]</p> <p><strong>&nbsp;</strong></p> <p>The rise and expansion of Information and Communication Technology (ICT) has played a crucial part in the development of various sectors by creating opportunities for them to develop technologies, tools, and services, and to provide information. This technology has improved the lives of contemporary communities and made it simpler, quicker and more convenient for people to perform many day-to-day activities. The Information Technology (IT) revolution is regarded as the greatest and fastest trend of the previous century. It remains so in the current century despite the recent economic downturn. However, in general, the penetration of these technologies across different sectors has not occurred at the same level or even speed, owing to geographical, social, and demographic differences between countries.
Thus, the degree of technology penetration in developing countries is not like that in developed countries. Additionally, there are differences in technology penetration even among developing countries in the same region, as in the case of Yemen, for instance, relative to the Arab Gulf States. [3][4]</p> <p>A few past studies have assessed the effects of demographic factors on the adoption and use of technology by employing diverse adoption theories and models. However, most of these studies were performed in developed countries, without evaluating the situation in developing countries, where low levels of education and income prevail and different cultures are found. For example, the social and cultural characteristics of Arab countries differ from those of Western countries. [5][6]</p> <p>&nbsp;</p> 2021-10-19T00:00:00+00:00 Copyright (c) 2021 Malik Mustafa https://spast.org/techrep/article/view/229 CT-Scan Based Identification & Screening Of Contagiously Spreading Disease 2021-09-09T06:55:07+00:00 JASPREET KAUR jaspreet.sliet@gmail.com <p>The global spread of the COVID-19 pandemic threatens health and the economy. The disease affected the world quickly, so early and fast detection attracted medical researchers aiming to avoid further outbreaks. RT-PCR testing is a very time-consuming laboratory technique; therefore, radiological imaging techniques are used as an alternative for fast and accurate diagnosis. With a scarcity of expert medical manpower, screening a huge volume of the population into COVID and normal classes becomes problematic. A novel computer-aided technique based on deep learning is proposed in this paper for high-rate screening without the aid of expert radiologists. The pre-trained CNNs were trained on dataset-1 with CT-image patches of size 16X16 and dataset-2 with patches of size 32X32. VGG-16 and GoogLeNet with Inception V1 were used as pre-trained networks.
Data in each set were equally distributed between COVID and non-COVID classes, and the data were split 75% for training and 25% for testing. All four true and false values of the positive and negative classes obtained in the confusion matrix showed high performance on dataset-2 in terms of accuracy, precision, specificity, sensitivity, F1 score and MCC. The experimental results encourage the use of deep learning in the field of disease diagnosis.</p> 2021-09-09T00:00:00+00:00 Copyright (c) 2021 JASPREET KAUR https://spast.org/techrep/article/view/2413 A systematic review on prognosis of Autism using Machine Learning Techniques 2021-10-26T13:46:51+00:00 meenakshi Malviya minaldk25@gmail.com Dr J Chandra chandra.j@christuniversity.in <p>Quality of life (QoL) and QoL predictors have become crucial in the pandemic, and neurological anomalies are among the greatest threats to QoL. The world is becoming more stressful, and mental health has become a primary goal for a healthy life. Autism is a multisystem disorder that causes behavioural, neurological, cognitive, and physical differences in autistic people. All these levels relate to, influence, and affect each other at distinct ages. Recent studies state that neurological disorders can be a dysfunction of the brain or of the whole nervous system, which may cause other symptoms of Autism. Autism is a heterogeneous neurodevelopmental disorder with diversity in symptoms, risk factors, severity level, and response to treatment [1]. The findings exhibit a significant change in the brain region at the occurrence of Autism [2]. Brain Magnetic Resonance Imaging (MRI) provides detailed knowledge of brain structure that helps in studying the minor to significant changes inside the brain that emerge due to any disorder.
Brain MRIs of autistic subjects exhibit a larger brain volume and increased head circumference; these are as expected at the time of birth, but a significant increase starts at the age of 12 to 18 months [3]. This paper focuses on reviewing various Machine Learning (ML) techniques used for diagnosing Autism at an early age with the help of brain MRI images. Early diagnosis helps autistic subjects lead a healthy life if they receive treatment and training, where required, on time. "Early diagnosis of Autism Spectrum Disorder" is an objective and one of the main goals of health organizations worldwide. This work supports that goal and contributes to the betterment of the quality of life of Autism patients.</p> 2021-10-26T00:00:00+00:00 Copyright (c) 2021 meenakshi Malviya, Dr J Chandra https://spast.org/techrep/article/view/1775 Mammogram Image Segmentation Technique Using A Modified-Gaussian Mixed Model Algorithm 2021-10-08T13:05:43+00:00 Dakshya Prasad Pati dakshya_prasad@yahoo.com <p>Breast cancer diagnosis is one of the important issues in biomedical engineering today. An early diagnosis can help reduce and manage the exponential growth of breast cancer. A highly precise diagnostic system is needed for correct treatment. Cancer statistics show that breast cancer is one of the major causes of women's deaths globally. In underdeveloped and developing nations, there is a need for an adequate, accurate, and affordable facility for early-stage as well as later-stage breast cancer diagnosis. Mammogram images are the most effective tool for detecting the abnormalities responsible for breast cancer. All middle-aged women are advised to have routine check-ups for any abnormalities in the breast, which can become a potential cause of future breast cancer.
To detect these abnormalities, a specially designed imaging technique is used, which acquires images with high resolution and contrast. For a correct diagnosis, it is of the utmost importance to accurately detect the shape and size of the lesion in the abnormal image. This research paper contributes toward the accomplishment of a computer-aided diagnosis system and deals with mammographic images for the segmentation of the affected part. For the segmentation, we have proposed a Modified-Gaussian Mixture Model. The standard Mini-MIAS and MICI-DDSM databases were used for the implementation and testing of the proposed algorithm. The DICE coefficient and the Structural Similarity Measure (SSIM) metric were calculated to validate the quality of segmentation with the help of ground truth images. The proposed method is accurate and robust, as it gives appropriate results for two different data sets.<strong>&nbsp;</strong></p> <p><em>&nbsp;</em></p> <p><em>Keywords</em>: Mammogram images, breast cancer, automatic segmentation, Gaussian</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Dakshya Prasad Pati https://spast.org/techrep/article/view/936 Automated Pneumonia Detection Method Using Hybrid Semantic Segmentation Network 2021-09-16T12:07:54+00:00 M.Uma Maheshwari uma@sethu.ac.in Tamilselvi Rajendran tamilselvi@sethu.ac.in Parisa Beham M parisabeham@sethu.ac.in Amjath hasan M amjathhasan.5@gmail.com <p>Pneumonia&nbsp;is a potentially life-threatening lung infection that causes the air sacs in the lungs to become full of pus or fluid and can sometimes lead to severe illness and even death. Chest X-rays are mainly used for the diagnosis of pneumonia. Early detection of pneumonia in X-ray imaging is a challenge due to the limited color scheme of X-ray images.
Another major drawback in the early diagnosis of pneumonia is human-dependent detection. Thus, it is the need of the hour to diagnose pneumonia at an early stage. Inspired by this issue, in this work a novel hybrid semantic segmentation network is proposed for the early detection and classification of pneumonia. Various performance metrics have been used to analyse the performance of the proposed network. Experimental results prove the efficacy of the hybrid semantic segmentation network compared with other existing approaches in recent works.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 M.Uma Maheshwari, Tamilselvi, Parisa Beham, Amjath https://spast.org/techrep/article/view/975 IoT based Blockchain Secure framework for Various Applications- a Survey 2021-09-17T12:55:16+00:00 Jayant Mehare jayant.mehare@raisoni.net Shraddha Utane shraddha.utane@gmail.com Mahip Bartere mahip.bartere@ghru.edu.in Shankar Amalraj shankar.amalraj@ghru.edu.in <p>Blockchain (BC) has attracted enormous attention because of its immutable nature and the associated security and privacy benefits. BC can potentially overcome the security and privacy challenges of the Internet of Things (IoT). However, BC is computationally expensive, has limited scalability, and incurs significant bandwidth overheads and delays that are not suited to the IoT setting. IoT is currently at an initial stage, but very soon it will influence almost every day-to-day thing we use. The more it is embedded in our way of life, the greater the danger of its being abused. There is a critical need to secure IoT devices from being cracked. In future, IoT will expand the scope for digital attacks on networks by turning things that used to be offline into online systems. Existing security technologies are not adequate to deal with this issue. Blockchain has now emerged as a possible answer for building more secure IoT systems in the future.
In this paper, we present the research trends across various applications in which IoT frameworks depend on blockchain networks, and how they apply blockchain with IoT technology to fulfil their goals.</p> 2021-09-18T00:00:00+00:00 Copyright (c) 2021 Jayant Mehare, Shraddha Utane, Mahip Bartere, Shankar Amalraj https://spast.org/techrep/article/view/384 Prediction Of Overall User Gratification In European Continent Tourism Domain 2021-09-13T20:49:31+00:00 Venkata daya sagar Ketaraju sagar.tadepalli@kluniversity.in <p>The hotel sector is on its knees across the globe; the effect of COVID-19 on tourism could set the global tourism industry back 20 years. Online information is vital in the tourism hotel sector. In this post-COVID world, it is even more essential for hotel management to increase digital interfaces and technology-centric business in a tie-up with e-tourism platforms, which use recommender systems to capture user views. A big hurdle for any hotel management in this peculiar pandemic is how to retain the business market and bounce back by regaining users' faith. User opinions now change quite often, and users seek hotels that offer a comfortable stay and safety measures. This study tries to capture user views of the segments that previously led to high gratification levels, which is valuable for e-tourism travel platforms to recommend hotels along new dimensions.</p> 2021-09-14T00:00:00+00:00 Copyright (c) 2021 Venkata daya sagar Ketaraju https://spast.org/techrep/article/view/1889 Banknote Recognition Mobile Application for the Blind 2021-10-08T14:17:36+00:00 Sohan M C mcsohan.cs18@rvce.edu.in Akanksh A M akanksham.cs18@rvce.edu.in Anala M R analamr@rvce.edu.in Hemavathy R hemavathyr@rvce.edu.in <p><span style="font-weight: 400;">The use of electronic devices to detect denominations is prominent as an effective solution for providing monetary independence to the visually impaired.
Many applications have been designed, but mainly the mobile deployments have been successful. With improvements in AI technology and mobile hardware, mobile applications have become a feasible solution for most problems. None of the existing applications allows detecting multiple notes in a single frame and relaying the total denomination, nor is there a dataset available for the new Indian currency notes annotated for object detection training. Our article incorporates the findings from our previous design thinking work [1] to develop the best available mobile application for banknote denomination detection. The mobile application developed is capable of detecting multiple notes present in the same frame with an extremely low false-positive percentage, provides instant audio feedback, and has an extremely simple user interface.</span></p> <p><span style="font-weight: 400;">YOLO v4 has been used to train an object detection model on an Indian currency note dataset custom-created for the purpose. The model is deployed through an Android application to provide the denomination detection functionality. The model was trained on Google Colab, and the application was developed in Android Studio and tested directly on a physical mobile device. The existing solutions use an image recognition approach to detecting denominations; we describe</span> <span style="font-weight: 400;">an object detection approach to allow multiple notes to be detected in the same frame.</span></p> <p><span style="font-weight: 400;">The developed application produced no false positives during testing and performed accurate, real-time detection of denominations. Each unit was tested separately for expected behavior, and the logical computations were specially tested for all possible corner cases. Upon integration, system tests were performed by installing the debug-mode application on a physical device connected to the Android Studio debugger.
The application ran smoothly and provided accurate audio output as desired. Additionally, behavioral tests were performed to identify the extent to which the presence of false-positive output impacts the user’s behavior. The application provides constant audio feedback about the contents of the camera frame, giving the blind user an indication of how many notes are being detected in the frame and the total denomination. The count allows the blind user to have confidence in the number of notes being detected, and the running total of the denomination saves the user’s time. As it would be used in a real-time use case, the application has to do the processing quickly and efficiently; the model used for inference processes a frame in 600 ms, and the application provides immediate audio feedback, allowing the user to quickly verify multiple times if needed.&nbsp;</span></p> <p><span style="font-weight: 400;">The application developed is by far the best available solution for banknote denomination detection for the blind. The insights from design thinking, combined with state-of-the-art AI object detection methods in a simple application, resulted in the simplest user experience and the best detection results seen to date for Indian currency notes. The creation of larger object detection datasets to enable detection in various orientations is to be undertaken in the future.&nbsp;</span></p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Sohan M C, Akanksh A M, Anala M R, Hemavathy R https://spast.org/techrep/article/view/476 Demystifying the Applications of Artificial Intelligence in Disaster Management: A review 2021-09-15T11:56:53+00:00 Dr.M.A.Jabbar jabbar.meerja@gmail.com Ruqqaiya Begum ruqqaiya1224@gmail.com Koti Tejasvi Kotitejasvi@gmail.com <p>A disaster is an unexpected event that disrupts a society's functioning while also harming the human environment and causing financial and material losses. It can be caused by either natural or human factors.
In today's society, disasters are seen as the product of poor planning, which leads to hazards and vulnerabilities. The term "disaster management" refers to the planning and management of disasters.</p> <p>Artificial intelligence (AI) is the ability of computers to perform tasks that are usually done by humans. Artificial intelligence has been used in many specialized industry applications and is also used in everyday interactions with technology. Artificial intelligence is being used in sustainable development, humanitarian assistance, and disaster risk management.</p> <p>Machine learning and artificial intelligence models can be used at both stages of disaster management, i.e., pre-disaster and post-disaster. Prediction of disasters can be done with the help of IoT devices, i.e., machines connected over a network to collect data without human interference, such as sensors that measure features like temperature, CO level, greenhouse gases, etc. Machine learning algorithms process the data collected by IoT devices and provide accurate predictions.</p> <p>&nbsp;Post-disaster management involves detecting and analyzing changes in order to estimate the loss to the economy and plan specific measures for recovery from disasters and rehabilitation.</p> <p>The goal of this paper is to provide a concise, demystifying review of the applications of artificial intelligence in disaster management. The use of artificial intelligence in disaster management will minimize the loss of human life and rescue operation time by using robotics, drones, sensors, etc.</p> <p>&nbsp;</p> <p>&nbsp;</p> 2021-09-15T00:00:00+00:00 Copyright (c) 2021 Dr.M.A.Jabbar, Ruqqaiya Begum, Koti Tejasvi https://spast.org/techrep/article/view/2060 SVM Classifier: Identify Linear Separability of NAND, NOR Logic Gates 2021-09-30T18:40:43+00:00 Ratna S. Chaudhari Ratna kadamnehaaa@gmail.com Smita J. Ghorpade Smita smita.ghorpade@gmail.com Seema S.
Patil Seema sima.patil1969@gmail.com <p>Artificial Neural Networks (ANNs) play a vital role in resolving several real-life problems. ANNs solve problems that would be difficult or impossible to solve by human effort or statistical principles. They produce better results on large data sets because of their self-learning capabilities. The main characteristics of neural networks are their ability to learn complex nonlinear input-output relationships, their use of sequential training procedures, and their adaptation to the data; they are the most commonly used family of models for pattern classification tasks [21]. Generally, patterns are obtained from the real world and cluster in specific regions based on definite attributes; data is classified by recognizing these patterns according to their specific properties. Many different approaches exist for determining whether a problem is linearly separable or non-linearly separable. Among them, Support Vector Machine (SVM) classification is implemented here to verify linear separability. SVM has emerged as a promising technique for classification and is one of the most widely used and robust classifiers for linear as well as non-linear boundaries. Real-world applications of SVM include speech recognition, traffic analysis and control, stock exchange forecasting, classification of rocks, image processing, etc.; apart from these, there are countless further applications. One advantage of SVM is that it is versatile with new data, which makes it simple to apply in practice where flexibility in training and testing data is required. Because it generates a large margin, huge data sets can be fitted and classified well. Therefore, the support vector machine is one of the most efficient classification algorithms, and it usually gives tremendous performance compared with other machine-learning classifiers.
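As a small illustration of the kind of check this abstract describes (a sketch, not the authors' code), a linear-kernel SVM can be fitted to the four-row truth tables of the NAND and NOR gates; perfect training accuracy confirms a separating hyperplane exists:

```python
from sklearn.svm import SVC

# Truth tables for the two gates: inputs (A, B) -> gate output.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
NAND = [1, 1, 1, 0]
NOR = [1, 0, 0, 0]

for name, y in [("NAND", NAND), ("NOR", NOR)]:
    clf = SVC(kernel="linear", C=1000)  # large C approximates a hard margin
    clf.fit(X, y)
    # A linear hyperplane separates the classes iff training accuracy is 1.0.
    print(name, "linearly separable:", clf.score(X, y) == 1.0)
```

Both gates print `True` (unlike XOR, which no linear boundary can separate), which is what makes NAND and NOR suitable targets for a linear classification task.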
SVM classifies data points based on a hyperplane; it does not merely find a decision boundary, it finds the most optimal decision boundary. Here our challenge is to check whether the NAND and NOR Boolean logic gates are linearly separable or not. To inspect this, we considered the Boolean functions of the NAND and NOR logic gates for a linear classification task using the SVM classifier. The null hypothesis is, “There is no significant difference between the performance of the classifier regarding the NAND and NOR Boolean logic gates using the Support Vector Machine”. The alternate hypothesis is, “There is a significant difference between the performance of the classifier regarding the NAND and NOR Boolean logic gates using the Support Vector Machine”. This research study is based on testing the linear separability of Boolean logic gates using the Zoo data set. The NAND and NOR Boolean logic gate functions will be implemented. The results will be visualized using scatter plots, and accordingly a model will be fitted with SVM to measure the accuracy score, classification report, and confusion matrix. The proposed method reveals higher classification accuracy.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Ratna S. Chaudhari Ratna, Smita J. Ghorpade Smita, Seema S. Patil Seema https://spast.org/techrep/article/view/1257 Forecast and Analysis of Stock Market Volatility using Deep Learning Algorithms 2021-09-27T17:34:56+00:00 Pratham Nayak prathamnayak@outlook.com <p>Stock markets serve as a platform where individuals and institutional investors can come together to buy and sell shares in a public venue. With the advent of digital technology, these markets or exchanges exist as electronic marketplaces. These markets are generally very volatile, making stock market prediction a highly challenging problem.</p> <p>These predictions of stock value offer abundant arbitrage profits, which serve as a huge motivation for extensive research in this area.
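As a toy illustration of the price-prediction research this motivates (a hypothetical sketch on synthetic data, not the paper's model; the lag count and series are assumptions), a simple autoregressive baseline can be fitted to lagged closing prices:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic closing-price series (random walk) as a stand-in for market data.
rng = np.random.default_rng(0)
close = 100 + np.cumsum(rng.normal(0, 1, 300))

# Lagged features: predict today's close from the previous 5 closes.
lags = 5
X = np.column_stack([close[i:len(close) - lags + i] for i in range(lags)])
y = close[lags:]

model = LinearRegression().fit(X[:-50], y[:-50])  # train on the earlier part
r2 = model.score(X[-50:], y[-50:])                # evaluate on the held-out tail
print(f"held-out R^2: {r2:.3f}")
```

Deep models such as the LSTMs discussed below replace this fixed-lag linear map with a learned sequential representation, but the train-on-past / test-on-future split stays the same.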
Identifying and predicting a stock value beforehand by even a fraction of a second can result in very high profits. Similarly, a near-precise prediction can be extremely profitable in the amortized case. The attractiveness of finding a solution has motivated researchers, in both industry and academia, to devise techniques despite the complications due to volatility, seasonality, time dependency, the economy, and other such factors. Lately, AI/ML techniques, like fuzzy logic and Support Vector Machines (SVMs), have been used to arrive at different solutions for this problem.</p> <p>Deep learning has recently received growing interest and attention and has been successfully applied to many fields. In this paper, we explore and develop an ensemble predictive system to forecast market prices using deep learning algorithms. We consider the fractional change in stock value and the intra-day high and low values of the stock to train and employ a neural network, obtaining a trading strategy that leads to relatively superior market returns. The focus here is on the use of regression- and LSTM-based deep learning strategies to predict stock values. The factors considered are open, close, low, high, and volume.</p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 Pratham Nayak https://spast.org/techrep/article/view/2171 A Novel Real Time task scheduling algorithm for Fog Computing Paradigm 2021-10-01T16:42:21+00:00 Sanjib Kumar Nayak Sanjib scansanjib@gmail.com <p>In this era of data-centric computing, devices used for computing, sensing, and communication are powered by the Internet of Things (IoT). Real-time systems collect raw data in large streams through these IoT devices. Real-time environments require intelligent paradigms for processing these raw data and generating valuable and credible information for intelligent decision making, but within a specific deadline.
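Deadline-driven processing of this kind can be sketched as an earliest-deadline-first (EDF) priority queue. The EDF policy and the task names below are illustrative assumptions, not the abstract's proposed algorithm:

```python
import heapq
import itertools

class EDFScheduler:
    """Minimal earliest-deadline-first queue for deadline-sensitive tasks."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal deadlines

    def submit(self, name, deadline):
        # Tasks are ordered by absolute deadline; interactive tasks would
        # typically be submitted with tighter deadlines than batch tasks.
        heapq.heappush(self._heap, (deadline, next(self._counter), name))

    def next_task(self):
        deadline, _, name = heapq.heappop(self._heap)
        return name

sched = EDFScheduler()
sched.submit("batch-report", deadline=50)
sched.submit("interactive-query", deadline=5)
print(sched.next_task())  # the tighter-deadline interactive task runs first
```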
The upcoming decade will witness a rapid surge in the number of internet-powered devices, resulting in a worldwide distributed network of devices over the internet capable of analysing huge volumes of data with improved system performance and reduced network latency. High-speed multimedia content for medical, entertainment, automotive, and education applications requires rapid data processing and information dissemination among nodes spread geo-spatially. To achieve computational speedup, better communication, and enhanced storage capabilities, we have to integrate the concept of edge computing into our geographically distributed networks and devices over the internet. In our proposed approach, we introduce a novel scheduling algorithm for real-time, computationally intensive task processing at edge nodes using fog computing, for two categories of real-time tasks: interactive tasks and batch tasks. Two different types of workloads are analyzed, comparing the existing and the proposed algorithms for scheduling deadline-sensitive tasks in a real-time environment using fog computing.</p> <p><em>&nbsp;</em></p> <p><strong>Index Terms</strong>: Internet of Things, Fog Computing, Intelligent Paradigm, Edge Devices, Real-time tasks</p> <p>&nbsp;</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Sanjib Kumar Nayak Sanjib https://spast.org/techrep/article/view/125 The Suitability of Location for Rehabilitation of Elephants in Chhattisgarh 2021-08-23T11:22:07+00:00 Usha Sharma usha28383@gmail.com <p>The elephant is a highly mobile creature, and identifying the location of an elephant is an important task. This is mostly done by GPS and radio collars. Given the behavior of elephants, their rehabilitation is important, and for rehabilitation the identification of a suitable location is necessary. In our work, we have studied the number of elephants in Chhattisgarh, the causes of human deaths due to elephant behavior, and the reasons behind that behavior.
We have identified the criteria to keep in mind before selecting a location for the rehabilitation of elephants. In the first section of our work, we give an introduction to the forest, the behavior of elephants, and the rescue operations to be performed. In the second section, we discuss datasets related to elephants. The third section covers the requirements for selecting a location, and the final section concludes.</p> <p>&nbsp;</p> 2021-08-23T00:00:00+00:00 Copyright (c) 2021 Usha Sharma https://spast.org/techrep/article/view/172 Examining the role of Enterprise Resource Planning (ERP) in improving business operations in companies 2021-09-02T15:19:26+00:00 Ashish Kumar Pandey ashishkpandey9@gmail.com Raghvendra Kumar Singh raghvendra2309@gmail.com Dr. G. S. Jayesh drjayeshgs@gmail.com Neha Khare khareneha0511@gmail.com SHASHI KANT GUPTA raj2008enator@gmail.com <p>The aim of this research is to identify how an ERP system can effectively improve the business operations of organizations. Besides, this research will also identify issues and challenges regarding the implementation of ERP systems and will provide recommendations (strategies) to deal with those issues. The objectives are as follows: to examine how business operations are improved within an organization or company, to share knowledge about the ERP system and its concept including its uses, to identify the impact of ERP systems on different business operations, to analyze the risks and challenges of implementing an ERP system in business organizations, and to recommend strategies to successfully implement an ERP system in a workplace.</p> 2021-09-02T00:00:00+00:00 Copyright (c) 2021 Ashish Kumar Pandey, Raghvendra Kumar Singh, Dr. G. S.
Jayesh, Neha Khare, SHASHI KANT GUPTA https://spast.org/techrep/article/view/1756 Enhancement of Imbalance Data Classification with Boosting Methods: An Experiment 2021-10-08T13:10:40+00:00 Smita Ghorpade smita.ghorpade@gmail.com Ratna Chaudhari kadamnehaaa@gmail.com Seema Patil sima.patil1969@gmail.com <p>In the data mining and machine learning area, the expansion of ensemble methods has received considerable attention from the scientific community. Scientists have demonstrated the increased efficiency of ensemble classifiers in various real-world problems such as image analysis and classification, deep learning, speech emotion recognition, sentiment analysis, cryptocurrency forecasting, and prediction of gas consumption. Ensemble methods integrate several learning algorithms, giving better predictive performance than any of the base learning algorithms alone. The idea of boosting emanates from the area of machine learning. Classification of imbalanced data sets is a broad research area in which the class distribution is skewed or biased. It is a challenging task for a machine learning algorithm to handle an imbalanced data set that lacks an appropriate distribution of data samples in each class. The class distribution can range from a small bias to extreme imbalance, which gives rise to a minority class and a majority class. The minority class is the class for which very few data samples are predicted by the model; the majority class is the class for which a large number of data samples are predicted by the model. Standard machine learning algorithms gravitate towards the majority-class data samples, which results in poor predictive accuracy on the minority class. Several approaches have been introduced to strengthen learning algorithms towards the minority-class samples.</p> <p>Among these, the ensemble method is one of the most well-known approaches.
An ensemble method combines a collection of classifiers to improve classification performance. There are five popular advanced ensemble techniques: boosting, bagging, blending, voting, and stacking. In ensemble learning, boosting is one of the most promising techniques: many weak classifiers are aggregated to construct a strong classifier. The beauty of boosting is its sequential learning nature, which aims to minimize the errors of the previously modelled classifier. Popular boosting algorithms include AdaBoostM1, LogitBoost, Gentle AdaBoost, GradientBoost, XGBoost, LightGBM, CatBoost, SMOTEBoost, RUSBoost, MEBoost, AdaCost, AdaC1, AdaC2, and AdaC3 [15]. Boosting has made prominent progress in classification tasks.</p> <p>&nbsp;In this study, the problem domain is first analysed for imbalanced data set classification. The problem is then formulated by framing null and alternative hypotheses. The null hypothesis is stated as “There is no significant difference between a single classifier and a classifier with the ensemble techniques AdaBoostM1 and Bagging”. The alternative hypothesis is stated as “The ensemble techniques AdaBoostM1 and Bagging work better than a single classifier”. To test the hypothesis, we carried out an experiment. We chose three imbalanced data sets, named Thyroid, Glass, and Ecoli3. Our main objective is to check the accuracy score of the ensemble methods with the mentioned classifiers. Initially, we applied four classifiers: Naïve Bayes, Multi-layer Perceptron, Locally Weighted Learning, and REPTree, on these three data sets. The accuracy score of each classifier was measured. Then we applied four boosting algorithms along with these classifiers and observed the results.
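A small illustration of this kind of comparison (a sketch on synthetic data, not the authors' experiment, which used the Thyroid, Glass, and Ecoli3 data sets and a different toolchain) pits a single weak learner against its AdaBoost ensemble on an imbalanced set:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic imbalanced data set (~90% majority / 10% minority) as a stand-in.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Single weak learner (a decision stump) vs. a boosted ensemble of stumps
# (AdaBoost's default base estimator is a depth-1 decision tree).
stump = DecisionTreeClassifier(max_depth=1).fit(X_tr, y_tr)
boosted = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("stump accuracy:  ", stump.score(X_te, y_te))
print("boosted accuracy:", boosted.score(X_te, y_te))
```

Note that raw accuracy flatters the majority class on imbalanced data, which is why the study's fuller evaluation (classification report, statistical tests) matters.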
To examine the performance of the boosting algorithms, a comprehensive statistical test suite is used, which reports the evaluation metrics.</p> 2021-10-08T00:00:00+00:00 Copyright (c) 2021 Smita Ghorpade, Ratna Chaudhari, Seema Patil https://spast.org/techrep/article/view/2583 Face Mask Detection with Automated Door Entry Control using Convolutional Neural Network 2021-10-15T02:36:01+00:00 Mercy Rajaselvi Beaulah V mercyrajaselvi.v@eec.srmrmp.edu.in Prathima S, prathi.pri@gmail.com Savitha A.K savithabanu55@gmail.com Shalini D dshalini.3012@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <h1>Abstract:</h1> <p>Purpose: The objectives of this paper are to detect the presence of face masks for safety purposes, due to the outbreak of COVID-19, using a Convolutional Neural Network, and to automate door entry control based on the presence of a face mask on the face.</p> <h1>&nbsp;Methodology:</h1> <p>The proposed system uses computer vision and a deep learning algorithm to detect whether a person in the video stream is wearing a mask and to automate the door entry control system. In the functional architecture depicted below in Figure 1, the system contains three phases: training the face mask detector, applying the face mask detector, and automating the door entry control mechanism. Initially, the face mask dataset is passed as input and the CNN model is trained on this data set with the FaceNet deep neural network model using Keras/TensorFlow. The image features extracted from the region of interest of the live video stream are given to the trained face mask classifier to detect the presence of the face mask.
Finally, the output of the face mask classifier is used to control the door entry system.</p> <h1>Findings</h1> <p>The CNN model and the FaceNet model are trained for 20 epochs on two classes of images, namely the dataset with face masks and the dataset without face masks. A solenoid door lock is used to demonstrate the implementation of door entry control. Power is applied to the door lock system when a face mask is present; the DC current creates a magnetic field that drags the slug inside and keeps the door unlocked. The power is turned off when there is no face mask; the slug moves outside and locks the door. The system is tested with five distinct scenarios: a person wearing a dark-colored mask, a light-colored mask, a surgical mask, holding hands over the face instead of a mask, and a person without a mask. In all cases, the system responds with good accuracy. Figure 4 shows the accuracy of the trained CNN model when compared with several existing classifier models.</p> <p>&nbsp;</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Mercy Rajaselvi Beaulah V, Prathima S,, Savitha A.K, Shalini D, Mayakannan Selvaraju https://spast.org/techrep/article/view/2630 A STUDY ON THE PERCEPTION OF EMPLOYERS TOWARDS JOB-READINESS OF GRADUATES AT THE ENTRY-LEVEL 2021-10-17T11:02:24+00:00 Sagayaraj K.L kasisagay@gmail.com Nisha Ashokan kannanarchieves@gmail.com N.F. James Bernard kannanarchieves@gmail.com Mayakannan Selvaraju kannanarchieves@gmail.com <p>Purpose: The demands and expectations of employers may differ from one organization to another.
This study focuses on employers’ expectations and on developing sustainable human capital for the job industry.</p> <p>Methodology: The researcher implemented a descriptive design for the research study.&nbsp;</p> <p>The researcher designed an online tool with the help of structured and unstructured questions.&nbsp; The primary data was collected online from fifty industrial employers in and around the Chennai region of Tamil Nadu.&nbsp; The sample elements were selected using an accidental sampling technique under the non-probability sampling method.</p> <p>Findings: This article explored the various nuances and employability skills that employers look for when screening graduates.&nbsp; Selection for the desired job market favors graduates who are well equipped with all the required skills. This research study identified the various job skills expected by employers from graduates at the entry-level.</p> <p>Originality/Value: The study highlights the expectations of employers and the job readiness of graduates at the entry-level seeking to enter the job market.</p> 2021-10-17T00:00:00+00:00 Copyright (c) 2021 Sagayaraj K.L, Nisha Ashokan, N.F. James Bernard, Mayakannan Selvaraju https://spast.org/techrep/article/view/1341 A Comprehensive Review of Blockchain-Enabled Security services for Finance and Banking Analysis and Techniques 2021-09-28T11:09:17+00:00 A Shailaja anandshailu.shailu@gmail.com R Thandeeswaran rthandeeswaran@vit.ac.in <p style="margin: 0cm; text-align: justify;">The consequences of blockchain technology for information security in the public sector are examined in this research paper. It covers a review of current blockchain technology uses and innovation, as well as an outline of e-government advances.
In addition, it considers how the Confidentiality, Integrity, and Availability (CIA) triad, including the regulatory implications of secrecy technology, influences contemporary discussions on security and governance. This article aims to inform stakeholders about the application of blockchain technology in the banking and financial industries. In addition, the benefits, prospects, costs, dangers, and concerns related to blockchain technology in banking and financial services are discussed, and the paper examines how blockchain technology has the potential to transform commercial finance. Historically, commercial finance and the way merchants do business were governed by a centralized mode of operation. Performance has suffered due to this reliance on central authorities, as have flexibility, transparency, and awareness of potentially damaging changes. Blockchain has sparked tremendous attention as a distributed ledger technology (DLT) because it can disrupt traditional financial processes such as letter of credit (L/C) payments. Blockchains, as new technologies, can study and assess data by effectively integrating financial resources. In response to consumer demands, new formats or service models are designed to update the financial system and improve three-layer financial and service activities (data, regulations, and application). Improved customer loan terms, reorganization of the financial credit system, and greater cross-border payment effectiveness may aid the financial industry in adopting blockchain technology; various case studies and research aid in comprehending this paradigm shift. The findings may serve as a reminder of the potential future applications of blockchain finance, as well as a useful example for academics acquiring additional financial expertise.
Commercial challenges such as customs clearance, logistics applications, and insurance should be investigated further in the future in order to establish a trustless setting and enable trade automation.</p> 2021-09-30T00:00:00+00:00 Copyright (c) 2021 A Shailaja, R Thandeeswaran https://spast.org/techrep/article/view/529 Design of Efficient Algorithms for Secure Communication Contextual to Internet of Things 2021-09-15T19:55:24+00:00 Jyoti Neeli jyoti.neeli@gmail.com <p>The Internet of Things (IoT) has been a well-known, trending topic in recent years. Many researchers around the world are working hard to address security-related issues in IoT. However, due to the heterogeneous nature and scale of the nodes and diverse devices in the IoT ecosystem, addressing security issues is a major challenge. The Internet of Things is a fusion of many technologies, each with its own traditional security vulnerabilities that need to be addressed in an IoT environment.&nbsp; The proposed study has reviewed the existing literature in the context of IoT security and explored security vulnerabilities in existing techniques. The critical findings showed that ensuring the topmost level of resistance to a variety of threats and potential security attacks in IoT is still an open and unresolved issue, and the underlying reason behind this is the computational complexity associated with designing security mechanisms.</p> <p>The paper proposes a lightweight and responsive encryption technique that requires minimal resource consumption from sensor nodes.
A significant contribution is the introduction of a novel key bootstrapping mechanism, which has a unique secret key generation capability that can maintain forward and backward secrecy simultaneously.</p> 2021-09-16T00:00:00+00:00 Copyright (c) 2021 Jyoti Neeli https://spast.org/techrep/article/view/566 Various Soft Computing Based Techniques for Developing Intrusion Detection Management System 2021-09-16T11:19:54+00:00 Guna Sekhar Sajja abhishek14482@gmail.com Harikumar Pallathadka abhishek14482@gmail.com Dr. Mohd Naved abhishek14482@gmail.com