International Science Index

International Journal of Computer and Information Engineering

Importance of Ethics in Cloud Security
This paper examines the importance of ethics in cloud computing. In modern society, cloud computing offers individuals and businesses virtually unlimited space for storing and processing data. Much of the data stored in the cloud by users such as banks, doctors, architects, engineers, lawyers, consulting firms, and financial institutions requires a high level of confidentiality and safeguarding. Cloud computing offers centralized storage and processing of data, which has contributed immensely to the growth of businesses and improved sharing of information over the internet. However, the accessibility and management of data and servers by a third party raise concerns about the privacy of clients' information and possible manipulation of the data by third parties. This paper suggests approaches that various stakeholders should take to address the ethical issues involved in cloud-computing services. Ethical education and training are key for all stakeholders involved in handling data and information stored or processed in the cloud.
A Deterministic Approach for Solving the Hull and White Interest Rate Model with Jump Process
This work considers the resolution of the Hull and White interest rate model with a jump process. A deterministic process is adopted to model the random behavior of interest rate variation as deterministic perturbations depending on time t. The Brownian motion and jump uncertainties are represented by a piecewise constant function w(t) and a point function θ(t), respectively. We show that the interest rate function and the yield function of the Hull and White model with a jump process can be obtained by solving a nonlinear semi-infinite programming problem. A relaxed cutting plane algorithm is then proposed for solving the resulting optimization problem. The method is calibrated on 3-month U.S. Treasury securities data and used to analyze several effects on interest rate prices, including interest rate variability and the negative correlation between stock returns and interest rates. The numerical results illustrate that our approach generates yield functions with minimal fitting errors and small oscillation.
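For context, the stochastic form of the Hull and White short-rate model with a jump term (standard notation, not the paper's exact deterministic formulation) is:

```latex
dr(t) = \left[\phi(t) - a\,r(t)\right]dt + \sigma\,dW(t) + dJ(t)
```

where φ(t) is the drift fitted to the initial term structure, a is the mean-reversion speed, σ the volatility, W(t) a Brownian motion, and J(t) a jump process. The deterministic approach described above replaces the randomness of dW(t) by the piecewise constant function w(t) and the jump uncertainty by the point function θ(t).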
A Multi-Dimensional Neural Network Using the Fisher Transform to Predict the Price Evolution for Algorithmic Trading in Financial Markets
Trading the financial markets is a widespread activity today. A large number of investors, companies, and public or private funds buy and sell every day in order to make a profit. Since the advent of electronic trading, algorithmic trading has become the prevalent method for making trade decisions. Orders are sent almost instantly by computers using mathematical models. This paper presents a price prediction methodology based on a multi-dimensional neural network. Using the Fisher transform, the neural network is trained in a low-latency, auto-adaptive process in order to predict the price evolution for the next period of time. The model is designed especially for algorithmic trading and uses real-time price series. It was found that the characteristics of the Fisher function applied at the node level can generate reliable trading signals using the neural network methodology. Real-time tests showed that this method can be applied in any timeframe to trade the financial markets. The paper also includes the steps to implement the presented methodology in an automated trading system. Real trading results are displayed and analyzed in order to qualify the model. In conclusion, the compared results reveal that the neural network methodology applied together with the Fisher transform at the node level can generate good price predictions and build reliable trading signals for algorithmic trading.
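As an illustration of the Fisher transform stage described above, the sketch below maps a price series into a Gaussian-like indicator. The rolling min-max window and the 0.33 smoothing factor are illustrative assumptions, not parameters given in the abstract:

```python
import numpy as np

def fisher_transform(prices, period=10):
    """Normalize prices into (-1, 1) over a rolling window, then apply
    the Fisher transform F(x) = 0.5 * ln((1 + x) / (1 - x))."""
    prices = np.asarray(prices, dtype=float)
    out = np.zeros_like(prices)
    x = 0.0
    for i in range(period, len(prices)):
        window = prices[i - period:i + 1]
        lo, hi = window.min(), window.max()
        raw = 0.0 if hi == lo else 2.0 * (prices[i] - lo) / (hi - lo) - 1.0
        # smooth, then clamp to keep the log argument finite
        x = 0.33 * raw + 0.67 * x
        x = min(max(x, -0.999), 0.999)
        out[i] = 0.5 * np.log((1.0 + x) / (1.0 - x))
    return out
```

Values far from zero mark statistically extreme prices, which is why the transform is useful as a node-level input for generating trading signals.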
Restoration of Digital Design Using Row and Column Major Parsing Technique from the Old/Used Jacquard Punched Cards
The optimized, digitalized restoration of information from old and used manual jacquard punched cards in the textile industry is referred to as a Jacquard Punch Card (JPC) reader. In this paper, we present a novel design and development of a photo-electronics-based system for reading old and used punched cards and storing their binary information in order to transform them into an effective image file format. In our textile industry, the jacquard punched cards have hole diameters of 3 mm and 5 mm with a 5.5 mm pitch. Before the adoption of computing systems in the textile industry, these punched cards were prepared manually without a digital design source, yet they hold rich woven designs. The idea is to retrieve the binary information from the jacquard punched cards and store it in a digital (non-graphics) format before processing. After processing, the digital (non-graphics) format is converted into an effective image file format through either a row-major or a column-major parsing technique. To accomplish these activities, an embedded-system-based device and integrated software were developed. As part of the test and trial activity, the device was tested and installed for industrial service at the Weavers Service Centre, Kanchipuram, Tamil Nadu, India.
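To illustrate the two parsing orders, a minimal sketch (hypothetical helper names; the real system reads hole states from photo-electronic sensors) can stream the binary hole matrix row-major or column-major and pack it into a pixel grid:

```python
def parse_card(holes, order="row"):
    """Emit the binary hole stream in row-major or column-major order.
    holes: 2-D list where 1 = punched hole, 0 = blank."""
    rows, cols = len(holes), len(holes[0])
    if order == "row":
        return [holes[r][c] for r in range(rows) for c in range(cols)]
    return [holes[r][c] for c in range(cols) for r in range(rows)]

def to_image(bits, rows, cols):
    """Pack a row-major bit stream into a 2-D pixel grid (255 = hole)."""
    return [[255 * bits[r * cols + c] for c in range(cols)]
            for r in range(rows)]
```

Either traversal reproduces the same design; the choice only changes the order in which the stored bits are written and later reassembled into the image file.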
Automated Trading Algorithms: Design Principles and Trading Results for German Financial Market
In the first decades of the 21st century, in the electronic trading environment, algorithmic trading became the main tool for making profits through speculation and investment in financial markets. A significant number of traders and private or institutional investors participate in the financial markets every day using automated algorithms. Autonomous trading software is today a major part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents the main design principles used to build and integrate automated trading algorithms into trading systems. Aspects regarding risk and capital management, decisions about market liquidity, entry and exit trading signals, and the methodology to integrate all of these into centralized software are presented. It was found that a set of general principles must be met in order to automate a trading algorithm. The paper reveals these rules together with samples of mathematical models that can be used to build a reliable automated trading system. To qualify and compare the efficiency of automated trading algorithms, a mathematical formula was devised to measure the quality level of a trade. The quality level of the obtained trades can also be used to compare, categorize, and optimize trading strategies, to improve efficiency, and to reduce capital exposure. Trading results obtained for the main German financial market index are presented in order to analyze and compare different automated trading algorithms. In conclusion, it was found that several trading algorithms can be optimized and integrated together into an automated trading system. Some major principles must be met in order to obtain a reliable system with positive profit expectancy and low capital exposure.
A method to measure the quality level of an automated trading algorithm is revealed. Trading results are compared in order to qualify the presented models and to compare them with any other system using automated trading algorithms.
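The abstract does not reproduce the quality formula itself; purely as an illustration of a risk- and time-adjusted score of the same kind, a hypothetical stand-in could be:

```python
def trade_quality(profit, max_drawdown, duration_bars):
    """Hypothetical quality score for a closed trade: reward per unit of
    risk and time. This is an illustrative stand-in, not the paper's
    actual formula, which is not given in the abstract."""
    if max_drawdown <= 0 or duration_bars <= 0:
        return 0.0
    return profit / (max_drawdown * duration_bars)
```

Any monotone score of this shape can rank trades, and averaging it over a strategy's trade history gives a single number for comparing and optimizing strategies.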
A Prediction Model Using the Price Cyclicality Bands Optimized for Algorithmic Trading in Financial Market
After the widespread adoption of electronic trading, automated trading systems have become a significant part of the business intelligence system of any modern financial investment company. An important share of trades is made completely automatically today by computers using mathematical algorithms. Trading decisions are taken almost instantly by logical models, and orders are sent by low-latency automatic systems. This paper presents a real-time price prediction methodology designed especially for algorithmic trading. Based on the price cyclicality function, the revealed methodology generates price cyclicality bands to predict the optimal levels for entries and exits. In order to automate the trading decisions, the cyclicality bands generate automated trading signals. We have found that the model can be used with good results to predict changes in market behavior. Using these predictions, the model can automatically adapt the trading signals in real time to maximize the trading results. The paper reveals the methodology to optimize and implement this model in automated trading systems. Tests prove that this methodology can be applied with good efficiency in any timeframe. Real trading results are also displayed and analyzed in order to qualify the methodology and to compare it with other models. In conclusion, it was found that the price prediction model using the price cyclicality bands is a reliable trading methodology for algorithmic trading in financial markets.
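The exact price cyclicality function is not reproduced in the abstract; as a hedged sketch of the band-construction idea only, a rolling mean with ±k standard deviation envelopes can stand in for it:

```python
import numpy as np

def cyclicality_bands(prices, period=20, k=2.0):
    """Illustrative band construction: a smoothed mid line with upper and
    lower envelopes. A rolling mean +/- k standard deviations stands in
    for the paper's price cyclicality function, which is not given here."""
    prices = np.asarray(prices, dtype=float)
    mid = np.convolve(prices, np.ones(period) / period, mode="valid")
    std = np.array([prices[i:i + period].std() for i in range(len(mid))])
    return mid - k * std, mid, mid + k * std
```

Entries near the lower band and exits near the upper band is the usual way such envelopes are turned into automated signals.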
The Application of Neural Network in the Reworking of Accu-Check to Wrist Bands to Monitor Blood Glucose in the Human Body
High blood sugar, whose effects may end in diabetes mellitus, is becoming a rampant disorder in our community. In recent times it has become a death trap and silent killer due to poor awareness among the public. The situation calls for urgency, hence the need to design a device that serves as a monitoring tool, like a wrist watch, to alert people living with high blood glucose to danger ahead of time, and to introduce a mechanism for checks and balances. The computational ability of a neural network is established in this research using a neural architecture with an 8-15-9 configuration: eight neurons at the input stage, including a bias, 15 neurons in the hidden layer at the processing stage, and nine outputs for the symptom cases. The inputs are formed using the exclusive OR (XOR), with an XOR output expected as the threshold value for diabetic symptom cases. The neural algorithm is coded in Java and run for 1000 epochs to bring the errors to the barest minimum. The internal circuitry of the device comprises compatible hardware matching the nature of each input neuron. Light-emitting diodes (LEDs) in red, green, and yellow are used as the network outputs to show pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concludes that a neural network is a more efficient Accu-Check design tool for proper monitoring of high glucose levels than conventional blood-test methods.
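A minimal sketch of one forward pass through the 8-15-9 architecture described above (random illustrative weights and sigmoid activations are assumptions; the actual network is trained in Java over 1000 epochs):

```python
import numpy as np

rng = np.random.default_rng(0)

# 8 inputs (including a bias), 15 hidden neurons, 9 symptom-class outputs
W1 = rng.standard_normal((15, 8))
W2 = rng.standard_normal((9, 15))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """One forward pass; x is an 8-element binary input pattern."""
    h = sigmoid(W1 @ x)       # hidden layer activations
    return sigmoid(W2 @ h)    # nine output activations in (0, 1)

y = forward(np.array([1, 0, 1, 0, 1, 0, 1, 1], dtype=float))
```

In the device, thresholding each of the nine outputs decides which LED pattern (red, yellow, or green) is lit.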
Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce the corresponding alphanumerics. One of the problems with the current technology is that images are scarce, with little variation in the gestures presented to the recognition program, and are often skewed toward single skin tones and hand sizes that make a percentage of the population's fingerspelling harder to detect. In addition, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that resolves this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is regarded as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, where q encodes the alphanumeric and the language it comes from.
These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
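A hedged sketch of the corrector stage described above. Two simplifications are assumed here: the mean-eigenvalue variant of the Kaiser rule, and collapsing the pairwise-correlated clustering to a single cluster bounded by one hyperplane:

```python
import numpy as np

def build_corrector(S, Y):
    """Centre the measurements, keep eigen-directions passing the Kaiser
    rule, whiten, and bound the incorrect-prediction set Y with a single
    separating hyperplane (the clustering step is omitted for brevity)."""
    mu = S.mean(axis=0)
    C = np.cov((S - mu).T)
    vals, vecs = np.linalg.eigh(C)
    keep = vals > vals.mean()                  # Kaiser rule (mean variant)
    P = vecs[:, keep] / np.sqrt(vals[keep])    # project and whiten
    w = ((Y - mu) @ P).mean(axis=0)
    w /= np.linalg.norm(w)                     # hyperplane normal
    theta = ((Y - mu) @ P @ w).min()           # threshold bounding Y
    return lambda X: ((X - mu) @ P) @ w >= theta  # True -> flag as error
```

Any new measurement landing on the error side of the hyperplane is reported, which is what makes the recognition process self-correcting.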
When There Is Too Much of a Good Thing: A Data-Driven Approach for Large-Scale Literature Review
The volume of available literature on any given scholarly topic is daunting. It can be impossible to manually read and meaningfully synthesize information when search results uncover tens of thousands of possibly relevant articles. Recent advances in citation network analysis and text mining techniques provide new opportunities for constructing robust summaries of large bodies of literature via a purely data-driven approach. Here, we propose a novel combination of two established techniques, citation network analysis followed by latent semantic analysis, to allow data-driven summaries of the literature. Citation network analysis extracts clusters formed by groups of publications connected by mutual citation, weighting by date to account for the greater likelihood of older publications being cited more. The resultant clusters can be taken as indicative of theoretical or conceptual groupings in the literature, typically reflecting different historical approaches (e.g., biological vs. sociological mindsets). Text mining of the titles, abstracts, or article contents of these publications using word frequency and nearest-neighbor techniques can be used to generate simple keyword summaries that encapsulate the content of these clusters. This talk walks through two examples of how this approach can be used to summarize large bodies of literature (11,000+ scholarly works). We begin with a Web of Science Core Collection database search for word stems of the key terms of interest (e.g., supervis* for 'supervisor,' 'supervision,' 'supervised'). Full records of titles, years of publication, and citations of each article (including secondary articles, i.e., those citing and cited by documents matching the search terms) are imported into CitNetExplorer for cluster analysis. The plain text of the titles, abstracts, or full texts is imported into R (a language for statistical programming).
A corpus for each cluster is created from the imported text by removing numbers, non-ASCII characters, and stop words ('and', 'or', etc.) and reducing words to their stems. Word frequency matrices are used to identify the most frequent words unique to each cluster, which can be taken as a summary of its unique conceptual focus. Further, latent semantic analysis can then be used to identify the ways in which the key term is conceptually embedded within each cluster. Briefly, an n-dimensional latent semantic space is constructed via singular value decomposition of the corpus. The nearest neighbors (largest cosine within this semantic space) of the key term can be taken as the conceptual context of the topic within each cluster. We show that the combination of citation network analysis and latent semantic analysis is a generalizable, systematic, and informative approach to summarizing large bodies of scholarly literature, which can be achieved using a pipeline of open-source and freely available software.
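A minimal sketch of the LSA step, with simplifying assumptions: raw term counts instead of a cleaned, stemmed corpus, and toy whitespace tokenization:

```python
import numpy as np

def lsa_neighbors(docs, key_term, k=2, dims=2):
    """Term-document count matrix -> truncated SVD -> cosine-nearest
    terms to a key term inside the latent semantic space."""
    vocab = sorted({w for d in docs for w in d.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.split():
            A[idx[w], j] += 1
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    T = U[:, :dims] * s[:dims]          # term vectors in latent space
    q = T[idx[key_term]]
    cos = (T @ q) / (np.linalg.norm(T, axis=1) * np.linalg.norm(q) + 1e-12)
    order = np.argsort(-cos)
    return [vocab[i] for i in order if vocab[i] != key_term][:k]
```

In the actual pipeline the same computation is performed in R on the stemmed cluster corpus, but the mechanics (SVD, then cosine nearest neighbors) are identical.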
The Many Faces of Inspiration: A Study on Socio-Cultural Influences in Design
The creative journey in design often starts with a spark of inspiration, the source of which can be any of myriad stimuli: nature, poetry, personal experiences, or even fleeting thoughts and images. While inspiration is indeed an important source of creative exploration, its interpretation may often be influenced by demographic and psychographic variables of the creator, with age, gender, lifecycle stage, personal experiences, and individual personality traits being some of these factors. Common sources of inspiration can thus be interpreted differently, translating into different elements of design and varied principles in their execution. Do such variables in the creator influence the nature of the creative output? If so, what are the visible matrices in the output that can be differentiated? An observational study with two groups of design students, studying at the same design institute under the guidance of the same design mentor, was conducted to map this influence. The groups were unaware of each other but worked with a common source of inspiration provided by the instructor. To maintain congruence, both groups were given lyrical compositions from well-known ballads and poetry as the source of their inspiration. The outputs were abstract renditions using lines, colors, and shapes, and these were analyzed under matrices for the elements and principles used to create the compositions. The study indicated a demarcation between the two groups in the choice of lines, colors, and shapes used to create the compositions. The groups also tended to use repetition, proportion, and emphasis differently, giving rise to varied uses of design principles. The study yielded interesting observations on how design interpretation can vary for the same source of inspiration based on demographic and psychographic variances.
The implications can be traced not just to the process of creative design but also to the deep social roots that bind creative thinking and design ideation, which can provide an interesting commentary between different cohorts on what constitutes 'good design'.
Modes of Seeing in Interactive Exhibitions: A Study on How Technology Can Affect the Viewer and Transform the Exhibition Spaces
The current art exhibition scenario presents a multitude of visualization features deployed in experiences that instigate a process of art production and design. Exhibition design through multimedia devices, from the audiovisual to the touch screen, has become a medium through which art can be understood and contemplated. During the modern period, artistic practices articulated the spectator's perception in the exhibition space, often challenging the architecture of museums and galleries. In turn, the museum institution seeks to respond to the challenge of welcoming a viewer whose experience is mediated by technological artifacts. When the beholder, together with the technology, interacts with the exhibition space, important displacements happen. In this work, we analyze the migration of the exhibition space to the digital environment through mobile devices triggered by the viewer. This work is based not on technological determinism but on the conditions of the appearance of this spectator, with the aim of apprehending the way in which technology demarcates the differences between what the spectator was and what the spectator becomes in the contemporary atmosphere of museums and galleries. These notions, we believe, will contribute to the formation of an exhibition design space in conformity with this participant.
Participatory and Experience Design in Advertising: An Exploratory Study of Advertising Styles of Cultures
Advertising today has become an indispensable phenomenon for both businesses and consumers. Given the rapidly changing market conditions and growing competitiveness, the success of many firms that produce similar merchandise depends largely on how professionally and effectively they use marketing communication elements, which must also convey some sense of shared values between the message provider and the receiver within cultural and global trends. This paper demonstrates how consumer behaviour and communication through cultural values shape advertising styles. Using samples of award-winning ads from both the author's and other professionals' creative works, the study reveals a significant correlation between cultural elements and advertisement reception for language and cultural norms, respectively. The findings of this study draw attention to the change in communication at the beginning of the 21st century, which has shaped a new style of participatory and experience design in advertising.
Creating Emotional Brand Attachment Through Immersive Worlds in Brick-and-Mortar Stores
This paper analyzes the store Tarina Tarantino through an exploration of different perspectives on play. It is based on Yelp reviews in which customers disclose a very positive emotional reaction toward the store. The paper proposes some general principles for designing immersive stores based on 'possible world' theory. The aim is to show that an essential condition for customer engagement is overall cohesiveness among all elements in a store. The most significant contribution of this paper is the observation that products become props for role-playing in a store, hence making them central to maintaining that role outside the store.
Cryptocurrency-Based Mobile Payments with Near-Field Communication-Enabled Devices
Cryptocurrencies are becoming increasingly popular, but very few of them can be conveniently used in daily mobile phone purchases. To solve this problem, we demonstrate how to build a functional prototype of a mobile cryptocurrency-based e-commerce application that communicates with NFC tags. Using the system, users are able to purchase physical items via an NFC tag that contains an e-commerce URL. The payment is made simply by touching the tag with a mobile device and accepting the payment. Our method is constructive: we describe the design and the technologies used in the implementation, and we evaluate the security and performance of the solution. The analysis and measurements show that our solution is feasible for e-commerce.
Anti-Forensic Countermeasure: An Examination and Analysis Extended Procedure for Information Hiding of Android SMS Encryption Applications
Smartphone technology is spreading very rapidly into various fields. One of the mobile operating systems that dominate the smartphone market today is Google's Android. Unfortunately, the expansion of mobile technology is misused by criminals to hide the information that they store or exchange with each other, making it more difficult for law enforcement to prove crimes in the judicial process (anti-forensics). One technique used to hide information is encryption, such as the use of SMS encryption applications. A mobile forensic examiner or investigator should have a countermeasure technique ready for such obstacles during the investigation process. This paper discusses an extended procedure for the case in which the investigator finds unreadable SMS messages in Android evidence because of encryption. To define the extended procedure, we created and analyzed a dataset of Android SMS encryption applications. The dataset was grouped by application characteristics related to communication permissions, as well as the availability of source code and documentation of the encryption scheme. Permissions indicate how applications may exchange data and keys. The availability of source code and encryption scheme documentation can show which cryptographic algorithm specification is used, the key length, how key generation, key exchange, and encryption/decryption are done, and other related information. The output of this paper is an extended, alternative procedure for the examination and analysis process of Android digital forensics. It can help investigators when they are confronted with SMS encryption during examination and analysis: what steps should they take so that they still have a chance to recover the encrypted SMS messages in Android evidence?
Searching Forensic Evidence in Compromised Virtual Web Server Against Structured Query Language Injection Attacks and PHP Web Shell
SQL injection is one of the most common types of attacks and has a very critical impact on web servers. In the worst case, an attacker can perform post-exploitation after a successful SQL injection attack. In web server forensics, server analysis is closely related to log file analysis, but large file sizes and different log types can make it difficult for investigators to look for traces of attackers on the server. The purpose of this paper is to help investigators take appropriate steps when a web server is attacked. We use attack scenarios based on SQL injection, including PHP backdoor injection as post-exploitation. We perform post-mortem analysis of web server logs based on the Hypertext Transfer Protocol (HTTP) POST and HTTP GET request patterns that are characteristic of SQL injection attacks. In addition, we propose a structured analysis method that correlates the web server application log file, the database application log, and other additional logs that exist on the web server. This method gives the investigator a more structured way to analyze the log files so as to produce evidence of the attack in acceptable time, and other attack techniques may also be detected with it. It can also help web administrators prepare their systems for forensic readiness.
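As an illustration of the GET/POST signature approach, the sketch below filters an access log for request lines matching common SQL injection indicators. The signature list is illustrative, not the paper's:

```python
import re

# Hypothetical signatures of SQL injection attempts commonly seen in
# HTTP GET/POST request lines of a web server access log.
SQLI_PATTERNS = re.compile(
    r"(union\s+select|or\s+1=1|sleep\(|information_schema|%27|--|benchmark\()",
    re.IGNORECASE,
)

def scan_access_log(lines):
    """Return log lines whose GET/POST request matches an SQLi signature."""
    hits = []
    for line in lines:
        if ('"GET' in line or '"POST' in line) and SQLI_PATTERNS.search(line):
            hits.append(line)
    return hits
```

In practice the timestamps of these hits are then correlated with database and application logs to reconstruct the attack timeline.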
Rapid Evidence Remote Acquisition in High-Availability Server and Storage System for Digital Forensic to Unravel Academic Crime
Nowadays, digital systems, including but not limited to computers and the internet, have penetrated the education system widely. Critical information such as students' academic records is stored in servers off- or on-campus. Although several countermeasures have been taken to protect these vital resources from outside attack, defense against insider threats has not received serious attention. At the end of 2017, a security incident involving the academic information system of one of the most respected universities in Indonesia affected not only the reputation of the institution and its academia but also academic integrity in Indonesia. In this paper, we explain our efforts in investigating this security incident, in which we implemented a novel rapid remote evidence acquisition method for high-availability server and storage systems; as a result, our data collection did not disrupt the academic information system and could be conducted remotely minutes after the incident report was received. The acquired evidence was analyzed during the digital forensic process by reconstructing a model of the system in an isolated environment, which allowed multiple investigators to work together. In the end, the suspect was identified as a student (an insider), and the investigation result was used by prosecutors to charge the suspect with an academic crime.
Control Performance Simulation and Analysis for Microgravity Vibration Isolation System Onboard Chinese Space Station
The Microgravity Science Experiment Rack (MSER) will be onboard the TianHe (TH) spacecraft, planned for launch in 2018. TH is one module of the Chinese Space Station. The Microgravity Vibration Isolation System (MVIS), MSER's core part, is used to isolate disturbances from TH and provide a high level of microgravity for the science experiment payload. MVIS is a two-stage vibration isolation system consisting of a Follow Unit (FU) and an Experiment Support Unit (ESU). The FU is linked to MSER by umbilical cables, and the ESU is suspended within the FU without physical connection. The FU's position and attitude relative to TH are measured by a binocular vision measuring system, and its acceleration and angular velocity are measured by accelerometers and gyroscopes. Air-jet thrusters are used to generate force and moment to control the FU's motion. The measurement module on the ESU contains a set of position-sensitive detectors (PSDs) sensing the ESU's position and attitude relative to the FU, along with accelerometers and gyroscopes sensing the ESU's acceleration and angular velocity. Electromagnetic actuators are used to control the ESU's motion. First, the linearized equations of the FU's motion relative to TH and the ESU's motion relative to the FU are derived, laying the foundation for control system design and simulation analysis. Subsequently, two control schemes are proposed. In one scheme, the ESU tracks the FU and the FU tracks TH, abbreviated E-F-T. In the other, the FU tracks the ESU and the ESU tracks TH, abbreviated F-E-T. In addition, the motion spaces are constrained to within ±15 mm and ±2° between the FU and the ESU, and within ±300 mm between the FU and TH or between the ESU and TH. A Proportional-Integral-Derivative (PID) controller is designed to control the FU's position and attitude. The ESU's controller includes an acceleration feedback loop and a relative position feedback loop.
A Proportional-Integral (PI) controller is designed in the acceleration feedback loop to reduce the ESU's acceleration level, and a PID controller in the relative position feedback loop is used to avoid collision. Finally, simulations of E-F-T and F-E-T are performed considering a variety of uncertainties, disturbances, and motion space constraints. The simulation results of E-F-T show that the control performance ranges from 0 to -20 dB for vibration frequencies from 0.01 to 0.1 Hz, and vibration is attenuated by 40 dB per decade above 0.1 Hz. The simulation results of F-E-T show that vibration is attenuated by 20 dB per decade starting from 0.01 Hz.
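A minimal discrete sketch of the kind of PID controller described above (the gains and sample time are illustrative assumptions, not the flight values):

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt           # accumulated error
        deriv = (err - self.prev_err) / self.dt  # backward difference
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

For the FU loop, the setpoint is the commanded position/attitude relative to TH and the output drives the air-jet thrusters; dropping the derivative term gives the PI form used in the ESU acceleration loop.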
Design and Implementation of Partial Denoising Boundary Image Matching Using Indexing Techniques
In this paper, we design and implement a partial denoising boundary image matching system using indexing techniques. Converting boundary images to time-series makes it feasible to perform fast searches using indexes, even on a very large image database. Using this conversion, we develop a client-server system in a GUI (graphical user interface) environment based on previous partial denoising research. The client first converts a query image given by the user to a time-series and sends the denoising parameters and the tolerance together with this time-series to the server. The server identifies similar images from the index by evaluating a range query constructed from the client's inputs and sends the resulting images back to the client. Experimental results show that our system provides intuitive and accurate matching results.
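One common boundary-to-time-series conversion (assumed here for illustration; the paper's exact conversion may differ) samples the distance from the shape's centroid to points along the contour:

```python
import math

def boundary_to_series(points, n=128):
    """Convert a closed boundary (list of (x, y) points) into a
    fixed-length time-series of centroid-to-contour distances."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    series = []
    for i in range(n):
        p = points[int(i * len(points) / n)]     # resample the contour
        series.append(math.hypot(p[0] - cx, p[1] - cy))
    return series
```

Once every image is reduced to a fixed-length series like this, a standard multidimensional index over the series supports the range queries the server evaluates.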
Online Information Seeking: A Review of the Literature in the Health Domain
The development of information technology and the Internet has been transforming the healthcare industry. The internet is continuously accessed to seek health information from a variety of sources, including search engines, health websites, and social networking sites. Providing more and better information on health may empower individuals; however, ensuring high-quality and trusted health information poses a challenge. Moreover, there is an ever-increasing amount of information available, but it is not necessarily accurate and up to date. Thus, this paper aims to provide an insight into the models and frameworks related to consumers' online health information seeking. It begins by exploring the definitions of information behavior and information seeking to provide a better understanding of the concept of information seeking. In this study, critical factors such as performance expectancy, effort expectancy, and social influence will be studied in relation to the value of seeking health information. It also aims to analyze the effect of age, gender, and health status as moderators on the factors that influence online health information seeking, i.e., trust and information quality. A preliminary survey will be carried out among health professionals to clarify the research problems that exist in the real world, while also producing a conceptual framework. A final survey will be distributed across five states of Malaysia to solicit feedback on the framework. Data will be analyzed using the SPSS and SmartPLS 3.0 analysis tools. It is hoped that, at the end of this study, a novel framework that can improve online health information seeking will be developed. Finally, this paper concludes with some suggestions on the models and frameworks that could improve online health information seeking.
Bag of Local Features for Person Re-Identification on Large-Scale Datasets
In the last few years, large-scale person re-identification has attracted a lot of attention in video surveillance, since it has promising applications in public safety management. However, it remains challenging, considering the variation in human pose, changing illumination conditions, and the lack of paired samples. Although accuracy has improved significantly, training remains heavily dependent on large amounts of sample data. To tackle this problem, a new strategy is proposed that designs the feature representation based on the bag of visual words (BoVW) model, which has been widely used in the field of image retrieval. Local features are extracted, a more discriminative feature representation is obtained by cross-view dictionary learning (CDL), and then the assignment map is obtained through k-means clustering. Finally, the BoVW histograms are formed, which encode the images with the statistics of the feature classes in the assignment map. Experiments conducted on the CUHK03, Market1501, and MARS datasets show that the proposed method performs favorably against existing approaches.
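The final histogram-forming step can be illustrated with a minimal sketch: quantize each local feature against a learned codebook (here a hypothetical 2-word codebook instead of CDL/k-means output) and count assignments.

```python
def bovw_histogram(local_features, codebook):
    """Quantize local feature vectors against a codebook (list of centroids)
    and return the normalized bag-of-visual-words histogram."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    hist = [0] * len(codebook)
    for f in local_features:
        # assignment map: index of the nearest centroid (k-means assignment)
        nearest = min(range(len(codebook)), key=lambda k: sq_dist(f, codebook[k]))
        hist[nearest] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]   # normalize so histograms are comparable

codebook = [(0.0, 0.0), (1.0, 1.0)]           # hypothetical 2-word codebook
features = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.8)]
h = bovw_histogram(features, codebook)        # one feature near word 0, two near word 1
```

Two images can then be compared by any histogram distance, independent of how many local features each contained.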
Multiple Images Stitching Based on Gradually Changing Matrix
Image stitching is a very important branch in the field of computer vision, especially for panoramic mapping. In order to eliminate shape distortion, a novel stitching method based on a gradually changing matrix is proposed for horizontally captured images. For such images, this paper assumes that there is only a translational operation in image stitching. By analyzing each parameter of the homography matrix, the global homography matrix is gradually transformed into a translation matrix so as to eliminate the effects of scaling, rotation, etc. in the image transformation. This paper adopts matrix approximation to minimize the energy function, so that the shape distortion in the regions governed by the homography can be minimized. The proposed method avoids the failure of stitching multiple horizontal images caused by accumulated shape distortion. At the same time, it can be combined with the As-Projective-As-Possible algorithm to ensure precise alignment of the overlapping area.
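The idea of gradually transforming a homography into a translation matrix can be sketched as a simple linear blend; this is an illustrative interpretation, not the paper's energy-minimizing approximation, and the homography values are hypothetical.

```python
def gradually_changing_matrix(H, t):
    """Blend a 3x3 homography H toward its pure-translation part as t goes
    0 -> 1, suppressing scaling, rotation, and perspective terms
    (assumes H is normalized so that H[2][2] == 1)."""
    # translation matrix keeping only H's translation components
    T = [[1.0, 0.0, H[0][2]],
         [0.0, 1.0, H[1][2]],
         [0.0, 0.0, 1.0]]
    return [[(1 - t) * H[i][j] + t * T[i][j] for j in range(3)]
            for i in range(3)]

H = [[1.05, 0.02, 30.0],
     [-0.01, 0.98, 5.0],
     [0.0001, 0.0, 1.0]]        # hypothetical estimated homography
M_half = gradually_changing_matrix(H, 0.5)   # halfway between H and translation
M_end = gradually_changing_matrix(H, 1.0)    # pure translation
```

Applying increasing t to images farther from the reference frame keeps distortion from accumulating across a long horizontal panorama.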
Long-Term Tracking Algorithm with Selected Deep Features and Single Shot MultiBox Detector
In recent years, correlation filtering based algorithms have achieved significant performance in tracking. Traditionally, a filter is trained on the previous frame to predict the target position in the next frame, and features are then extracted from the current target. However, we find that if the present frame drifts during tracking, the succeeding frame is subject to larger offset errors, which may eventually lead to the loss of the tracking target and reduce accuracy and stability. In order to ensure high accuracy of tracking results, we add deep learning into our tracking model. First, we choose deep features as the features of the correlation filter. Considering that the dimension of deep features is too high, we use a sparse representation method to filter them, which improves the accuracy and running speed of the algorithm. We use a Siamese network to judge the similarity between the target position and the template and determine the confidence accordingly. Then, the Single Shot MultiBox Detector (SSD) starts detecting in the current frame once the confidence value crosses a threshold, in order to update the tracking model. In this way, once drift occurs, the detection algorithm restarts localization to obtain better stability and accuracy. The experimental results demonstrate that the proposed approach outperforms state-of-the-art approaches on large-scale benchmark datasets.
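The tracker/detector interplay can be sketched as a control loop; this is a schematic reading of the abstract in which redetection fires when the Siamese-style confidence falls below a threshold, with the tracker, confidence function, and detector supplied as caller-provided callables rather than real correlation-filter/SSD models.

```python
def track_with_redetection(frames, init_box, predict, confidence, redetect,
                           threshold=0.6):
    """Tracking loop with a confidence check: `predict` advances the box
    (correlation filter in the paper), `confidence` scores template
    similarity (Siamese network), and `redetect` re-localizes the target
    (SSD) when confidence drops, limiting error accumulation after drift."""
    box = init_box
    history = []
    for frame in frames:
        box = predict(frame, box)
        if confidence(frame, box) < threshold:
            box = redetect(frame)       # detector restores the drifted track
        history.append(box)
    return history

# Toy run with stand-in callables: boxes are scalars, frame 1 "drifts".
hist = track_with_redetection(
    frames=[0, 1, 2], init_box=10,
    predict=lambda f, b: b + 1,
    confidence=lambda f, b: 0.2 if f == 1 else 0.9,
    redetect=lambda f: 0)
```

The structure makes the long-term behavior explicit: prediction runs every frame, while the expensive detector runs only on low-confidence frames.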
Jordan Water District Interactive Billing and Accounting Information System
The Jordan Water District Interactive Billing and Accounting Information System is designed for the Jordan Water District to improve the efficiency and effectiveness of its services to its customers. It is designed to compute water bills in an accurate and fast way by automating the manual process and ensuring that correct rates and fees are applied. In addition to the billing process, a mobile app will be integrated to support rapid and accurate water bill generation. An interactive feature will be incorporated to support electronic billing for customers who wish to receive water bills via electronic mail. The system will also improve and organize accounting processes and avoid data inaccuracy, because data will be stored in a database designed to be logically correct through normalization. Furthermore, strict programming constraints will be imposed to validate account access privileges based on job function and on the data being stored and retrieved, to ensure data security, reliability, and accuracy. The system will be able to cater to the billing and accounting services of the Jordan Water District, replacing the manual process and adapting to modern technological innovations.
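Applying "correct rates and fees" typically means a tiered rate schedule; the sketch below shows that computation with entirely hypothetical rates, not the district's actual schedule.

```python
def water_bill(cubic_meters, tiers):
    """Compute a tiered water bill: each (limit, rate) tier prices consumption
    up to `limit` cubic meters; the last tier should use float('inf')."""
    bill, prev_limit = 0.0, 0
    for limit, rate in tiers:
        if cubic_meters <= prev_limit:
            break
        billable = min(cubic_meters, limit) - prev_limit  # volume in this tier
        bill += billable * rate
        prev_limit = limit
    return round(bill, 2)

# Hypothetical rate schedule; the district's actual rates would replace these.
TIERS = [(10, 1.50), (20, 2.25), (float("inf"), 3.00)]
amount = water_bill(25, TIERS)   # 10*1.50 + 10*2.25 + 5*3.00
```

Keeping the schedule as data (rather than hard-coded branches) lets rate changes be applied without touching the billing logic, which supports the system's accuracy goal.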
Analyzing the Performance of Data Partitioning in Real-Time Spatial Big Data: The Implementation of Matching Algorithm for Vertical Partitioning
In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, a tremendous amount of real-time spatial data is generated every day. The growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period, regardless of the load on the system. But with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data; they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data amount of each partition more balanced and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and to deal with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This contributes to QoS (Quality of Service) improvement in real-time spatial Big Data, especially with a huge volume of stream data.
The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
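Affinity-based vertical partitioning starts from attribute co-access statistics; the sketch below computes that affinity matrix from a hypothetical query workload (it illustrates the general input to Matching-style partitioning, not the paper's algorithm or cost model).

```python
from collections import defaultdict
from itertools import combinations

def affinity_matrix(queries):
    """Count how often each attribute pair is accessed together across a
    workload: the input to affinity/Matching-based vertical partitioning.
    Each query is given as (set_of_attributes, access_frequency)."""
    aff = defaultdict(int)
    for attrs, freq in queries:
        for a, b in combinations(sorted(attrs), 2):
            aff[(a, b)] += freq
    return dict(aff)

# Hypothetical workload: three queries over four attributes.
workload = [({"id", "pos"}, 10), ({"id", "pos"}, 5), ({"speed", "ts"}, 8)]
aff = affinity_matrix(workload)
# ("id", "pos") and ("speed", "ts") never co-occur -> two natural fragments
```

Attributes with high mutual affinity are placed in the same fragment, so the most frequent queries can be answered from a single partition while others run in parallel across fragments.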
Inner Attention Based Bi-Long-Short Term Memories with Indexing for Non-Factoid Question Answering
This paper focuses on solving the problem of non-factoid question answering, an important task with applications in knowledge base construction and information extraction. We have tried to overcome the challenges in non-factoid question answering using a combination of different deep learning models. The combination of LSTMs with other deep learning models has proved very useful in the task of answering non-factoid questions. In this paper, we extend the deep learning model based on bidirectional LSTMs in two directions with different neural network models. In one direction, we combine a convolutional neural network with a basic LSTM for a more composite question-answer embedding, and in the other direction, we apply an inner attention mechanism, proposed by Bingning Wang et al., to the LSTMs. We also use an information retrieval model along with these models to generate answers. Our approach showed an improvement in accuracy over the baselines and the referenced model, both in general and with respect to the length of the answers in the datasets used.
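The core of an inner-attention mechanism is weighting the LSTM hidden states by their softmax-normalized similarity to a question representation before pooling; the sketch below shows that computation on toy vectors (a dot-product score stands in for the learned scoring used in the referenced work).

```python
import math

def inner_attention(hidden_states, query):
    """Weight bi-LSTM hidden states by softmax similarity to a question
    representation, then average-pool (a simplified inner-attention sketch)."""
    # attention scores: dot product between each hidden state and the query
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query)) for h in hidden_states]
    m = max(scores)                                   # for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    pooled = [sum(w * h[d] for w, h in zip(weights, hidden_states))
              for d in range(dim)]
    return weights, pooled

states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy hidden states
w, v = inner_attention(states, query=[1.0, 0.0])
```

The pooled vector `v` then serves as the answer embedding that is matched against the question embedding.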
Fuzzy Logic Based Intrusion Detection Systems as a Service for Malicious Port Scanning Traffic Detection
Port scanning is a network attack that allows attackers to gather valuable information about target hosts, such as defense, government, and bank servers, by instantly identifying open ports, which correspond to specific services on the cloud, such as HTTP, DNS, and email. The basic role of Intrusion Detection Systems (IDSs) is to monitor networks and systems for malicious activities, policy violations, and unauthorized information gathering activities. In this paper, we propose a TCP port scanning detection framework based on a fuzzy logic controller, which uses a fuzzy rule base and the Mamdani inference method. The proposed platform is a Fuzzy IDS as a Service, which enables network administrators and cybersecurity specialists to follow network traffic behavior in real time, i.e., the Port Scanning Criticality Level (PSCL). A SaaS dynamic dashboard is implemented to quickly and efficiently identify malicious port scanning activities. Experiments and evaluations showed the efficiency of the proposed system in multilevel port scanning detection compared to Snort and related IDSs.
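A Mamdani-style pipeline (fuzzify, fire rules, defuzzify) can be sketched as below. The membership ranges, rules, and consequent values are hypothetical, and a weighted average stands in for full centroid defuzzification; the paper's actual rule base is not reproduced here.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def pscl(scan_rate):
    """Sketch of Mamdani inference: fuzzify the port-scan rate (ports/sec),
    fire three rules, and defuzzify to a Port Scanning Criticality Level
    in [0, 100] via a weighted average of rule consequents."""
    low = tri(scan_rate, -1, 0, 20)
    med = tri(scan_rate, 10, 30, 50)
    high = tri(scan_rate, 40, 100, 161)
    # rule consequents: low -> PSCL 10, medium -> 50, high -> 90
    num = low * 10 + med * 50 + high * 90
    den = low + med + high
    return num / den if den else 0.0

level = pscl(30)   # a fully "medium" scan rate
```

Intermediate scan rates activate two rules at once, which is what gives the fuzzy detector its graded, multilevel criticality output rather than a hard threshold.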
Voice Controlled Robotic Manipulator with Decision Tree and Contour Identification Techniques
Robotic manipulators are widely employed for a number of industrial and commercial purposes today. Most of these manipulators are preprogrammed with specific instructions that they follow precisely. With such forms of control, the adaptability of the manipulators to dynamic environments, as well as to new tasks, is reduced. In order to increase the flexibility and adaptability of robotic manipulators, alternative control mechanisms need to be pursued. In this paper, we present one such mechanism, using a speech-based control system along with machine perception. The manipulator is also equipped with a vision system that allows it to recognize the required objects in the environment. Voice commands issued by the user as plain English statements are converted into precise instructions, after incorporating the environment sensed by the robot, and executed by the manipulator. This combination of speech and vision systems allows the manipulator to adapt itself to changing tasks as well as environments without any reprogramming.
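The mapping from a plain-English statement to a precise instruction can be illustrated with a toy keyword parser; the action vocabulary and object list are hypothetical, and the paper's system additionally fuses the vision system's scene understanding before execution.

```python
def parse_command(sentence):
    """Toy sketch: map a plain-English voice command to an (action, object)
    instruction via keyword matching; returns None if either part is missing."""
    actions = {"pick": "PICK", "grab": "PICK", "place": "PLACE", "move": "MOVE"}
    objects = {"cube", "ball", "bottle"}       # hypothetical recognizable objects
    words = sentence.lower().split()
    action = next((actions[w] for w in words if w in actions), None)
    obj = next((w for w in words if w in objects), None)
    if action is None or obj is None:
        return None                            # incomplete command -> no instruction
    return (action, obj)

cmd = parse_command("Please pick up the red ball")
```

In the full system, the recognized object name would be grounded to a contour detected by the vision system before the manipulator executes the action.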
Analyzing News from Electronic Media and Topics Discussed on Social Media Using Ontology
Nowadays, social networking sites contribute significantly to representing public opinion on many issues. Moreover, many news channels broadcast news via social media, where people comment with their own opinions. Furthermore, as people become the largest sensor network, trending topics are led not only by the media but also by the public, and hence it is worth pondering how they affect each other. A majority of the population is converging toward social media to discuss news as well. In this paper, our focus is to find the difference between the topics initiated and discussed by the public on social media and the topics the news media are more willing to cover, thereby assessing the general public perception of bias shown by mainstream media outlets. To perform our analysis, we built a news ontology containing data from social media and electronic media and answered the following two questions: Are the topics discussed on social media different from those covered by the mainstream news media of Pakistan? And is the mainstream media of Pakistan biased towards specific topics, tending to report them more frequently while ignoring certain other topics, as compared to social media? To answer these questions, we formulated SPARQL queries and analyzed the results obtained. Our ontology provides a basic model for analyzing news gathered from different sources, which can be utilized in the future for analyzing various subjects, for example, comparing different news channels and the bias among them.
Graphic User Interface Design Principles for Designing Augmented Reality Applications
Reality is a combination of perception, reconstruction, and interaction. Since augmented reality layers digital content over the everyday world and incorporates content-based, voice-based, and guide-based or gesture-based interfaces, designing an augmented reality application is a difficult task for the designer. To ease this task, some interface design principles have been introduced. The basic goal of introducing design principles is to match user effort to the computer display ("plot user input onto computer output") using appropriate interface action symbols ("metaphors"), or to make a particular application easy to use, easy to understand, and easy to discover. In this study, by examining augmented reality systems and interfaces, a few well-known GUI design principles ("user-centered design") are identified, and through them, several issues are shown that can be resolved by applying the design principles. With the help of multiple studies, we suggest different principles that help in designing augmented reality applications, and with the help of those principles, we conclude our findings by applying the suggested principles to the augmented reality game Pokémon Go.