International Science Index
International Journal of Computer and Information Engineering
Improving the Performance of Deep Learning in Facial Emotion Recognition with Image Sharpening
Humans use words together with visual and facial cues to communicate effectively, and classifying facial emotion with computer vision methods has long been an active research area. In this paper, we propose a simple method for facial expression recognition that enhances accuracy. We tested our method on the FER-2013 dataset of static images. Instead of preprocessing the dataset with histogram equalization, we applied an unsharp mask to emphasize texture and detail and to sharpen edges, and we used ImageDataGenerator from the Keras library for data augmentation. A Convolutional Neural Network (CNN) model then classifies the images into seven facial expressions, yielding an accuracy of 69.46% on the test set. Our results show that image preprocessing such as sharpening can improve the performance of a CNN model, even when the model is relatively simple.
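The sharpening step described above can be sketched in plain NumPy (a minimal illustration, not the authors' code; a box blur stands in for the Gaussian blur usually used in unsharp masking, and the 3x3 kernel size and amount are illustrative):

```python
import numpy as np

def box_blur(img, k=3):
    """Blur with a k x k box filter (edge-replicated borders)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=1.0, k=3):
    """Sharpen: original + amount * (original - blurred)."""
    blurred = box_blur(img, k)
    sharpened = img.astype(float) + amount * (img.astype(float) - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```

Applied to a grayscale face crop, pixels near an edge are pushed away from the local average, which is the texture/edge emphasis the abstract refers to.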
Blockchain’s Feasibility in Military Data Networks
Communication security is of particular interest to military data networks. A relatively novel approach to network security is blockchain, a cryptographically secured distributed ledger with a decentralized consensus mechanism for data transaction processing. Recent advances in blockchain technology have proposed new techniques for both data validation and trust management, as well as different frameworks for managing dataflow. The purpose of this work is to test the feasibility of different blockchain architectures as applied to military command and control networks. Various architectures are tested through discrete-event simulation, and feasibility is determined by a blockchain design's ability to maintain long-term stable performance at industry standards of throughput, network latency, and security. This work proposes a consortium blockchain architecture with a computationally inexpensive consensus mechanism, one that leverages a Proof-of-Identity (PoI) concept and a reputation management mechanism.
End-to-End Spanish-English Sequence Learning Translation Model
The low availability of well-trained, unlimited, dynamic-access models for specific languages makes it hard for corporate users to adopt quick translation techniques and incorporate them into product solutions. As translation tasks increasingly demand a dynamic learning curve, stable, cost-free, open-source models remain scarce. We survey and compare current translation techniques and propose a sequence-to-sequence model modified with attention techniques. Sequence learning with an encoder-decoder model is now paving the path to higher precision in translation. Using a Convolutional Neural Network (CNN) encoder and a Recurrent Neural Network (RNN) decoder, we use the Fairseq toolkit to produce an end-to-end, bilingually trained Spanish-English machine translation model that includes source-language detection. We obtain competitive results with a model trained on a bilingual corpus, providing prospective, ready-made plug-in use for compound-sentence and document translation. Our model serves as a capable system for large, organizational data translation needs; while we acknowledge its shortcomings and future scope, it stands as a well-optimized deep neural network solution.
Design, Development by Functional Analysis in UML and Static Test of a Multimedia Voice and Video Communication Platform on IP for a Use Adapted to the Context of Local Businesses in Lubumbashi
In this article we present a Java implementation of video telephony using the Session Initiation Protocol (SIP). After a functional analysis of the SIP protocol, we relied on the work of researchers at the University of Parma (Italy) to acquire adequate libraries for the development of our own communication tool. To optimize the code and improve the prototype, we used, in an incremental approach, static-analysis test techniques that evaluate software complexity through metrics, including McCabe's cyclomatic number. The objective is to promote the emergence of local start-ups producing IP video in a well-understood local context. The result is a video telephony tool whose code is optimized.
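McCabe's cyclomatic number mentioned above is computed as M = E - N + 2P over a program's control-flow graph. A minimal sketch (the example graph is hypothetical, not taken from the paper):

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's metric M = E - N + 2P for a control-flow graph
    with E edges, N nodes, and P connected components."""
    return len(edges) - len(nodes) + 2 * components

# CFG of a function with a single if/else: entry -> cond -> then|else -> exit
nodes = ["entry", "cond", "then", "else", "exit"]
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]
```

One decision point yields M = 2, i.e. two linearly independent paths through the function; a straight-line function yields M = 1.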
Journey to Cybercrime and Crime Opportunity: Quantitative Analysis of Cyber Offender Spatial Decision Making
Owing to the advantage of using the Internet, cybercriminals can reach their target(s) without border controls. Prior research in criminology and crime science has largely been devoid of empirical studies on journey-to-cybercrime and crime opportunity. Thus, the purpose of this study is to better understand cyber offender spatial decision making associated with crime opportunity factors (i.e., co-offending, offender-stranger relationships). Data utilized in this study were derived from 306 U.S. federal court cases of cybercrime. The findings indicated a positive relationship between co-offending and journey-to-cybercrime, whereas there was no link between offender-stranger and journey-to-cybercrime. The results also showed no relationship between cybercriminal sex or age and journey-to-cybercrime. The policy implications and limitations of this study are discussed.
Simulation of Obstacle Avoidance for Multiple Autonomous Vehicles in a Dynamic Environment Using Q-Learning
The availability of inexpensive yet competent hardware allows for an increased level of automation and self-optimization in the context of Industry 4.0. However, such agents require high-quality information about their surroundings along with a robust strategy for collision avoidance, as they may otherwise cause expensive damage to equipment or other agents. Manually defining a strategy to cover all possibilities is both time-consuming and counter-productive given the capabilities of modern hardware. This paper explores the idea of a model-free, self-optimizing obstacle avoidance strategy for multiple autonomous agents in a simulated dynamic environment using the Q-learning algorithm.
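The Q-learning update underlying such a strategy, Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)), can be illustrated on a toy corridor world (all states, rewards, and hyperparameters here are illustrative and not the paper's simulation):

```python
import random

def train_q(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a 1-D corridor: start at state 0,
    goal at the right end, small penalty per step."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0=left, 1=right
    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda a: Q[s][a])
            s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
            r = 1.0 if s2 == goal else -0.01
            # the Q-learning temporal-difference update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy moves toward the goal from every state; obstacle avoidance replaces the step penalty with collision penalties in the same update rule.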
A Structural Support Vector Machine Approach for Biometric Recognition
The face is a non-intrusive, strong biometric for distinguishing genuine faces from dummy faces produced by various artificial means. Face recognition is extremely important in the contexts of computer vision, psychology, surveillance, pattern recognition, neural networks, and content-based video processing. The availability of a widespread face database is crucial for testing the performance of face recognition algorithms. The openly available face databases include face images with a wide range of poses, illumination, gestures, and occlusions, but no dummy-face database is accessible in the public domain. This paper presents a face detection algorithm based on image segmentation in terms of distance from a fixed point, combined with template matching. The proposed work selects the most appropriate number of nodal points, yielding the best outcomes in face recognition and detection. The time taken to identify and extract distinctive facial features is improved to the range of 90 to 110 s, with a 3% gain in efficiency.
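Template matching of the kind the abstract mentions can be sketched as normalized cross-correlation (a minimal NumPy illustration, not the authors' implementation; real systems use faster FFT-based correlation):

```python
import numpy as np

def match_template(image, template):
    """Return (row, col) of the best normalized cross-correlation match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum() * (t * t).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

A window identical to the template scores 1.0, so planting the template in an otherwise flat image recovers its position exactly.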
Alternative Key Exchange Algorithm Based on Elliptic Curve Digital Signature Algorithm Certificate and Usage in Applications
X.509v3 certificates based on the Elliptic Curve Digital Signature Algorithm (ECDSA) are becoming more popular due to their short public and private key sizes. Moreover, these certificates can be stored in resource-limited Internet of Things (IoT) devices using less memory and transmitted with less bandwidth in network security protocols such as Internet Key Exchange (IKE), Transport Layer Security (TLS), and Secure Shell (SSH). The proposed method offers a further advantage: it increases the key-exchange performance of the above-mentioned protocols by saving one scalar multiplication operation.
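The scalar multiplication the method saves is the dominant cost of elliptic-curve key exchange. A toy double-and-add sketch over a tiny textbook curve (the curve, generator, and private keys below are illustrative and far too small for real security):

```python
# Toy elliptic curve y^2 = x^3 + 2x + 3 over F_97 -- illustration only.
P, A, B = 97, 2, 3
G = (3, 6)   # a point on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
O = None     # point at infinity

def inv(x):
    return pow(x, P - 2, P)  # modular inverse via Fermat's little theorem

def add(p1, p2):
    """Elliptic-curve point addition (affine coordinates)."""
    if p1 is O: return p2
    if p2 is O: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O
    if p1 == p2:
        m = (3 * x1 * x1 + A) * inv(2 * y1) % P
    else:
        m = (y2 - y1) * inv(x2 - x1) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, pt):
    """Double-and-add scalar multiplication -- the operation the
    proposed method saves once per key exchange."""
    r = O
    while k:
        if k & 1:
            r = add(r, pt)
        pt = add(pt, pt)
        k >>= 1
    return r

# ECDH-style exchange: both sides derive the same shared point
a_priv, b_priv = 13, 27
shared1 = mul(a_priv, mul(b_priv, G))
shared2 = mul(b_priv, mul(a_priv, G))
```

Each `mul` call is one scalar multiplication; eliminating one per handshake is exactly the saving the abstract claims.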
A Comparative Study of Medical Image Segmentation Methods for Tumor Detection
Image segmentation plays a fundamental role in analysis and interpretation for many applications. The automated segmentation of organs and tissues throughout the body using computed imaging has been advancing rapidly; indeed, it represents one of the most important parts of clinical diagnostic tools. In this paper, we present a thorough literature review of recent methods for tumor segmentation from medical images, briefly explaining the contributions of various researchers. We then compare these methods in order to define new directions for developing and improving the performance of tumor-area segmentation in medical images.
Connected Objects with Optical Rectenna for Wireless Information Systems
Harvesting and transport of optical and radio-frequency signals is a topical subject with multiple challenges. In this paper, we present an optical rectenna system: a hybrid solar-cell antenna for 5G mobile communication networks together with a rectifying circuit. A parametric study follows the influence of load resistance and input power on the optical rectenna's performance. We thus propose a solar-cell antenna structure operating in the 2.45 GHz band of the future 5G standard.
Machine Learning Development Audit Framework: Assessment and Inspection of Risk and Quality of Data, Model and Development Process
The usage of machine learning models for prediction is growing rapidly, and proof that the intended requirements are met is essential. Audits are a proven method to determine whether requirements or guidelines are met. However, machine learning models have intrinsic characteristics, such as the quality of training data, that make it difficult to demonstrate the required behavior and make audits more challenging. This paper describes an ML audit framework that evaluates and reviews the risks of machine learning applications, the quality of the training data, and the machine learning model. We evaluate and demonstrate the functionality of the proposed framework by auditing a steel plate fault prediction model.
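Two of the checks such an audit covers, training-data completeness and class balance, can be sketched as simple assertions (the thresholds and function names are illustrative, not the framework's API):

```python
from collections import Counter

def audit_missing(rows, features):
    """Fraction of missing (None) values per feature."""
    n = len(rows)
    return {f: sum(r.get(f) is None for r in rows) / n for f in features}

def audit_class_balance(labels):
    """Ratio of the rarest to the most frequent class (1.0 = balanced)."""
    counts = Counter(labels)
    return min(counts.values()) / max(counts.values())

def audit_report(rows, features, labels, max_missing=0.05, min_balance=0.2):
    """Collect findings an auditor would have to follow up on."""
    findings = []
    for f, rate in audit_missing(rows, features).items():
        if rate > max_missing:
            findings.append(f"feature '{f}': {rate:.0%} missing")
    if audit_class_balance(labels) < min_balance:
        findings.append("severe class imbalance in labels")
    return findings
```

An empty report means the data passed these two checks; each finding would map to a risk item in the audit.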
Machine Learning Facing Behavioral Noise Problem in an Imbalanced Data Using One Side Behavioral Noise Reduction: Application to a Fraud Detection
With the expansion of machine learning and data mining in the context of Big Data analytics, a common problem affecting data is class imbalance: an uneven distribution of instances across classes. This problem is present in many real-world applications such as fraud detection, network intrusion detection, and medical diagnostics. In these cases, data instances labeled negatively are significantly more numerous than the instances labeled positively. When this difference is too large, the learning system may face difficulty, since it is initially designed to work in relatively balanced class-distribution scenarios. Another important problem, which usually accompanies such imbalanced data, is the overlap of instances between the two classes, commonly referred to as noise or overlapping data. In this article, we propose an approach called One Side Behavioral Noise Reduction (OSBNR) to deal with class imbalance in the presence of a high noise level. OSBNR is based on two steps. First, a cluster analysis is applied to group similar instances from the minority class into several behavior clusters. Second, we select and eliminate the instances of the majority class, considered behavioral noise, that overlap with the behavior clusters of the minority class. Results of experiments carried out on a representative public dataset confirm that the proposed approach is effective for the treatment of class imbalance in the presence of noise.
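One plausible reading of the two OSBNR steps, cluster the minority class and then drop overlapping majority instances, sketched with NumPy (the naive k-means, radius threshold, and all data are illustrative; the paper's actual clustering and overlap criteria may differ):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Naive k-means, enough to form behavior clusters for the sketch."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def osbnr_filter(X_maj, X_min, k=2, radius=1.0):
    """Step 1: cluster the minority class into behavior clusters.
    Step 2: drop majority instances overlapping any cluster center."""
    centers = kmeans(X_min, k)
    d = np.sqrt(((X_maj[:, None] - centers) ** 2).sum(-1)).min(axis=1)
    return X_maj[d > radius]
```

Majority points sitting inside a minority behavior cluster are treated as behavioral noise and removed; distant majority points survive.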
MarginDistillation: Distillation for Face Recognition Neural Networks with Margin-Based Softmax
The usage of convolutional neural networks (CNNs) in conjunction with the margin-based softmax approach demonstrates state-of-the-art performance for the face recognition problem. Recently, lightweight neural network models trained with the margin-based softmax have been introduced for the face identification task on edge devices. In this paper, we propose a distillation method for lightweight neural network architectures that outperforms other known methods for the face recognition task on the LFW, AgeDB-30, and MegaFace datasets. The idea of the proposed method is to use class centers from the teacher network for the student network; the student network is then trained to reproduce the angles between the class centers and the face embeddings predicted by the teacher network.
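The angle-matching idea can be sketched as follows (a simplified reading: both networks share the teacher's class centers, and the student is penalized for deviating from the teacher's embedding-to-center angles; the actual MarginDistillation loss may differ):

```python
import numpy as np

def angles(embeddings, centers):
    """Angle (radians) between each embedding and each class center."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    return np.arccos(np.clip(e @ c.T, -1.0, 1.0))

def distill_loss(student_emb, teacher_emb, teacher_centers):
    """Mean squared mismatch between student and teacher
    embedding-to-center angles, with shared teacher centers."""
    return np.mean((angles(student_emb, teacher_centers)
                    - angles(teacher_emb, teacher_centers)) ** 2)
```

The loss is zero when the student reproduces the teacher's angular geometry exactly and grows as the student's embeddings rotate away from it.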
Hybrid Collaborative-Context Based Recommendations for Civil Affairs Operations
In this paper we present findings from a research effort to apply a hybrid collaborative-context approach for a system focused on Marine Corps civil affairs data collection, aggregation, and analysis called the Marine Civil Information Management System (MARCIMS). The goal of this effort is to provide operators with information to make sense of the interconnectedness of entities and relationships in their area of operation and to discover existing data to support civil military operations. Our approach to building a recommendation engine was designed to overcome several technical challenges, including 1) ensuring models were robust to the relatively small amount of data collected by the Marine Corps civil affairs community; 2) finding methods to recommend novel data for which no interactions have been captured; and 3) overcoming confirmation bias by ensuring that content relevant to the mission was recommended despite being obscure or less well known. We address these by implementing a combination of collective matrix factorization (CMF) and graph-based random walks to provide recommendations to civil military operations users. We also present a method that resolves the computational complexity inherent in highly connected nodes through a precomputed process.
Analyzing the Factors that Cause Parallel Performance Degradation in Parallel Graph-Based Computations Using Graph500
Recently, graph-based computations have become more important in large-scale scientific computing, as they provide a methodology to model many types of relations between independent objects. They are actively used in fields as varied as biology, social networks, cybersecurity, and computer networks. At the same time, graph problems have properties such as irregularity and poor locality that make their performance differ from that of regular applications. Parallelizing graph algorithms is therefore a hard and challenging task. Initial evidence is that standard computer architectures do not perform very well on graph algorithms, and little is known about exactly what causes this. The Graph500 benchmark is a representative application for parallel graph-based computations, with highly irregular data access, driven more by traversing connected data than by computation. In this paper, we present results from analyzing the performance of several example implementations of Graph500, including a shared-memory (OpenMP) version, a distributed (MPI) version, and a hybrid version. We measured and analyzed the factors that affect performance in order to identify possible changes that would improve it, and we discuss which factors contribute to performance degradation.
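Graph500's central kernel is breadth-first search; a minimal sketch showing the irregular, data-driven access pattern the abstract refers to:

```python
from collections import deque

def bfs_parents(adj, root):
    """Graph500-style BFS kernel: return the parent of each reachable
    vertex. The scattered reads of adj[u] are the irregular memory
    accesses that make graph workloads cache-unfriendly."""
    parent = {root: root}
    frontier = deque([root])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                frontier.append(v)
    return parent
```

The benchmark validates exactly this kind of parent array and reports traversed edges per second (TEPS) rather than FLOPS.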
Co-Creational Model for Blended Learning in a Flipped Classroom Environment Focusing on the Combination of Coding and Drone-Building
The outbreak of the COVID-19 pandemic has shown
us that online education is so much more than just a cool feature for
teachers – it is an essential part of modern teaching. In online math
teaching, it is common to use tools to share screens, compute and
calculate mathematical examples, while the students can watch the
process. On the other hand, flipped classroom models are on the rise,
with their focus on how students can gather knowledge by watching
videos and on the teacher’s use of technological tools for information
transfer. This paper proposes a co-educational teaching approach for
coding and engineering subjects with the help of drone-building to
spark interest in technology and create a platform for knowledge
transfer. The project combines aspects from mathematics (matrices, vectors, shaders, trigonometry), physics (force, pressure and rotation) and coding (computational thinking, block-based programming, modeling with clara.io), through which students create mathematics know-how.
The instructor follows a problem-based learning approach and
encourages their students to find solutions in their own time and in
their own way, which will help them develop new skills intuitively
and boost logically structured thinking. The collaborative aspect of
working in groups will help the students develop communication
skills as well as structural and computational thinking. Students are
not just listeners as in traditional classroom settings, but play an
active part in creating content together by compiling a Handbook of
Knowledge (called “open book”) with examples and solutions.
Before students start calculating, they have to write down all their
ideas and working steps in full sentences so other students can easily
follow their train of thought. Therefore, students will learn to
formulate goals, solve problems, and create a ready-to-use product
with the help of “reverse engineering”, cross-referencing and creative
thinking. The work on drones gives the students the opportunity to
create a real-life application with a practical purpose, while going
through all stages of product development.
Trusting Smart Speakers: Analysing the Different Levels of Trust between Technologies
The growing usage of smart speakers raises many privacy and trust concerns compared to other technologies such as smartphones and computers. In this study, a proxy measure of trust is used to gauge users' opinions on three different technologies based on an empirical study, and to understand which technology people are most likely to trust. The collected data were analysed using the Kruskal-Wallis H test to determine the statistical differences between users' trust levels in the three technologies: smart speaker, computer, and smartphone. The findings revealed that despite the wide acceptance, ease of use, and reputation of smart speakers, people find it difficult to trust smart speakers with sensitive information via Direct Voice Input (DVI) and would prefer to use the keyboard or touchscreen offered by computers and smartphones. Findings from this study can inform future work on users' trust in technology based on perceived ease of use, reputation, perceived credibility, and the risk of using technologies via DVI.
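The Kruskal-Wallis H statistic used in the analysis can be computed as follows (a minimal sketch without tie correction; for real analyses a statistics library such as scipy.stats.kruskal should be used):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H: rank all observations jointly, then compare
    per-group rank sums. H = 12/(n(n+1)) * sum(R_i^2/n_i) - 3(n+1)."""
    pooled = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return (12.0 / (n * (n + 1))
            * sum(rs * rs / len(g) for rs, g in zip(rank_sums, groups))
            - 3 * (n + 1))
```

Interleaved samples (similar trust distributions) give a small H, while well-separated samples give a large H, which is then compared against a chi-square threshold.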
Reducing CO2 Emission Using EDA and Weighted Sum Model in Smart Parking System
Emission of carbon dioxide (CO2) has adversely affected the environment, and one of its major sources is transportation. In the last few decades, the increase in the mobility of people using vehicles has enormously increased CO2 emission into the environment. Reducing CO2 emission requires a sustainable transportation system, in which smart parking is one of the important measures to be established. To contribute to reducing CO2 emission, this research proposes a smart parking system: a cloud-based solution that automatically searches for and recommends the most preferred parking slots to drivers. To determine preferences among parking areas, the methodology exploits a number of unique parking features, ultimately selecting a parking area that yields the minimum level of CO2 emission from the vehicle's current position. To realize the methodology, a scenario-based implementation is considered. During the implementation, a mobile application with GPS signals, vehicles with a number of vehicle features, and a list of parking areas with parking features are processed by sorting, multi-level filtering, exploratory data analysis (EDA), the Analytic Hierarchy Process (AHP), and the weighted sum model (WSM) to rank the parking areas and recommend to drivers the top-k most preferred ones. In the EDA process, "2020testcar-2020-03-03", a freely available dataset, is used to estimate the CO2 emission of a particular vehicle. To evaluate the system, results of the proposed system are compared with the conventional approach, revealing that the proposed methodology outperforms the conventional one in reducing the emission of CO2 into the atmosphere.
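The weighted sum model (WSM) ranking step can be sketched as follows (the criteria values, weights, and parking names are illustrative; the paper's normalization and feature set are more elaborate):

```python
def wsm_rank(alternatives, weights):
    """Weighted sum model: score_i = sum_j w_j * x_ij, higher is better.
    Criteria values are assumed pre-normalized to [0, 1], with 'cost'
    criteria (e.g., estimated CO2 from the current position) inverted."""
    scores = {name: sum(w * v for w, v in zip(weights, vals))
              for name, vals in alternatives.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Feeding it normalized criteria for each candidate parking area yields the top-k list the system recommends to the driver.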
Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder
Recently, explanatory natural language inference has attracted much attention for the interpretability of logic-relationship prediction; the task is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on a discriminative encoder-decoder architecture have achieved noticeable results. However, we find that these discriminative generators usually produce explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model there. The same logic information, however, is present in both the premise-hypothesis pair and the explanation, and is easy to extract from the target explanation, where it appears explicitly. Hence, we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called the Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained under the guidance of the explicit logic information in target explanations, the latent variable in VariationalEG can capture the implicit logic information in premise-hypothesis pairs effectively. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called Logic Supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark explanation-Stanford Natural Language Inference (e-SNLI) demonstrate that the proposed VariationalEG achieves significant improvement over previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable.
OCR/ICR Text Recognition Using ABBYY FineReader as an Example Text
This article describes a text recognition method based on Optical Character Recognition (OCR), examining the features of the OCR approach using the ABBYY FineReader program and describing automatic text recognition in images. OCR is necessary because optical input devices can only deliver raster graphics as output. Text recognition is the task of recognizing depicted letters and assigning them their numerical values in accordance with the usual text encodings (ASCII, Unicode). Using ABBYY FineReader as an example, the study confirms and shows in practice the improvement of digital text recognition platforms developed for electronic publishing.
Integration of Educational Data Mining Models to a Web-Based Support System for Predicting High School Student Performance
The challenging task in educational institutions is to maximize the high performance of students and minimize the failure rate of poor-performing students. An effective way to approach this task is to identify student learning patterns with highly influential factors and obtain an early prediction of student learning outcomes, at a timely stage, for setting up improvement policies. Educational data mining (EDM) is an emerging disciplinary field of data mining, statistics, and machine learning concerned with extracting useful knowledge and information for the improvement and development of the education environment. The aim of this work is to propose EDM techniques and integrate them into a web-based system for predicting poor-performing students. A comparative study of prediction models is conducted, and high-performing models are subsequently developed to obtain higher performance. The hybrid random forest (Hybrid RF) produces the most successful classification. For the context of intervention and improving learning outcomes, a feature selection method, MICHI, which combines the mutual information (MI) and chi-square (CHI) algorithms based on ranked feature scores, is introduced to select a dominant feature set that improves prediction performance and serves as information for intervention. Using the proposed EDM techniques, an academic performance prediction system (APPS) is developed for educational stakeholders to obtain an early prediction of student learning outcomes for timely intervention. Experimental outcomes and evaluation surveys report the effectiveness and usefulness of the developed system, which is used to help educational stakeholders and related individuals intervene and improve student performance.
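One plausible way to combine MI and chi-square ranked scores, as MICHI does, is to average the two per-feature ranks (a sketch for binary features and labels; the paper's exact combination rule and estimators may differ):

```python
import numpy as np

def chi2_score(x, y):
    """Chi-square statistic for a binary feature x vs. binary label y."""
    obs = np.array([[np.sum((x == i) & (y == j)) for j in (0, 1)]
                    for i in (0, 1)], dtype=float)
    exp = obs.sum(1, keepdims=True) * obs.sum(0, keepdims=True) / obs.sum()
    return float(np.sum((obs - exp) ** 2 / np.where(exp == 0, 1, exp)))

def mi_score(x, y):
    """Plug-in mutual information estimate for binary x, y (nats)."""
    mi = 0.0
    for i in (0, 1):
        for j in (0, 1):
            pij = np.mean((x == i) & (y == j))
            pi, pj = np.mean(x == i), np.mean(y == j)
            if pij > 0:
                mi += pij * np.log(pij / (pi * pj))
    return mi

def michi_select(X, y, k):
    """Rank features by MI and by chi-square separately, then keep the
    k features with the best (lowest) average rank."""
    mi = [-mi_score(X[:, f], y) for f in range(X.shape[1])]
    c2 = [-chi2_score(X[:, f], y) for f in range(X.shape[1])]
    mi_rank = np.argsort(np.argsort(mi))
    c2_rank = np.argsort(np.argsort(c2))
    return list(np.argsort(mi_rank + c2_rank)[:k])
```

A feature that both scores agree is informative ends up at the top of the combined ranking; the selected dominant set then feeds the prediction model.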
Bayesian Deep Learning Algorithms for Classifying COVID-19 Images
The study investigates the accuracy and loss of deep learning algorithms on a coronavirus (COVID-19) image dataset by comparing a Bayesian convolutional neural network with a traditional convolutional neural network on a low-dimensional dataset. Of 50 sets of X-ray images, 25 were COVID-19 and the remaining were normal; twenty images were used for training while five were used for validation to ascertain the accuracy of the model. The study found that the Bayesian convolutional neural network outperformed the conventional neural network on the low-dimensional dataset, where the latter could have exhibited underfitting. The study therefore recommends Bayesian convolutional neural networks (BCNNs) for Android computer vision apps for image detection.
Implementation of an Associative Memory Using a Restricted Hopfield Network
An analog restricted Hopfield network is presented in this paper. It consists of two layers of nodes, visible and hidden, connected by directional weighted paths forming a bipartite graph with no intralayer connections. An energy (Lyapunov) function was derived to show that the proposed network converges to stable states. By introducing hidden nodes, the proposed network can be trained to store patterns and has increased memory capacity. Trained as an associative memory, the network performs better than a classical Hopfield network in simulation, achieving better memory recall when the input is noisy.
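The classical Hopfield baseline the comparison refers to can be sketched in a few lines (a minimal Hebbian-learning illustration, not the proposed restricted network):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights for a classical Hopfield network storing
    +/-1 patterns (rows of `patterns`)."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, state, steps=10):
    """Synchronous sign updates until the state stops changing."""
    for _ in range(steps):
        new = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state
```

Flipping one bit of a stored pattern and running `recall` recovers the original, which is the noisy-input recall task the abstract measures.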
Antenna for Energy Harvesting in Wireless Connected Objects
As connected objects multiply, they become a challenge in more than one way, particularly in their consumption and supply of electricity. A large part of the new generations of connected objects will only be able to develop if they can be made entirely autonomous in terms of energy. Some manufacturers are therefore developing products capable of recovering energy from their environment, a vital solution in certain contexts such as the medical industry. Energy recovery from the environment is a reliable way to solve the problem of powering wireless connected objects. This paper presents and studies an optically transparent solar patch antenna in the 2.4 GHz frequency band for connected objects in the future 5G standard, for energy harvesting and RF transmission.
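The RF power available for harvesting at a given distance can be estimated with the Friis free-space equation (the transmit power, antenna gains, and distance below are illustrative, not measured values from the paper):

```python
import math

def friis_received_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, freq_hz, dist_m):
    """Friis link budget: Pr = Pt + Gt + Gr - FSPL,
    with FSPL(dB) = 20 log10(4 * pi * d * f / c)."""
    c = 299_792_458.0
    fspl_db = 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db

# e.g. a 2.4 GHz source at 10 m with modest antenna gains
harvestable = friis_received_power_dbm(20, 2, 2, 2.4e9, 10)
```

At 2.4 GHz and 10 m this yields roughly -36 dBm, which illustrates why harvested RF power is small and why the antenna and rectifier efficiency matter; doubling the distance costs a further ~6 dB.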
Malaria Parasite Detection Using Deep Learning Methods
Malaria is a serious disease affecting hundreds of millions of people around the world each year, and if not treated in time, it can be fatal. Despite recent developments in malaria diagnostics, microscopy remains the most common detection method. Unfortunately, the accuracy of microscopic diagnostics depends on the skill of the microscopist and limits the throughput of malaria diagnosis. With the development of Artificial Intelligence tools, and Deep Learning techniques in particular, it is possible to lower the cost while achieving an overall higher accuracy. In this paper, we present a VGG-based model and compare it with previously developed models for identifying infected cells. Our model surpasses most previously developed models across a range of accuracy metrics, and it has the advantage of being constructed from a relatively small number of layers, which reduces the required computing resources and computational time. Moreover, we test our model on two types of datasets and argue that currently developed deep-learning-based methods cannot efficiently distinguish between infected and contaminated cells; a more precise study of suspicious regions is required.
A Visual Analytics Tool for the Structural Health Monitoring of an Aircraft Panel
Aerospace, mechanical, and civil engineering infrastructures can benefit from damage detection and identification strategies in terms of maintenance cost reduction and operational life improvement, as well as for safety purposes. The challenge is to detect so-called "barely visible impact damage" (BVID), due to low/medium-energy impacts, that can progressively compromise structural integrity. The occurrence of any local change in material properties that can degrade structural performance is monitored using Structural Health Monitoring (SHM) systems, which compare the structure's states before and after damage occurs. SHM looks for any "anomalous" response collected by means of sensor networks and then analyzed using appropriate algorithms. Independently of the specific analysis approach adopted for structural damage detection and localization, textual reports, tables, and graphs describing possible outlier coordinates and damage severity are usually provided as artifacts to be elaborated for information extraction about the current health condition of the structure under investigation. Visual Analytics can support the processing of monitored measurements by offering data navigation and exploration tools that leverage the native human capability of understanding images faster than texts and tables. Herein, the enrichment of an SHM system by the integration of a Visual Analytics component is investigated. Analytical dashboards have been created by combining worksheets, providing structural analysts with a useful Visual Analytics tool for exploring the structural health conditions examined by a Principal Component Analysis based algorithm.
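A PCA-based anomaly score of the kind such an algorithm uses can be sketched as reconstruction error against a healthy-state baseline (a minimal NumPy illustration; the paper's algorithm and sensor data are more elaborate):

```python
import numpy as np

def fit_pca(X, n_components):
    """Principal axes of baseline (healthy-structure) sensor data."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(x, mean, components):
    """Distance between a measurement and its PCA reconstruction;
    a large value flags an anomalous (possibly damaged) state."""
    centered = x - mean
    projected = components.T @ (components @ centered)
    return float(np.linalg.norm(centered - projected))
```

Measurements consistent with the healthy subspace reconstruct almost perfectly, while responses that leave that subspace, as damage-induced changes do, produce a large error that a dashboard can surface visually.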
Exploring the Need to Study the Efficacy of VR Training Compared to Traditional Cybersecurity Training
Effective cybersecurity training is of the utmost importance, given the plethora of attacks that continue to increase in complexity and ubiquity, yet VR cybersecurity training remains a starkly understudied discipline. Studies that evaluate the effectiveness of VR cybersecurity training against traditional methods are required, since an engaging and interactive platform can support knowledge retention of the training material, and an effective form of training is needed to support a culture of cybersecurity awareness. Measurements of effectiveness have varied throughout prior studies, with surveys and observations being the two most utilized forms of evaluation. This paper proposes a methodology to compare the two cybersecurity training methods and their effectiveness. The proposed framework includes developing both VR and traditional cybersecurity training methods and delivering them to at least 100 users. A quiz along with a survey will be administered and statistically analyzed to determine whether there is a difference in knowledge retention and user satisfaction. The aim of this paper is to bring attention to the need to study VR cybersecurity training and its effectiveness compared to traditional training methods, and thereby to contribute an effective way to train users for security awareness. If VR training is deemed more effective, this could create a new direction for cybersecurity training practices.
Improved Rare Species Identification Using Focal Loss Based Deep Learning Models
The use of deep learning for species identification in camera trap images has revolutionised our ability to study, conserve and monitor species in a highly efficient and unobtrusive manner, with state-of-the-art models achieving accuracies surpassing that of manual human classification. The high imbalance of camera trap datasets, however, results in poor accuracies for minority (rare or endangered) species due to their relative insignificance to the overall model accuracy. This paper investigates the use of Focal Loss, in comparison to the traditional Cross Entropy loss function, to improve the identification of minority species in the "255 Bird Species" dataset from Kaggle. The results show that, although Focal Loss slightly decreased the accuracy on the majority species, it increased the F1-score by 0.06 and improved the identification of the bottom two, five and ten (minority) species by 37.5%, 15.7% and 10.8%, respectively, as well as improving overall accuracy by 2.96%.
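The binary focal loss (Lin et al.) used in place of cross-entropy can be sketched as follows (the gamma and alpha values are the common defaults, not necessarily those used in the paper):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL = -alpha_t * (1 - p_t)^gamma * log(p_t).
    The (1 - p_t)^gamma factor down-weights easy, well-classified
    examples so rare-species samples dominate the gradient."""
    p_t = np.where(y == 1, p, 1 - p)
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))
```

A confidently correct prediction contributes almost nothing, while a misclassified minority-species example keeps nearly its full cross-entropy weight, which is the rebalancing effect the paper exploits.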
Evaluating the Impact of Replacement Policies on the Cache Performance and Energy Consumption in Different Multicore Embedded Systems
The cache plays an important role in reducing access delay between processor and memory in high-performance embedded systems. In these systems, energy consumption is one of the most important concerns, and it will become more important with smaller processor feature sizes and higher frequencies. Meanwhile, the cache dissipates a significant portion of energy compared to the other components of a processor. Several factors can affect the energy consumption of the cache, such as the replacement policy and the degree of associativity. It follows that selecting an appropriate cache configuration is a crucial part of designing a system. In this paper, we investigate the effect of different cache replacement policies on both cache performance and energy consumption. Furthermore, the impact of different Instruction Set Architectures (ISAs) on cache performance and energy consumption is investigated.
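The performance side of a replacement-policy comparison can be sketched by counting misses for the same address trace under LRU and FIFO (a minimal fully-associative model; real evaluations use set-associative caches and full simulators):

```python
from collections import OrderedDict, deque

def misses(trace, capacity, policy="lru"):
    """Count misses for a fully-associative cache under LRU or FIFO."""
    if policy == "lru":
        cache, miss = OrderedDict(), 0
        for addr in trace:
            if addr in cache:
                cache.move_to_end(addr)          # refresh recency on a hit
            else:
                miss += 1
                if len(cache) >= capacity:
                    cache.popitem(last=False)    # evict least recently used
                cache[addr] = True
        return miss
    cache, order, miss = set(), deque(), 0       # FIFO
    for addr in trace:
        if addr not in cache:
            miss += 1
            if len(cache) >= capacity:
                cache.discard(order.popleft())   # evict oldest insertion
            cache.add(addr)
            order.append(addr)
    return miss
```

On a trace that re-touches recent addresses, LRU misses less than FIFO; since each miss costs a memory access, the same counts feed directly into an energy estimate.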
Virtual Reality Design Platform to Easily Create Virtual Reality Experiences
The interest in Virtual Reality (VR) keeps increasing among the community of designers. To develop this type of immersive experience, the understanding of new processes and methodologies is as fundamental as their complex implementation, which usually implies hiring a specialized team. In this paper, we introduce a case study: a platform that allows designers to easily create complex VR experiences. We present its features and its development process, and conclude that the platform provides a complete solution for the design and development of VR experiences, with no coding required.