International Science Index
International Journal of Computer and Information Engineering
Online Pose Estimation and Tracking Approach with Siamese Region Proposal Network
Human pose estimation and tracking aim to accurately identify and locate the positions of human joints in video. It is a computer vision task of great significance for human motion recognition, behavior understanding and scene analysis. There has been remarkable progress on human pose estimation in recent years; however, more research is needed on human pose tracking, especially online tracking. In this paper, a framework called PoseSRPN is proposed for online single-person pose estimation and tracking. We use a Siamese network with an attached pose estimation branch to incorporate Single-person Pose Tracking (SPT) and Visual Object Tracking (VOT) into one framework. The pose estimation branch has a simple network structure that replaces the complex upsampling-and-convolution structure with deconvolution. By augmenting the loss of the fully convolutional Siamese network with the pose estimation task, pose estimation and tracking can be trained in one stage. Once trained, PoseSRPN relies only on a single bounding-box initialization to produce human joint locations. The experimental results show that, while maintaining good pose estimation accuracy on the COCO and PoseTrack datasets, the proposed method achieves a speed of 59 frames/s, which is superior to other pose tracking frameworks.
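The deconvolution that replaces the upsampling-and-convolution stage can be illustrated with a minimal one-dimensional sketch (the function name and toy data are ours, not the paper's implementation): a transposed convolution upsamples a feature map by inserting zeros between samples and then applying an ordinary convolution.

```python
import numpy as np

def transposed_conv1d(x, kernel, stride=2):
    # Insert (stride-1) zeros between input samples ("zero stuffing"),
    # then apply an ordinary convolution. This single operation performs
    # the upsampling that would otherwise need a separate upsample step
    # followed by a convolution.
    up = np.zeros(len(x) * stride)
    up[::stride] = x
    return np.convolve(up, kernel, mode="same")

x = np.array([1.0, 2.0, 3.0])
k = np.array([0.5, 1.0, 0.5])
y = transposed_conv1d(x, k)
print(len(y))  # 6: the feature map is upsampled by the stride factor
```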
Internet Optimization by Negotiating Traffic Times
This paper describes a system to optimize internet use by clients who need to download videos at peak hours. The system consists of a web server belonging to a provider of video content, a provider of internet communications, and a software application running on a client's computer. The client, using the application software, communicates to the video provider a list of the client's future video demands. The video provider calculates which videos will be most in demand for download in the immediate future and asks the internet provider for the optimal hours to perform the downloading. The download times are sent to the application software, which uses the pre-established hours negotiated between the video provider and the internet provider to download those videos. The videos are saved in a special protected section of the user's hard disk, accessed only by the application software on the client's computer. When the client is ready to watch a video, the application searches the list of videos currently present in that area of the hard disk; if the video exists, it is used directly without the need for internet access. We found that the best way to optimize the download traffic of videos is through negotiation between the internet communication provider and the video content provider.
Vision Based People Tracking System
In this paper, we present the design and implementation of a target tracking system where the target is a moving person in a video sequence. The system can easily be applied as a vision system for a mobile robot. The system is composed of two major parts. The first is the detection of the person in the video frame using an SVM learning machine based on HOG descriptors. The second part is the tracking of the moving person, done using a combination of the Kalman filter and a modified version of the Camshift tracking algorithm that adds a target motion feature to the color feature. The experimental results show that the new algorithm outperforms the traditional Camshift algorithm in robustness and in cases of occlusion.
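The Kalman filtering side of the tracker can be sketched as a constant-velocity predict/update loop over one coordinate of the centroid reported by the Camshift stage; the matrices and measurements below are illustrative, not the system's actual parameters.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one coordinate of the
# target centroid (toy values, not the paper's tuning).
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: (position, velocity)
H = np.array([[1.0, 0.0]])               # we only measure position
Q = np.eye(2) * 1e-2                     # process noise covariance
R = np.array([[1.0]])                    # measurement noise covariance

x = np.array([0.0, 0.0])                 # initial position and velocity
P = np.eye(2)                            # initial state covariance

for z in [1.0, 2.1, 2.9, 4.2]:           # centroids reported by Camshift
    # Predict the next state and covariance.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the Camshift measurement.
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + (K @ y)
    P = (np.eye(2) - K @ H) @ P

print(round(float(x[0]), 1))  # estimated position tracks the measurements
```

In the combined tracker the predicted position would seed the Camshift search window, so the color+motion search only refines a nearby estimate.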
A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition
This paper presents an approach for easy creation and
classification of institutional risk profiles supporting endangerment
analysis of file formats. The main contribution of this work is the
employment of data mining techniques to support the setup of the most
important risk factors. Subsequently, risk profiles employ a risk-factor
classifier and associated configurations to support digital preservation
experts with a semi-automatic estimation of the endangerment group
for file format risk profiles. Our goal is to make use of an expert
knowledge base, acquired through a digital preservation survey,
in order to detect preservation risks for a particular institution.
Another contribution is support for the visualisation of risk factors
along a required dimension of analysis. Using the naive Bayes method,
the decision support system recommends to an expert the matching
risk profile group for the previously selected institutional risk profile.
The proposed methods improve the visibility of risk factor values
and the quality of a digital preservation process. The presented
approach is designed to facilitate decision making for the preservation
of digital content in libraries and archives using domain expert
knowledge and values of file format risk profiles. To facilitate
decision-making, the aggregated information about the risk factors
is presented as a multidimensional vector. The goal is to visualise
particular dimensions of this vector for analysis by an expert and
to define its profile group. The sample risk profile calculation and
the visualisation of some risk factor dimensions are presented in the paper.
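The naive Bayes recommendation step can be sketched with a toy classifier over categorical risk factors; the factor names, values, and endangerment groups below are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy naive Bayes over categorical risk factors: given an institution's
# risk-factor vector, recommend the most probable endangerment group.
# Factor names and groups are illustrative, not from the survey.
train = [
    ({"format_docs": "poor", "tool_support": "none"}, "high"),
    ({"format_docs": "poor", "tool_support": "some"}, "high"),
    ({"format_docs": "good", "tool_support": "wide"}, "low"),
    ({"format_docs": "good", "tool_support": "some"}, "low"),
]

prior = Counter(label for _, label in train)
likelihood = defaultdict(Counter)
for factors, label in train:
    for name, value in factors.items():
        likelihood[label][(name, value)] += 1

def classify(factors):
    scores = {}
    for label, count in prior.items():
        score = count / len(train)
        for item in factors.items():
            # Laplace smoothing keeps unseen factor values from zeroing out.
            score *= (likelihood[label][item] + 1) / (count + 2)
        scores[label] = score
    return max(scores, key=scores.get)

print(classify({"format_docs": "poor", "tool_support": "none"}))  # high
```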
Framework and Characterization of Physical Internet
Over recent years, a new paradigm known as the Physical Internet has been developed and studied in logistics management. The purpose of this global and open system is to address the logistics grand challenge by setting up an efficient and sustainable Logistics Web. The purpose of this paper is to review the scientific articles dedicated to the Physical Internet topic and to provide a clustering strategy that makes it possible to classify the literature on the Physical Internet, to follow its evolution, and to criticize it. The classification is based on three factors: Logistics Web, organization, and resources. Several papers about the Physical Internet have been classified and analyzed along the Logistics Web, resources and organization views at the strategic, tactical and operational levels, respectively. A cluster analysis shows which Physical Internet topics are currently the least covered. Future research directions are outlined for these topics.
A Bacterial Foraging Optimization Algorithm Applied to the Synthesis of Polyacrylamide Hydrogels
The Bacterial Foraging Optimization (BFO) algorithm is inspired by the behavior of bacteria such as Escherichia coli or Myxococcus xanthus when searching for food, more precisely their chemotaxis behavior. Bacteria perceive chemical gradients in the environment, such as nutrients, and also other individual bacteria, and move toward or away from those signals. The application example considered as a case study consists of establishing the dependency between the reaction yield of hydrogels based on polyacrylamide and working conditions such as time, temperature, monomer, initiator, crosslinking agent and inclusion polymer concentrations, as well as the type of polymer added. This process is modeled with a neural network, which is included in an optimization procedure based on BFO. An experimental study of the BFO parameters is performed. The results show that the algorithm is quite robust and can obtain good results for diverse combinations of parameter values.
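The chemotaxis step at the heart of BFO, tumbling to a random direction and swimming while the objective improves, can be sketched as follows; the objective function, step size, and swim length are illustrative and not the settings used in the study.

```python
import random

def cost(x):
    # Toy 1-D "nutrient" landscape with its minimum at x = 3.
    return (x - 3.0) ** 2

def chemotaxis(x, step=0.1, max_swim=4):
    # Tumble: pick a random direction; swim: keep stepping while the
    # objective improves, up to a maximum swim length.
    direction = random.choice([-1.0, 1.0])
    best = cost(x)
    for _ in range(max_swim):
        trial = x + step * direction
        if cost(trial) < best:
            x, best = trial, cost(trial)
        else:
            break
    return x

random.seed(0)
x = 0.0
for _ in range(200):  # repeated chemotactic steps of one bacterium
    x = chemotaxis(x)
print(round(x, 1))  # settles near the optimum at 3.0
```

A full BFO run adds reproduction and elimination-dispersal over a population of such bacteria; this sketch shows only the chemotactic movement.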
Communication in a Heterogeneous Ad Hoc Network
Wireless networks are used more and more
in every new technology or feature, especially infrastructure-less
(ad hoc) networks, which provide a low-cost alternative
to infrastructure-mode wireless networks and great flexibility
for application domains such as environmental monitoring, smart
cities, precision agriculture, and so on. These application domains
share a common characteristic: the need for coexistence and
intercommunication between modules belonging to different types
of ad hoc networks, such as wireless sensor networks, mesh networks,
mobile ad hoc networks, vehicular ad hoc networks, etc. Bringing
such heterogeneous networks to life would make many tasks
easier, but the path to their development is full of challenges. One
of these challenges is the communication complexity between their
components due to the lack of common or compatible protocol
standards. This article proposes a new patented routing protocol based
on the OLSR standard in order to resolve the communication issue
in heterogeneous ad hoc networks. This new protocol is applied to a
specific network architecture composed of MANET, VANET, and
Content-Based Image Retrieval Using HSV Color Space Features
In this paper, a method is provided for content-based image retrieval. A content-based image retrieval system searches an image database using the visual content of a query image to retrieve similar images. In this paper, with the aim of simulating the sensitivity of the human visual system to image edges and color features, the concept of the color difference histogram (CDH) is used. The CDH encodes the perceptual color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to the human visual system, the CDH is calculated in this color space. In addition, to improve the color features, the color histogram in HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria. The final features extract the content of images most efficiently. The proposed method has been evaluated on three standard databases: Corel 5k, Corel 10k and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to recently developed methods.
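The HSV color histogram feature can be sketched by converting RGB pixels to HSV and binning the hue channel; the bin count and toy image are illustrative, and the full CDH additionally weights neighboring-pixel color differences by edge orientation.

```python
import colorsys

def hsv_hue_histogram(pixels, bins=8):
    # Convert each RGB pixel to HSV and quantize hue into a fixed
    # number of bins; the normalized bin counts form a compact
    # retrieval feature.
    hist = [0] * bins
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hist[min(int(h * bins), bins - 1)] += 1
    total = sum(hist)
    return [c / total for c in hist]  # normalize so images are comparable

image = [(255, 0, 0), (250, 10, 5), (0, 0, 255)]  # two reds, one blue
hist = hsv_hue_histogram(image)
print(hist[0])  # both red hues fall into the first (red) hue bin
```

Two images can then be compared by a distance between their histograms (e.g. L1), which is the basic matching step of a histogram-based retrieval system.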
Blockchain Security in MANETs
The security aspect of the IoT occupies a place of great
importance, especially after the recent evolution of this field,
because it must take into account new transformations and
applications. Blockchain is a new technology dedicated to
data sharing. However, it does not work the same way in
different systems with different operating principles. This article
discusses network security using the Blockchain to facilitate the sending
of messages and information, enabling the use of new processes and
the autonomous coordination of devices. To do this, we
discuss solutions proposed in the work of other researchers to ensure
a high level of security in these networks. Finally, our article
proposes a security method better adapted to our needs as a team
working on ad hoc networks; this method is based on the principle
of the Blockchain and is named "MPR Blockchain".
Design of an Ensemble Learning Behavior Anomaly Detection Framework
Data asset protection is a crucial issue in the
cybersecurity field. Companies use logical access control tools to
vault their information assets and protect them against external
threats, but they lack solutions to counter insider threats. Nowadays,
insider threats are the most significant concern of security analysts.
They are mainly individuals with legitimate access to companies'
information systems who use their rights with malicious intent.
In several fields, behavior anomaly detection is the method used by
cyber specialists to effectively counter the threats of malicious
user activity. In this paper, we present a step toward the construction
of a user and entity behavior analysis framework by proposing a
behavior anomaly detection model. This model combines machine
learning classification techniques and graph-based methods, relying
on linear algebra and parallel computing techniques. We show the
utility of an ensemble learning approach in this context. We present
test results for several detection methods on a representative access
control dataset. Some of the explored classifiers reach up to 99%
accuracy.
Adversarial Disentanglement Using Latent Classifier for Pose-Independent Representation
The large pose discrepancy is one of the critical
challenges in face recognition during video surveillance. Due to
the entanglement of pose attributes with identity information,
conventional approaches to pose-independent representation fail
to provide quality results when recognizing largely posed faces. In
this paper, we propose a practical approach to disentangle the pose
attribute from the identity information followed by synthesis of a face
using a classifier network in latent space. The proposed approach
employs a modified generative adversarial network framework
consisting of an encoder-decoder structure embedded with a classifier
in manifold space for carrying out factorization on the latent
encoding. It can be further generalized to other face and non-face
attributes for real-life video frames containing faces with significant
attribute variations. Experimental results and comparison with the
state of the art in the field show that the learned representation of the
proposed approach synthesizes more compelling perceptual images
through a combination of adversarial and classification losses.
Internet of Health Things as a Win-Win Solution for Mitigating the Paradigm Shift inside Senior Patient-Physician Shared Health Management
Internet of Health Things (IoHT) has already proved to be a persuasive means to support a proper assessment of living conditions by collecting a huge variety of data. For the customized health management of a senior patient, IoHT provides the capacity to build a dynamic solution for sustaining the shift within the patient-physician relationship by allowing real-time and continuous remote monitoring of the senior's health status, well-being, safety and activities, especially in a non-clinical environment. Thus, a win-win solution is created in which both the patient and the physician enhance their involvement and shared decision-making, with significant outcomes. Health monitoring systems in smart environments are becoming a viable alternative to traditional healthcare solutions. The ongoing "Non-invasive monitoring and health assessment of the elderly in a smart environment (RO-SmartAgeing)" project aims to demonstrate that the existence of complete and accurate information is critical for assessing the health condition of seniors and improving their well-being and quality of life. The research performed within the project aims to highlight how managing IoHT devices connected to the RO-SmartAgeing platform in a secure way, using a role-based access control system, can allow physicians to provide health services at a level of efficiency and accessibility previously only available in hospitals. The project aims to identify deficient aspects in the provision of health services tailored to a senior patient's specificity and to offer a more comprehensive perspective on proactive and preventive medical acts.
Optimizing Network Latency with Fast Path Assignment for Incoming Flows
Various flows in the network must pass through
different types of middleboxes. The improper placement of network
middleboxes and poor path assignment for flows can greatly increase
network latency and also decrease network performance.
Minimizing the total end-to-end latency of all the flows requires
assigning paths for the incoming flows. In this paper, the flow path
assignment problem with regard to the placement of various kinds
of middleboxes is studied. The flow path assignment problem is
formulated as a linear programming problem, which is very
time-consuming to solve. A naive greedy algorithm is also studied,
which is very fast but causes much more latency than the linear
programming solution. Finally, the paper presents a heuristic
algorithm named FPA, which takes bottleneck link information and
estimated bandwidth occupancy into consideration and achieves
near-optimal latency in much less time. Evaluation results validate
the effectiveness of the proposed algorithm.
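The greedy baseline can be sketched as routing each incoming flow over the currently cheapest path, with link cost growing with already-assigned load; the topology, costs, and flows below are toy data, not the paper's evaluation setup.

```python
import heapq

def dijkstra(graph, src, dst, load):
    # Shortest path where link cost = base latency + load already
    # assigned to the link, so later flows are steered away from
    # congested links.
    dist = {src: 0.0}
    queue = [(0.0, src, [src])]
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if d > dist.get(node, float("inf")):
            continue
        for nxt, base in graph[node].items():
            nd = d + base + load.get((node, nxt), 0.0)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt, path + [nxt]))
    return None

# Toy topology: source "s", two middlebox nodes "m1"/"m2", sink "t".
graph = {
    "s": {"m1": 1.0, "m2": 1.0},
    "m1": {"t": 1.0},
    "m2": {"t": 3.0},
    "t": {},
}
load = {}
paths = []
for _ in range(3):  # three identical incoming flows, assigned greedily
    path = dijkstra(graph, "s", "t", load)
    paths.append(path)
    for a, b in zip(path, path[1:]):
        load[(a, b)] = load.get((a, b), 0.0) + 1.0
print(paths[0])  # ['s', 'm1', 't']: the cheapest path takes the first flow
```

The second flow is pushed onto the alternative path once the first has loaded the cheap links, which is the congestion-aware behavior the FPA heuristic refines with bottleneck and bandwidth estimates.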
Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market
In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool for making a profit by speculation in financial markets. A significant number of traders, private or institutional investors, participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. The trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed to build a reliable trend line, which is the basis for limit conditions and automated investment signals, the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals, limit conditions to build a mathematical filter for investment opportunities, and the methodology to integrate all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a risk-to-reward ratio of 1:6.12 was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals.
The general idea sustained by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
Pressure-Detecting Method for Estimating Levitation Gap Height of Swirl Gripper
The swirl gripper is an electrically activated noncontact handling device that uses swirling airflow to generate a lifting force. This force can be used to pick up a workpiece placed underneath the swirl gripper without any contact. It is applicable, for example, in the semiconductor wafer production line, where contact must be avoided during the handling and moving of a workpiece to minimize damage. When a workpiece levitates underneath a swirl gripper, the gap height between them is crucial for safe handling. Therefore, in this paper, we propose a method to estimate the levitation gap height by detecting the pressure at two points. The method is based on a theoretical model of the swirl gripper and has been experimentally verified. Furthermore, the force between the gripper and the workpiece can also be estimated using the detected pressure. As a result, the nonlinear relationship between the force and the gap height can be linearized by adjusting the rotating speed of the fan in the swirl gripper according to the estimated force and gap height. The linearized relationship is expected to enhance the handling stability of the workpiece.
Transferring of Digital DIY Potentialities through a Co-Design Tool
Digital Do It Yourself (DIY) is a contemporary socio-technological phenomenon enabled by technological tools. The nature and potential long-term effects of this phenomenon have been widely studied within the framework of the EU-funded project 'Digital Do It Yourself', in which the authors have created and experimented with a specific Digital Do It Yourself (DiDIY) co-design process. The phenomenon was first studied through a literature review to understand its multiple dimensions and complexity. Then, co-design workshops were used to investigate the phenomenon by involving people, to achieve a complete understanding of DiDIY practices and their enabling factors. These analyses allowed the definition of the fundamental DiDIY factors, which were then translated into a design tool. The objective of the tool is to shape design concepts by transferring these factors into different environments to achieve innovation. The aim of this paper is to present the 'DiDIY Factor Stimuli' tool, describing the research path and the findings behind it.
Fast Adjustable Threshold for Uniform Neural Network Quantization
Neural network quantization is a highly desirable
procedure to perform before running neural networks on mobile
devices. Quantization without fine-tuning leads to an accuracy drop in
the model, whereas the commonly used training with quantization is done
on the full set of labeled data and is therefore both time- and
resource-consuming. Real-life applications require a simplified and
accelerated quantization procedure that maintains the accuracy of the
full-precision neural network, especially for modern mobile neural
network architectures like MobileNet-v1, MobileNet-v2 and MNAS.
Here we present a method to significantly optimize the training-with-quantization
procedure by introducing trained scale factors for the
discretization thresholds that are separate for each filter. Using the
proposed technique, we quantize the modern mobile architectures of
neural networks with a training set of only ∼10% of the
total ImageNet 2012 sample. Such a reduction of the training dataset size and
the small number of trainable parameters allow the network to be fine-tuned
in several hours while maintaining the high accuracy of the quantized
model (the accuracy drop was less than 0.5%). Ready-to-use models and
code are available in the GitHub repository.
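Uniform quantization with a per-filter adjustable threshold can be sketched as follows; the scale is derived from a clipping threshold (the quantity the paper proposes to train), and the weights and bit width below are illustrative.

```python
def quantize(weights, threshold, bits=8):
    # Uniform symmetric quantization of one filter's weights: the
    # trainable clipping threshold fixes the scale of the discretization
    # grid, so adjusting the threshold adjusts the whole grid.
    qmax = 2 ** (bits - 1) - 1
    scale = threshold / qmax
    out = []
    for w in weights:
        clipped = max(-threshold, min(threshold, w))
        out.append(round(clipped / scale) * scale)  # snap to the grid
    return out

filter_weights = [0.30, -0.12, 0.95]
q = quantize(filter_weights, threshold=0.5)
print(q[2])  # 0.95 is clipped to the 0.5 threshold before rounding
```

In the paper's scheme each filter carries its own threshold, and those thresholds are the (few) parameters that get fine-tuned, which is why a small fraction of the training data suffices.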
Analysis of Network Performance Using Aspect of Quantum Cryptography
Quantum cryptography is described as a point-to-point secure key generation technology that has emerged in recent times to provide absolute security. Researchers have started studying new innovative approaches to exploit the security of Quantum Key Distribution (QKD) for large-scale communication systems. A number of approaches and models for the utilization of QKD for secure communication have been developed. The uncertainty principle in quantum mechanics created a new paradigm for QKD. One of the approaches to the use of QKD involved network-fashioned security. The main goal was a point-to-point quantum network that exploited QKD technology for end-to-end network security via high-speed QKD. Other approaches and models that equip networks with QKD have also been introduced in the literature. A different approach, which this paper deals with, is using QKD in existing protocols that are widely used on the Internet to enhance security, with the main objective of unconditional security. Our work is directed toward the analysis of QKD in mobile ad hoc networks (MANETs).
A Recognition Method of Ancient Yi Script Based on Deep Learning
Yi is an ethnic group mainly living in mainland China, with its own spoken and written language systems developed over thousands of years. Ancient Yi is one of the six ancient languages in the world; it keeps a record of the history of the Yi people and offers documents valuable for research into human civilization. Recognition of the characters in ancient Yi helps to transform the documents into electronic form, making their storage and dissemination convenient. Due to historical and regional limitations, research on the recognition of ancient characters is still inadequate. Thus, deep learning technology was applied to the recognition of such characters. Five models were developed on the basis of a four-layer convolutional neural network (CNN). Alpha-Beta divergence was taken as a penalty term to re-encode the output neurons of the five models. Two fully connected layers fulfilled the compression of the features. Finally, at the softmax layer, the orthographic features of ancient Yi characters were re-evaluated, their probability distributions were obtained, and the characters with features of the highest probability were recognized. Tests conducted show that the method has achieved higher precision compared with the traditional CNN model for handwriting recognition of ancient Yi.
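The final softmax stage, turning the re-evaluated feature scores into a probability distribution from which the most probable character is chosen, can be sketched as follows; the logits are illustrative.

```python
import math

def softmax(scores):
    # Subtracting the max before exponentiating keeps the computation
    # numerically stable without changing the resulting distribution.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 0.5, 0.1]       # logits for three candidate characters
probs = softmax(scores)
print(probs.index(max(probs)))  # 0: the first candidate character is recognized
```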
Identifying Critical Success Factors for Data Quality Management through a Delphi Study
Organizations support their operations and decision making with the data they have at their disposal, so the quality of these data is remarkably important. Data Quality (DQ) is currently a relevant issue, and the literature is unanimous in pointing out that poor DQ can result in large costs for organizations. The literature review identified and described 24 Critical Success Factors (CSF) for Data Quality Management (DQM), which were presented to a panel of experts who ordered them according to their degree of importance, using the Delphi method with the Q-sort technique, based on an online questionnaire. The study shows that the five most important CSFs for DQM are: definition of appropriate policies and standards, control of inputs, definition of a strategic plan for DQ, an organizational culture focused on data quality, and obtaining top management commitment and support.
Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment
2016 became the year of the Artificial Intelligence explosion. AI technologies have matured to the point that most well-known tech giants are making large investments to increase their capabilities in AI. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to train a machine to learn features directly from data. Deep learning enables many machine learning applications that expand the field of AI. At present, deep learning frameworks are widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks, there are many standard processes and algorithms, but the performance of different frameworks may differ. In this paper, we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network, and we analyze the factors that account for the performance differences between the two distributed frameworks. Through the experimental analysis, we identify overheads that could be further optimized. The main contribution is that the evaluation results provide further optimization directions in both performance tuning and algorithmic design.
Fast and Efficient Algorithms for Evaluating Uniform and Nonuniform Lagrange and Newton Curves
Newton-Lagrange interpolation is widely used in
numerical analysis. However, it requires quadratic computational
time for its construction. In computer-aided geometric design
(CAGD), there are some polynomial curves, namely Wang-Ball, DP and
Dejdumrong curves, which have linear-time algorithms.
Thus, the computational time for Newton-Lagrange interpolation
can be reduced by applying the algorithms of Wang-Ball, DP and
Dejdumrong curves. In order to use the Wang-Ball, DP and Dejdumrong
algorithms, it is first necessary to convert Newton-Lagrange
polynomials into Wang-Ball, DP or Dejdumrong polynomials. In
this work, the algorithms for converting both uniform and
non-uniform Newton-Lagrange polynomials into Wang-Ball, DP and
Dejdumrong polynomials are investigated. Thus, the computational
time for representing Newton-Lagrange polynomials can be reduced
to linear complexity. In addition, other utilizations of
CAGD curves to modify Newton-Lagrange curves become available.
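The quadratic-time Newton construction the abstract refers to can be sketched with divided differences; once the coefficients are built, each evaluation is linear via Horner-style nesting. The sample points below are illustrative.

```python
def newton_coefficients(xs, ys):
    # Divided-difference table computed in place: O(n^2) construction,
    # which is the cost the conversion to CAGD curve forms avoids
    # repeating at evaluation time.
    coef = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    # Horner-style nested evaluation of the Newton form: O(n) per point.
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 7.0]               # samples of x**2 + x + 1
coef = newton_coefficients(xs, ys)
print(newton_eval(xs, coef, 3.0))  # 13.0, matching 3**2 + 3 + 1
```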
Monomial Form Approach to Rectangular Surface Modeling
Geometric modeling plays an important role in the
construction and manufacturing of curve, surface and solid
models. Their algorithms are critically important not only in
the automobile, ship and aircraft manufacturing business, but are
also absolutely necessary in a wide variety of modern applications,
e.g., robotics, optimization, computer vision, data analytics and
visualization. The calculation and display of geometric objects
can be accomplished by these six techniques: polynomial basis,
recursive, iterative, coefficient matrix, polar form approach and
pyramidal algorithms. In this research, the coefficient matrix (simply
called the monomial form approach) will be used to model polynomial
rectangular patches, i.e., Said-Ball, Wang-Ball, DP, Dejdumrong and
NB1 surfaces. Some examples of the monomial forms for these
surface models are illustrated in many aspects, e.g., construction,
derivatives, model transformation, degree elevation and degree
reduction.
Evaluation of Model-Based Code Generation for Embedded Systems–Mature Approach for Development in Evolution
The model-based development approach is gaining more support and acceptance. Its higher abstraction level brings a simplification of system descriptions that allows domain experts to do their best without particular knowledge of programming. The different levels of simulation support rapid prototyping, verifying and validating the product even before it exists physically. Nowadays, the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, which brings extra automation of the expensive device certification process, especially in software qualification. Using it, some companies report cost savings and quality improvements, but others claim no major changes or even cost increases. This publication demonstrates the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using The MathWorks, Inc. tools. The model, created with Simulink, Stateflow and Matlab, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, when applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of automatically generated embedded code with manually developed code. The measurements show that, in general, the code generated by the automatic approach is no worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for our future work.
Developing a Web-Based Tender Evaluation System Based on Fuzzy Multi-Attributes Group Decision Making for Nigerian Public Sector Tendering
Public sector tendering has traditionally been conducted using manual paper-based processes which are known to be inefficient, less transparent and more prone to manipulations and errors. The advent of the Internet and the World Wide Web has led to the development of numerous e-Tendering systems that addressed some of the problems associated with the manual paper-based tendering system. However, most of these systems rarely support the evaluation of tenders and where they do it is mostly based on the single decision maker which is not suitable in public sector tendering, where for the sake of objectivity, transparency, and fairness, it is required that the evaluation is conducted through a tender evaluation committee. Currently, in Nigeria, the public tendering process in general and the evaluation of tenders, in particular, are largely conducted using manual paper-based processes. Automating these manual-based processes to digital-based processes can help in enhancing the proficiency of public sector tendering in Nigeria. This paper is part of a larger study to develop an electronic tendering system that supports the whole tendering lifecycle based on Nigerian procurement law. Specifically, this paper presents the design and implementation of part of the system that supports group evaluation of tenders based on a technique called fuzzy multi-attributes group decision making. The system was developed using Object-Oriented methodologies and Unified Modelling Language and hypothetically applied in the evaluation of technical and financial proposals submitted by bidders. The system was validated by professionals with extensive experiences in public sector procurement. 
The results of the validation showed that the system, called NPS-eTender, has an average rating of 74% with respect to correct and accurate modelling of the existing manual tendering domain, and an average rating of 67.6% with respect to its potential to enhance the proficiency of public sector tendering in Nigeria. Thus, based on the results of the validation, automating the evaluation process to support the tender evaluation committee is achievable and can lead to a more proficient public sector tendering system.
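A minimal sketch of the fuzzy multi-attributes group decision making idea behind such an evaluation module (not the NPS-eTender implementation; the attributes, scale, weights and ratings below are illustrative assumptions): each committee member rates a bidder on each attribute with a triangular fuzzy number (l, m, u), the ratings are aggregated by averaging, weighted per attribute, and defuzzified by the centroid.

```python
# Hypothetical sketch of fuzzy multi-attribute group decision making.
# Each evaluator gives a triangular fuzzy number (l, m, u) per attribute.

def aggregate(ratings):
    """Average a list of triangular fuzzy numbers component-wise."""
    n = len(ratings)
    return tuple(sum(r[i] for r in ratings) / n for i in range(3))

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(tfn) / 3.0

def score_bidder(member_ratings, weights):
    """member_ratings maps attribute -> list of TFNs, one per evaluator."""
    total = 0.0
    for attr, ratings in member_ratings.items():
        total += weights[attr] * defuzzify(aggregate(ratings))
    return total

# Three evaluators rate one bidder on two attributes (0-10 scale).
ratings = {
    "technical": [(6, 7, 8), (5, 7, 9), (7, 8, 9)],
    "financial": [(4, 5, 6), (5, 6, 7), (4, 6, 8)],
}
weights = {"technical": 0.6, "financial": 0.4}
print(round(score_bidder(ratings, weights), 3))  # → 6.667
```

Bidders can then be ranked by these crisp scores, so the committee's individual judgements are combined without forcing members to agree on a single number up front.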
Fast Fourier Transform-Based Steganalysis of Covert Communications over Streaming Media
Steganalysis seeks to detect the presence of secret data embedded in cover objects, and there is a pressing demand to detect hidden messages in streaming media. This paper shows how a steganalysis algorithm based on the Fast Fourier Transform (FFT) can be used to detect the existence of secret data embedded in streaming media. The proposed algorithm uses machine parameter characteristics and a network sniffer to determine whether the Internet traffic contains streaming channels. The detected streaming data is then transformed from the time domain to the frequency domain through the FFT. The distributions of power spectra in the frequency domain of original VoIP streams and stego VoIP streams are then compared using a t-test, yielding a p-value of 7.5686E-176, well below the significance threshold. The results indicate that the proposed FFT-based steganalysis algorithm is effective in detecting secret data embedded in VoIP streaming media.
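The detection pipeline described above can be sketched as follows. This is not the authors' implementation: synthetic Gaussian frames stand in for captured VoIP audio, the "embedding" is modelled as additive noise, and a Welch t-statistic is computed directly instead of a library p-value.

```python
import numpy as np

# Sketch of FFT-based steganalysis: move frames to the frequency
# domain, then compare power-spectrum statistics of a clean stream
# and a suspect stream with a two-sample t-test.

def power_spectrum(frame):
    """Power spectrum of one frame via the FFT."""
    return np.abs(np.fft.rfft(frame)) ** 2

def t_statistic(a, b):
    """Welch's two-sample t-statistic (no external dependencies)."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rng = np.random.default_rng(0)
clean = rng.normal(size=(200, 256))                       # cover frames
stego = clean + rng.normal(scale=0.3, size=clean.shape)   # embedding noise

p_clean = np.array([power_spectrum(f).mean() for f in clean])
p_stego = np.array([power_spectrum(f).mean() for f in stego])
t = t_statistic(p_stego, p_clean)
print(f"t = {t:.2f}")  # a large |t| flags the stream as suspicious
```

A large t-statistic (equivalently, a tiny p-value such as the 7.5686E-176 reported above) means the two power-spectrum distributions differ, which is the signature of embedding.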
Technology Assessment: Exploring Possibilities to Encounter Problems Faced by Intellectual Property through Blockchain
A significant discussion on the topic of blockchain as a solution to the issues of intellectual property highlights the relevance that this topic holds. Some experts label this technology as disruptive, since it holds immense potential to change the course of traditional practices. The extent and areas to which this technology can be of use are still being researched. This paper provides an in-depth review of intellectual property and blockchain technology. Further, it explores what makes blockchain suitable for intellectual property, the practical solutions available, and the support different governments are offering. The paper then studies the framework of universities in the context of their outputs and how these can be streamlined using blockchain technology. The paper concludes by discussing some limitations and future research questions.
Main Cause of Children's Deaths in Indigenous Wayuu Community from Department of La Guajira: A Research Developed through Data Mining Use
The main purpose of this research is to discover what causes death in children of the Wayuu community and to analyze those results in depth in order to take corrective measures to properly control infant mortality. We consider it important to determine the reasons producing early death in this specific population, since they are the most vulnerable to high-risk environmental conditions. In this way, the government, through the competent authorities, may develop prevention policies and the right measures to avoid an increase in this tragic situation. The methodology used to develop this investigation is data mining, which consists of collecting and examining large amounts of data to produce new and valuable information. Through this technique it has been possible to determine that the child population is dying mostly from malnutrition. In short, this technique has been very useful in developing this study; it has allowed us to transform large amounts of information into a conclusive and important statement, which has made it easier to take appropriate steps to resolve a particular situation.
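At its simplest, the mining step above amounts to counting cause-of-death frequencies over a filtered population. The toy records and field names below are purely illustrative, not the study's data:

```python
from collections import Counter

# Illustrative only: toy mortality records mined for the most
# frequent cause of death among children under six.
records = [
    {"age": 2, "cause": "malnutrition"},
    {"age": 4, "cause": "malnutrition"},
    {"age": 1, "cause": "respiratory infection"},
    {"age": 3, "cause": "malnutrition"},
    {"age": 5, "cause": "diarrheal disease"},
]

causes = Counter(r["cause"] for r in records if r["age"] < 6)
top_cause, count = causes.most_common(1)[0]
print(top_cause, count)  # → malnutrition 3
```

Real data-mining pipelines add cleaning, anonymization and statistical validation on top of this core aggregation.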
Implementation of a Serializer to Represent PHP Objects in the Extensible Markup Language
Interoperability in distributed systems is an important feature that refers to the communication between two applications written in different programming languages. This paper presents a serializer and a de-serializer of PHP objects to and from XML, implemented as an independent library written in the PHP programming language. The XML generated by this serializer is independent of the programming language and can be used by the other existing Web Objects in XML (WOX) serializers and de-serializers, which allows interoperability with other object-oriented programming languages.
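The general idea of language-independent object-to-XML serialization can be sketched as follows. The library described above is written in PHP; this sketch uses Python for illustration, and the `object`/`field` tag names are assumptions, not the actual WOX schema.

```python
import xml.etree.ElementTree as ET

# Sketch of object-to-XML serialization: reflect over an object's
# fields and emit each one with its name, type, and value, so a
# de-serializer in any language can rebuild an equivalent object.

def serialize(obj):
    """Serialize a plain object into an XML element tree."""
    root = ET.Element("object", type=type(obj).__name__)
    for name, value in vars(obj).items():
        field = ET.SubElement(root, "field", name=name,
                              type=type(value).__name__)
        field.text = str(value)
    return root

class Product:
    def __init__(self, name, price):
        self.name = name
        self.price = price

xml = ET.tostring(serialize(Product("notebook", 9.5)), encoding="unicode")
print(xml)
```

Because the output records field names and primitive types rather than anything language-specific, a consumer written in Java or C# only needs to agree on the tag schema to reconstruct the object.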
Foot Recognition Using Deep Learning for Knee Rehabilitation
Foot recognition can be applied in many medical fields, such as gait pattern analysis and the knee exercises of patients in rehabilitation. Generally, a camera-based foot recognition system captures patient images in a controlled room and background to recognize the foot from limited views. However, such a system can be inconvenient for monitoring knee exercises at home. To overcome these problems, this paper proposes a deep learning method using Convolutional Neural Networks (CNNs) for foot recognition. The results are compared with traditional classification methods using LBP and HOG features with kNN and SVM classifiers. According to the results, the deep learning method provides better accuracy, but with higher complexity, in recognizing foot images from online databases than the traditional classification methods.
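The CNN building blocks involved (convolution, ReLU, max-pooling) can be sketched minimally in NumPy. This is not the paper's network: the image, filter and sizes below are toy values chosen only to show how a convolutional layer responds to an edge, as used at much larger scale in foot recognition.

```python
import numpy as np

# Minimal sketch of one CNN layer: valid 2-D convolution
# (cross-correlation), ReLU activation, and 2x2 max-pooling.

def conv2d(img, kernel):
    """Valid cross-correlation of a grayscale image with a kernel."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.zeros((8, 8))
img[:, 4:] = 1.0                       # a vertical edge
edge_filter = np.array([[-1.0, 1.0]])  # responds to left-to-right steps
fmap = max_pool(relu(conv2d(img, edge_filter)))
print(fmap.shape)  # → (4, 3)
```

In a full CNN, many such learned filters are stacked in layers and followed by fully connected layers, which is where the extra accuracy, and the extra complexity relative to LBP/HOG features with kNN or SVM, comes from.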