host machine
Recently Published Documents

TOTAL DOCUMENTS: 32 (FIVE YEARS: 14)
H-INDEX: 3 (FIVE YEARS: 0)

Author(s):  
Anand Mehta ◽  

Cloud computing is an internet-provisioned model for sharing resources on demand, including network management, storage, services, applications and servers, with minimal management effort. VM migration (VMM) plays a major role in improving resource utilization, application isolation, fault tolerance and node portability, and in maximizing the efficiency of physical servers. Serving varied users by deploying applications in the cloud while balancing resources for better performance is a central task; users can rent or request resources whenever they need them. The emphasis of this paper is on an energy-efficient VM allocation module built on machine learning methods. VMs are allocated to host machines with MBFD (Modified Best Fit Decreasing), and the capability of each host machine is classified as overloaded, normally loaded or underloaded using an SVM (Support Vector Machine). The SVM acts as a classifier for analyzing the MBFD algorithm and for classifying hosts according to job properties. The number of jobs that cannot be allocated is then examined via simulation and evaluated in terms of time consumption, energy consumption and the total number of migrations.
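
A minimal sketch of the kind of pipeline described, combining a best-fit-decreasing placement pass with an SVM that labels hosts by load. The host fields, utilization thresholds and training data are illustrative assumptions, not the paper's implementation (the full MBFD heuristic also weighs power increase, which is omitted here).

```python
# Sketch: MBFD-style VM placement plus an SVM classifier that labels hosts
# as underloaded / normally loaded / overloaded. All thresholds and training
# samples below are assumptions for illustration.
from dataclasses import dataclass, field
from typing import List
import numpy as np
from sklearn.svm import SVC

@dataclass
class Host:
    cpu_capacity: float               # e.g. MIPS
    used: float = 0.0
    vms: List[float] = field(default_factory=list)

    @property
    def utilization(self) -> float:
        return self.used / self.cpu_capacity

def mbfd_place(vm_demands: List[float], hosts: List[Host]) -> int:
    """Sort VMs by demand (descending) and put each on the host that leaves
    the least spare capacity. Returns the number of VMs left unallocated."""
    unallocated = 0
    for demand in sorted(vm_demands, reverse=True):
        candidates = [h for h in hosts if h.cpu_capacity - h.used >= demand]
        if not candidates:
            unallocated += 1
            continue
        best = min(candidates, key=lambda h: h.cpu_capacity - h.used - demand)
        best.used += demand
        best.vms.append(demand)
    return unallocated

# Train an SVM on labelled utilization samples (0 = under, 1 = normal, 2 = over).
train_util = np.array([[0.1], [0.2], [0.5], [0.6], [0.85], [0.95]])
train_labels = np.array([0, 0, 1, 1, 2, 2])
clf = SVC(kernel="rbf").fit(train_util, train_labels)

hosts = [Host(1000.0), Host(2000.0), Host(1500.0)]
failed = mbfd_place([400, 300, 900, 700, 250], hosts)
states = clf.predict(np.array([[h.utilization] for h in hosts]))
print(f"unallocated VMs: {failed}, host states: {states}")
```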


2021 ◽  
Vol 21 ◽  
pp. 279-286
Author(s):  
Oleksandr Voloshchenko ◽  
Małgorzata Plechawska-Wójcik

The purpose of this paper is to compare classical machine learning algorithms for handwritten digit classification. The following algorithms were chosen for comparison: Logistic Regression, SVM, Decision Tree, Random Forest and k-NN. The MNIST handwritten digit database is used for training and testing these algorithms; the dataset consists of 70,000 images of digits from 0 to 9. The algorithms are compared on criteria such as training speed, prediction speed, host machine load, and classification accuracy. Each algorithm went through the training and testing phases 100 times, with the relevant KPIs recorded at each iteration, and the results were averaged to obtain reliable outcomes.
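
A compact sketch of the comparison workflow using scikit-learn (an assumed toolchain; the paper does not name its implementation). It records training time, prediction time and accuracy for each of the five classifiers; the paper additionally repeats the run 100 times and averages.

```python
# Sketch: train each classical classifier on MNIST and record training time,
# prediction time and accuracy. scikit-learn and the split size are assumptions.
import time
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0                                   # scale pixels to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=10000, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(max_iter=200),
    "SVM": SVC(),                               # slow on the full 60k set
    "DecisionTree": DecisionTreeClassifier(),
    "RandomForest": RandomForestClassifier(n_estimators=100),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X_train, y_train)
    train_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    pred = model.predict(X_test)
    predict_s = time.perf_counter() - t0
    print(f"{name}: train {train_s:.1f}s, predict {predict_s:.1f}s, "
          f"accuracy {accuracy_score(y_test, pred):.4f}")
```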


Author(s):  
B.K Praveen Kumar ◽  
◽  
Dr. K. Santhi Sree ◽  

The rise of neoteric technologies like Machine Learning, the Internet of Things, and Cloud Services has affected the life of the common man at many levels. Irrespective of size or domain, almost all companies are now incorporating digitization to some degree, progressing towards new business models with little or no significance attached to geographical and physical barriers. This shift from traditional store models to automated entities is referred to as Digital Transformation: a simple way of describing how digital technologies are transforming and automating business operations across organizations in every domain. This digital revolution relies on a whole range of machinery, networks, services, and operations to expand the power of communication and ensure seamless integration with the technologies. At this juncture, there are many challenges, both technical and non-technical, and the concept of virtualization can be very helpful in resolving them and facilitating successful automation. Virtualization is the process of creating a virtual instance of hardware resources, such as virtual applications, servers, or storage, by logically separating them from the underlying hardware. It enables multiple applications or operations to gain access to the hardware and software resources of the host machine. In a sense, virtualization is at the center of this revolution, providing a rock-solid foundation. For example, when digitizing an organization, machine learning algorithms are applied to IoT data in addition to the organizational data. Given the huge size of that data, companies adopting this automation rely on cloud services for data management because of the reliability they provide, yet moving all of that data to a central cloud raises its own challenges. These can be addressed with Edge Computing, an advanced implementation of virtualization. In this paper, we discuss such challenges and examine how virtualization can help solve them, exemplified with a hypothetical digitalization of a retail store.


Cryptography ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 18
Author(s):  
Yutian Gui ◽  
Chaitanya Bhure ◽  
Marcus Hughes ◽  
Fareena Saqib

Direct Memory Access (DMA) is a state-of-the-art technique to optimize the speed of memory access and to use processing power efficiently during data transfers between the main system and a peripheral device. However, this advanced feature opens security vulnerabilities: a compromised peripheral can gain access to and manipulate the main memory of the victim host machine. This paper outlines a lightweight process that creates resilience against DMA attacks with minimal modification to the configuration of the DMA protocol. The proposed scheme identifies trusted PCIe devices that have DMA capabilities and constructs a database of timing profiles used to authenticate the trusted devices before they can access the system. The results show that the proposed scheme generates a unique identifier for each trusted device and authenticates the devices. Furthermore, a machine learning-based real-time authentication scheme is proposed that enables runtime authentication, and we report the time required for training together with the corresponding accuracy.
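
A rough sketch of the profiling-and-authentication idea: store timing profiles for trusted PCIe devices, then use a classifier at runtime to accept or reject a device. The device names, timing features and confidence threshold are hypothetical; the paper's actual measurement procedure is not reproduced here.

```python
# Sketch: authenticate DMA-capable devices from timing profiles with a simple
# classifier. Feature values and device names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Profiling database: repeated timing measurements (microseconds) collected
# while enumerating each trusted device. Shape per device: (samples, features).
trusted_profiles = {
    "nvme0": np.random.normal([12.0, 3.1, 0.8], 0.05, size=(50, 3)),
    "gpu0":  np.random.normal([15.5, 4.0, 1.2], 0.05, size=(50, 3)),
}

X = np.vstack(list(trusted_profiles.values()))
y = np.concatenate([[name] * len(p) for name, p in trusted_profiles.items()])
clf = RandomForestClassifier(n_estimators=50).fit(X, y)

def authenticate(measurement: np.ndarray, threshold: float = 0.9) -> str:
    """Return the matched trusted device, or reject if no class is confident."""
    proba = clf.predict_proba(measurement.reshape(1, -1))[0]
    if proba.max() < threshold:
        return "REJECTED: unknown or spoofed device"
    return clf.classes_[proba.argmax()]

print(authenticate(np.array([12.02, 3.08, 0.81])))   # expected: nvme0
print(authenticate(np.array([30.0, 9.0, 5.0])))      # expected: rejected
```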


2021 ◽  
Vol 23 (07) ◽  
pp. 924-929
Author(s):  
Dr. Kiran V ◽  
◽  
Akshay Narayan Pai ◽  
Gautham S ◽  
◽  
...  

Cloud computing is a technique for storing and processing data that makes use of a network of remote servers. It is gaining popularity due to its vast storage capacity, ease of access, and diverse range of services. Virtualization entered the scene as cloud computing advanced and technologies such as virtual machines appeared. However, when customers' computing demands for storage and servers grew, virtual machines could not keep up due to scalability and resource allocation limits. As a consequence, containerization became a reality. Containerization is the process of packaging software code along with all of its essential components, including frameworks, libraries, and other dependencies, so that they are isolated in their own container. A program running in containers executes reliably in any environment or infrastructure. Containers provide OS-level virtualization, which reduces the computational load on the host machine and enables programs to run much faster and more reliably. Performance analysis is essential for comparing the throughput of VM-based and container-based designs. To analyze this, the same web application was run in both designs, and CPU usage and RAM usage were compared. The results obtained are tabulated and a conclusion is given.
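
A small sketch of how the host-side CPU and RAM comparison could be collected; psutil, the sampling window and the deployment labels are assumptions, since the paper does not state its measurement tooling.

```python
# Sketch: sample host CPU and RAM usage while the same web application runs
# first inside a VM and then inside a container, and report the averages.
import time
import psutil

def sample_usage(label: str, duration_s: int = 60, interval_s: float = 1.0):
    """Collect average CPU% and RAM% on the host machine over a window."""
    cpu, ram = [], []
    end = time.time() + duration_s
    while time.time() < end:
        cpu.append(psutil.cpu_percent(interval=interval_s))
        ram.append(psutil.virtual_memory().percent)
    print(f"{label}: avg CPU {sum(cpu)/len(cpu):.1f}%, "
          f"avg RAM {sum(ram)/len(ram):.1f}%")

# Run once while the VM-hosted app serves load, once while the containerized
# app serves the same load, then compare the two printed summaries.
sample_usage("VM-based deployment")
sample_usage("Container-based deployment")
```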


2021 ◽  
Vol 33 (3) ◽  
pp. 686-697
Author(s):  
Manato Hirabayashi ◽  
Yukihiro Saito ◽  
Kosuke Murakami ◽  
Akihito Ohsato ◽  
Shinpei Kato ◽  
...  

The perception of the surrounding circumstances is an essential task for fully autonomous driving systems, but its high computational and network loads typically prevent a single host machine from handling the entire system. Decentralized processing is a candidate for reducing such loads; however, it has not been clear whether this approach fulfills the requirements of onboard systems, including low latency and low power consumption. Embedded-oriented graphics processing units (GPUs) are attracting great interest because they provide massively parallel computation capacity with lower power consumption than traditional GPUs. This study explored the effects of decentralized processing on autonomous driving using embedded-oriented GPUs as decentralized units. We implemented a prototype system that off-loaded image-based object detection tasks onto embedded-oriented GPUs to clarify these effects. Experimental evaluation demonstrated that decentralized processing and network quantization achieved a delay of approximately 27 ms between feeding an image and the arrival of detection results at the host, approximately 7 W power consumption on each GPU, and a reduction in network load by orders of magnitude. Judging from these results, we conclude that decentralized processing is a promising approach for decreasing processing latency, network load, and power consumption toward the deployment of autonomous driving systems.
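
A hedged sketch of the host-side measurement of the feed-to-result delay for an off-loaded detection task. The embedded-GPU server address, wire format and test image are hypothetical; the prototype's actual transport and detector are not described at this level in the abstract.

```python
# Sketch: measure end-to-end latency of sending one camera frame to an
# (assumed) detection server on an embedded GPU board and receiving results.
import socket
import struct
import time

EDGE_ADDR = ("192.168.1.50", 9000)   # hypothetical embedded-GPU unit

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket or raise if the peer closes."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed early")
        buf += chunk
    return buf

def detect_remote(jpeg_bytes: bytes) -> bytes:
    """Send one length-prefixed JPEG frame, return the serialized detections."""
    with socket.create_connection(EDGE_ADDR) as sock:
        sock.sendall(struct.pack("!I", len(jpeg_bytes)) + jpeg_bytes)
        size = struct.unpack("!I", recv_exact(sock, 4))[0]
        return recv_exact(sock, size)

with open("frame.jpg", "rb") as f:    # any test image
    frame = f.read()

t0 = time.perf_counter()
results = detect_remote(frame)
latency_ms = (time.perf_counter() - t0) * 1000
print(f"feed-to-result latency: {latency_ms:.1f} ms, payload {len(results)} bytes")
```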


2021 ◽  
Vol 1948 (1) ◽  
pp. 012166
Author(s):  
Long Shi ◽  
Junjie Ma ◽  
Haiwen Guo
Keyword(s):  

2021 ◽  
Vol 12 ◽  
Author(s):  
Kangli Li ◽  
Congcong Wang ◽  
Fan Yang ◽  
Weijun Cao ◽  
Zixiang Zhu ◽  
...  

Foot-and-mouth disease (FMD) is a highly contagious disease of cloven-hoofed animals and a persistent challenge for the livestock industry in many countries. Foot-and-mouth disease virus (FMDV), the etiological agent of FMD, spreads rapidly by direct and indirect transmission. FMDV is internalized into the host cell through the interaction between FMDV capsid proteins and cellular receptors. When the virus invades a cell, the host antiviral system is quickly activated to suppress replication and remove the virus. To retain fitness and host adaptation, viruses have evolved multiple elegant strategies to manipulate the host machinery and circumvent the host antiviral responses. Identification of virus-host interactions is therefore critical for understanding host defenses against virus infection and the pathogenesis of viral infectious diseases. This review elaborates on the virus-host interactions during FMDV infection to summarize the pathogenic mechanisms of FMD, and we hope it can provide insights for designing effective vaccines or drugs to prevent and control the spread of FMD and other diseases caused by picornaviruses.


2020 ◽  
Vol 8 (6) ◽  
pp. 1678-1682

I2P is an anonymous P2P distributed communication layer used to send messages anonymously and safely. It is built on top of the internet and can be considered an internet within the internet. Even though I2P was developed with the intention of creating a censorship-resistant environment for the free flow of information, it is nowadays misused for illegal activities. The possible misuses are not well known among law enforcement agencies, and existing industry-approved software programs have no detection functionality for I2P. Because of the increased use of I2P for criminal purposes, there is a need for methods and tools to acquire and analyze digital evidence related to I2P. We conducted a detailed live memory dump analysis in order to find I2P-related artifacts on a host machine. Furthermore, we propose a tool that analyzes the memory dump and local system files to find I2P-related artifacts and provides a detailed report to the investigator.
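
A minimal sketch of the artifact-scanning idea: stream a raw memory dump and count hits against a list of I2P-related signatures. The signature list, file name and report format are illustrative assumptions, not the tool the authors propose.

```python
# Sketch: scan a raw memory dump for I2P-related strings and report hit counts.
import re
from collections import Counter

# Strings commonly associated with an I2P router installation (assumed list).
I2P_SIGNATURES = [
    rb"\.i2p\b",          # eepsite hostnames
    rb"i2psnark",         # built-in torrent client
    rb"router\.config",   # router configuration file
    rb"netDb",            # network database directory
    rb"I2PTunnel",
]

def scan_dump(path: str, chunk_size: int = 64 * 1024 * 1024) -> Counter:
    """Stream the dump in chunks and count signature hits.
    (Matches split across chunk boundaries are ignored for brevity.)"""
    hits = Counter()
    patterns = [re.compile(sig) for sig in I2P_SIGNATURES]
    with open(path, "rb") as dump:
        while chunk := dump.read(chunk_size):
            for pat in patterns:
                hits[pat.pattern.decode()] += len(pat.findall(chunk))
    return hits

for signature, count in scan_dump("memdump.raw").items():   # hypothetical dump file
    print(f"{signature}: {count} occurrence(s)")
```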

