Real-time micro-modelling of a million pedestrians

2016 ◽  
Vol 33 (1) ◽  
pp. 217-237 ◽  
Author(s):  
R Lohner ◽  
Muhammad Baqui ◽  
Eberhard Haug ◽  
Britto Muhamad

Purpose – The purpose of this paper is to develop a first-principles model for the simulation of pedestrian flows and crowd dynamics capable of computing the movement of a million pedestrians in real time, in order to assess potential safety hazards and operational performance at events where many individuals are gathered. Examples of such situations are sport and music events, cinemas and theatres, museums, conference centres, places of pilgrimage and worship, street demonstrations, and emergency evacuation during natural disasters.
Design/methodology/approach – The model is based on a series of forces: will forces (the desire to reach a place at a certain time), pedestrian collision avoidance forces, obstacle/wall avoidance forces, pedestrian contact forces, and obstacle/wall contact forces. In order to allow for general geometries, a so-called background triangulation is used to carry all geographic information. At any given time the location of any given pedestrian is updated on this mesh. The model has been validated qualitatively and quantitatively on repeated occasions. The code has been ported to shared- and distributed-memory parallel machines.
Findings – The results obtained show that the stated aim of computing the movement of a million pedestrians in real time has been achieved. This is an important milestone, as it enables faster-than-real-time simulations of large crowds (stadiums, airports, train and bus stations, concerts) as well as evacuation simulations for whole cities.
Research limitations/implications – All models are wrong, but some are useful. The same applies to any modelling of pedestrians. Pedestrians are not machines, so stochastic runs will be required in the future in order to obtain statistically relevant ensembles.
Practical implications – This opens the way to link real-time data gathering of crowds (i.e. via cameras) with predictive calculations done faster than real time, so that security personnel can be alerted to potential future problems during large-scale events.
Social implications – This will allow much better predictions for large-scale events, improving security and comfort.
Originality/value – This is the first time such speeds have been achieved for a micro-modelling code for pedestrians.
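The force-based update the abstract describes can be sketched in a few lines. This is an illustrative toy, not the authors' code: all function names, parameters (relaxation time `tau`, repulsion `strength` and `radius`) and values are assumptions chosen for demonstration.

```python
import math

def will_force(pos, goal, speed_desired, vel, tau=0.5):
    """Drive the pedestrian toward its goal at the desired speed."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy) or 1.0
    vx_d, vy_d = speed_desired * dx / dist, speed_desired * dy / dist
    # Relax toward the desired velocity within time tau
    return ((vx_d - vel[0]) / tau, (vy_d - vel[1]) / tau)

def avoidance_force(pos, other, strength=2.0, radius=0.6):
    """Repel from a nearby pedestrian or wall point, decaying with distance."""
    dx, dy = pos[0] - other[0], pos[1] - other[1]
    dist = math.hypot(dx, dy) or 1e-9
    mag = strength * math.exp((radius - dist) / radius)
    return (mag * dx / dist, mag * dy / dist)

def step(pos, vel, goal, neighbours, dt=0.1, speed_desired=1.3):
    """One explicit time step: sum the forces, then integrate."""
    fx, fy = will_force(pos, goal, speed_desired, vel)
    for n in neighbours:
        ax, ay = avoidance_force(pos, n)
        fx, fy = fx + ax, fy + ay
    vel = (vel[0] + dt * fx, vel[1] + dt * fy)
    pos = (pos[0] + dt * vel[0], pos[1] + dt * vel[1])
    return pos, vel
```

With no neighbours the pedestrian simply accelerates toward its goal; in the paper's setting the neighbour and wall queries are what the background triangulation accelerates.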

2017 ◽  
Vol 10 (2) ◽  
pp. 145-165 ◽  
Author(s):  
Kehe Wu ◽  
Yayun Zhu ◽  
Quan Li ◽  
Ziwei Wu

Purpose – The purpose of this paper is to propose a data prediction framework for scenarios that require forecasting for large-scale data sources, e.g., sensor networks, securities exchanges, electric power secondary systems, etc. Concretely, the proposed framework must satisfy several difficult requirements: management of gigantic data sources, a fast self-adaptive algorithm, relatively accurate prediction of multiple time series, and real-time operation.
Design/methodology/approach – First, the autoregressive integrated moving average (ARIMA)-based prediction algorithm is introduced. Second, the processing framework is designed, which includes a time-series data storage model based on HBase and a real-time distributed prediction platform based on Storm. Then, the working principle of this platform is described. Finally, a proof-of-concept testbed is illustrated to verify the proposed framework.
Findings – Several tests based on power-grid monitoring data are provided for the proposed framework. The experimental results indicate that predicted data are basically consistent with actual data, processing efficiency is relatively high, and resource consumption is reasonable.
Originality/value – This paper provides a distributed real-time data prediction framework for large-scale time-series data, which meets the requirements of effective management, prediction efficiency, accuracy, and high concurrency for massive data sources.
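To make the prediction step concrete, here is a minimal one-step-ahead forecast in the spirit of ARIMA(1,1,0): difference the series once, fit an AR(1) coefficient by least squares, then integrate back. This is a hedged illustration of the model family, not the paper's algorithm or its Storm deployment.

```python
def predict_next(series):
    """One-step-ahead forecast, ARIMA(1,1,0)-style, for a list of floats."""
    # First-order differencing removes a linear trend component
    diffs = [b - a for a, b in zip(series, series[1:])]
    # Least-squares AR(1) fit on the differenced series: d_t ~ phi * d_{t-1}
    num = sum(d1 * d0 for d0, d1 in zip(diffs, diffs[1:]))
    den = sum(d * d for d in diffs[:-1]) or 1.0
    phi = num / den
    # Forecast the next difference, then undo the differencing
    return series[-1] + phi * diffs[-1]
```

In the framework described above, a fitted model like this would run inside a Storm bolt per series, with HBase supplying the recent history.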


Author(s):  
Sepehr Fathizadan ◽  
Feng Ju ◽  
Kyle Rowe ◽  
Alex Fiechter ◽  
Nils Hofmann

Abstract – Production efficiency and product quality need to be addressed simultaneously to ensure the reliability of large-scale additive manufacturing. In particular, print surface temperature plays a critical role in determining the quality characteristics of the product. Moreover, heat transfer via conduction, a consequence of the spatial correlation between locations on the surface of large and complex geometries, necessitates more robust methodologies to extract and monitor the data. In this paper, we propose a framework for real-time data extraction from thermal images as well as a novel method for controlling layer time during the printing process. A FLIR™ thermal camera captures and stores the stream of images of the print surface temperature while the Thermwood Large Scale Additive Manufacturing (LSAM™) machine prints components. A set of digital image processing tasks is performed to extract the thermal data. Separate regression models based on real-time thermal imaging data are built for each location on the surface to predict the associated temperatures. Subsequently, a control method is proposed to find the best time to print the next layer given the predictions. Finally, several scenarios based on the cooling dynamics of the surface structure were defined and analyzed, and the results were compared to the current fixed-layer-time policy. It was concluded that the proposed method can significantly increase efficiency by reducing overall printing time while preserving quality.
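The per-location regression plus layer-time decision can be sketched as follows: fit a simple cooling model to each location's recent readings, then pick the earliest time at which every location is predicted to be cool enough. The linear cooling model, the threshold, and all names are assumptions for illustration; they are not the LSAM controller described in the paper.

```python
def fit_linear(times, temps):
    """Ordinary least squares for temp ~ a + b * t at one surface location."""
    n = len(times)
    mt, mT = sum(times) / n, sum(temps) / n
    b = sum((t - mt) * (T - mT) for t, T in zip(times, temps)) / \
        sum((t - mt) ** 2 for t in times)
    return mT - b * mt, b  # intercept a, slope b

def next_layer_time(histories, threshold, horizon=600, step=1):
    """Earliest t (seconds) at which all predicted temperatures <= threshold.

    histories: list of (times, temps) pairs, one per surface location.
    """
    models = [fit_linear(ts, Ts) for ts, Ts in histories]
    for t in range(0, horizon, step):
        if all(a + b * t <= threshold for a, b in models):
            return t
    return horizon  # fall back to the maximum wait
```

The contrast with the fixed-layer-time policy is visible here: the wait adapts to the hottest location instead of being a constant.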


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Heather Lutz ◽  
Laura Birou ◽  
Joe Walden

Purpose – This paper aims to provide the results of a survey of courses dedicated to the field of supply chain management in higher education. This research is unique because it represents the first large-scale study of graduate supply chain management courses taught at universities globally.
Design/methodology/approach – Content analysis was performed on each syllabus to identify the actual course content: requirements, pedagogy and content emphasis. This aggregated information was used to compare historical research findings in this area with the current skills identified as important for career success. These data provide input for a gap analysis between offerings in higher education and the needs identified by practitioners.
Findings – Data-gathering efforts yielded a sample of 112 graduate courses representing 61 schools across the world. The aggregate number of topics covered in graduate courses totaled 114. The primary evaluation techniques include exams, projects and homework. Details regarding content and assessment techniques are provided, along with a gap analysis between the supply chain management course content and the needs identified by the APICS Supply Chain Manager Competency Model (2014).
Originality/value – The goal is to use these data as a means of continuous improvement in the quality and value of the educational experience on a longitudinal basis. The findings are designed to foster information sharing and provide data for benchmarking efforts in the development of supply chain management courses and curricula in academia, as well as training, development and recruitment efforts by professionals in the field of supply chain management.


2014 ◽  
Vol 571-572 ◽  
pp. 497-501 ◽  
Author(s):  
Qi Lv ◽  
Wei Xie

Real-time log analysis over large-scale data is important for many applications; here, real-time means a UI latency within 100 ms. Techniques that efficiently support real-time analysis over large log data sets are therefore desired. MongoDB offers good query performance, an aggregation framework, and a distributed architecture, which makes it suitable for real-time data query and massive log analysis. In this paper, a novel implementation approach for an event-driven file log analyzer is presented, and the performance of query, scan and aggregation operations over MongoDB, HBase and MySQL is compared. Our experimental results show that HBase delivers the most balanced performance across all operations, while MongoDB achieves query latencies below 10 ms for some operations, making it the most suitable for real-time applications.
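As an example of the aggregation framework the comparison relies on, here is a MongoDB pipeline for log analysis built as a plain Python document, so it can be inspected without a running server. The log schema (fields `ts`, `level`, `source`) is an assumption for illustration, not the paper's data set.

```python
from datetime import datetime, timedelta

def error_rate_pipeline(window_minutes=5):
    """Top error-producing components within a recent time window."""
    since = datetime.now() - timedelta(minutes=window_minutes)
    return [
        # Restrict the scan to the recent window; an index on `ts` keeps this fast
        {"$match": {"ts": {"$gte": since}, "level": "ERROR"}},
        # Count errors per originating component
        {"$group": {"_id": "$source", "errors": {"$sum": 1}}},
        # Most error-prone components first
        {"$sort": {"errors": -1}},
        {"$limit": 10},
    ]
```

With a live connection this would run as `db.logs.aggregate(error_rate_pipeline())`; pushing the filter and grouping into the server is what keeps such queries in the sub-100 ms range the text targets.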


Author(s):  
Yasmina Maizi ◽  
Ygal Bendavid

With the fast development of IoT technologies and the potential of real-time data gathering, which allows decision makers to take advantage of real-time visibility into their processes, the rise of Digital Twins (DT) has attracted considerable research interest. DT are among the most prominent technology trends for the near future; their evolution is expected to transform several industries and applications and opens the door to a huge number of possibilities. However, application of the DT concept remains at an early stage and is mainly restricted to the manufacturing sector; its true potential will be revealed in many other sectors. In this research paper, we propose a DT prototype for in-store daily operations management and test its impact on daily operations management performance. More specifically, we focus the impact analysis of the DT on the fitting-room area.


2019 ◽  
Vol 31 (1) ◽  
pp. 265-290 ◽  
Author(s):  
Ganjar Alfian ◽  
Muhammad Fazal Ijaz ◽  
Muhammad Syafrudin ◽  
M. Alex Syaekhoni ◽  
Norma Latif Fitriyani ◽  
...  

Purpose – The purpose of this paper is to propose customer behavior analysis based on real-time data processing and association rules for a digital signage-based online store (DSOS). Real-time data processing based on big data technology (such as NoSQL MongoDB and Apache Kafka) is utilized to handle the vast amount of customer behavior data.
Design/methodology/approach – In order to extract customer behavior patterns, customers' browsing history and transactional data from digital signage (DS) could be used as the input for decision making. First, the authors developed a DSOS and installed it in different locations, so that customers could have the experience of browsing and buying a product. Second, the real-time data processing system gathered customers' browsing history and transaction data as they occurred. In addition, the authors utilized association rules to extract useful information from customer behavior, so it may be used by managers to efficiently enhance service quality.
Findings – First, as the number of customers and DS units increased, the proposed system was capable of conveniently processing a gigantic amount of input data. Second, the data set showed that as the number of visits and the shopping duration increased, the chance of products being purchased also increased. Third, by combining purchasing and browsing data from customers, association rules were derived from the frequent transaction patterns. Products identified this way therefore have a high likelihood of being purchased when offered as recommendations.
Research limitations/implications – This research empirically supports the theory of association rules: that frequent patterns, correlations or causal relationships can be found in various kinds of databases. The scope of the present study is limited to the DSOS, although the findings can be interpreted and generalized in a global business scenario.
Practical implications – The proposed system is expected to help management take decisions such as improving the layout of the DS and providing better product suggestions to the customer.
Social implications – The proposed system may be utilized to promote green products to the customer, having a positive impact on sustainability.
Originality/value – The key novelty of the present study lies in system development based on big data technology to handle enormous amounts of data as well as analyzing customer behavior in real time in the DSOS. Real-time data processing based on big data technology (such as NoSQL MongoDB and Apache Kafka) is used to handle the vast amount of customer behavior data. In addition, the present study proposes association rules to extract useful information from customer behavior. These results can be used for promotion as well as relevant product recommendations to DSOS customers. Moreover, in today's changing retail environment, analyzing customer behavior in real time in the DSOS helps attract and retain customers more efficiently and effectively, giving retailers a competitive advantage over their competitors.
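The association-rule mechanics behind these findings reduce to two quantities, support and confidence, for a candidate rule A → B over a set of transactions. A minimal sketch, with invented product names standing in for the paper's catalogue:

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Estimated P(consequent | antecedent) over the transaction set."""
    return support(transactions, antecedent | consequent) / \
           support(transactions, antecedent)

# Toy transaction log: each set is one customer's basket
transactions = [
    {"shirt", "tie"}, {"shirt", "tie", "belt"},
    {"shirt"}, {"tie"}, {"shirt", "belt"},
]
```

A rule such as {shirt} → {tie} would be surfaced as a recommendation when both its support and confidence clear chosen thresholds, which is the selection criterion association-rule mining applies at scale.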


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Sandeep Kumar Singh ◽  
Mamata Jenamani

Purpose – The purpose of this paper is to design a supply chain database schema for Cassandra to store real-time data generated by Radio Frequency IDentification (RFID) technology in a traceability system.
Design/methodology/approach – The real-time data generated in such traceability systems are of high frequency and volume, making them difficult to handle with traditional relational database technologies. To overcome this difficulty, a NoSQL database repository based on Cassandra is proposed. The efficacy of the proposed schema is compared with that of two databases from the literature suitable for storing traceability data: a traditional Structured Query Language-based system and document-based MongoDB.
Findings – The proposed Cassandra-based data repository outperforms the traditional Structured Query Language-based and MongoDB systems from the literature in terms of concurrent reading, and works on par with them with respect to writing and updating tracing queries.
Originality/value – The proposed schema is able to store the real-time data generated in a supply chain with low latency. To test the performance of the Cassandra-based data repository, a test-bed was designed in the lab and supply chain operations of the Indian Public Distribution System were simulated to generate data.
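A hedged sketch of what such a Cassandra table might look like, held as a CQL string: partitioning by tag (EPC) keeps each item's history on one node, and clustering by event time serves "trace this item" reads with a single partition scan. The column names and keyspace are assumptions for illustration, not the paper's schema.

```python
TRACE_TABLE_CQL = """
CREATE TABLE IF NOT EXISTS traceability.tag_events (
    epc        text,       -- Electronic Product Code of the tagged item
    event_ts   timestamp,  -- time the reader observed the tag
    reader_id  text,       -- which RFID reader saw it
    location   text,       -- supply-chain node (warehouse, truck, shop)
    biz_step   text,       -- e.g. shipping, receiving, storing
    PRIMARY KEY ((epc), event_ts)
) WITH CLUSTERING ORDER BY (event_ts DESC);
"""
```

This layout is one plausible reason for the concurrent-read advantage reported above: tracing queries never touch more than one partition, so reads scale with the number of nodes.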


Facilities ◽  
2005 ◽  
Vol 23 (1/2) ◽  
pp. 31-46 ◽  
Author(s):  
Seán T. McAndrew ◽  
Chimay J. Anumba ◽  
Tarek M. Hassan ◽  
Alistair K. Duke

Purpose – The purpose of the paper is to discuss the scope for improving the delivery of FM services through the use of a wireless web-based communications infrastructure, delivered via an application service provider (ASP) business model. The paper discusses findings from case studies of three organisations and their approaches to the management of facilities.
Design/methodology/approach – An investigation was undertaken to ascertain the current state of play in managing and tracking processes within the facilities management departments of three different organisations. The case studies were chosen from distinct sectors, namely health care, higher education, and banking. Emphasis is placed on analysing how the organisations currently operate with their existing FM systems and the degree of influence technology has on existing processes, considered mainly in terms of computer-aided facilities management (CAFM) and computer-integrated facilities management (CIFM).
Findings – The study found that a new wireless web-based service for FM systems would be considered useful. Although facilities managers are notoriously slow adopters of new technology, those interviewed accepted that a wireless web-based approach would improve current practice, especially with respect to real-time job reporting and tracking and the determination of FM operatives' working-time utilisation.
Practical implications – Further work by the author is focusing on the development of a suitable demonstrator to illustrate the key concepts of a wireless web-based FM service, which will then be tested and evaluated. For further information, visit the research project web site at www.wirelessfm.org
Originality/value – The paper aims to stimulate discussion of emerging wireless technologies that have the potential to streamline and improve current practices in the management of facilities, in particular real-time job reporting and tracking.

