From Semantically Abstracted Traces to Process Mining and Process Model Comparison

Author(s): Giorgio Leonardi, Manuel Striani, Silvana Quaglini, Anna Cavallini, Stefania Montani
2021, Vol 11 (9), pp. 4121
Author(s): Hana Tomaskova, Erfan Babaee Tirkolaee

The purpose of this article was to demonstrate the difference between a pandemic plan's textual prescription and its effective processing using graphical notation. Before creating a case study of the Business Process Model and Notation (BPMN) representation of the Czech Republic's pandemic plan, we conducted a systematic review of the process approach in pandemic planning and a document analysis of the relevant public documents. In an explanatory case study, the authors highlight the opacity of hundreds of pages of textual records and demonstrate the effectiveness of the process approach for reengineering and improving the response to such a critical situation. A potential extension towards automation, the involvement of SMART technologies, and process optimization through process mining techniques is outlined as a future research topic.


2021, Vol 4
Author(s): Rashid Zaman, Marwan Hassani, Boudewijn F. Van Dongen

In the context of process mining, event logs consist of process instances called cases. Conformance checking is a process mining task that inspects whether a log is conformant with an existing process model and, in addition, quantifies the conformance in an explainable manner. Online conformance checking processes streaming event logs, providing precise insights into the running cases and allowing non-conformance, if any, to be mitigated in a timely fashion. State-of-the-art online conformance checking approaches bound memory either by limiting the number of events stored per case or by limiting the number of cases to a specific window width. The former technique still requires unbounded memory because the number of cases to store is unlimited, while the latter forgets running, not yet concluded, cases in order to respect the limited window width. Consequently, the processing system may later encounter events that represent some intermediate activity of the process model but whose case has already been forgotten; we refer to these as orphan events. The naïve ways to cope with an orphan event are either to exclude its case from conformance checking or to treat it as an altogether new case. However, both options can yield misleading process insights, for instance overestimated non-conformance. In order to bound memory while still incorporating orphan events effectively, we propose a missing-prefix imputation approach for such orphan events. Our approach utilizes the existing process model to impute the missing prefix. Furthermore, we leverage case storage management to increase the accuracy of the prefix prediction: we propose a systematic forgetting mechanism that distinguishes and forgets those cases whose prefix can be reliably regenerated upon receipt of a future orphan event. We evaluate the efficacy of the proposed approach through multiple experiments with synthetic and three real event logs in a simulated streaming setting. Our approach achieves considerably more realistic conformance statistics than the state of the art while requiring the same storage.
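A minimal sketch of the missing-prefix idea, assuming the process model is abstracted to a directly-follows graph with a single start activity; the model, the activity names, and the shortest-path heuristic below are illustrative assumptions, not the authors' exact algorithm or storage-management scheme:

```python
from collections import deque

# Hypothetical directly-follows abstraction of a process model, illustration only.
MODEL = {
    "register": ["check", "triage"],
    "check": ["decide"],
    "triage": ["decide"],
    "decide": ["pay", "reject"],
}
START = "register"

def impute_prefix(orphan_activity, model=MODEL, start=START):
    """Return a plausible missing prefix: the shortest activity path from the
    model's start activity to the activity of the orphan event."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == orphan_activity:
            return path[:-1]           # the prefix excludes the orphan event itself
        for nxt in model.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []                          # activity unreachable: treat as a new case

# An orphan "pay" event gets the imputed prefix ['register', 'check', 'decide'].
print(impute_prefix("pay"))
```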


Author(s): Bruna Brandão, Flávia Santoro, Leonardo Azevedo

In business process models, elements can be scattered (repeated) across different processes, which makes it difficult to handle changes, analyze processes for improvement, or check crosscutting impacts. These scattered elements are called aspects. Similarly to the aspect-oriented paradigm in programming languages, aspect handling in BPM aims to modularize the crosscutting concerns spread across the models. This modularization facilitates process management (reuse, maintenance, and understanding). Current approaches identify aspects manually, which leads to subjectivity and a lack of systematization. This paper proposes a method to automatically identify aspects in a business process from its event logs. The method is based on mining techniques and aims to remove the subjectivity of identification performed by specialists. Initial results from a preliminary evaluation provide evidence that the method correctly identified the aspects present in the process model.
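As a rough illustration of the underlying idea only (not the authors' mining technique), the sketch below flags activities that occur in the logs of more than one process as candidate aspects; the toy logs, the process names, and the simple "appears in at least two processes" criterion are assumptions made for the example:

```python
from collections import defaultdict

# Toy logs for two hypothetical processes; each trace is a list of activities.
# The crosscutting "authorize" and "log audit" steps appear in both processes,
# which makes them aspect candidates under this simplified criterion.
logs = {
    "claim_handling": [["receive", "authorize", "assess", "log audit", "pay"]],
    "loan_approval":  [["apply", "authorize", "score", "log audit", "notify"]],
}

def candidate_aspects(process_logs, min_processes=2):
    """Return activities occurring in at least `min_processes` distinct
    processes, i.e. scattered (crosscutting) elements."""
    seen_in = defaultdict(set)
    for process, traces in process_logs.items():
        for trace in traces:
            for activity in trace:
                seen_in[activity].add(process)
    return sorted(a for a, procs in seen_in.items() if len(procs) >= min_processes)

print(candidate_aspects(logs))   # ['authorize', 'log audit']
```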


2021, Vol 10 (9), pp. 144-147
Author(s): Huiling Li, Xuan Su, Shuaipeng Zhang

Massive amounts of business process event logs are collected and stored by modern information systems. Model discovery aims to derive a process model from such event logs; however, most existing approaches still suffer from low efficiency when facing large-scale event logs. Event log sampling techniques provide an effective way to improve the efficiency of process discovery, but existing techniques still cannot guarantee the quality of the mined model. Therefore, a sampling approach based on a set coverage algorithm, named the set coverage sampling approach, is proposed. The approach has been implemented in the open-source process mining toolkit ProM. Furthermore, experiments on a real event log dataset, covering conformance checking and time performance analysis, show that the proposed event log sampling approach can greatly improve the efficiency of log sampling while preserving the quality of the mined model.
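A minimal Python sketch of the greedy set-coverage idea behind such a sampler, assuming directly-follows pairs as the coverage criterion (the actual approach is implemented in ProM and may use a different coverage notion); the variant data and function names are illustrative:

```python
def directly_follows(trace):
    """Directly-follows pairs observed in a single trace variant."""
    return {(a, b) for a, b in zip(trace, trace[1:])}

def set_cover_sample(variants):
    """Greedy set cover: pick trace variants until every directly-follows
    relation observed in the log is represented in the sample."""
    universe = set().union(*(directly_follows(v) for v in variants))
    uncovered, sample = set(universe), []
    while uncovered:
        best = max(variants, key=lambda v: len(directly_follows(v) & uncovered))
        gain = directly_follows(best) & uncovered
        if not gain:
            break
        sample.append(best)
        uncovered -= gain
    return sample

variants = [
    ("a", "b", "c", "d"),
    ("a", "c", "b", "d"),
    ("a", "b", "d"),
]
# Returns a small subset of variants that covers all directly-follows pairs.
print(set_cover_sample(variants))
```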


2018, Vol 27 (02), pp. 1850002
Author(s): Sung-Hyun Sim, Hyerim Bae, Yulim Choi, Ling Liu

In big data and IoT environments, process execution generates huge amounts of data, some of which is subsequently obtained from sensors. The main issue in such settings has been the need to analyze the data in order to suggest process enhancements. In this regard, evaluating the conformance of a process model to the execution log is of great importance. For this purpose, previous process mining approaches have advocated conformance checking via a fitness measure, computed using token replay and node-arc relations based on Petri nets. However, the fitness measure has so far not considered statistical significance; it offers only a numeric ratio. We herein propose a statistical verification method based on the Kolmogorov–Smirnov (K–S) test to judge whether two different log datasets follow the same process model. Our method can easily be extended to determine whether process execution actually follows a process model, by playing out the model and generating event log data from it. Additionally, in order to address the trade-off between model abstraction and process conformance, we also propose the new concepts of the Confidence Interval of Abstraction Value (CIAV) and the Maximum Confidence Abstraction Value (MCAV). We show that our method can be applied to any process mining algorithm (e.g. heuristic mining, fuzzy mining) that has parameters related to model abstraction. We expect that our method will be widely utilized in applications dealing with business process enhancement involving process-model and execution-log analyses.
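For intuition only, the following sketch shows how a two-sample K–S test (here via scipy.stats.ks_2samp) could compare a per-trace statistic computed from a model play-out against the same statistic computed from the execution log; the choice of statistic, the data values, and the 0.05 threshold are assumptions for illustration, not the CIAV/MCAV procedure of the paper:

```python
from scipy.stats import ks_2samp

# Hypothetical per-trace statistics (e.g. replay fitness) for traces generated
# by playing out the model versus traces taken from the execution log.
model_playout_fitness = [0.98, 1.00, 0.97, 0.99, 1.00, 0.96, 0.98, 1.00]
execution_log_fitness = [0.91, 0.95, 0.88, 0.97, 0.93, 0.90, 0.94, 0.92]

stat, p_value = ks_2samp(model_playout_fitness, execution_log_fitness)
if p_value < 0.05:
    print(f"Distributions differ (D={stat:.3f}, p={p_value:.4f}): "
          "the log likely does not follow the model.")
else:
    print(f"No significant difference (D={stat:.3f}, p={p_value:.4f}).")
```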


Author(s): Pavlos Delias, Kleanthi Lakiotaki

Automated discovery of a process model is a major task in process mining, aiming to produce a process model from an event log without any a-priori information. However, when an event log contains a large number of distinct activities, process discovery can be truly challenging. The goal of this article is to facilitate process discovery in cases where a process is expected to contain a large set of unique activities. To this end, the article proposes a clustering approach that recommends horizontal boundaries for the process. The proposed approach ultimately partitions the event log so that the human interpretation effort is decomposed. In addition, it makes automated discovery more efficient as well as more effective by simultaneously considering two quality criteria: the informativeness and the robustness of the derived groups of activities. The authors conducted several experiments to test the behavior of the algorithm under different settings and to compare it against other techniques. Finally, they provide a set of recommendations that may help process analysts during the process discovery endeavor.
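A minimal sketch of how activities might be grouped from their co-occurrence across traces, assuming Jaccard distance and average-linkage clustering cut into two groups; this illustrates the idea of recommending horizontal boundaries but is not the authors' algorithm, which balances informativeness and robustness explicitly:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Toy occurrence matrix: rows are activities, columns are traces (True = occurs).
activities = ["register", "check", "decide", "ship", "invoice", "pay"]
occurrence = np.array([
    [1, 1, 1, 1],   # register
    [1, 1, 0, 1],   # check
    [1, 1, 1, 1],   # decide
    [0, 0, 1, 1],   # ship
    [0, 0, 1, 1],   # invoice
    [0, 0, 1, 1],   # pay
], dtype=bool)

# Jaccard distance between activity occurrence profiles, then average-linkage
# clustering cut into two groups (the assumed "horizontal boundaries").
dist = pdist(occurrence, metric="jaccard")
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")

for cluster in sorted(set(labels)):
    members = [a for a, lab in zip(activities, labels) if lab == cluster]
    print(f"cluster {cluster}: {members}")
```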


2020, Vol 10 (4), pp. 1493
Author(s): Kwanghoon Pio Kim

In this paper, we propose an integrated approach that seamlessly and effectively provides mining and analysis functionality to support the redesign of very large-scale and massively parallel process models discovered from their enactment event logs. The integrated approach aims at analyzing not only their structural complexity and correctness but also their animation-based behavioral properness, and it is concretized in a sophisticated analyzer. The core function of the analyzer is to discover a very large-scale and massively parallel process model from a process log dataset and to validate the structural complexity and the syntactical and behavioral properness of the discovered model. The paper also gives a detailed description of the system architecture and its functional integration of process mining and process analysis. More precisely, we devise a series of functional algorithms for extracting the structural constructs and for visualizing the behavioral properness of the discovered very large-scale and massively parallel process models. As experimental validation, we apply the proposed approach and analyzer to a couple of process enactment event log datasets available on the website of the 4TU.Centre for Research Data.
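As a toy illustration of structural-complexity indicators only (the proposed analyzer goes much further, including animation-based behavioral properness), the sketch below counts nodes, edges, splits, and joins in a hypothetical discovered process graph; the graph and metric names are assumptions for the example:

```python
from collections import defaultdict

# Hypothetical discovered process graph (activity -> successors), illustration only.
graph = {
    "start": ["a"],
    "a": ["b", "c"],      # parallel split
    "b": ["d"],
    "c": ["d"],
    "d": ["end"],
    "end": [],
}

def structural_summary(g):
    """Rough structural-complexity indicators: node/edge counts and the
    numbers of split (fan-out > 1) and join (fan-in > 1) nodes."""
    in_deg = defaultdict(int)
    for src, succs in g.items():
        for dst in succs:
            in_deg[dst] += 1
    return {
        "nodes": len(g),
        "edges": sum(len(s) for s in g.values()),
        "splits": sum(1 for s in g.values() if len(s) > 1),
        "joins": sum(1 for n in g if in_deg[n] > 1),
    }

print(structural_summary(graph))   # {'nodes': 6, 'edges': 6, 'splits': 1, 'joins': 1}
```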


Author(s): David Sánchez-Charles, Victor Muntés-Mulero, Josep Carmona, Marc Solé

2016, Vol 17 (4), pp. 240-255
Author(s): David J. Hansen, Javier Monllor, Rodney C. Shrader

There is plenty of debate in the entrepreneurship literature regarding entrepreneurial opportunity, and there has also been a lack of construct clarity. Together, these two issues have stifled progress in understanding this important phenomenon. We believe that across these debates there are many underlying commonalities and potential for clearer constructs. In this article, we review how scholars have defined and operationalized entrepreneurial opportunity and opportunity-related processes in order to better understand what they really mean when they say 'opportunity'. We found a total of 102 definitions and 51 operationalizations in 105 articles published in leading entrepreneurship and management journals. A total of 81 elements were identified across the definitions and operationalizations and compiled into an integrated process model, which incorporates seemingly disparate views into a single unifying model. Comparison between the conceptual definitions and the operationalizations reveals many elements that lack either conceptual or empirical attention. The model will help scholars more easily identify and build upon prior research; to that effect, numerous suggestions for future research are discussed and summarized in a table.


2012, Vol 29 (1), pp. 186-196
Author(s): Stephen Craven, Nishikant Shirsat, Jessica Whelan, Brian Glennon
