A METRIC-BASED APPROACH FOR PREDICTING CONCEPTUAL DATA MODELS MAINTAINABILITY

Author(s):  
MARIO PIATTINI ◽  
MARCELA GENERO ◽  
LUIS JIMÉNEZ

It is generally accepted in the information systems (IS) field that IS quality depends heavily on decisions made early in the development life cycle. The construction of conceptual data models is often an important task in this early development; therefore, improving the quality of conceptual data models will be a major step towards improving the quality of IS development. Several quality frameworks for conceptual data models have been proposed, but most of them lack valid quantitative measures for evaluating the quality of conceptual data models objectively. In this article we define measures for the structural complexity (an internal attribute) of entity-relationship diagrams (ERDs) and use them to predict their maintainability (an external attribute). We theoretically validate the proposed metrics following Briand et al.'s framework, with the goal of demonstrating the properties that characterise each metric. We also show how each of the maintainability sub-characteristics can be predicted using a prediction model generated by a novel method for the induction of fuzzy rules.
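
As a hypothetical sketch (the metric names NE, NA, and NR are illustrative counts, not necessarily the authors' exact definitions), structural-complexity measures of this kind can be computed directly from an ERD representation:

```python
from dataclasses import dataclass

@dataclass
class ERD:
    entities: set          # entity names
    attributes: dict       # entity name -> list of attribute names
    relationships: list    # (entity_a, entity_b) pairs

def structural_complexity(erd):
    """Simple size-based measures of an ERD's internal structure."""
    return {
        "NE": len(erd.entities),                             # number of entities
        "NA": sum(len(a) for a in erd.attributes.values()),  # number of attributes
        "NR": len(erd.relationships),                        # number of relationships
    }

erd = ERD(
    entities={"Customer", "Order"},
    attributes={"Customer": ["id", "name"], "Order": ["id", "date"]},
    relationships=[("Customer", "Order")],
)
print(structural_complexity(erd))  # {'NE': 2, 'NA': 4, 'NR': 1}
```

Counts like these are internal attributes of the diagram itself; the article's contribution is relating them, via fuzzy-rule induction, to the external maintainability sub-characteristics.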

2019 ◽  
Vol 30 (1) ◽  
pp. 1-21
Author(s):  
Ljubica Kazi ◽  
Zoltan Kazi

Conceptual data models can change during information system development and teamwork phases, which requires constant monitoring with synonym detection. This study elaborates an approach for detecting synonyms in an entity-relationship model based on mapping with ontological elements. A specific data model validator (DMV) tool enables the formalization of the ontology and ER models, as well as their integration with a set of reasoning rules. The reasoning rules enable mapping between formalized elements of the ontology and the ER model, and the extraction of synonyms. Formalized elements and reasoning rules are processed within Prolog to extract the synonyms. An empirical study conducted using university student exams demonstrates the usability of the proposed approach. The results show that the approach is effective in extracting synonyms for all types of conceptual data model elements.
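
A minimal sketch of the core idea, assuming a precomputed mapping from ER element names to ontology concepts (the actual approach formalizes both models and applies Prolog reasoning rules): ER elements that map to the same ontology concept are synonym candidates.

```python
from collections import defaultdict

def find_synonyms(er_to_ontology):
    """er_to_ontology: dict mapping ER element name -> ontology concept.
    Returns concepts to which more than one ER element maps."""
    by_concept = defaultdict(list)
    for element, concept in er_to_ontology.items():
        by_concept[concept].append(element)
    return {c: els for c, els in by_concept.items() if len(els) > 1}

# Hypothetical mapping produced by the ontology/ER alignment step:
mapping = {
    "Student": "Person",
    "Pupil": "Person",
    "Course": "Course",
    "Subject": "Course",
}
print(find_synonyms(mapping))
# {'Person': ['Student', 'Pupil'], 'Course': ['Course', 'Subject']}
```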


2008 ◽  
pp. 1068-1080
Author(s):  
Haya El-Ghalayini ◽  
Mohammed Odeh ◽  
Richard McClatchey

This article studies the differences and similarities between domain ontologies and conceptual data models and the role that ontologies can play in establishing conceptual data models during the process of developing information systems. A mapping algorithm has been proposed and embedded in a special-purpose transformation engine to generate a conceptual data model from a given domain ontology. Both quantitative and qualitative methods have been adopted to critically evaluate this new approach. In addition, this article focuses on evaluating the quality of the generated conceptual data model elements using the Bunge-Wand-Weber and OntoClean ontologies. The results of this evaluation indicate that the generated conceptual data model provides a high degree of accuracy in identifying the substantial domain entities, along with their relationships derived from the consensual semantics of domain knowledge. The results are encouraging and support the potential role that this approach can play in the process of information system development.
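
A simplified sketch of such a mapping, under the assumption that the ontology distinguishes datatype properties (mapped to attributes) from object properties (mapped to relationships); the actual algorithm and transformation engine are considerably more elaborate:

```python
def ontology_to_cdm(ontology):
    """Map ontology classes to entities, datatype properties to attributes,
    and object properties to relationships between entities."""
    entities = {cls: list(props.get("datatype", []))
                for cls, props in ontology["classes"].items()}
    relationships = [(p["domain"], p["range"], name)
                     for name, p in ontology["object_properties"].items()]
    return {"entities": entities, "relationships": relationships}

# Hypothetical toy ontology:
onto = {
    "classes": {
        "Person": {"datatype": ["name", "birthDate"]},
        "Project": {"datatype": ["title"]},
    },
    "object_properties": {
        "worksOn": {"domain": "Person", "range": "Project"},
    },
}
model = ontology_to_cdm(onto)
print(model["entities"])       # {'Person': ['name', 'birthDate'], 'Project': ['title']}
print(model["relationships"])  # [('Person', 'Project', 'worksOn')]
```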



Author(s):  
Mustika Mustika

The Puskesmas Activity Report (LB4) application can help Puskesmas staff deliver and produce information more quickly and efficiently. The old system was still manual: the number of visitors was counted one by one, and errors were frequent. The SDLC method was applied in building the application; the programming language used is Microsoft Visual Basic 6.0, supported by a MySQL database, to improve the quality of the LB4 activity report at Puskesmas Seputih Banyak. Data were collected through observation following the SDLC method, while the application design uses document flow charts, Data Flow Diagrams (DFDs), flowcharts, and Entity-Relationship Diagrams (ERDs), together with the MySQL database. The application allows each officer to produce LB4 reports without written documents, speeding up the distribution of visitor data, and counts incoming visitors automatically, since each section in the LB4 report application receives its visitor totals without manual, one-by-one counting.


Author(s):  
Antonio Badia

This chapter describes transformations between conceptual models (mainly entity-relationship diagrams, but also UML) and data models. It describes algorithms to transform a given conceptual model into a data model for relational, object-relational, object-oriented, and XML databases. Examples are used to illustrate the transformations. While some transformations are well known, others (such as the transformation into XML or into object-relational schemas) have not been investigated in depth. The chapter shows that most of these transformations offer options involving important trade-offs that database designers should be aware of.
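
For instance, the standard mapping of entities to relational tables can be sketched as follows (a hypothetical, much-simplified DDL generator; real transformations involve type selection, constraints, and the trade-offs discussed in the chapter):

```python
def entity_to_ddl(entity, attrs, pk):
    """Generate a minimal CREATE TABLE statement for an ER entity.
    All columns are given a TEXT type for simplicity."""
    cols = ",\n  ".join(f"{a} TEXT" for a in attrs)
    return f"CREATE TABLE {entity} (\n  {cols},\n  PRIMARY KEY ({pk})\n);"

# A one-to-many relationship is commonly mapped by placing a foreign-key
# column on the "many" side (customer_id below):
print(entity_to_ddl("customer", ["id", "name"], "id"))
print(entity_to_ddl("order_t", ["id", "customer_id", "order_date"], "id"))
```

Even this trivial mapping exposes a design choice: the relationship could instead be represented by a separate junction table, which becomes mandatory for many-to-many relationships.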


SIMAK ◽  
2020 ◽  
Vol 18 (02) ◽  
pp. 184-202
Author(s):  
Weli Weli ◽  
Medio Rahmat Gustimuda Taruna

This study analyzes the implementation of a cloud-based official travel filing system at PT. Timah Tbk. The research was conducted from January to April 2019. The system implementation was analyzed following the System Development Life Cycle method, using analytical tools such as flowcharts, Data Flow Diagrams, and Entity-Relationship Diagrams. Data were collected through interviews and observation. The analysis was carried out on the blueprint produced in the previous stage, followed by implementation using the Kec workflow application. With the new system, PT. Timah Tbk has succeeded in increasing the efficiency and effectiveness of the submission, approval, recording, and payment-realization processes. In addition, the new system meets the company's goals of being paperless and cashless.


1995 ◽  
Vol 04 (02n03) ◽  
pp. 237-258 ◽  
Author(s):  
MANFRED A. JEUSFELD ◽  
UWE A. JOHNEN

A logical database schema, e.g. a relational one, is the implementation of a specification, e.g. an entity-relationship diagram. Upcoming new data models require a cost-effective method for mapping from one data model to another. We present an approach in which the mapping process is divided into three parts. The first part reformulates the source and target data models into a so-called meta model. The second part classifies the input schema into the meta model, yielding a data-model-independent representation. The third part synthesizes the output schema in terms of the target data model. The meta model, the data models, and the schemas are all represented in the logic-based formalism of O-Telos. Its ability to quantify across data model concepts is the key to classifying schema elements independently of their data model. A prototype has been implemented on top of the deductive object base manager ConceptBase for the mapping of relational schemas to entity-relationship diagrams. From this, a C++-based tool has been derived as part of a commercial CASE environment for database applications.
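
A toy sketch of the three-part process (the actual approach uses the O-Telos formalism and ConceptBase; the meta-model concept names here are purely illustrative):

```python
# (1) Both data models are reformulated in one shared meta model:
META = {
    "relational.table": "ObjectType",
    "er.entity": "ObjectType",
    "relational.column": "Attribute",
    "er.attribute": "Attribute",
}

def classify(schema, source_model):
    """(2) Classify input schema elements into the meta model,
    yielding a data-model-independent representation."""
    return [(name, META[f"{source_model}.{kind}"]) for name, kind in schema]

def synthesize(classified, target_model):
    """(3) Express the meta-level concepts in the target model's terms."""
    inverse = {concept: key.split(".")[1]
               for key, concept in META.items()
               if key.startswith(f"{target_model}.")}
    return [(name, inverse[concept]) for name, concept in classified]

schema = [("Employee", "table"), ("salary", "column")]
print(synthesize(classify(schema, "relational"), "er"))
# [('Employee', 'entity'), ('salary', 'attribute')]
```

The point of the middle representation is that `classify` and `synthesize` never refer to each other's data model, so adding a new model only requires extending the meta-model mapping.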


Author(s):  
С.И. Рябухин

Domain process models are widely used in database design, specifically in conceptual data modeling. A solution is proposed to the problem of ambiguity in transforming SADT-type process domain models into conceptual data models.


Author(s):  
Andriy Lishchytovych ◽  
Volodymyr Pavlenko

The present article describes the setup, configuration, and usage of key performance indicators (KPIs) for members of project teams involved in the software development life cycle. The KPIs are defined for the full software development life cycle and imply deep integration with task-tracking systems, project code management systems, and a software product quality testing system. For illustration, we used the widely adopted products Atlassian Jira (a development task and bug tracking system) and git (a code management system). The calculation of KPIs is given for a team of three developers, two testing engineers responsible for product quality, one designer, one system administrator, one product manager (responsible for setting business requirements), and one project manager. For the key members of the team, it is suggested to use one integral KPI per role / team member, which reflects the quality of fulfilment of the tasks corresponding to that role. The model of performance indicators is inverse positive: the initial value of each indicator is zero, and the value increases with deviations from the standard performance of the duties inherent in a particular role. The calculation of the proposed KPIs can be fully automated (in particular, using Atlassian Jira and Atlassian Bitbucket (git), or other systems such as Redmine, GitLab, or TestLink), which eliminates the human factor and, once automated, requires no additional calculation effort. Using KPIs in this way allows project managers to eliminate bias, reduce the emotional component, and obtain objective data. The described KPIs can be used to reduce the time required to resolve conflicts in the team, increase productivity, and improve the quality of the software product.
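
A minimal sketch of such an inverse-positive indicator (the deviation categories and weights below are hypothetical examples, not taken from the article):

```python
def role_kpi(deviations, weights):
    """Inverse-positive KPI: 0 means no deviations from the role's expected
    duties; the value grows as weighted deviations accumulate."""
    return sum(weights.get(kind, 1.0) * count
               for kind, count in deviations.items())

# Hypothetical deviation categories and weights for a developer role,
# as might be extracted automatically from a task tracker:
weights = {"missed_deadline": 3.0, "reopened_bug": 2.0, "failed_build": 1.0}
dev = {"missed_deadline": 1, "reopened_bug": 2, "failed_build": 3}

print(role_kpi(dev, weights))  # 3.0 + 4.0 + 3.0 = 10.0
print(role_kpi({}, weights))   # 0 — no deviations, the ideal score
```

Because lower is better and the baseline is zero, comparing team members or sprints reduces to comparing accumulated weighted deviations.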

