data marts
Recently Published Documents


TOTAL DOCUMENTS: 87 (five years: 16)
H-INDEX: 7 (five years: 1)

2022 ◽  
Author(s):  
Zainab Alkhayat ◽  
Kadhim B. S. Aljanabi

2021 ◽  
Vol 27 (10) ◽  
pp. 542-549
Author(s):  
G. Ch. Nabibayova ◽  

The article proposes an approach to the development of an electronic demographic decision support system using Data Warehouse (DW) technology and interactive analytical processing (OLAP). This makes it possible to conduct high-level demographic research and to support decision-makers in the demographic sphere. The article notes that demography is an interdisciplinary field of research and is regarded as a complex science. Each branch of demography has many indicators, and a sample list of these indicators is presented. The main characteristics of the DW that should be taken into account when developing its architecture are stated; among them are the main defining characteristics of Big Data: volume, velocity, variety, veracity, variability, visualization, value, etc. To use a large and constantly growing amount of information more rationally and efficiently, and to ensure fast query execution in such a system, a bus of interconnected data marts (DM) is proposed as the DW architecture. One advantage of data marts is that they lend themselves to distributed parallel data processing, so this architecture allows results to be generated much faster; it is based on the MapReduce distributed computing model and the Hadoop project. In addition, to use large amounts of data effectively, OLAP operations such as roll-up and drill-down are proposed, as well as fuzzy set theory based on the technique of computing with words. The article also shows the practical application of interconnected DM: an OLAP cube is built on top of these DM, and OLAP operations make it possible to view the cube in different slices and to provide aggregate data.
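
To make the roll-up and drill-down operations mentioned above concrete, here is a minimal sketch on a toy demographic fact table using pandas; the column names and figures are illustrative and are not taken from the article.

```python
import pandas as pd

# Illustrative demographic fact table (not the article's actual data mart):
# one row per (region, district, year) with a population measure.
facts = pd.DataFrame({
    "region":     ["North", "North", "North", "South", "South", "South"],
    "district":   ["A", "A", "B", "C", "C", "D"],
    "year":       [2019, 2020, 2020, 2019, 2020, 2020],
    "population": [120_000, 123_500, 98_000, 140_200, 141_700, 87_300],
})

# Roll-up: aggregate from the district level up to the region level.
rollup = facts.groupby(["region", "year"], as_index=False)["population"].sum()

# Drill-down: go back to the finer (district) grain within one region slice.
drilldown = facts[facts["region"] == "North"].groupby(
    ["region", "district", "year"], as_index=False
)["population"].sum()

print(rollup)
print(drilldown)
```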


2021 ◽  
Vol 11 (18) ◽  
pp. 8651
Author(s):  
Vladimir Belov ◽  
Alexander N. Kosenkov ◽  
Evgeny Nikulchev

One of the most popular methods for building analytical platforms involves the use of the concept of data lakes. A data lake is a storage system in which the data are kept in their original format, which makes it difficult to run analytics or present aggregated data. To solve this issue, data marts are used: stores of highly specialized information focused on the requests of a particular department or line of an organization's work. This article presents a study of big data storage formats in the Apache Hadoop platform when used to build data marts.
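
As a rough illustration of building a data mart on Hadoop from data kept in its raw lake format, here is a minimal PySpark sketch; the paths, column names, and choice of Parquet and ORC are assumptions for illustration, not the specific datasets or formats evaluated in the article.

```python
from pyspark.sql import SparkSession

# Hypothetical paths and columns, for illustration only.
spark = SparkSession.builder.appName("mart-format-sketch").getOrCreate()

# Read raw data from the "lake" (JSON as an example of data kept in its
# original format).
raw = spark.read.json("hdfs:///lake/events/")

# Project and aggregate only what one department needs, i.e. a narrow mart.
mart = (
    raw.select("event_date", "department", "amount")
       .groupBy("event_date", "department")
       .sum("amount")
)

# Persist the same mart in two columnar formats to compare size and scan speed.
mart.write.mode("overwrite").parquet("hdfs:///marts/sales_parquet")
mart.write.mode("overwrite").orc("hdfs:///marts/sales_orc")
```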


2021 ◽  
Vol 17 (3) ◽  
pp. 22-43
Author(s):  
Sonali Ashish Chakraborty

Data from multiple sources are loaded into the organization's data warehouse for analysis. Since some OLAP queries are fired quite frequently on the warehouse data, their execution time is reduced by storing the queries and their results in a relational database, referred to as the materialized query database (MQDB). If the tables, fields, functions, and criteria attributes of an input query and a stored query are the same but the criteria values specified in the WHERE or HAVING clause differ, the queries are considered non-synonymous. In the present research, the results of non-synonymous queries are generated by reusing the existing stored results after applying UNION or MINUS operations on them, which reduces the execution time of non-synonymous queries. When the input query's criteria values are a superset of the stored values, a UNION operation is applied; when they are a subset, a MINUS operation is applied. Incremental processing of existing stored results, if required, is performed using data marts.
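
A minimal Python sketch of the superset/subset reuse rule described above; the helper name and toy criteria values are hypothetical, and real matching would operate on full SQL criteria rather than bare value sets.

```python
# Minimal sketch of the non-synonymous reuse idea (not the paper's code).
# A stored result covers a set of criteria values; an input query that is
# otherwise identical but has different values reuses it with UNION/MINUS.

def reuse_plan(stored_values: set, input_values: set) -> str:
    """Decide how to build the input query's result from the stored one."""
    if input_values == stored_values:
        return "reuse stored result as-is (synonymous query)"
    if input_values > stored_values:
        missing = input_values - stored_values
        return f"stored result UNION result for extra values {sorted(missing)}"
    if input_values < stored_values:
        surplus = stored_values - input_values
        return f"stored result MINUS rows matching values {sorted(surplus)}"
    return "no full overlap: execute the input query normally"

# Example: the stored query filtered on year IN (2018, 2019).
print(reuse_plan({2018, 2019}, {2018, 2019, 2020}))  # superset -> UNION
print(reuse_plan({2018, 2019}, {2018}))              # subset   -> MINUS
```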


2021 ◽  
Vol 17 (2) ◽  
pp. 85-105
Author(s):  
Sonali Ashish Chakraborty ◽  
Jyotika Doshi

The enterprise data warehouse stores an enormous amount of data collected from multiple sources for analytical processing and strategic decision making. The analytical processing is done using online analytical processing (OLAP) queries, where performance in terms of result retrieval time is an important factor. The major existing approaches for retrieving results from a data warehouse are multidimensional data cubes and materialized views, which incur high storage, processing, and maintenance costs. The present study strives to achieve a simpler and faster query result retrieval approach from the data warehouse with reduced storage space and minimal maintenance cost. The execution time of frequent queries is saved in the present approach by storing their results for reuse when the query is fired the next time. The executed OLAP queries are stored along with the query results and the necessary metadata in a relational database referred to as the materialized query database (MQDB). The tables, fields, functions, relational operators, and criteria used in the input query are matched with those of a stored query, and if they are found to be the same, the input query and the stored query are considered synonymous. The stored query is then checked for incremental updates: if no incremental update is required, the existing stored results are fetched from MQDB; if an incremental update of results is required, only the incremental result is processed from the data marts. The performance of the MQDB model is evaluated, and it is observed that, using MQDB, a significant reduction in query processing time is achieved compared to the major existing approaches. The developed model will be useful for organizations keeping their historical records in a data warehouse.
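
The matching-and-reuse flow can be sketched roughly as follows; `QuerySignature`, `StoredQuery`, and the `execute`/`incremental` callables are hypothetical stand-ins for the paper's actual MQDB schema and refresh logic.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QuerySignature:
    tables: frozenset
    fields: frozenset
    functions: frozenset
    operators: frozenset
    criteria: frozenset          # attribute/value pairs from WHERE/HAVING

@dataclass
class StoredQuery:
    signature: QuerySignature
    result_rows: list
    needs_incremental_update: bool = False

mqdb: dict = {}  # signature -> StoredQuery

def answer(sig: QuerySignature, execute, incremental):
    """Return a result, reusing a stored synonymous query when possible."""
    stored = mqdb.get(sig)
    if stored is None:                      # no synonymous query stored yet
        rows = execute(sig)
        mqdb[sig] = StoredQuery(sig, rows)
        return rows
    if stored.needs_incremental_update:     # refresh only the new portion
        stored.result_rows += incremental(sig)
        stored.needs_incremental_update = False
    return stored.result_rows               # serve directly from MQDB
```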


Author(s):  
Hoemra N. Halvadi Assistant Professor

In today's world, organizations hold large amounts of significant data that they need in order to counter fierce competition, extend market share, and improve profitability. For that, they require information organized so that it is subject-oriented, integrated, non-volatile, and time-variant: a data warehouse, a repository that gathers data from various sources and merges it across the whole enterprise. The data mart is a newer, growing area of data science used for rapid deployment of decision-support capability, with the fast return on investment demanded by the pace of existing business. Basically, a data mart is derived from the data warehouse and gives faster access to data; conversely, data marts can be merged to create a data warehouse. This paper reviews the design and integration of data marts and the various techniques used for merging data marts.


2020 ◽  
Author(s):  
◽  
Julia Juro Barrios ◽  
Fredy Enrique Salazar Timoteo

The proposed business intelligence solution aims to give the organization under study the opportunity to improve decision-making by its executives, since it will deliver corporate dashboards with consolidated, centralized information that supports varied queries based on multidimensional structure models, better known as OLAP cubes. Business intelligence establishes a process for handling dispersed information, in particular through the technique known as ETL, which produces exploitable data to be presented in data marts or OLAP cubes; the tool used to present the indicators, reports, and dashboards built from the information gathered in the data mart is Power BI. The business and the Mining Services Planning process were therefore analyzed with the support of the Zachman framework; the current state of the process is then described and the future state is proposed. Once the proposal is accepted, the project outcome is defined through business rules, functional and non-functional requirements, constraints, functional drivers, and system use cases. With all this prior information, the architectural drivers are developed based on the Kimball methodology, design concepts, architectural styles, and tactics; the specifications of the system use cases and the prototypes of the proposed dashboards are added. Finally, project management is presented with the support of the PMBOK Guide.


2020 ◽  
Vol 16 (4) ◽  
pp. 95-111
Author(s):  
Wallace Anacleto Pinheiro ◽  
Geraldo Xexéo ◽  
Jano Moreira de Souza ◽  
Ana Bárbara Sapienza Pinheiro

This work proposes a methodology, applied to repositories modeled using star schemas such as data marts, to discover relevant time series relations. It applies a set of measures related to association, correlation, and causality to create connections among data. In this context, the research proposes a new causality function, based on peaks and values, that relates time series coherently. To evaluate the approach, the authors run a set of experiments exploring time series on American Tegumentary Leishmaniasis, a neglected disease that affects several Brazilian cities, together with time series on the climate of some cities in Brazil. The authors populate data marts with these data, and the proposed methodology generates a set of relations linking the notifications of this disease to variations in temperature and rainfall.
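
For a flavor of the kind of relation the methodology looks for, here is a tiny pandas sketch computing lagged correlations between toy monthly notification and climate series; the numbers are invented, and this plain correlation is only one of the simpler measures mentioned above, not the peak-based causality function proposed in the paper.

```python
import pandas as pd

# Invented monthly series for one city; not the paper's Leishmaniasis or
# climate data marts.
df = pd.DataFrame({
    "notifications": [12, 15, 9, 22, 30, 28, 18, 11, 8, 14, 25, 27],
    "temperature_c": [26, 27, 25, 29, 31, 30, 28, 26, 24, 27, 30, 31],
    "rainfall_mm":   [80, 95, 60, 140, 180, 170, 120, 70, 50, 90, 160, 175],
})

# Lagged correlation between a climate variable and case notifications.
for lag in range(0, 4):
    corr = df["notifications"].corr(df["temperature_c"].shift(lag))
    print(f"lag={lag} months, corr(notifications, temperature)={corr:.2f}")
```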


2020 ◽  
pp. 228-236
Author(s):  
G.Ch. Nabibekova ◽  

The article suggests an approach to the development of an electronic demographic decision support system using a data warehouse and interactive analytical processing (OLAP). This makes it possible to conduct research on demographic processes at a high level and to support decision makers in the field of demography. Because demography has many branches and a large number of indicators, as discussed in the article, a data mart bus architecture with linked dimensional data marts is proposed as the data warehouse architecture. The article also shows the practical application of this approach using two data marts as an example. OLAP cubes are built on the basis of these data marts; OLAP operations make it possible to view the cubes in various slices and to provide aggregate data.


2020 ◽  
Vol 3 (1) ◽  
pp. 26-39
Author(s):  
Refed Adnan ◽  
Talib M. J. Abbas

Precise, timely, unified information, along with quick and effective query response times, is the fundamental requirement for the success of any collection of independent data marts (a data warehouse) that forms a fact constellation (galaxy) schema. Because materialized views consume storage space, materializing all views is practically impossible; picking suitable materialized views (MVs) is therefore one of the key decisions in designing a fact constellation schema for optimal efficiency. This study presents a framework for picking the best materialized views using the Quantum Particle Swarm Optimization (QPSO) algorithm, a stochastic algorithm, in order to achieve an effective combination of good query response time, low query processing cost, and low view maintenance cost. The results reveal that the proposed QPSO-based method outperforms other techniques, as shown by computing the ratio of query response time on the base tables to the response time of the same queries on the materialized views. Executing queries on the base tables took about five times longer than executing them on the materialized views: the response time of queries through MV access was 0.084 seconds, while direct-access queries took 0.422 seconds. This means that query performance through materialized-view access is 402.38% better than direct access against the logical data warehouse.
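
As a loose illustration of the view-selection problem itself (not the paper's QPSO algorithm), the following sketch scores candidate sets of materialized views by saved query cost minus maintenance cost under a storage limit, using plain random search in place of quantum particle swarm optimization; all costs and limits are made up.

```python
import random

# Illustrative, made-up costs for five candidate views.
QUERY_COST = [40, 25, 60, 35, 50]   # query cost saved if view i is materialized
MAINT_COST = [10, 8, 20, 12, 15]    # maintenance cost if view i is materialized
STORAGE    = [ 5, 3, 9, 4, 6]       # storage units used by view i
STORAGE_LIMIT = 15

def fitness(bits):
    """Higher is better: saved query cost minus maintenance, within storage."""
    if sum(s for s, b in zip(STORAGE, bits) if b) > STORAGE_LIMIT:
        return float("-inf")
    saved = sum(q for q, b in zip(QUERY_COST, bits) if b)
    maint = sum(m for m, b in zip(MAINT_COST, bits) if b)
    return saved - maint

def stochastic_search(n_views=5, iters=2000):
    """Random search stand-in for QPSO: keep the best bit vector seen."""
    best = [0] * n_views
    for _ in range(iters):
        cand = [random.randint(0, 1) for _ in range(n_views)]
        if fitness(cand) > fitness(best):
            best = cand
    return best

best = stochastic_search()
print("materialize views:", [i for i, b in enumerate(best) if b],
      "fitness:", fitness(best))
```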

