Applying machine learning methods to improve the quality of well casing

2020 ◽  
pp. 81-93
Author(s):  
D. V. Shalyapin ◽  
D. L. Bakirov ◽  
M. M. Fattakhov ◽  
A. D. Shalyapina ◽  
A. V. Melekhov ◽  
...  

The article is devoted to the quality of well casing at the Pyakyakhinskoye oil and gas condensate field. Improving casing quality involves several difficulties, for example, the large amount of work needed to relate laboratory studies to actual field data and the difficulty of finding logically determined relationships between individual parameters and the final quality of the well casing. The article presents a new approach to assessing the impact of various parameters, based on a mathematical apparatus that excludes subjective expert assessments, which will later allow the method to be applied to fields with different rock and geological conditions. We propose applying the principles of mathematical processing of large data sets, using neural networks trained to predict the characteristics of well casing quality (continuity of cement contact with the rock and with the casing). Taking the identified factors into account, we developed solutions to improve the tightness of the well casing and the adhesion of cement to the bounding surfaces.
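
The abstract does not specify the model architecture or the training data; as a minimal sketch of the general idea, assuming hypothetical cementing parameters as inputs and a continuity-of-contact index as the target, a regression neural network could be set up as follows.

```python
# Minimal sketch of the idea described above: a neural network mapping
# drilling/cementing parameters to a cement-bond quality index.
# All feature names and data are hypothetical; the article does not
# publish its model architecture or dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: slurry density, displacement rate, annular clearance,
# mud viscosity, casing eccentricity.
X = rng.normal(size=(n, 5))
# Hypothetical target: fraction of the interval with continuous cement contact.
y = np.clip(0.7 + 0.1 * X[:, 0] - 0.05 * X[:, 3] + rng.normal(0, 0.05, n), 0, 1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```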

Author(s):  
David Japikse ◽  
Oleg Dubitsky ◽  
Kerry N. Oliphant ◽  
Robert J. Pelton ◽  
Daniel Maynes ◽  
...  

In the course of developing advanced data processing and advanced performance models, as presented in companion papers, a number of basic scientific and mathematical questions arose. This paper deals with questions such as uniqueness, convergence, statistical accuracy, training, and evaluation methodologies. The process of bringing together large data sets and utilizing them, with outside data supplementation, is considered in detail. After these questions are focused carefully, emphasis is placed on how the new models, based on highly refined data processing, can best be used in the design world. The impact of this work on designs of the future is discussed. It is expected that this methodology will assist designers to move beyond contemporary design practices.


Leonardo ◽  
2012 ◽  
Vol 45 (2) ◽  
pp. 113-118 ◽  
Author(s):  
Rama C. Hoetzlein

This paper follows the development of visual communication through information visualization in the wake of the Fukushima nuclear accident in Japan. While information aesthetics are often applied to large data sets retrospectively, the author developed new works concurrently with an ongoing crisis to examine the impact and social aspects of visual communication while events continued to unfold. The resulting work, Fukushima Nuclear Accident—Radiation Comparison Map, is a reflection of rapidly acquired data, collaborative on-line analysis and reflective criticism of contemporary news media, resolved into a coherent picture through the participation of an on-line community.


2019 ◽  
Author(s):  
Anna C. Gilbert ◽  
Alexander Vargo

Here, we evaluate the performance of a variety of marker selection methods on scRNA-seq UMI counts data. We test on an assortment of experimental and synthetic data sets that range in size from several thousand to one million cells. In addition, we propose several performance measures for evaluating the quality of a set of markers when there is no known ground truth. According to these metrics, most existing marker selection methods show similar performance on experimental scRNA-seq data; thus, the speed of the algorithm is the most important consideration for large data sets. With this in mind, we introduce RANKCORR, a fast marker selection method with strong mathematical underpinnings that takes a step towards sensible multi-class marker selection.
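
The details of RANKCORR are not given in the abstract; the following is only a naive illustration of rank-based marker scoring on synthetic UMI-like counts (hypothetical clusters and genes), not the paper's algorithm.

```python
# Illustrative sketch only: a naive rank-based marker score per cluster,
# not the RANKCORR method from the paper. Data are synthetic UMI-like counts.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
n_cells, n_genes = 1000, 200
counts = rng.poisson(1.0, size=(n_cells, n_genes))
labels = rng.integers(0, 3, size=n_cells)            # 3 hypothetical clusters
# Make the first five genes markers of cluster 0
counts[labels == 0, :5] += rng.poisson(4.0, size=(np.sum(labels == 0), 5))

# Rank each gene's expression across cells, then score a gene for a cluster by
# how much higher its mean rank is inside the cluster than outside it.
ranks = np.apply_along_axis(rankdata, 0, counts)
for k in range(3):
    in_k = labels == k
    score = ranks[in_k].mean(axis=0) - ranks[~in_k].mean(axis=0)
    top = np.argsort(score)[::-1][:5]
    print(f"cluster {k}: candidate marker genes {top.tolist()}")
```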


2019 ◽  
Vol 12 (1) ◽  
pp. 34-40
Author(s):  
Mareeswari Venkatachalaappaswamy ◽  
Vijayan Ramaraj ◽  
Saranya Ravichandran

Background: Many modern applications use information filtering to expose users to a collection of data. Such systems present users with a list of recommended items they might prefer, or predict how highly they would rate those items, so that users can select the items they prefer from the list. Objective: In web service recommendation based on Quality of Service (QoS), predicting QoS values greatly helps people select appropriate web services and discover new ones. Methods: An effective technique for this is Collaborative Filtering (CF). CF greatly helps in service selection and web service recommendation and is the more general way of filtering information across large data sets. In the narrower sense, it is a method of making predictions about a user's interests by collecting taste information from many users. Results: The approach is easy to build and effective for recommendation, predicting missing QoS values for users. It also addresses the scalability problem, since recommendations are based on like-minded users found with PCC, or on clusters formed with KNN, rather than on the entire large data source. Conclusion: In this paper, location-aware collaborative filtering is used to recommend services. The proposed system compares its prediction outcomes and execution time with existing algorithms.
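
As a minimal sketch of the general CF idea described above, assuming a small hypothetical user-service QoS matrix, a missing response-time value can be predicted from the most similar users by Pearson correlation (PCC); this is not the paper's location-aware system.

```python
# Minimal sketch of user-based collaborative filtering with PCC to fill in
# missing QoS values. The matrix is hypothetical; NaN marks an unobserved value.
import numpy as np

# rows = users, columns = web services, values = observed response time (ms)
qos = np.array([[320., 150., np.nan, 480.],
                [300., 160., 210.,   470.],
                [np.nan, 155., 205., 460.],
                [510., 290., 340.,   np.nan]])

def pearson(u, v):
    """PCC computed over the services both users have observed."""
    mask = ~np.isnan(u) & ~np.isnan(v)
    if mask.sum() < 2:
        return 0.0
    a, b = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def predict(user, service, k=2):
    """Predict a missing QoS value from the k most similar users."""
    sims = [(pearson(qos[user], qos[v]), v) for v in range(len(qos))
            if v != user and not np.isnan(qos[v, service])]
    sims.sort(reverse=True)
    top = [(s, v) for s, v in sims[:k] if s > 0]
    if not top:
        return np.nanmean(qos[:, service])     # fall back to the service mean
    return sum(s * qos[v, service] for s, v in top) / sum(s for s, _ in top)

print("predicted QoS for user 0, service 2:", round(predict(0, 2), 1))
```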


2009 ◽  
Vol 42 (5) ◽  
pp. 783-792 ◽  
Author(s):  
A. Morawiec

Progress in experimental methods of serial sectioning and orientation determination opens new opportunities to study inter-crystalline boundaries in polycrystalline materials. In particular, macroscopic boundary parameters can now be measured automatically. With sufficiently large data sets, statistical analysis of interfaces between crystals is possible. The most basic and interesting issue is to find out the probability of occurrence of various boundaries in a given material. In order to define a boundary density function, a model of uniformity is needed. A number of such models can be conceived. It is proposed to use those derived from an assumed metric structure of the interface manifold. Some basic metrics on the manifold are explicitly given, and a number of notions and constructs needed for a strict definition of the boundary density function are considered. In particular, the crucial issue of the impact of symmetries is examined. The treatments of homo- and hetero-phase boundaries differ in some respects, and approaches applicable to each of these two cases are described. In order to make the abstract matter of the paper more accessible, a concrete boundary parameterization is used and some examples are given.
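
The paper's specific metrics and density constructions are not reproduced in the abstract; purely as an illustration (an assumption, not the author's formulation), one schematic symmetry-aware distance on the five-parameter boundary space, with misorientation $M$ and boundary-plane normal $\mathbf{n}$, could be written as follows.

```latex
% Illustrative only: a schematic symmetry-aware distance on the five-parameter
% boundary space (misorientation M, boundary-plane normal n); not one of the
% metrics defined in the paper.
\[
  d\bigl((M_1,\mathbf{n}_1),(M_2,\mathbf{n}_2)\bigr)
  \;=\; \min
  \sqrt{\,\omega\!\left(M_1 M_2^{-1}\right)^{2}
        \;+\; \theta\!\left(\mathbf{n}_1,\mathbf{n}_2\right)^{2}\,},
\]
where $\omega(R)$ is the rotation angle of $R$, $\theta(\cdot,\cdot)$ is the
angle between the boundary-plane normals, and the minimum runs over all
crystallographically equivalent descriptions of the two boundaries.
```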


Psychology ◽  
2020 ◽  
Author(s):  
Jeffrey Stanton

The term “data science” refers to an emerging field of research and practice that focuses on obtaining, processing, visualizing, analyzing, preserving, and re-using large collections of information. A related term, “big data,” has been used to refer to one of the important challenges faced by data scientists in many applied environments: the need to analyze large data sources, in certain cases using high-speed, real-time data analysis techniques. Data science encompasses much more than big data, however, as a result of many advancements in cognate fields such as computer science and statistics. Data science has also benefited from the widespread availability of inexpensive computing hardware—a development that has enabled “cloud-based” services for the storage and analysis of large data sets. The techniques and tools of data science have broad applicability in the sciences. Within the field of psychology, data science offers new opportunities for data collection and data analysis that have begun to streamline and augment efforts to investigate the brain and behavior. The tools of data science also enable new areas of research, such as computational neuroscience. As an example of the impact of data science, psychologists frequently use predictive analysis as an investigative tool to probe the relationships between a set of independent variables and one or more dependent variables. While predictive analysis has traditionally been accomplished with techniques such as multiple regression, recent developments in the area of machine learning have put new predictive tools in the hands of psychologists. These machine learning tools relax distributional assumptions and facilitate exploration of non-linear relationships among variables. These tools also enable the analysis of large data sets by opening options for parallel processing. In this article, a range of relevant areas from data science is reviewed for applicability to key research problems in psychology including large-scale data collection, exploratory data analysis, confirmatory data analysis, and visualization. This bibliography covers data mining, machine learning, deep learning, natural language processing, Bayesian data analysis, visualization, crowdsourcing, web scraping, open source software, application programming interfaces, and research resources such as journals and textbooks.
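
As a small illustration of the point about machine learning relaxing distributional assumptions and capturing non-linear relationships (synthetic data and hypothetical variable names, not an example drawn from the bibliography), one can compare cross-validated fits of linear regression and a random forest.

```python
# Illustration only: ordinary linear regression versus a machine-learning model
# (random forest) on a synthetic, deliberately non-linear "behavioral" outcome.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 4))            # e.g., survey scales or reaction-time features
# Non-linear outcome: an interaction plus a quadratic term plus noise
y = X[:, 0] * X[:, 1] + X[:, 2] ** 2 + rng.normal(0, 0.5, n)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200,
                                                            random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```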


2020 ◽  
Vol 65 (4) ◽  
pp. 608-627
Author(s):  
Dennis W. Carlton ◽  
Ken Heyer

In this essay, we evaluate the impact of the revolution that has occurred in antitrust and in particular the growing role played by economic analysis. Section II describes exactly what we think that revolution was. There were actually two revolutions. The first was the use by economists and other academics of existing economic insights together with the development of new economic insights to improve the understanding of the consequences of certain forms of market structure and firm behaviors. It also included the application of advanced empirical techniques to large data sets. The second was a revolution in legal jurisprudence, as both the federal competition agencies and the courts increasingly accepted and relied on the insights and evidence emanating from this economic research. Section III explains the impact of the revolution on economists, consulting firms, and research in the field of industrial organization. One question it addresses is why, if economics is being so widely employed and is so useful, one finds skilled economists so often in disagreement. Section IV asks whether the revolution has been successful or whether, as some critics claim, it has gone too far. Our view is that it has generally been beneficial though, as with most any policy, it can be improved. Section V discusses some of the hot issues in antitrust today and, in particular, what some of its critics say about the state of the revolution. The final section concludes with the hope that those wishing to turn back the clock to the antitrust and regulatory policies of fifty years ago more closely study that experience, otherwise they risk having its demonstrated deficiencies be repeated by throwing out the revolution’s baby with the bathwater.


1990 ◽  
Vol 6 (2) ◽  
pp. 220-228 ◽  
Author(s):  
Robert W. Dubois

Modeling death rates has been suggested as a potential method to screen hospitals and identify superior and substandard providers. This article begins with a review of one hospital death rate study and focuses upon its findings and limitations. It also explores the inherent limitations in the use of large data sets to assess quality of care.


1990 ◽  
Vol 6 (2) ◽  
pp. 229-238 ◽  
Author(s):  
Susan Desharnais

This article examines how large data sets can be used for evaluating the effects of health policy changes and for flagging providers with potential quality problems. An example is presented, illustrating how three risk-adjusted measures of hospital performance were developed using patient discharge abstracts. Advantages and disadvantages of this approach are discussed.
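
The article's three risk-adjusted measures are not reproduced in the abstract; the sketch below shows one generic approach of this kind, indirect standardization with a logistic risk model on hypothetical discharge data, as an assumption about what such a measure can look like.

```python
# Sketch of one common risk-adjustment approach (indirect standardization), not
# the measures developed in the article. A logistic model fit on discharge-level
# risk factors gives each patient an expected probability of death; a hospital's
# observed/expected (O/E) ratio then flags potential quality problems.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 5000
df = pd.DataFrame({
    "hospital": rng.integers(0, 10, n),          # hypothetical hospital IDs
    "age": rng.normal(70, 10, n),
    "comorbidity_index": rng.poisson(2, n),
    "emergency_admission": rng.integers(0, 2, n),
})
logit = -6 + 0.05 * df.age + 0.4 * df.comorbidity_index + 0.8 * df.emergency_admission
df["died"] = rng.random(n) < 1 / (1 + np.exp(-logit))

features = ["age", "comorbidity_index", "emergency_admission"]
risk = LogisticRegression(max_iter=1000).fit(df[features], df["died"])
df["expected"] = risk.predict_proba(df[features])[:, 1]

oe = df.groupby("hospital").agg(observed=("died", "sum"),
                                expected=("expected", "sum"))
oe["o_e_ratio"] = oe["observed"] / oe["expected"]
print(oe.round(2))    # an O/E ratio well above 1 suggests a potential problem
```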

