Pengembangan Model Perencanaan Himpunan Data dan Aplikasi Instrumentasi Berbasis Pola Tujuh Belas Plus (Development of a Planning Model for Data Sets and Instrumentation Applications Based on the Seventeen-Plus Pattern)

Nadwa ◽  
2014 ◽  
Vol 8 (2) ◽  
pp. 193
Author(s):  
Indra Kusuma

This paper describes the development of a planning model for data sets and instrumentation applications based on the seventeen-plus pattern for guidance and counselling (BK) teachers in SMP/MTs (junior secondary schools) in Bondowoso. The results of this study indicate that a counselling activity planning model using the seventeen-plus pattern approach is highly necessary so that implementation matches students' needs. Many BK teachers in SMP/MTs in Bondowoso still lack the range of data that should be held for the provision of counselling services. The teachers consider it important to have a variety of data sets and instrumentation applications for the smooth running of counselling services (mean score = 3.23), whereas their evaluation of the implementation of existing data sets and instruments was still very low (mean score = 1.14). The teachers strongly need the proposed planning model for data-set and instrumentation-application activities (mean score = 4.28), and the promoted model was rated excellent (mean score = 4.47).

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3406
Author(s):  
Jie Jiang ◽  
Yin Zou ◽  
Lidong Chen ◽  
Yujie Fang

Precise localization and pose estimation in indoor environments are commonly required in a wide range of applications, including robotics, augmented reality, and navigation and positioning services. Such applications can be addressed via visual localization using a pre-built 3D model. The growth of the search space in large scenes can be handled by first retrieving candidate images and subsequently estimating the pose. The majority of current deep learning-based image retrieval methods require labeled data, which increases annotation costs and complicates data acquisition. In this paper, we propose an unsupervised hierarchical indoor localization framework that integrates an unsupervised variational autoencoder (VAE) with a visual Structure-from-Motion (SfM) approach in order to extract global and local features. During localization, global features are used for image retrieval at the scene-map level to obtain candidate images, and local features are subsequently used to estimate the pose from 2D-3D matches between the query and candidate images. Only RGB images are used as input to the proposed localization system, which is both convenient and challenging. Experimental results reveal that the proposed method localizes images within 0.16 m and 4° on the 7-Scenes data sets and 32.8% of images within 5 m and 20° on the Baidu data set. Furthermore, the proposed method achieves higher precision than existing advanced methods.
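As a rough sketch of the retrieval stage described above, the following Python fragment ranks scene-map images by cosine similarity between VAE latent descriptors; the function and array names are hypothetical stand-ins, not the authors' implementation.

```python
# Hypothetical sketch of global-descriptor retrieval: rank scene-map images by
# cosine similarity to the query's VAE latent vector (all names illustrative).
import numpy as np

def retrieve_candidates(query_descriptor, db_descriptors, k=5):
    """Return indices of the k scene-map images most similar to the query."""
    q = query_descriptor / np.linalg.norm(query_descriptor)
    db = db_descriptors / np.linalg.norm(db_descriptors, axis=1, keepdims=True)
    similarity = db @ q                  # cosine similarity to every database image
    return np.argsort(-similarity)[:k]   # top-k candidates passed to pose estimation
```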


2007 ◽  
Vol 37 (10) ◽  
pp. 2010-2021 ◽  
Author(s):  
Samuel D. Pittman ◽  
B. Bruce Bare ◽  
David G. Briggs

Forest planning models have increased in size and complexity as planners address a growing array of economic, ecological, and societal issues. Hierarchical production planning offers a means of better managing these large and complex models by decomposing them into sets of smaller, linked models. In this paper, a Lagrangian relaxation formulation and a modified Dantzig–Wolfe decomposition (column generation) routine are used to solve a hierarchical forest planning model that maximizes the net present value of harvest incomes while recognizing specific geographical units that are subject to harvest flow and green-up constraints. This allows the planning model to consider forest-wide constraints such as harvest flow, as well as to address separate subproblems for each contiguous management zone for which detailed spatial plans are computed. The approach taken in this paper differs from past approaches to hierarchical forest planning because we start with a single model and derive a hierarchical model that addresses integer subproblems using Dantzig–Wolfe decomposition. The decomposition approach is demonstrated by analyzing a set of randomly generated planning problems constructed from a large forest and land inventory data set.
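To make the decomposition idea concrete, a generic Lagrangian relaxation of the forest-wide linking constraints can be written as follows (the notation is ours, not the paper's): with zone-level decisions x_z, zone NPV coefficients c_z, and forest-wide harvest-flow constraints \(\sum_z A_z x_z \le b\),

\[
L(\lambda) \;=\; \lambda^{\top} b \;+\; \sum_{z} \max_{x_z \in X_z} \left( c_z^{\top} x_z - \lambda^{\top} A_z x_z \right), \qquad \lambda \ge 0,
\]

so that, for fixed multipliers \(\lambda\), the problem separates into one independent subproblem per management zone, and minimizing \(L(\lambda)\) over \(\lambda \ge 0\) yields an upper bound on the original maximum NPV.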


2019 ◽  
Author(s):  
Matthew Gard ◽  
Derrick Hasterok ◽  
Jacqueline Halpin

Abstract. Dissemination and collation of geochemical data are critical to promote rapid, creative and accurate research and to place new results in an appropriate global context. To this end, we have assembled a global whole-rock geochemical database, with other associated sample information and properties, sourced from various existing databases and supplemented with numerous individual publications and corrections. Currently the database stands at 1,023,490 samples with varying amounts of associated information, including major and trace element concentrations, isotopic ratios, and location data. The spatial and temporal distribution of samples is quite heterogeneous; however, temporal coverage is improved over some previous database compilations, particularly for ages older than ~1000 Ma. Also included are a wide range of computed geochemical indices, physical property estimates and naming schemes applied to a major-element-normalized version of the geochemical data for quick reference. This compilation will be useful for geochemical studies requiring extensive data sets, in particular those investigating secular temporal trends. The addition of physical properties, estimated from sample chemistry, represents a unique contribution relative to otherwise similar geochemical databases. The data are published in .csv format for simple distribution, but exist in a form suitable for database management systems (e.g. SQL): unique keys already exist, so the data can be manipulated with conventional analysis tools such as MATLAB®, Microsoft® Excel, or R, or uploaded to a relational database management system for easy querying and management. This data set will continue to grow, and we encourage readers to contact us, or the maintainers of the source databases it draws on, about any data that are yet to be included. The data files described in this paper are available at https://doi.org/10.5281/zenodo.2592823 (Gard et al., 2019).
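As an illustration of the kind of quick query the .csv distribution supports, the following pandas sketch bins samples by age; the file name and column names ("age", "sio2") are guesses for illustration, not the compilation's actual schema.

```python
# Hypothetical query against the compiled .csv; file and column names assumed.
import pandas as pd

df = pd.read_csv("geochem_compilation.csv")

# Secular-trend style summary: median SiO2 in 100 Myr bins for samples > 1000 Ma.
old = df[df["age"] > 1000].copy()
old["age_bin"] = (old["age"] // 100) * 100
print(old.groupby("age_bin")["sio2"].median())
```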


Cancers ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 12
Author(s):  
Jose M. Castillo T. ◽  
Muhammad Arif ◽  
Martijn P. A. Starmans ◽  
Wiro J. Niessen ◽  
Chris H. Bangma ◽  
...  

The computer-aided analysis of prostate multiparametric MRI (mpMRI) could improve significant-prostate-cancer (PCa) detection. Various deep-learning- and radiomics-based methods for significant-PCa segmentation or classification have been reported in the literature. To assess the generalizability of the performance of these methods, the use of various external data sets is crucial. While deep-learning and radiomics approaches have been compared on the same single-center data set, a comparison of the two approaches across data sets from different centers and different scanners is lacking. The goal of this study was to compare the performance of a deep-learning model with that of a radiomics model for significant-PCa diagnosis across various patient cohorts. We included data from two consecutive patient cohorts from our own center (n = 371 patients) and two external sets, of which one was a publicly available patient cohort (n = 195 patients) and the other contained data from patients from two hospitals (n = 79 patients). Using multiparametric MRI (mpMRI), the radiologists' tumor delineations and pathology reports were collected for all patients. During training, one of our patient cohorts (n = 271 patients) was used for both the deep-learning- and radiomics-model development, and the three remaining cohorts (n = 374 patients) were kept as unseen test sets. The performances of the models were assessed in terms of the area under the receiver-operating-characteristic curve (AUC). Whereas internal cross-validation showed a higher AUC for the deep-learning approach, the radiomics model obtained AUCs of 0.88, 0.91 and 0.65 on the independent test sets, compared to AUCs of 0.70, 0.73 and 0.44 for the deep-learning model. Our radiomics model, based on delineated regions, was thus a more accurate tool for significant-PCa classification on the three unseen test sets than the fully automated deep-learning model.
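For reference, the AUC comparison above boils down to a computation like the following sketch; the labels and scores here are toy placeholders, not study data.

```python
# Toy AUC computation (placeholder labels/scores, not the study's data).
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 1, 1, 0, 1]               # pathology-confirmed significant PCa
y_score = [0.1, 0.4, 0.8, 0.7, 0.3, 0.9]   # model output per patient
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
```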


2018 ◽  
Author(s):  
Brian Hie ◽  
Bryan Bryson ◽  
Bonnie Berger

Abstract. Researchers are generating single-cell RNA sequencing (scRNA-seq) profiles of diverse biological systems [1-4] and every cell type in the human body [5]. Leveraging these data to gain unprecedented insight into biology and disease will require assembling heterogeneous cell populations across multiple experiments, laboratories, and technologies. Although methods for scRNA-seq data integration exist [6,7], they often naively merge data sets together even when the data sets have no cell types in common, leading to results that do not correspond to real biological patterns. Here we present Scanorama, inspired by algorithms for panorama stitching, which overcomes the limitations of existing methods to enable accurate, heterogeneous scRNA-seq data set integration. Our strategy identifies and merges the shared cell types among all pairs of data sets and is orders of magnitude faster than existing techniques. We use Scanorama to combine 105,476 cells from 26 diverse scRNA-seq experiments across 9 different technologies into a single comprehensive reference, demonstrating how Scanorama can be used to obtain a more complete picture of cellular function across a wide range of scRNA-seq experiments.
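A minimal usage sketch, assuming the interface of the published scanorama Python package (parallel lists of expression matrices and gene lists); the toy matrices below are random placeholders.

```python
# Hedged sketch of Scanorama usage; data are random placeholders.
import numpy as np
import scanorama

# Two toy "experiments": cells x genes matrices plus their gene-name lists.
datasets = [np.random.rand(100, 3), np.random.rand(80, 3)]
genes_list = [["GENE1", "GENE2", "GENE3"], ["GENE1", "GENE2", "GENE3"]]

# Panorama-style integration: returns per-data-set embeddings in a shared
# space together with the common gene set.
integrated, genes = scanorama.integrate(datasets, genes_list)
print(len(integrated), integrated[0].shape, len(genes))
```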


2017 ◽  
Vol 44 (2) ◽  
pp. 203-229 ◽  
Author(s):  
Javier D Fernández ◽  
Miguel A Martínez-Prieto ◽  
Pablo de la Fuente Redondo ◽  
Claudio Gutiérrez

The publication of semantic web data, commonly represented in Resource Description Framework (RDF), has experienced outstanding growth over the last few years. Data from all fields of knowledge are shared publicly and interconnected in active initiatives such as Linked Open Data. However, despite the increasing availability of applications managing large-scale RDF information such as RDF stores and reasoning tools, little attention has been given to the structural features emerging in real-world RDF data. Our work addresses this issue by proposing specific metrics to characterise RDF data. We specifically focus on revealing the redundancy of each data set, as well as common structural patterns. We evaluate the proposed metrics on several data sets, which cover a wide range of designs and models. Our findings provide a basis for more efficient RDF data structures, indexes and compressors.
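As a toy illustration of structural metrics of this kind, the following rdflib sketch computes triple, subject, and predicate counts and a mean out-degree; the input file is hypothetical, and the paper's actual metrics are more refined.

```python
# Toy structural metrics over an RDF graph (input file name is hypothetical).
from collections import Counter
from rdflib import Graph

g = Graph()
g.parse("dataset.ttl", format="turtle")

subjects = Counter(s for s, _, _ in g)
predicates = Counter(p for _, p, _ in g)
print(f"triples={len(g)}  subjects={len(subjects)}  predicates={len(predicates)}")
print(f"mean subject out-degree = {len(g) / len(subjects):.2f}")
```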


2017 ◽  
Author(s):  
João C. Marques ◽  
Michael B. Orger

Abstract. How to partition a data set into a set of distinct clusters is a ubiquitous and challenging problem. The fact that data vary widely in features such as cluster shape, cluster number, density distribution, background noise, outliers and degree of overlap makes it difficult to find a single algorithm that can be broadly applied. One recent method, clusterdp, based on the search for density peaks, can be applied successfully to cluster many kinds of data, but it is not fully automatic and fails on some simple data distributions. We propose an alternative approach, clusterdv, which estimates density dips between points and allows robust determination of cluster number and distribution across a wide range of data, without any manual parameter adjustment. We show that this method is able to solve a range of synthetic and experimental data sets, where the underlying structure is known, and identifies consistent and meaningful clusters in new behavioral data.

Author summary. It is common that natural phenomena produce groupings, or clusters, in data that can reveal the underlying processes. However, the form of these clusters can vary arbitrarily, making it challenging to find a single algorithm that identifies their structure correctly without prior knowledge of the number of groupings or their distribution. We describe a simple clustering algorithm that is fully automatic and able to correctly identify the number and shape of groupings in data of many types. We expect this algorithm to be useful in finding unknown natural phenomena present in data from a wide range of scientific fields.
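To convey the density-dip intuition (an illustrative toy, not the authors' clusterdv implementation), the following sketch measures how far a kernel density estimate dips along the segment joining two candidate cluster centres:

```python
# Toy density-dip measure: sample the estimated density along the segment
# joining two candidate centres and report the relative depth of the valley.
import numpy as np
from scipy.stats import gaussian_kde

def density_dip(data, a, b, n_steps=50):
    """Relative dip in KDE density along the line from point a to point b."""
    kde = gaussian_kde(data.T)
    line = np.linspace(a, b, n_steps)      # points along the connecting segment
    dens = kde(line.T)
    return 1.0 - dens.min() / min(dens[0], dens[-1])  # 0 = no dip, near 1 = deep valley

# Two well-separated blobs should show a pronounced dip between their centres.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(6, 1, (200, 2))])
print(density_dip(data, np.zeros(2), np.full(2, 6.0)))
```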


2021 ◽  
Vol 143 (11) ◽  
Author(s):  
Mohsen Faramarzi-Palangar ◽  
Behnam Sedaee ◽  
Mohammad Emami Niri

Abstract. The correct definition of rock types plays a critical role in reservoir characterization, simulation, and field development planning. In this study, we use the critical pore size (linf) as an approach for reservoir rock typing. Two linf relations were separately derived based on two permeability prediction models and then merged to derive a generalized linf relation. The proposed rock typing methodology includes two main parts: in the first part, we determine an appropriate constant coefficient, and in the second part, we perform reservoir rock typing based on two different scenarios. The first scenario forms groups of rocks using statistical analysis, and the second forms groups of rocks with similar capillary pressure curves. This approach was applied to three data sets: two data sets were used to determine the constant coefficient, and one data set was used to show the applicability of the linf method for rock typing in comparison with the flow zone indicator (FZI).
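The paper's specific linf correlations are not reproduced here, but the first grouping scenario can be illustrated with a generic statistical clustering of hypothetical linf values:

```python
# Generic sketch of scenario 1 (statistical grouping of rocks); the linf
# values are synthetic and the number of rock types is arbitrary.
import numpy as np
from sklearn.cluster import KMeans

linf = np.array([0.8, 0.9, 2.1, 2.3, 5.0, 5.4, 5.1]).reshape(-1, 1)  # hypothetical, microns
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(np.log10(linf))
print(labels)  # each label corresponds to one candidate rock type
```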


2018 ◽  
Vol 33 (4) ◽  
pp. 266-269 ◽  
Author(s):  
Marcus H. Mendenhall

This work provides a short summary of techniques for formally correct handling of statistical uncertainties in Poisson-statistics-dominated data, with emphasis on X-ray powder diffraction patterns. Correct assignment of uncertainties for low counts is documented. Further, we describe a technique for adaptively rebinning such data sets to provide more uniform statistics across a pattern with a wide range of count rates, from a few (or no) counts in a background bin to on-peak regions with many counts. This permits better plotting of data and analysis of a smaller number of points in a fitting package, without significant degradation of the information content of the data set. Examples of the effect of this on a diffraction data set are given.
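A minimal sketch of one common adaptive-rebinning strategy (merging neighbouring bins until a minimum count is reached); this is a generic illustration under our own assumptions, not the paper's specific scheme.

```python
# Merge adjacent bins left-to-right until each merged bin holds at least
# `min_counts` events, evening out statistics across the pattern.
import numpy as np

def adaptive_rebin(counts, min_counts=25):
    """Return merged bins, each with at least min_counts events."""
    merged, acc = [], 0
    for c in counts:
        acc += c
        if acc >= min_counts:
            merged.append(acc)
            acc = 0
    if acc > 0:
        if merged:
            merged[-1] += acc   # fold any remainder into the last bin
        else:
            merged.append(acc)
    return np.array(merged)

pattern = np.random.default_rng(1).poisson(lam=3, size=100)  # sparse background
print(adaptive_rebin(pattern, min_counts=25))
```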


1999 ◽  
Vol 5 (S2) ◽  
pp. 74-75
Author(s):  
P.K. Carpenter

Both precision and accuracy are central to quantitative microanalysis. While precision may be evaluated from X-ray counting statistics and replicate measurement, the determination of analytical accuracy requires well-characterized standards, of which there are few that span a wide range of compositions in binary and ternary systems. The accuracy of silicate mineral analysis has previously been studied via measurement of α factors at multiple accelerating potentials and the subsequent evaluation of correction algorithms and mass absorption coefficient (mac) data sets. This approach has been extended in this study to the In2O3-Ga2O3 and HgTe-CdTe systems. Single crystals of In2O3, Ga2O3, and an In-Ga oxide of unknown composition were used to evaluate accuracy in the In2O3-Ga2O3 binary, using the Ga Kα, Ga Lα, and In Lα X-ray lines, with WDS measurements performed at 15, 20, and 25 kV relative to the In2O3 and Ga2O3 standards (see Table I). The Ga Kα line exhibits minimal absorption, has no fluorescence correction in this system, and is not critically dependent on the correction algorithm or mac data set used.
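For context, the binary α-factor approach referenced above is conventionally expressed through the Bence-Albee hyperbolic relation (standard notation, our addition, not reproduced from this paper):

\[
\frac{C_A}{K_A} \;=\; \alpha_{AB} \;+\; \left(1 - \alpha_{AB}\right) C_A ,
\]

where \(C_A\) is the weight fraction of element A, \(K_A\) is the measured k-ratio relative to the end-member standard, and \(\alpha_{AB}\) is the binary α factor; measuring \(K_A\) at several accelerating potentials tests the consistency of the correction algorithm and mac data set.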

