Sampling strategy: Recently Published Documents

Total documents: 1960 (last five years: 789)
H-index: 53 (last five years: 7)

In the past two decades, the number of cross-border mergers and acquisitions in ASEAN has progressively expanded as the region has become a desirable market for trade and investment. This study therefore aimed to identify the factors contributing to the success of corporate acquisitions, investigating the role of acquisition management capability in strategic integration and acquisition. A non-probability sampling strategy was used to collect information from 51 firms. A structured questionnaire with a five-point Likert scale was designed, and the latent variables were tested using confirmatory factor analysis. The quantitative analysis employed structural equation modeling. The results show that the structural model had a Goodness of Fit Index value indicating that all three latent variables and the independent variables were valid. The findings indicate that acquisition management capability has a central role in advancing the overall integration of the acquiring firm in the ASEAN context.
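As a rough illustration of the analysis pipeline described above (confirmatory factor analysis followed by structural equation modeling on Likert-scale data), the following Python sketch uses the semopy package. The latent variables follow the abstract, but the indicator names (amc1–amc3, si1–si3, acq1–acq3), the data file, and the structural paths are assumptions; the study's actual measurement model is not reported.

```python
# Sketch of a CFA + SEM analysis of Likert-scale survey data with semopy.
# Latent variables: AMC (acquisition management capability), SI (strategic
# integration), ACQ (acquisition success); indicator names are hypothetical.
import pandas as pd
import semopy

model_desc = """
AMC =~ amc1 + amc2 + amc3
SI  =~ si1 + si2 + si3
ACQ =~ acq1 + acq2 + acq3
SI  ~ AMC
ACQ ~ AMC + SI
"""

df = pd.read_csv("survey_responses.csv")   # hypothetical file: 51 firms x Likert items
model = semopy.Model(model_desc)
model.fit(df)

print(model.inspect())             # factor loadings and path coefficients
print(semopy.calc_stats(model))    # fit indices, including the GFI
```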


2022 ◽  
Vol 12 (2) ◽  
pp. 602
Author(s):  
Weihua Li ◽  
Zhuang Miao ◽  
Jing Mu ◽  
Fanming Li

Superpixel segmentation has become a crucial pre-processing tool to reduce computation in many computer vision applications. In this paper, a superpixel extraction algorithm based on a seed strategy of contour encoding (SSCE) for infrared images is presented, which can generate superpixels with high boundary adherence and compactness. Specifically, SSCE addresses the problem that superpixels cannot adapt themselves to the image content. First, a contour encoding map is obtained by ray scanning the binary edge map, which ensures that each connected domain belongs to the same homogeneous region. Second, according to the seed sampling strategy, each seed point is extracted from the contour encoding map. The initial seed set, which is adaptively scattered according to the local structure, improves boundary adherence, especially for small regions. Finally, the initial superpixels, constrained by the image contours, are generated by clustering and then refined by merging similar adjacent superpixels in the region adjacency graph (RAG) to reduce redundant superpixels. Experimental results on a self-built infrared dataset and the public datasets BSD500 and 3Dircadb demonstrate the method's generalization to grayscale and medical images, and its superiority over several state-of-the-art methods in terms of accuracy and compactness.
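The following Python sketch illustrates only the general idea of content-adaptive seed placement from an edge map (one seed per connected homogeneous region). It is a simplified stand-in, not the authors' contour-encoding or ray-scanning implementation, and the file name and thresholds are assumptions.

```python
# Sketch: content-adaptive seed placement from a binary edge map (illustrative only,
# not the SSCE contour-encoding step).
import numpy as np
import cv2
from scipy import ndimage

img = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input image
edges = cv2.Canny(img, 50, 150)                            # binary edge map

# Connected homogeneous regions = connected components of the non-edge area.
regions, n_regions = ndimage.label(edges == 0)

# One seed per region, placed at the interior point farthest from any edge,
# so that small regions still receive their own seed.
dist = cv2.distanceTransform((edges == 0).astype(np.uint8), cv2.DIST_L2, 3)
seeds = []
for label in range(1, n_regions + 1):
    mask = regions == label
    if mask.sum() < 16:          # skip tiny noise components (threshold is a guess)
        continue
    d = np.where(mask, dist, -1.0)
    seeds.append(np.unravel_index(np.argmax(d), d.shape))

print(f"{len(seeds)} adaptive seeds extracted from {n_regions} connected regions")
# These seeds would then initialize the clustering step, followed by merging
# similar neighbouring superpixels in a region adjacency graph.
```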


Author(s):  
Xueyan Cheng ◽  
Liang Zhang

This study aimed to explore the health service needs of empty nest families from a household perspective. A multistage random sampling strategy was used to select 1606 individuals from 803 empty nest households. A questionnaire asked each individual about their health service needs, and the consistency rate was calculated from the agreement between the two members' answers within each household. We used a collective household model to analyze individuals' shared health service needs at the family level. The consistency rates of health service needs in empty nest households were 40.30% for diagnosis and treatment services (H1), 89.13% for chronic disease management services (H2), 98.85% for telemedicine care (H3), 58.93% for physical examination services (H4), 57.95% for health education services (H5), 72.84% for mental healthcare (H6), and 63.40% for traditional Chinese medicine services (H7). Health service needs can therefore be studied at the household level. For individuals in empty nest households, needs for H1, H3, H4, H5, and H7 were significantly correlated with each other (r = 0.404, 0.177, 0.286, 0.265, 0.220; p < 0.001). This is helpful for health management in primary care in rural China: such concordance can relieve pressure on primary care and make doctor–patient communication more effective. Households that took the members' shared needs as the household's needs (n = 746) most often reported H4 (43.3%) and H5 (24.9%) and almost always had a male householder (94.0%) or at least one member with a chronic disease (82.4%). Households that took one member's needs as the household's needs (n = 46) most often reported H1 (56.5%), H4 (65.2%), H5 (63.0%), and H7 (45.7%), and that member was usually the householder (90.5%) or had experienced illness within the previous two weeks (100.0%). In conclusion, family members' roles and health status play an important part in the health service needs of empty nest households. Physical examination and health education services are the two services most needed by empty nest households and are suitable for delivery within a household unit.
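As a minimal sketch of how per-service consistency rates like those above could be computed, the snippet below assumes a hypothetical long-format table with one row per individual and yes/no answers per service; the study's actual data layout is not given in the abstract.

```python
# Sketch: per-service consistency rate between the two members of each household.
# Column names (hh_id, member, H1..H7) and member labels are hypothetical.
import pandas as pd

df = pd.read_csv("needs.csv")             # columns: hh_id, member, H1, H2, ..., H7
services = [f"H{i}" for i in range(1, 8)]

# Reshape so each household has the two members' answers side by side.
wide = df.pivot(index="hh_id", columns="member", values=services)
consistency = {
    s: (wide[(s, "member1")] == wide[(s, "member2")]).mean()
    for s in services
}
print((pd.Series(consistency) * 100).round(2))   # % of households answering alike
```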


2022 ◽  
Vol 15 (1) ◽  
pp. 117-129
Author(s):  
Mark T. Richardson ◽  
David R. Thompson ◽  
Marcin J. Kurowski ◽  
Matthew D. Lebsock

Abstract. Upcoming spaceborne imaging spectrometers will retrieve clear-sky total column water vapour (TCWV) over land at a horizontal resolution of 30–80 m. Here we show how to obtain, from these retrievals, exponents describing the power-law scaling of sub-kilometre horizontal variability in clear-sky bulk planetary boundary layer (PBL) water vapour (q), accounting for realistic non-vertical sunlight paths. We trace direct solar beam paths through large eddy simulations (LES) of shallow convective PBLs and show that retrieved 2-D water vapour fields are “smeared” in the direction of the solar azimuth. This changes the horizontal spatial scaling of the field primarily in that direction, and we address this by calculating exponents perpendicular to the solar azimuth, that is to say flying “across” the sunlight path rather than “towards” or “away” from the Sun. Across 23 LES snapshots, at solar zenith angle SZA = 60° the mean bias in calculated exponent is 38 ± 12 % (95 % range) along the solar azimuth, while following our strategy it is 3 ± 9 % and no longer significant. Both bias and root-mean-square error decrease with lower SZA. We include retrieval errors from several sources, including (1) the Earth Surface Mineral Dust Source Investigation (EMIT) instrument noise model, (2) requisite assumptions about the atmospheric thermodynamic profile, and (3) spatially nonuniform aerosol distributions. By only considering the direct beam, we neglect 3-D radiative effects such as light scattered into the field of view by nearby clouds. However, our proposed technique is necessary to counteract the direct-path effect of solar geometries and obtain unique information about sub-kilometre PBL q scaling from upcoming spaceborne spectrometer missions.
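A minimal numpy sketch of estimating a directional power-law scaling exponent from a 2-D field via a second-order structure function computed along one axis only (as one would do perpendicular to the solar azimuth). The paper's actual estimator and retrieval chain are not reproduced here; the toy field and lag range are purely illustrative.

```python
# Sketch: directional scaling exponent of a 2-D water vapour field q, fitted from
# the second-order structure function S2(r) ~ r**zeta along axis 1 (one direction).
import numpy as np

def scaling_exponent(q, min_lag=1, max_lag=32):
    """Fit the power-law exponent of S2(r) along axis 1, with lags in pixels."""
    lags = np.arange(min_lag, max_lag + 1)
    s2 = np.array([np.mean((q[:, r:] - q[:, :-r]) ** 2) for r in lags])
    zeta, _ = np.polyfit(np.log(lags), np.log(s2), 1)   # slope in log-log space
    return zeta

rng = np.random.default_rng(0)
q = np.cumsum(rng.standard_normal((256, 256)), axis=1)   # toy correlated field
print(scaling_exponent(q))                               # ~1 for a random-walk field
```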


2022 ◽  
Author(s):  
Seunghwan Park ◽  
Hae-Wwan Lee ◽  
Jongho Im

We consider the binary classification of imbalanced data. A dataset is imbalanced if the class proportions are heavily skewed. Imbalanced data classification is often challenging, especially for high-dimensional data, because unequal classes deteriorate classifier performance. Undersampling the majority class or oversampling the minority class are popular methods for constructing balanced samples and improving classification performance. However, many existing sampling methods cannot easily be extended to high-dimensional or mixed data that include categorical variables, because they often require approximating the attribute distributions, which becomes another critical issue. In this paper, we propose a new sampling strategy employing raking and relabeling procedures, such that attribute values of the majority class are imputed for the values of the minority class in the construction of balanced samples. The proposed algorithms achieve performance comparable to existing popular methods while being more flexible with respect to data shape and attribute size. The sampling algorithm is attractive in practice because it does not require density estimation for synthetic data generation in oversampling and handles mixed-type variables without difficulty. In addition, the proposed sampling strategy is robust to the choice of classifier, in the sense that classification performance is not sensitive to which classifier is used.
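For context, the sketch below implements only the plain random-oversampling baseline that work of this kind is typically compared against; it is not the proposed raking-and-relabeling procedure, whose details are not given in the abstract.

```python
# Sketch: random oversampling of the minority class for imbalanced binary data.
# This is a standard baseline, NOT the raking-and-relabeling method of the paper.
import numpy as np

def random_oversample(X, y, rng=None):
    """Duplicate minority-class rows until both classes have equal counts."""
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    n_needed = counts.max() - counts.min()
    idx_min = np.flatnonzero(y == minority)
    extra = rng.choice(idx_min, size=n_needed, replace=True)
    X_bal = np.vstack([X, X[extra]])
    y_bal = np.concatenate([y, y[extra]])
    return X_bal, y_bal

X = np.random.randn(1000, 20)
y = (np.random.rand(1000) < 0.05).astype(int)    # roughly 5 % minority class
X_bal, y_bal = random_oversample(X, y, rng=0)
print(np.bincount(y_bal))                        # approximately equal class counts
```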


2022 ◽  
Vol 12 ◽  
Author(s):  
Zhendong Liu ◽  
Yurong Yang ◽  
Dongyan Li ◽  
Xinrong Lv ◽  
Xi Chen ◽  
...  

Background: Macromolecule structure prediction remains a fundamental challenge in bioinformatics. Over the past several decades, the Rosetta framework has provided solutions to diverse challenges in computational biology. However, it remains challenging to model RNA tertiary structures effectively, even when the de novo modeling of RNA is reduced to solving well-defined small puzzles. Methods: In this study, we introduce a stepwise Monte Carlo parallelization (SMCP) algorithm for RNA tertiary structure prediction. Millions of conformations were randomly searched using the Monte Carlo algorithm under the stepwise ansatz hypothesis, and SMCP uses a parallel mechanism for efficient sampling. Moreover, to achieve better prediction accuracy and completeness, the modeling results were evaluated and post-processed. Results: A benchmark of nine single-stranded RNA loops drawn from riboswitches establishes the general ability of the algorithm to model RNA with high accuracy and integrity, including six motifs that cannot be solved by knowledge-mining-based modeling algorithms. Experimental results show that the modeling accuracy of the SMCP algorithm reaches 0.14 Å, and the modeling integrity on this benchmark is extremely high. Conclusion: SMCP is an ab initio modeling algorithm that substantially outperforms previous algorithms in the Rosetta framework, especially in the accuracy and completeness of the models. This work is expected to provide new research directions for macromolecular structure prediction and a theoretical basis for developments in the biomedical field.
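The sketch below shows the generic pattern of parallelized Metropolis Monte Carlo conformational sampling (independent random searches run in parallel, best-scoring result kept) with a toy energy function over torsion angles. It illustrates the general idea only and is not the Rosetta stepwise machinery or the authors' SMCP implementation.

```python
# Sketch: parallel Metropolis Monte Carlo searches over a toy torsion-angle energy.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def toy_energy(torsions):
    """Stand-in scoring function over a vector of backbone torsion angles (degrees)."""
    return float(np.sum(1.0 - np.cos(np.radians(torsions))))

def mc_search(seed, n_steps=10_000, n_torsions=12, temperature=1.0):
    rng = np.random.default_rng(seed)
    state = rng.uniform(-180.0, 180.0, n_torsions)
    energy = toy_energy(state)
    for _ in range(n_steps):
        trial = state + rng.normal(0.0, 10.0, n_torsions)      # small random move
        e_trial = toy_energy(trial)
        # Metropolis acceptance: always accept downhill, sometimes accept uphill.
        if e_trial < energy or rng.random() < np.exp((energy - e_trial) / temperature):
            state, energy = trial, e_trial
    return energy, state

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(mc_search, range(8)))           # 8 parallel searches
    best_energy, best_state = min(results, key=lambda r: r[0])
    print(f"best energy found: {best_energy:.3f}")
```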


2022 ◽  
Author(s):  
Ameilia Kusumawardani ◽  
Annisa Dewi Anggraeni ◽  
Sonya Ivanda Fiorella ◽  
Andi Putri Maharani ◽  
Moses Glorino Rumambo Pandin

Background: The Revolution 4.0 era affects many aspects of life, including books, and humans must be able to adapt to the changes it may bring. This study aims to determine whether physical books have been replaced by e-books in the digital era that accompanies the 4.0 revolution, and how technology has been applied to the development of book formats. The method used is a descriptive qualitative approach with interviews and a purposive sampling strategy. The researchers first defined the research focus to make data collection easier, then collected data by selecting sources according to the criterion that informants had used physical books or e-books. The research sample consisted of students aged 15 to 25 years. Data were analyzed using the Miles and Huberman model: data collection, data reduction, data presentation, and drawing conclusions. The results indicate that most students prefer physical books as their reading reference, on the grounds that physical books are more comfortable to use; in practice, however, e-books are used more frequently. There is a concern that public literacy will decline if physical books are completely replaced by e-books. Recommendation: This research is expected to increase the effectiveness of e-book use as a form of axiological implementation in the field of technological development. Limitations: Data collection was limited to respondents in an academic environment, so the coverage is narrow.


Author(s):  
Benjamin Ries ◽  
Karl Normak ◽  
R. Gregor Weiß ◽  
Salomé Rieder ◽  
Emília P. Barros ◽  
...  

Abstract. The calculation of relative free-energy differences between different compounds plays an important role in drug design to identify potent binders for a given protein target. Most rigorous methods based on molecular dynamics simulations estimate the free-energy difference between pairs of ligands. Thus, the comparison of multiple ligands requires the construction of a “state graph”, in which the compounds are connected by alchemical transformations. The computational cost can be optimized by reducing the state graph to a minimal set of transformations. However, this may require individual adaptation of the sampling strategy if a transformation process does not converge in a given simulation time. In contrast, path-free methods like replica-exchange enveloping distribution sampling (RE-EDS) allow the sampling of multiple states within a single simulation without the pre-definition of alchemical transition paths. To optimize sampling and convergence, a set of RE-EDS parameters needs to be estimated in a pre-processing step. Here, we present an automated procedure for this step that determines all required parameters, improving the robustness and ease of use of the methodology. To illustrate the performance, the relative binding free energies are calculated for a series of checkpoint kinase 1 inhibitors containing challenging transformations in ring size, opening/closing, and extension, which reflect changes observed in scaffold hopping. The simulation of such transformations with RE-EDS can be conducted with conventional force fields and, in particular, without soft bond-stretching terms.
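For orientation, the free-energy estimator underlying EDS-type methods can be written as below; this is the standard relation from the general EDS literature (with β = 1/k_BT), shown for context rather than quoted from this paper.

```latex
% Standard (RE-)EDS free-energy relation: the relative free energy between end
% states i and j follows from their differences to the reference state R.
\begin{align}
  \Delta G_{ji}
    &= \Delta G_{jR} - \Delta G_{iR} \\
    &= -\frac{1}{\beta}\,
       \ln\frac{\left\langle e^{-\beta\,(V_j - V_R)} \right\rangle_R}
               {\left\langle e^{-\beta\,(V_i - V_R)} \right\rangle_R},
\end{align}
% where $V_R$ is the enveloping reference-state potential sampled in the single
% simulation and $\langle\cdot\rangle_R$ is an ensemble average over that state.
```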

