local window
Recently Published Documents


TOTAL DOCUMENTS: 42 (FIVE YEARS: 12)

H-INDEX: 6 (FIVE YEARS: 1)

2021 ◽ Author(s): Tong Li ◽ Jufeng Zhao ◽ Xiaohui Wu ◽ Haifeng Mao ◽ Guangmang Cui

PLoS ONE ◽ 2021 ◽ Vol 16 (5) ◽ pp. e0251417 ◽ Author(s): Mun Bae Lee ◽ Geon-Ho Jahng ◽ Hyung Joong Kim ◽ Oh-In Kwon

Magnetic resonance electrical properties tomography (MREPT) aims to visualize the internal high-frequency conductivity distribution at the Larmor frequency using B1 transceive phase data. From the magnetic field perturbation caused by the electric field associated with the radiofrequency (RF) magnetic field, the high-frequency conductivity and permittivity distributions inside the human brain have been reconstructed based on Maxwell's equations. Starting from Maxwell's equations, the complex permittivity can be described by a second-order elliptic partial differential equation. Established reconstruction algorithms have focused on simplifying and/or regularizing this elliptic partial differential equation to reduce noise artifacts. Using the nonlinear relationship among Maxwell's equations, the measured magnetic field, and the conductivity distribution, we design a deep learning model to visualize the high-frequency conductivity in the brain, derived directly from the measured magnetic flux density. The designed moving-local-window multi-layer perceptron (MLW-MLP) neural network slides a local window of neighboring voxels around each voxel and predicts the high-frequency conductivity distribution in each local window. The MLW-MLP takes as its input layer a family of multiple groups, consisting of the gradients and Laplacian of the measured B1 phase data in the local window, and its output layer returns the conductivity values in each local window. By taking a non-local-mean filtering approach in the local window, we reconstruct a noise-suppressed conductivity image while maintaining spatial resolution. To verify the proposed method, we used B1 phase datasets acquired from eight human subjects (five subjects for the training procedure and three subjects for predicting the conductivity in the brain).
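The per-voxel input construction described in the abstract can be sketched as follows. This is a hypothetical 2D analogue, not the paper's implementation: the function name, window size, and feature ordering are assumptions; only the idea of gathering gradients and a Laplacian of the B1 phase over a sliding local window is taken from the text.

```python
import numpy as np

def phase_features(phase, win=3):
    """Build per-voxel MLP input features from a measured B1 phase map:
    gradients and Laplacian, gathered over a sliding local window
    of `win` x `win` neighboring voxels around each voxel."""
    gy, gx = np.gradient(phase)              # first-order derivatives
    lap = (np.roll(phase, 1, 0) + np.roll(phase, -1, 0)
           + np.roll(phase, 1, 1) + np.roll(phase, -1, 1) - 4 * phase)
    maps = np.stack([gx, gy, lap])           # feature maps, shape (3, H, W)
    r = win // 2
    H, W = phase.shape
    feats = np.zeros((H - 2 * r, W - 2 * r, 3 * win * win))
    for i in range(r, H - r):
        for j in range(r, W - r):
            patch = maps[:, i - r:i + r + 1, j - r:j + r + 1]
            feats[i - r, j - r] = patch.ravel()  # one MLP input vector
    return feats
```

In the paper these window features feed an MLP that outputs the conductivity values of the window; only the input-layer construction is shown here.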


2021 ◽ pp. 107196 ◽ Author(s): Putu Desiana Wulaning Ayu ◽ Sri Hartati ◽ Aina Musdholifah ◽ Detty S. Nurdiati

2021 ◽ Vol 9 ◽ pp. 447-461 ◽ Author(s): Eunsol Choi ◽ Jennimaria Palomaki ◽ Matthew Lamm ◽ Tom Kwiatkowski ◽ Dipanjan Das ◽ ...

Abstract: Models for question answering, dialogue agents, and summarization often interpret the meaning of a sentence in a rich context and use that meaning in a new context. Taking excerpts of text can be problematic, as key pieces may not be explicit in a local window. We isolate and define the problem of sentence decontextualization: taking a sentence together with its context and rewriting it to be interpretable out of context, while preserving its meaning. We describe an annotation procedure, collect data on the Wikipedia corpus, and use the data to train models to automatically decontextualize sentences. We present preliminary studies that show the value of sentence decontextualization in a user-facing task, and as preprocessing for systems that perform document understanding. We argue that decontextualization is an important subtask in many downstream applications, and that the definitions and resources provided can benefit tasks that operate on sentences that occur in a richer context.


Author(s): Yenewondim Biadgie Sinshahw

In medical and scientific imaging, lossless image compression is recommended because the loss of minor details relevant to medical diagnosis can lead to a wrong diagnosis. On the other hand, lossy compression of medical images is required in the long run because a huge quantity of medical data needs remote storage, which in turn makes searching for and transferring an image slow. Instead of choosing between lossless and lossy image compression methods, near-lossless image compression can be used to reconcile the two conflicting requirements. In previous work, an edge adaptive hierarchical interpolation (EAHINT) method was proposed for resolution-scalable lossless compression of images. In this paper, it is enhanced for scalable near-lossless image compression. The interpolator of this algorithm switches among one-directional, multi-directional, and non-directional linear interpolators adaptively, based on the strength of the edge in a 3×3 local causal context of the current pixel being predicted. The strength of the edge in the local window is estimated using the variance of the pixels in the local window. Although the actual predictors are still linear functions, the switching mechanism handles non-linear structures like edges. Simulation results demonstrate that the improved interpolation algorithm achieves a better compression ratio than the original EAHINT algorithm and the JPEG-LS image compression standard.
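The variance-based switching described above can be sketched as follows. This is a minimal illustration, not the EAHINT implementation: the thresholds and the toy predictors are assumptions; only the rule "estimate edge strength as the variance of the 3×3 causal context, then pick an interpolator class" comes from the abstract.

```python
import numpy as np

def switch_interpolator(context, t_low=10.0, t_high=100.0):
    """Pick an interpolator class from the variance of a 3x3 causal
    context (thresholds t_low/t_high are illustrative, not the paper's)."""
    var = np.var(context)
    if var < t_low:            # smooth area: non-directional interpolation
        return "non-directional"
    if var < t_high:           # moderate edge: blend several directions
        return "multi-directional"
    return "one-directional"   # strong edge: interpolate along the edge

def predict(context, mode):
    """Toy linear predictors over a 3x3 causal neighbourhood (top-left
    ordering); the real EAHINT predictors differ in detail."""
    if mode == "non-directional":
        return context.mean()
    if mode == "multi-directional":
        return 0.5 * (context[1, 1] + 0.5 * (context[0, 1] + context[1, 0]))
    return context[1, 0]       # one-directional: follow one neighbour
```

The predictors stay linear; only the data-dependent switch makes the overall scheme adapt to edges, matching the abstract's remark.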


2020 ◽ Author(s): Wei-Lin Huang ◽ Fei Gao ◽ Jian-Ping Liao ◽ Xiao-Yu Chuai

Abstract: The local slopes contain rich information about the reflection geometry, which can be used to facilitate many subsequent procedures such as seismic velocity picking, normal moveout correction, time-domain imaging, and structural interpretation. Generally, slope estimation is achieved by manually picking or by scanning the seismic profile along various slopes. We present here a deep learning-based technique to automatically estimate the local slope map from the seismic data. In the presented technique, three convolution layers are used to extract structural features in a local window, and three fully connected layers serve as a classifier to predict the slope of the central point of the local window based on the extracted features. The deep learning network is trained using only synthetic seismic data; it can, however, accurately estimate local slopes within real seismic data. We examine its feasibility using simulated and real seismic data. The estimated local slope maps demonstrate the successful performance of the synthetically trained network.
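Since the network treats slope estimation as classification, its output still has to be turned into a slope value for the window's central point. A plausible decoding step is sketched below; the bin range, bin count, and the probability-weighted mean (rather than a plain argmax) are assumptions, not details given in the abstract.

```python
import numpy as np

def decode_slope(probs, slope_min=-0.5, slope_max=0.5):
    """Map classifier output to a slope estimate: `probs` holds one
    probability per discretized slope bin (last axis); the estimate is
    the probability-weighted mean of the bin centres."""
    n = probs.shape[-1]
    centres = np.linspace(slope_min, slope_max, n)  # assumed bin layout
    return (probs * centres).sum(axis=-1)
```

A hard argmax would snap estimates to the bin grid; the weighted mean gives sub-bin resolution when the classifier spreads mass over adjacent bins.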


2020 ◽ Vol 10 (17) ◽ pp. 5799 ◽ Author(s): Yuanwei Yang ◽ Shuhao Ran ◽ Xianjun Gao ◽ Mingwei Wang ◽ Xi Li

Current automatic shadow compensation methods often suffer because their contrast improvement processes are not self-adaptive and, consequently, their results do not adequately represent the real objects. The study presented in this paper designed a new automatic shadow compensation framework based on improvements to the Wallis principle, which includes an intensity coefficient and a stretching coefficient to enhance contrast and brightness more efficiently. An automatic parameter calculation strategy is also part of this framework; it is based on searching for and matching similar feature points around shadow boundaries. Finally, a compensation combination strategy merges regional compensation with local-window compensation of the pixels in each shadow to improve the shaded information in a balanced way. All these strategies work together to customize suitable compensation for the condition of each region and pixel. The intensity component I is also automatically strengthened through the customized compensation model. Color correction is executed in a way that avoids the color bias caused by over-compensated component values, thereby better reflecting shaded information. Images with cloud shadows and ground-object shadows were used to test our method against six other state-of-the-art methods. The comparison results indicate that our method compensated for shaded information more effectively, accurately, and evenly than the other methods by customizing suitable models for each shadow and pixel at a reasonable time cost. Brightness, contrast, and object color in shaded areas were approximately equalized with non-shaded regions to produce a shadow-free image.
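The Wallis principle the framework builds on can be illustrated with a minimal sketch: map the shadow region's mean and standard deviation toward those of a matched non-shadow reference region. The function below is an assumption-laden simplification; `c` and `b` merely stand in for the paper's stretching and intensity coefficients, whose actual computation (from matched feature points) is not reproduced here.

```python
import numpy as np

def wallis_compensate(shadow, reference, c=1.0, b=1.0):
    """Wallis-style compensation sketch: rescale a shadow region so its
    statistics approach a non-shadow reference region's. `c` stretches
    contrast, `b` scales brightness (both illustrative)."""
    m_s, s_s = shadow.mean(), shadow.std()
    m_r, s_r = reference.mean(), reference.std()
    gain = c * s_r / max(s_s, 1e-6)          # contrast stretch
    return (shadow - m_s) * gain + b * m_r   # brightness shift
```

With `c = b = 1` the compensated pixels exactly match the reference mean and standard deviation; the paper's contribution lies in choosing these coefficients adaptively per region and per pixel.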


Geophysics ◽ 2020 ◽ Vol 85 (3) ◽ pp. N17-N26 ◽ Author(s): Valentin Tschannen ◽ Matthias Delescluse ◽ Norman Ettrich ◽ Janis Keuper

Extracting horizon surfaces from key reflections in a seismic image is an important step of the interpretation process. Interpreting a reflection surface in a geologically complex area is a difficult and time-consuming task that requires an understanding of the 3D subsurface geometry. Common methods to help automate the process are based on tracking waveforms in a local window around manual picks. Those approaches often fail when the wavelet character lacks lateral continuity or when reflections are truncated by faults. We have formulated horizon picking as a multiclass segmentation problem and solved it by supervised training of a 3D convolutional neural network. We design an efficient architecture to analyze the data over multiple scales while keeping memory and computational needs at a practical level. To allow for uncertainties in the exact location of the reflections, we use a probabilistic formulation to express the horizons' positions. By using a masked loss function, we give interpreters flexibility when picking the training data. Our method allows experts to interactively improve the picking results by fine-tuning the network in the more complex areas. We also determine how our algorithm can be used to extend horizons to the prestack domain by following reflections across offset planes, even in the presence of residual moveout. We validate our approach on two field data sets and show that it yields accurate results on nontrivial reflectivity while being trained from a workable amount of manually picked data. Initial training of the network takes approximately 1 h, and the fine-tuning and prediction on a large seismic volume take a minute at most.
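The masked loss mentioned above is what lets sparse manual picks serve as training data: unpicked voxels simply contribute nothing to the gradient. A minimal numpy sketch follows; the shapes and the mean-over-masked-voxels convention are assumptions, not the paper's exact formulation.

```python
import numpy as np

def masked_cross_entropy(probs, labels, mask):
    """Masked cross-entropy for segmentation-style horizon picking.
    probs: (N, C) softmax outputs; labels: (N,) class ids;
    mask: (N,) in {0, 1} -- voxels the interpreter did not pick get 0
    and are excluded from the loss."""
    eps = 1e-12
    nll = -np.log(probs[np.arange(len(labels)), labels] + eps)
    denom = max(mask.sum(), 1)            # avoid division by zero
    return (nll * mask).sum() / denom
```

Because unmasked voxels are averaged only over the picked set, interpreters can label as few or as many traces as they like without biasing the loss scale.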


Author(s): Rajiv Verma ◽ Rajoo Pandey

The shape of the local window plays a vital role in estimating the original signal variance, which is used to shrink the noisy wavelet coefficients in wavelet-based image denoising algorithms. This paper presents anisotropic-shaped region-based Wiener filtering (ASRWF) and BayesShrink (ASRBS) algorithms, which exploit the region characteristics to estimate the original signal variance using a statistical approach. The proposed approach divides the region centered on a noisy wavelet coefficient into various non-overlapping subregions. A Euclidean distance-based measure is used to obtain the similarities between the reference subregion and adjacent subregions. An appropriate threshold value is estimated by applying a statistical approach to these distances, and the sets of similar and dissimilar subregions are obtained from the defined region. An anisotropic-shaped region is thus obtained by discarding the dissimilar subregions. The variance of every similar subregion is calculated and then averaged to estimate the original signal variance, which shrinks the noisy wavelet coefficients effectively. Finally, the estimated signal variance is used in the Wiener filtering and BayesShrink algorithms to improve denoising performance. The performance of the proposed algorithms is analyzed qualitatively and quantitatively on standard images at different noise levels.
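The two stages described above (select similar subregions, then shrink with the averaged variance) can be sketched as follows. This is a 1D simplification under stated assumptions: subregions are summarized by their means, the distance threshold is taken as the median distance, and the shrinkage is the classical Wiener rule; the paper's actual statistics differ in detail.

```python
import numpy as np

def anisotropic_signal_variance(sub_vars, sub_means, ref_idx=0, thr=None):
    """Keep only subregions whose mean is close (Euclidean distance) to
    the reference subregion's mean, then average their variances as the
    signal-variance estimate. Median-distance threshold is an assumption."""
    d = np.abs(sub_means - sub_means[ref_idx])
    if thr is None:
        thr = np.median(d)
    similar = d <= thr                   # anisotropic region: similar subregions
    return sub_vars[similar].mean()

def wiener_shrink(coeff, sig_var, noise_var):
    """Classical Wiener shrinkage of a noisy wavelet coefficient."""
    return coeff * sig_var / (sig_var + noise_var)
```

Discarding dissimilar subregions keeps a strong edge in one subregion from inflating the variance estimate used to shrink a coefficient on the smooth side of that edge.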


2020 ◽ Vol 197 ◽ pp. 08021 ◽ Author(s): Francesca Merli ◽ Elisa Belloni ◽ Cinzia Buratti

The work was developed within the framework of the ReScaLe FiAer project, funded by the Fondazione Cassa di Risparmio di Perugia. It focuses on the identification and collection of multiple high-quality wood wastes from a local window manufacturer. Three types of wood were available, from different tree species (pine, oak, and mahogany) and in different sizes (pieces of wood, mixed coarse chips, and mixed fine chips). Preliminary analyses were performed in order to evaluate the properties of the raw material. For each type of wood, eco-sustainable panels (300×300 mm²) were assembled by gluing. Multiple tests were carried out in order to identify the optimal mixtures and to ensure good mechanical resistance with minimum adhesive use. Panels were assembled using vinyl glue, which is cheap and easily available, and flour glue, which has a lower environmental impact and is safe for people's health. The thermal conductivity of the panels was measured by means of the Small Hot Box experimental apparatus: it varies in the 0.071-0.084 W/(m·K) range, at an average temperature of 10°C, depending on the tree species and regardless of the type of adhesive used. Furthermore, 100-mm-diameter cylindrical samples with two different thicknesses for each type of wood and glue were fabricated, in order to investigate their acoustic behaviour in an impedance tube. The use of flour glue improves the sound absorption and insulation performance of the samples.

