Biases in Thorpe-Scale Estimates of Turbulence Dissipation. Part I: Assessments from Large-Scale Overturns in Oceanographic Data

2015 ◽  
Vol 45 (10) ◽  
pp. 2497-2521 ◽  
Author(s):  
Benjamin D. Mater ◽  
Subhas K. Venayagamoorthy ◽  
Louis St. Laurent ◽  
James N. Moum

Abstract Oceanic density overturns are commonly used to parameterize the dissipation rate of turbulent kinetic energy. This method assumes a linear scaling between the Thorpe length scale LT and the Ozmidov length scale LO. Historic evidence supporting LT ~ LO has been shown for relatively weak shear-driven turbulence of the thermocline; however, little support for the method exists in regions of turbulence driven by the convective collapse of topographically influenced overturns that are large by open-ocean standards. This study presents a direct comparison of LT and LO, using vertical profiles of temperature and microstructure shear collected in the Luzon Strait—a site characterized by topographically influenced overturns up to O(100) m in scale. The comparison is also done for open-ocean sites in the Brazil basin and North Atlantic where overturns are generally smaller and due to different processes. A key result is that LT/LO increases with overturn size in a fashion similar to that observed in numerical studies of Kelvin–Helmholtz (K–H) instabilities for all sites but is most clear in data from the Luzon Strait. Resultant bias in parameterized dissipation is mitigated by ensemble averaging; however, a positive bias appears when instantaneous observations are depth and time integrated. For a series of profiles taken during a spring tidal period in the Luzon Strait, the integrated value is nearly an order of magnitude larger than that based on the microstructure observations. Physical arguments supporting LT ~ LO are revisited, and conceptual regimes explaining the relationship between LT/LO and a nondimensional overturn size are proposed. In a companion paper, Scotti obtains similar conclusions from energetics arguments and simulations.
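The parameterization assessed here can be made concrete with a short sketch. The Ozmidov scale is defined by LO = (ε/N³)^(1/2), so assuming LT ~ LO with proportionality a = LO/LT gives ε ≈ a²·LT²·N³. The synthetic profile, the constant a = 0.8, and the uniform buoyancy frequency N below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def thorpe_dissipation(depth, density, N, a=0.8):
    """Estimate dissipation via the Thorpe-scale method.

    Sorting the measured density profile into a gravitationally stable
    order gives Thorpe displacements d = z_sorted - z; L_T is their RMS.
    With L_T ~ L_O = (eps / N^3)^(1/2), eps ~ a^2 * L_T^2 * N^3,
    where a = L_O / L_T.
    """
    order = np.argsort(density)            # stable (monotonically increasing) order
    displacements = depth[order] - depth   # Thorpe displacements d(z)
    L_T = np.sqrt(np.mean(displacements**2))
    eps = a**2 * L_T**2 * N**3
    return L_T, eps

# Hypothetical profile: a 10 m overturn imposed on a stable gradient
depth = np.arange(0.0, 50.0, 1.0)               # m
density = 1025.0 + 0.001 * depth                # kg/m^3
density[20:30] = density[20:30][::-1].copy()    # impose the overturn
L_T, eps = thorpe_dissipation(depth, density, N=5e-3)
```

In an instantaneous profile like this one, only the overturning segment contributes to the RMS, which is exactly why single-profile estimates carry the bias the paper quantifies.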

2021 ◽  
Author(s):  
Parsoa Khorsand ◽  
Fereydoun Hormozdiari

Abstract Large-scale catalogs of common genetic variants (including indels and structural variants) are being created using data from second- and third-generation whole-genome sequencing technologies. However, the genotyping of these variants in newly sequenced samples is a nontrivial task that requires extensive computational resources. Furthermore, current approaches are mostly limited to specific types of variants and are generally prone to various errors and ambiguities when genotyping complex events. We propose an ultra-efficient approach for genotyping any type of structural variation that is not limited by the shortcomings and complexities of current mapping-based approaches. Our method, Nebula, utilizes changes in the counts of k-mers to predict the genotypes of structural variants. We have shown that Nebula is not only an order of magnitude faster than mapping-based approaches for genotyping structural variants but also comparable in accuracy to state-of-the-art approaches. Furthermore, Nebula is a generic framework not limited to any specific type of event. Nebula is publicly available at https://github.com/Parsoa/Nebula.
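The k-mer idea at Nebula's core can be illustrated with a toy sketch. This is not Nebula's actual statistical model; the function names, the hard dosage cut-offs, and the assumption that each signature k-mer occurs once per variant haplotype are all hypothetical:

```python
from collections import Counter

def count_kmers(reads, k):
    """Count every length-k substring across a collection of reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def genotype_from_kmers(reads, signature_kmers, k, haploid_depth):
    """Toy genotyper: the mean count of k-mers unique to the variant
    allele, scaled by per-haplotype coverage, clusters near 0, 1, or 2
    copies, i.e. genotypes 0/0, 0/1, and 1/1."""
    counts = count_kmers(reads, k)
    mean_count = sum(counts[km] for km in signature_kmers) / len(signature_kmers)
    dosage = mean_count / haploid_depth
    if dosage < 0.5:
        return "0/0"
    return "0/1" if dosage < 1.5 else "1/1"
```

With, say, 10x per-haplotype coverage, a variant-specific k-mer seen about 20 times suggests a homozygous variant and about 10 times a heterozygous one; no read mapping is needed, which is where the speedup comes from.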


2017 ◽  
Vol 34 (5) ◽  
pp. 1551-1571 ◽  
Author(s):  
Ming Xia

Purpose The main purpose of this paper is to present a comprehensive upscale theory of thermo-mechanical coupling particle simulation for three-dimensional (3D) large-scale non-isothermal problems, so that a small 3D length-scale particle model can exactly reproduce the same mechanical and thermal results as a large 3D length-scale one. Design/methodology/approach The objective is achieved by following the scaling methodology proposed by Feng and Owen (2014). Findings After four basic physical quantities and their similarity-ratios are chosen, the derived quantities and their similarity-ratios can be determined from their dimensions. As the proposed comprehensive 3D upscale theory contains five similarity criteria, it reveals the intrinsic relationship between the particle-simulation solution obtained from a small 3D length-scale (e.g. a laboratory length-scale) model and that obtained from a large 3D length-scale (e.g. a geological length-scale) one. The scale invariance of the 3D interaction law in the thermo-mechanical coupled particle model is examined. The proposed 3D upscale theory is tested through two typical examples. Finally, a practical application example of 3D transient heat flow in a solid with constant heat flux is given to illustrate the performance of the proposed 3D upscale theory in the thermo-mechanical coupling particle simulation of 3D large-scale non-isothermal problems. Both the benchmark tests and the application example demonstrate the correctness and usefulness of the proposed 3D upscale theory for simulating 3D non-isothermal problems using the particle simulation method. Originality/value The paper provides important theoretical guidance for modeling 3D large-scale non-isothermal problems at both the engineering length-scale (i.e. the meter-scale) and the geological length-scale (i.e. the kilometer-scale) using the particle simulation method directly.
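The dimensional bookkeeping behind such similarity ratios can be sketched generically: once similarity ratios are fixed for the chosen base quantities, the ratio for any derived quantity follows from its dimensional exponents. The particular base quantities and numbers below are hypothetical, not those of the paper:

```python
def derived_ratio(base_ratios, exponents):
    """Similarity ratio of a derived quantity, computed as the product
    of the base-quantity ratios raised to the quantity's dimensional
    exponents."""
    r = 1.0
    for name, p in exponents.items():
        r *= base_ratios[name] ** p
    return r

# Hypothetical base quantities: length L, density rho, velocity v, temperature T,
# with a 1000x jump from laboratory to geological length-scale.
base = {"L": 1000.0, "rho": 1.0, "v": 1.0, "T": 1.0}

# Stress ~ rho * v^2 is then scale-invariant; time ~ L / v scales with L.
stress_ratio = derived_ratio(base, {"rho": 1, "v": 2})   # -> 1.0
time_ratio = derived_ratio(base, {"L": 1, "v": -1})      # -> 1000.0
```

Collecting such ratios for every quantity in the governing equations is what yields the similarity criteria the theory is built on.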


2014 ◽  
Vol 2014 ◽  
pp. 1-7
Author(s):  
Jean-Luc Menet

The installation of wind turbines generally follows a wind-potential study made using specific numerical tools; the expenses generated are only acceptable for large projects. The purpose of the present paper is to propose a simplified methodology for the evaluation of the wind potential, following three successive steps for the determination of (i) the mean velocity, either directly or through the most occurrence velocity (MOV); (ii) the velocity distribution, from the single knowledge of the mean velocity, by use of a Rayleigh distribution and a Davenport-Harris law; and (iii) an appropriate approximation of the characteristic curve of the turbine, from only two technical data. The last two steps allow the electric energy delivered by the considered wind turbine to be calculated directly. This methodology, called the SWEPT approach, can be easily implemented in a single worksheet. The results returned by the SWEPT tool are of the same order of magnitude as those given by classical commercial tools. Moreover, anyone, even a “neophyte,” can use this methodology to obtain a first estimation of the wind potential of a site for a given wind turbine on the basis of very few general data.
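Steps (ii) and (iii) lend themselves to exactly this kind of worksheet calculation. The sketch below combines a Rayleigh distribution parameterized by the mean speed with a cubic-ramp power curve; the cubic ramp, the cut-in/cut-out speeds, and the example turbine figures are common textbook assumptions, not necessarily the SWEPT approximations:

```python
import math

def rayleigh_pdf(v, v_mean):
    """Rayleigh wind-speed distribution written in terms of the mean speed."""
    return (math.pi * v / (2 * v_mean**2)) * math.exp(-math.pi * v**2 / (4 * v_mean**2))

def power_curve(v, v_rated, p_rated, v_cutin=3.0, v_cutout=25.0):
    """Cubic ramp from cut-in to rated speed, constant to cut-out."""
    if v < v_cutin or v > v_cutout:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v**3 - v_cutin**3) / (v_rated**3 - v_cutin**3)

def annual_energy(v_mean, v_rated, p_rated, dv=0.1):
    """Expected annual yield (kWh) by numerical integration of P(v) f(v) dv."""
    hours = 8760.0
    expected_power = sum(
        power_curve(i * dv, v_rated, p_rated) * rayleigh_pdf(i * dv, v_mean) * dv
        for i in range(1, int(30.0 / dv))
    )
    return expected_power * hours

# e.g. a 2000 kW turbine rated at 12 m/s on a 6 m/s mean-wind site
aep = annual_energy(v_mean=6.0, v_rated=12.0, p_rated=2000.0)
```

Each function maps onto one worksheet column, which is why the whole estimate fits in a single sheet.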


1982 ◽  
Vol 22 (03) ◽  
pp. 409-419 ◽  
Author(s):  
R.G. Larson

Abstract The variably-timed flux updating (VTU) finite-difference technique is extended to two dimensions. VTU simulations of miscible floods on a repeated five-spot pattern are compared with exact solutions and with solutions obtained by front tracking. It is found that for neutral and favorable mobility ratios, VTU gives accurate results even on a coarse mesh and reduces numerical dispersion by a factor of 10 or more over the level generated by conventional single-point (SP) upstream weighting. For highly unfavorable mobility ratios, VTU reduces numerical dispersion, but on a coarse mesh the simulation is nevertheless inaccurate because of the inherent inadequacy of the finite-difference estimation of the flow field. Introduction A companion paper (see Pages 399-408) introduced the one-dimensional version of VTU for controlling numerical dispersion in finite-difference simulation of displacements in porous media. For linear and nonlinear, one- and two-independent-component problems, VTU resulted in more than an order-of-magnitude reduction in numerical dispersion over conventional explicit, SP upstream-weighted simulations with the same number of gridblocks. In this paper, the technique is extended to two-dimensional (2D) problems, which require solution of a set of coupled partial differential equations expressing conservation of material components (Eqs. 1 and 2). Fi, the fractional flux of component i, is a function of the set of s - 1 independent-component fractional concentrations {Ci} prevailing at the given position and time. The dispersion flux is given by an expression that is linear in the species concentration gradients. The velocity is proportional to the pressure gradient (Eq. 3), where lambda, in general, can be a function of composition and of the magnitude of the pressure gradient. The premises on which Eqs. 1 through 3 rest are stated in the companion paper.
VTU in Two Dimensions The basic idea of variably-timed flux updating is to use finite-difference discretization of time and space, but to update the flux of a component not every timestep, but with a frequency determined by the corresponding concentration velocity, i.e., the velocity of propagation of a fixed concentration of that component. The concentration velocity is a function of time and position. In the formulation described here, the convected flux is upstream-weighted, and all variables except pressure are evaluated explicitly. As described in the companion paper (SPE 8027), the crux of the method is the estimation of the number of timesteps required for a fixed concentration to traverse from an inflow to an outflow face of a gridblock. This task is simpler in one dimension, where there is only one inflow and one outflow face per gridblock, than it is in two dimensions, where each gridblock has in general multiple inflow and outflow faces.
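The numerical dispersion that VTU targets is easy to reproduce. The sketch below is the conventional explicit SP upstream-weighted scheme in one dimension, i.e. the baseline the paper compares against, not VTU itself; the grid size, Courant number, and inlet boundary handling are illustrative:

```python
import numpy as np

def sp_upstream_step(c, courant):
    """One explicit timestep of single-point upstream-weighted advection
    (flow in the +x direction): c_i^{n+1} = c_i - Cr * (c_i - c_{i-1})."""
    c_new = c.copy()
    c_new[1:] = c[1:] - courant * (c[1:] - c[:-1])
    return c_new

# A sharp front injected at the inlet smears as it propagates:
nx, courant, nsteps = 100, 0.5, 100
c = np.zeros(nx)
c[0] = 1.0                       # fixed inlet concentration
for _ in range(nsteps):
    c = sp_upstream_step(c, courant)
    c[0] = 1.0

# After 100 steps at Cr = 0.5 the exact front sits at cell 50, but the
# computed profile is smeared over a band of cells around it.
front_width = np.sum((c > 0.01) & (c < 0.99))
```

The width of that smeared band is the numerical dispersion; updating fluxes on the concentration-velocity timescale, as VTU does, is what shrinks it by an order of magnitude.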


2021 ◽  
Vol 15 (3) ◽  
pp. 1-31
Author(s):  
Haida Zhang ◽  
Zengfeng Huang ◽  
Xuemin Lin ◽  
Zhe Lin ◽  
Wenjie Zhang ◽  
...  

Driven by many real applications, we study the problem of seeded graph matching. Given two graphs G1 and G2, and a small set S of pre-matched node pairs (u, v), where u is in G1 and v is in G2, the problem is to identify a matching between G1 and G2 growing from S, such that each pair in the matching corresponds to the same underlying entity. Recent studies on efficient and effective seeded graph matching have drawn a great deal of attention, and many popular methods are largely based on exploring the similarity between local structures to identify matching pairs. While these recent techniques work provably well on random graphs, their accuracy is low over many real networks. In this work, we propose to utilize higher-order neighboring information to improve the matching accuracy and efficiency. As a result, a new framework of seeded graph matching is proposed, which employs Personalized PageRank (PPR) to quantify the matching score of each node pair. To further boost the matching accuracy, we propose a novel postponing strategy, which postpones the selection of pairs that have competitors with similar matching scores. We show that the postponing strategy indeed significantly improves the matching accuracy. To improve the scalability of matching large graphs, we also propose efficient approximation techniques based on algorithms for computing PPR heavy hitters. Our comprehensive experimental studies on large-scale real datasets demonstrate that, compared with state-of-the-art approaches, our framework not only increases both precision and recall by a significant margin but also achieves speedups of more than an order of magnitude.
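The PPR scoring idea can be sketched compactly. The sketch omits the paper's postponing strategy and heavy-hitter approximations, and the greedy selection and toy graphs are illustrative, not the proposed framework:

```python
import numpy as np

def personalized_pagerank(adj, seeds, alpha=0.15, iters=50):
    """Power iteration for PPR with restart probability alpha,
    restarting uniformly on the given seed nodes."""
    n = len(adj)
    restart = np.zeros(n)
    restart[seeds] = 1.0 / len(seeds)
    deg = adj.sum(axis=1)
    deg[deg == 0] = 1.0
    P = adj / deg[:, None]          # row-stochastic transition matrix
    pi = restart.copy()
    for _ in range(iters):
        pi = alpha * restart + (1 - alpha) * (P.T @ pi)
    return pi

def match_by_ppr(adj1, adj2, seed_pairs):
    """Greedy matching sketch: score each candidate pair by how close
    the nodes' PPR values (w.r.t. the seed sets) are, and match the
    closest unmatched pairs first."""
    s1 = [a for a, _ in seed_pairs]
    s2 = [b for _, b in seed_pairs]
    p1 = personalized_pagerank(adj1, s1)
    p2 = personalized_pagerank(adj2, s2)
    pairs = sorted(
        (abs(p1[u] - p2[v]), u, v)
        for u in range(len(adj1)) for v in range(len(adj2))
    )
    matched1, matched2, matching = set(s1), set(s2), list(seed_pairs)
    for _, u, v in pairs:
        if u not in matched1 and v not in matched2:
            matching.append((u, v))
            matched1.add(u); matched2.add(v)
    return matching

# Two copies of a 4-node path graph with one seeded pair
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
matching = match_by_ppr(adj, adj, [(0, 0)])
```

Ties in these scores are exactly where a greedy pass goes wrong on real graphs, which motivates the postponing strategy the abstract describes.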


Author(s):  
F. Ma ◽  
J. H. Hwang

Abstract In analyzing a nonclassically damped linear system, one common procedure is to neglect those damping terms which are nonclassical and retain the classical ones. This approach is termed the method of approximate decoupling. For large-scale systems, the computational effort of approximate decoupling is at least an order of magnitude smaller than that of the method of complex modes. In this paper, the error introduced by approximate decoupling is evaluated. A tight error bound, which can be computed with relative ease, is given for this method of approximate solution. The role that modal coupling plays in the control of error is clarified. If the normalized damping matrix is strongly diagonally dominant, it is shown that adequate frequency separation is not necessary to ensure small errors.
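The procedure itself can be sketched in a few lines: solve the undamped eigenproblem, transform the damping matrix to modal coordinates, and retain only its diagonal. The 2-DOF example values are hypothetical:

```python
import numpy as np

def approximate_decoupling(M, C, K):
    """Undamped modal analysis of (M, K), then retention of only the
    diagonal (classical) part of the modal damping matrix."""
    Linv = np.linalg.inv(np.linalg.cholesky(M))   # M = L L^T
    w2, Q = np.linalg.eigh(Linv @ K @ Linv.T)     # standard symmetric problem
    Phi = Linv.T @ Q                              # mass-normalized mode shapes
    C_modal = Phi.T @ C @ Phi
    C_approx = np.diag(np.diag(C_modal))          # neglect nonclassical terms
    return w2, Phi, C_modal, C_approx

# 2-DOF example with a damper on the first mass only (nonclassical damping)
M = np.diag([1.0, 2.0])
K = np.array([[3.0, -1.0], [-1.0, 2.0]])
C = np.array([[0.4, 0.0], [0.0, 0.0]])
w2, Phi, C_modal, C_approx = approximate_decoupling(M, C, K)
coupling = np.abs(C_modal - C_approx).max()       # size of the neglected terms
```

The neglected off-diagonal entries (`coupling` above) are precisely the modal-coupling terms whose role in the error the paper quantifies.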


2008 ◽  
Vol 26 (4) ◽  
pp. 843-852 ◽  
Author(s):  
T. K. Yeoman ◽  
G. Chisham ◽  
L. J. Baddeley ◽  
R. S. Dhillon ◽  
T. J. T. Karhunen ◽  
...  

Abstract. The Super Dual Auroral Radar Network (SuperDARN) of HF coherent backscatter radars forms a unique global diagnostic of large-scale ionospheric and magnetospheric dynamics in the Northern and Southern Hemispheres. Currently the ground projections of the HF radar returns are routinely determined by a simple rangefinding algorithm, which takes no account of the prevailing, or indeed the average, HF propagation conditions. This is in spite of the fact that both direct E- and F-region backscatter and 1½-hop E- and F-region backscatter are commonly used in geophysical interpretation of the data. In a companion paper, Chisham et al. (2008) have suggested a new virtual height model for SuperDARN, based on average measured propagation paths. Over shorter propagation paths the existing rangefinding algorithm is adequate, but mapping errors become significant for longer paths, where the roundness of the Earth becomes important and a correct assumption of virtual height becomes more difficult. The SuperDARN radar at Hankasalmi has a propagation path to high-power HF ionospheric modification facilities at Tromsø on a ½-hop path and at SPEAR on a 1½-hop path. The SuperDARN radar at Þykkvibǽr has propagation paths to both facilities over 1½-hop paths. These paths provide an opportunity to test the available SuperDARN virtual height models quantitatively. It is also possible to use HF radar backscatter artificially induced by the ionospheric heaters as an accurate calibration point for the Hankasalmi elevation-angle-of-arrival data, providing a range-correction algorithm for the SuperDARN radars which directly uses elevation angle. These developments enable the accurate mapping of the SuperDARN electric field measurements required for the growing number of multi-instrument studies of the Earth's ionosphere and magnetosphere.
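For a given assumed virtual height, the rangefinding step being corrected here reduces to spherical-Earth geometry. The sketch below is the standard law-of-cosines mapping under a straight-line-to-virtual-height assumption; it is not the Chisham et al. (2008) model, and the example numbers are illustrative:

```python
import math

RE = 6371.0  # mean Earth radius, km

def ground_range(slant_range_km, virtual_height_km):
    """Map radar slant range to ground range, assuming straight-line
    propagation to a scatterer at the given virtual height on a
    spherical Earth (law of cosines about the Earth-centre angle)."""
    rs = RE + virtual_height_km
    cos_alpha = (RE**2 + rs**2 - slant_range_km**2) / (2 * RE * rs)
    return RE * math.acos(cos_alpha)

# e.g. 1000 km slant range at an assumed F-region virtual height of 300 km
g = ground_range(1000.0, 300.0)
```

The sensitivity of `g` to the assumed virtual height grows with range, which is why the simple algorithm degrades over the longer 1½-hop paths discussed above.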


2017 ◽  
Vol 114 (11) ◽  
pp. 2922-2927 ◽  
Author(s):  
Kazuya Suzuki ◽  
Makito Miyazaki ◽  
Jun Takagi ◽  
Takeshi Itabashi ◽  
Shin’ichi Ishiwata

Collective behaviors of motile units through hydrodynamic interactions induce directed fluid flow on a larger length scale than individual units. In cells, active cytoskeletal systems composed of polar filaments and molecular motors drive fluid flow, a process known as cytoplasmic streaming. The motor-driven elongation of microtubule bundles generates turbulent-like flow in purified systems; however, it remains unclear whether and how microtubule bundles induce large-scale directed flow like the cytoplasmic streaming observed in cells. Here, we adopted Xenopus egg extracts as a model system of the cytoplasm and found that microtubule bundle elongation induces directed flow whose length scale and timescale depend on the existence of geometrical constraints. At low dynein activity, kinesins bundle and slide microtubules, organizing extensile microtubule bundles. In bulk extracts, the extensile bundles connected with each other and formed a random network, and vortex flows with a length scale comparable to the bundle length continually emerged and persisted for 1 min at multiple places. When the extracts were encapsulated in droplets, the extensile bundles pushed the droplet boundary. This pushing force initiated symmetry breaking of the randomly oriented bundle network, leading to bundles aligning into a rotating vortex structure. This vortex induced rotational cytoplasmic flows on length scales and timescales 10- to 100-fold larger than those of the vortex flows emerging in bulk extracts. Our results suggest that microtubule systems use not only hydrodynamic interactions but also mechanical interactions to induce large-scale temporally stable cytoplasmic flow.


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3273
Author(s):  
Lesong Zhou ◽  
Zheng Sheng ◽  
Qixiang Liao

In recent years, Thorpe analysis has been used to retrieve the characteristics of turbulence in the free atmosphere from balloon-borne sensor data. However, previous studies have mainly focused on mid- and high-latitude regions, and the method is still rarely applied at heights above 30 km, especially above 35 km. Therefore, seven sets of upper-air (>35 km) sounding data from the Changsha Sounding Station (28°12′ N, 113°05′ E), China, are analyzed with Thorpe analysis in this article. It is noted that, in the troposphere, Thorpe analysis retrieves the turbulence distribution and the corresponding turbulence parameters well. Also, because of the thicker troposphere at low latitudes, the values of the Thorpe scale LT and the turbulent energy dissipation rate ε remain large over a greater height range. In the stratosphere below a height of 35 km, the obtained ε is higher, and Thorpe analysis can only be used to analyze the characteristics of large-scale turbulence. In the stratosphere at heights of 35-40 km, because of the interference of sensor noise, Thorpe analysis can only help to retrieve the rough position of large-scale turbulence, while it can hardly support calculation of the turbulence parameters.
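The noise limitation in the last sentence can be quantified with a back-of-the-envelope check: an overturn is only detectable when the temperature contrast across it exceeds the sensor noise, which bounds the thinnest resolvable overturn by roughly the noise amplitude divided by the background gradient. The factor of 2 and the example numbers below are illustrative assumptions, not values from the paper:

```python
def min_resolvable_overturn(noise_K, gradient_K_per_m, factor=2.0):
    """Thinnest overturn distinguishable from sensor noise: the
    temperature contrast across an overturn of thickness d on a
    background gradient g is ~ g * d, so d must exceed roughly
    factor * noise / g to stand out above the noise."""
    return factor * noise_K / gradient_K_per_m

# e.g. 0.1 K sensor noise against a 10 K/km stratospheric gradient
d_min = min_resolvable_overturn(0.1, 0.01)   # roughly 20 m
```

With stratospheric gradients this steep, only overturns tens of meters thick clear the noise floor, consistent with the abstract's finding that above 35 km only large-scale turbulence can be located.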

