On Simulation of the Young Measures

2018 ◽  
Vol 41 (2) ◽  
pp. 171-184
Author(s):  
Andrzej Z. Grzybowski ◽  
Piotr Puchała

"Young measure" is an abstract notion from mathematical measure theory.  Originally, the notion appeared in the context of some variational problems related to the analysis of sequences of “fast” oscillating of functions.  From the formal point of view the Young measure  may be treated as a continuous linear functional defined on the space of Carathéodory integrands satisfying certain regularity conditions. Calculating an explicit form of specific Young measure is a very important task.  However, from a strictly mathematical standpoint  it is a very difficult problem not solved as yet in general. Even more difficult would be the problem of calculating Lebasque’s integrals with respect to such measures. Based on known formal results it can be done only in the most simple cases.  On the other hand in many real-world applications it would be enough to learn only some of the most important probabilistic  characteristics  of the Young distribution or learn only approximate values of the appropriate integrals. In such a case a possible solution is to adopt Monte Carlo techniques. In the presentation we propose three different algorithms designed for simulating random variables distributed according to the Young measures  associated with piecewise functions.  Next with the help of computer simulations we compare their statistical performance via some benchmarking problems. In this study we focus on the accurateness of the distribution of the generated sample.

Author(s):  
Ajay Jasra ◽  
Maria De Iorio ◽  
Marc Chadeau-Hyam

In this paper, we consider a simulation technique for stochastic trees. One of the most important areas in computational genetics is the calculation, and subsequent maximization, of the likelihood function associated with such models. This typically relies on importance sampling and sequential Monte Carlo techniques, which proceed by simulating the tree backward in time, from the observed data to a most recent common ancestor. However, in many cases the computational time and the variance of the estimators are too high for standard approaches to be useful. In this paper, we propose to stop the simulation before the most recent common ancestor is reached, which yields biased estimates of the likelihood surface. The bias is investigated from a theoretical point of view. Results from simulation studies are also given to investigate the balance between loss of accuracy, savings in computing time, and variance reduction.
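A schematic of the kind of estimator involved: a generic sequential importance sampler over coalescence events, truncated after a fixed number of steps. The target and proposal rates below are illustrative stand-ins, not the authors' genetic model:

```python
import numpy as np

rng = np.random.default_rng(1)

def stopped_likelihood_estimate(n_lineages, theta, max_steps, n_particles=10_000):
    # Sequential importance sampling backward in time over coalescence
    # events, truncated after max_steps events. Truncation biases the
    # likelihood estimate but bounds computing time and weight variance.
    est = np.empty(n_particles)
    for i in range(n_particles):
        k, log_w, steps = n_lineages, 0.0, 0
        while k > 1 and steps < max_steps:
            q_rate = k * (k - 1) / 2.0         # proposal: pure coalescent rate
            p_rate = theta * q_rate            # "true" model rate (toy choice)
            t = rng.exponential(1.0 / q_rate)  # propose a waiting time
            # importance weight: log target density minus log proposal density
            log_w += (np.log(p_rate) - p_rate * t) - (np.log(q_rate) - q_rate * t)
            k -= 1
            steps += 1
        est[i] = np.exp(log_w)
    return est.mean()

# Larger max_steps -> less bias, but more computing time and weight variance.
print(stopped_likelihood_estimate(10, theta=1.5, max_steps=5))
```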


The object of this paper is to illustrate the main features of wave propagation in dispersive media. In the case of surface waves on deep water it has been remarked that the earlier investigators considered the more difficult problem of the propagation of an arbitrary initial disturbance as expressed by a Fourier integral, ignoring the simpler theory developed subsequently by considering the propagation of a single element of their integrals, namely an unending train of simple harmonic waves. The point of view on which stress is laid here consists of a return to the Fourier integral, with the idea that the element of disturbance is not a simple harmonic wave-train, but a simple group, an aggregate of simple wave-trains clustering around a given central period. In many cases it is then possible to select from the integral the few simple groups that are important, and hence to isolate the chief regular features, if any, in the phenomena. In certain of the following sections well-known results appear; the aim has been to develop these from the present point of view, and so illustrate the dependence of the phenomena upon the character of the velocity function. In the other sections it is hoped that progress has been made in the theory of the propagation of an arbitrary initial group of waves, and also of the character of the wave pattern diverging from a point impulse travelling on the surface.
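In modern notation, the construction described here is the method of stationary phase applied to the Fourier superposition; a brief restatement (standard material, not quoted from the paper):

```latex
A disturbance is written as a superposition of simple wave-trains,
\[
  \eta(x,t) = \int_{-\infty}^{\infty} F(k)\, e^{\,i\,(kx - \omega(k)\,t)}\, dk ,
\]
and for large $t$ the integral is dominated by the wavenumbers at which the
phase is stationary,
\[
  \frac{d}{dk}\bigl(kx - \omega(k)\,t\bigr) = 0
  \quad\Longleftrightarrow\quad
  \frac{x}{t} = \omega'(k),
\]
so a simple group centred on wavenumber $k$ travels at the group velocity
$\omega'(k)$, while individual crests travel at the phase velocity
$\omega(k)/k$.  For deep-water gravity waves, $\omega(k) = \sqrt{gk}$ and
the group velocity is half the phase velocity.
```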


Acta Numerica ◽  
2016 ◽  
Vol 25 ◽  
pp. 567-679 ◽  
Author(s):  
Ulrik S. Fjordholm ◽  
Siddhartha Mishra ◽  
Eitan Tadmor

A standard paradigm for the existence of solutions in fluid dynamics is based on the construction of sequences of approximate solutions or approximate minimizers. This approach faces serious obstacles, most notably in multi-dimensional problems, where the persistence of oscillations at ever finer scales prevents compactness. Indeed, these oscillations are an indication, consistent with recent theoretical results, of the possible lack of existence/uniqueness of solutions within the standard framework of integrable functions. It is in this context that Young measures – parametrized probability measures which can describe the limits of such oscillatory sequences – offer the more general paradigm of measure-valued solutions for these problems. We present viable numerical algorithms to compute approximate measure-valued solutions, based on the realization of approximate measures as laws of Monte Carlo sampled random fields. We prove convergence of these algorithms to measure-valued solutions for the equations of compressible and incompressible inviscid fluid dynamics, and present a large number of numerical experiments which provide convincing evidence for the viability of the new paradigm. We also discuss the use of these algorithms, and their extensions, in uncertainty quantification and contexts other than fluid dynamics, such as non-convex variational problems in materials science.
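The Monte Carlo realization can be sketched as follows; the `solve` argument is a hypothetical placeholder for any convergent numerical scheme (only the sampling logic is shown, not the authors' specific algorithms):

```python
import numpy as np

rng = np.random.default_rng(42)

def approximate_mv_solution(u0, solve, n_samples=100, noise=0.01):
    """Approximate a measure-valued solution as the law of Monte Carlo
    sampled random fields: perturb the initial data, evolve each sample
    with a convergent scheme, and return the ensemble.  `solve` is a
    hypothetical stand-in for a finite-volume (or similar) solver."""
    ensemble = []
    for _ in range(n_samples):
        perturbed = u0 + noise * rng.standard_normal(u0.shape)
        ensemble.append(solve(perturbed))
    return np.stack(ensemble)  # empirical law: one sampled field per row

# Statistics of the measure-valued solution are then plain ensemble
# averages, e.g.:
#   fields = approximate_mv_solution(u0, solve=my_fv_scheme)
#   mean_field, var_field = fields.mean(axis=0), fields.var(axis=0)
```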


Author(s):  
Tarek Ben Mena ◽  
Narjès Bellamine-Ben Saoud ◽  
Mohamed Ben Ahmed ◽  
Bernard Pavard

This chapter aims to define the notion of context for multi-agent systems (MAS). Starting from the state of the art on context in different disciplines, we present context as a generic and abstract notion and argue that it depends on three characteristics: domain, entity, and problem. Specializing this definition to MAS, we first consider context from an extensional point of view, as three components (actant, role, and situation), and then from an intensional one, which represents the context model for agents in MAS and consists of information on the environment, other objects, agents, and the relations between them. We thereby outline a new way of representing agent knowledge, building context on this knowledge, and using it. Furthermore, we demonstrate the applicability of the contextual-agent solution to other research fields, particularly personalized information retrieval, by treating crawlers as agents and documents as objects.
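One plausible rendering of the intensional context model described above as a data structure; the field names are our own hypothetical reading, not the chapter's:

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Hypothetical sketch of the chapter's context model: an extensional
    part (actant, role, situation) plus an intensional part describing the
    environment, objects, agents, and the relations among them."""
    actant: str                                       # the acting entity
    role: str                                         # role it currently plays
    situation: str                                    # situation it is engaged in
    environment: dict = field(default_factory=dict)   # environment facts
    objects: list = field(default_factory=list)       # known objects
    agents: list = field(default_factory=list)        # other known agents
    relations: list = field(default_factory=list)     # (subject, relation, object)

# e.g. a crawler agent acting on documents in an information-retrieval setting:
ctx = Context(actant="crawler-1", role="crawler", situation="indexing",
              objects=["doc-17"],
              relations=[("crawler-1", "fetched", "doc-17")])
```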


1960 ◽  
Vol 29 (4) ◽  
pp. 424-439 ◽  
Author(s):  
George L. Mosse

The relationship between Christianity and the Enlightenment presents a subtle and difficult problem. No historian has as yet fully answered the important question of how the world view of the eighteenth century is related to that of traditional Christianity. It is certain, however, that the deism of that century rejected traditional Christianity as superstitious and denied Christianity a monopoly upon religious truth. The many formal parallels which can be drawn between Enlightenment and Christianity cannot obscure this fact. From the point of view of historical Christianity, both Protestant and Catholic, the faith of the Enlightenment was blasphemy. It did away with a personal God, it admitted no supernatural above the natural, it denied the relevance of Christ's redemptive task in this world. This essay attempts to discover whether traditional Christian thought itself did not make a contribution to the Enlightenment.


1945 ◽  
Vol 38 (4) ◽  
pp. 270-272
Author(s):  
William H. P. Hatch

The first half of Matthew 6: 33 presents a difficult problem to the textual critic, and the leading editors of the New Testament have solved it in different ways. The textual authorities offer several variant readings, but none of them is satisfactory from every point of view. However, it is possible by means of a highly plausible conjecture to obtain a text which makes excellent sense.


1979 ◽  
Vol 11 (03) ◽  
pp. 591-602
Author(s):  
David Mannion

We showed in [2] that if an object of initial size x (x large) is subjected to a succession of random partitions, then the object is decomposed into a large number of terminal cells, each of relatively small size, where if Z(x, B) denotes the number of such cells whose sizes are points in the set B, then there exists c (0 < c ≤ 1) such that Z(x, B)x^{-c} converges in probability, as x → ∞, to a random variable W. We show here that if a parent object of size x produces k offspring of sizes y_1, y_2, ..., y_k, and if for each k the quantity x − y_1 − y_2 − ··· − y_k (the 'waste' or the 'cover', depending on the point of view) is relatively small, then for each n the nth cumulant Ψ_n(x, B) of Z(x, B) satisfies Ψ_n(x, B)x^{-c} → κ_n(B), as x → ∞, for some κ_n(B). Thus, writing N = x^c, Z(x, B) has approximately the same distribution as the sum of N independent and identically distributed random variables. (The determination of the distribution of the individual summand appears to be a difficult problem.) The theory also applies when an object of moderate size is broken down into very fine particles or granules.
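A toy simulation of such a fragmentation makes the normalization Z(x, B)x^{-c} easy to observe; the binary uniform split and the 1% waste below are arbitrary illustrative choices, not the paper's general model:

```python
import numpy as np

rng = np.random.default_rng(7)

def fragment(x, threshold=1.0):
    """Split a parent of size x into two offspring, discarding a small
    random 'waste'; recurse until pieces fall below the terminal
    threshold.  Returns the list of terminal cell sizes."""
    if x < threshold:
        return [x]
    u = rng.uniform(0.1, 0.9)          # random split proportion
    waste = 0.01 * x                   # small waste/cover, as assumed above
    y1, y2 = u * (x - waste), (1 - u) * (x - waste)
    return fragment(y1, threshold) + fragment(y2, threshold)

x = 1e4
cells = np.array(fragment(x))
a, b = 0.25, 0.75
Z = np.sum((cells > a) & (cells < b))  # Z(x, B) for B = (a, b)
print(Z)  # for some c in (0, 1], Z(x, B) * x**(-c) stabilizes as x grows
```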


2014 ◽  
Vol 11 (04) ◽  
pp. 1450034 ◽  
Author(s):  
Leonardo Colombo ◽  
Pedro Daniel Prieto-Martínez

In this paper, we consider an intrinsic point of view to describe the equations of motion for higher-order variational problems with constraints on higher-order trivial principal bundles. Our techniques are an adaptation of the classical Skinner–Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics. As an interesting application we deduce the equations of motion for optimal control of underactuated mechanical systems defined on principal bundles.
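For orientation, the coordinate form of the second-order case (a standard fact about higher-order Lagrangian dynamics, not specific to the bundle setting of the paper):

```latex
For a second-order Lagrangian $L = L(q, \dot q, \ddot q)$, the higher-order
Euler--Lagrange equations read
\[
  \frac{\partial L}{\partial q}
  - \frac{d}{dt}\,\frac{\partial L}{\partial \dot q}
  + \frac{d^2}{dt^2}\,\frac{\partial L}{\partial \ddot q}
  = 0 .
\]
The Skinner--Rusk formalism recovers these equations (together with any
constraints) from a single presymplectic equation $i_X \omega = dH$ on a
mixed velocity--momentum phase space; in the regular case considered in the
paper this determines a unique dynamical vector field $X$.
```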


Author(s):  
Gero Friesecke

For scalar variational problems subject to linear boundary values, we determine completely those integrands W : ℝⁿ → ℝ for which the minimum is not attained, thereby completing previous efforts such as a recent nonexistence theorem of Chipot [9] and unifying a large number of examples and counterexamples in the literature. As a corollary, we show that in case of nonattainment (and provided W grows superlinearly at infinity), every minimising sequence converges weakly but not strongly in W^{1,1}(Ω) to a unique limit, namely the linear deformation prescribed at the boundary, and develops fine structure everywhere in Ω, that is to say, every Young measure associated with the sequence of its gradients is almost-nowhere a Dirac mass. Connections with solid–solid phase transformations are indicated.
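The relevant relaxation facts, in standard notation (background consistent with this setting, not taken from the paper):

```latex
Minimise $I(u) = \int_\Omega W(\nabla u)\,dx$ over scalar maps with the
linear boundary values $u(x) = \xi_0 \cdot x$ on $\partial\Omega$.  By
relaxation, the infimum equals
\[
  \inf I = |\Omega|\, W^{**}(\xi_0),
\]
where $W^{**}$ is the convex envelope of $W$, and $W^{**}(\xi_0) < W(\xi_0)$
is necessary for nonattainment.  In that case minimising sequences oscillate
between gradients $\xi_i$ supporting the envelope,
\[
  W^{**}(\xi_0) = \sum_i \lambda_i\, W(\xi_i), \qquad
  \sum_i \lambda_i\, \xi_i = \xi_0 ,
\]
and this fine structure is recorded by a non-Dirac gradient Young measure
$\nu_x = \sum_i \lambda_i\, \delta_{\xi_i}$.
```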


2018 ◽  
Vol 28 (12) ◽  
pp. 2367-2401 ◽  
Author(s):  
Barbora Benešová ◽  
Martin Kružík ◽  
Anja Schlömerkemper

We use gradient Young measures generated by Lipschitz maps to define a relaxation of integral functionals which are allowed to attain the value +∞ and can model ideal locking in elasticity as defined by Prager in 1957. Furthermore, we show the existence of minimizers for variational problems for elastic materials with energy densities that can be expressed in terms of a function being continuous in the deformation gradient and convex in the gradient of the cofactor (and possibly also the gradient of the determinant) of the corresponding deformation gradient. We call the related energy functional gradient polyconvex. Thus, instead of considering second derivatives of the deformation gradient as in second-grade materials, only a weaker higher integrability is imposed. Although the second-order gradient of the deformation is not included in our model, gradient polyconvex functionals allow for an implicit uniform positive lower bound on the determinant of the deformation gradient on the closure of the domain representing the elastic body. Consequently, the material does not allow for extreme local compression.
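Schematically, and in our own hypothetical notation for what the abstract describes:

```latex
For a deformation $y \colon \Omega \to \mathbb{R}^3$, a gradient polyconvex
energy has the form
\[
  J(y) = \int_\Omega \hat W\bigl(\nabla y(x),\,
           \nabla[\operatorname{cof} \nabla y](x)\bigr)\, dx ,
\]
with $\hat W(F, \cdot)$ convex in the gradient of the cofactor (and possibly
also in the gradient of the determinant), but merely continuous in the
deformation gradient $F$ itself.  No second gradient $\nabla^2 y$ enters the
model; the extra integrability of $\nabla \operatorname{cof} \nabla y$ is
what yields the uniform bound $\det \nabla y \ge \varepsilon > 0$ on
$\overline{\Omega}$.
```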

