Change of Variable in Spaces of Mixed Smoothness and Numerical Integration of Multivariate Functions on the Unit Cube

2017
Vol 46 (1)
pp. 69-108
Author(s):
Van Kien Nguyen
Mario Ullrich
Tino Ullrich
Author(s):  
Mario Cvetkovic
Dragan Poljak
Ante Lojic Kapetanovic
Hrvoje Dodig

2021
Vol 37 (3)
pp. 291-320
Author(s):  
Dinh Dũng
Van Kien Nguyen
Mai Xuan Thao

The purpose of the present paper is to study the computational complexity of deep ReLU neural networks for approximating functions in Hölder-Nikol'skii spaces of mixed smoothness $H_\infty^\alpha(\mathbb{I}^d)$ on the unit cube $\mathbb{I}^d:=[0,1]^d$. In this context, for any function $f\in H_\infty^\alpha(\mathbb{I}^d)$, we explicitly construct nonadaptive and adaptive deep ReLU neural networks whose output approximates $f$ with a prescribed accuracy $\varepsilon$, and prove dimension-dependent bounds for the computational complexity of this approximation, characterized by the size and depth of the network, explicitly in $d$ and $\varepsilon$. Our results show the advantage of the adaptive method of approximation by deep ReLU neural networks over the nonadaptive one.
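
The paper's construction is dimension-dependent and beyond a short sketch, but the univariate idea is easy to illustrate: a one-hidden-layer ReLU network can realize a piecewise-linear interpolant exactly, and for an $\alpha$-Hölder function the interpolation error on a uniform grid with $n$ cells decays like $n^{-\alpha}$. The NumPy sketch below is not the authors' construction; the helper name `relu_interpolant` and the test function are choices made here.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_interpolant(f, n):
    """One-hidden-layer ReLU network realizing the piecewise-linear
    interpolant of f on the uniform grid 0 = t_0 < ... < t_n = 1."""
    t = np.linspace(0.0, 1.0, n + 1)
    y = f(t)
    slopes = np.diff(y) / np.diff(t)      # slope of the interpolant on each cell
    c = np.diff(slopes, prepend=0.0)      # c_0 = slope_0, c_i = slope_i - slope_{i-1}
    bias = y[0]

    def p(x):
        # p(x) = f(0) + sum_i c_i * ReLU(x - t_i); exact at the knots,
        # sup-norm error O(n^{-alpha}) for an alpha-Hoelder f
        return bias + relu(np.subtract.outer(x, t[:-1])) @ c

    return p

# Usage: a Hoelder function with alpha = 1/2 (hypothetical test case)
f = lambda x: np.sqrt(np.abs(x - 0.3))
p = relu_interpolant(f, 64)
x = np.linspace(0.0, 1.0, 2001)
print("sup-norm error:", np.abs(f(x) - p(x)).max())
```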


2021
Vol 42 (7)
pp. 1608-1621
Author(s):  
L. I. Vysotsky
A. V. Smirnov
E. E. Tyrtyshnikov

2019
Vol 40 (3)
pp. 2052-2075
Author(s):  
Takashi Goda

We study numerical integration of smooth functions defined over the $s$-dimensional unit cube. A recent work by Dick et al. (2019, Richardson extrapolation of polynomial lattice rules. SIAM J. Numer. Anal., 57, 44–69) introduced so-called extrapolated polynomial lattice rules, which achieve the almost optimal rate of convergence for numerical integration and can be constructed by the fast component-by-component search algorithm at smaller computational cost than interlaced polynomial lattice rules. In this paper we prove that, instead of polynomial lattice point sets, truncated higher-order digital nets and sequences can be used within the same algorithmic framework to explicitly construct good quadrature rules achieving the almost optimal rate of convergence. The major advantage of our new approach over original higher-order digital nets is that we can significantly reduce the precision of the points, i.e., the number of digits necessary to describe each quadrature node. This finding has a practically useful implication when either the number of points or the smoothness parameter is so large that original higher-order digital nets require more digits than finite-precision floating-point representations provide.
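
Polynomial lattice rules and higher-order digital nets need some machinery to construct; as a minimal sketch of the general shape of such equal-weight quadrature rules, the following NumPy code evaluates an ordinary rank-1 lattice rule (a simpler relative of the rules studied here, not the paper's construction) with a classical two-dimensional Fibonacci generating vector.

```python
import numpy as np

def rank1_lattice(n, z):
    """Node set of a rank-1 lattice rule: x_k = frac(k * z / n), k = 0..n-1."""
    k = np.arange(n)[:, None]
    return (k * np.asarray(z)[None, :]) / n % 1.0

def qmc_integrate(f, n, z):
    """Equal-weight quasi-Monte Carlo estimate of the integral over [0,1]^s."""
    return np.mean(f(rank1_lattice(n, z)))

# Usage: 2D Fibonacci lattice with n = F_16 = 987 points and z = (1, F_15 = 610);
# the test integrand has exact integral 1 over the unit square
f = lambda x: np.prod(1.0 + (x - 0.5), axis=1)
print(qmc_integrate(f, 987, (1, 610)))
```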


2017
Vol 15 (1)
pp. 1568-1577
Author(s):  
Zhihua Zhang

Due to discontinuity on the boundary, traditional Fourier approximation does not work efficiently for $d$-variate functions on $[0,1]^d$. In this paper, we give a recursive method to reconstruct/approximate functions on $[0,1]^d$ well. The main process is as follows: we reconstruct a $d$-variate function by using all of its $(d-1)$-variate boundary functions and a few $d$-variate Fourier coefficients. We reconstruct each $(d-1)$-variate boundary function given in the preceding reconstruction by using all of its $(d-2)$-variate boundary functions and a few $(d-1)$-variate Fourier coefficients. Continuing this procedure, we finally reconstruct each univariate boundary function in the preceding reconstruction by using the values of the function at the two endpoints and a few univariate Fourier coefficients. Our recursive method can reconstruct multivariate functions on the unit cube with much smaller error than traditional Fourier methods.
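
The base case of the recursion is easy to make concrete: subtract from a univariate function the linear interpolant of its two endpoint values, so the residual vanishes at the boundary and its sine series converges quickly; a few sine coefficients then recover the function well. A minimal NumPy sketch follows (the helper name `reconstruct_1d` and the trapezoidal quadrature are choices made here, not the paper's):

```python
import numpy as np

def reconstruct_1d(g, m, quad_pts=4097):
    """Univariate base case of the recursion: rebuild g on [0, 1] from its
    two endpoint values plus m sine coefficients of the boundary-corrected
    residual r(x) = g(x) - (g(0) + (g(1) - g(0)) x)."""
    x = np.linspace(0.0, 1.0, quad_pts)
    r = g(x) - (g(0.0) + (g(1.0) - g(0.0)) * x)   # vanishes at both endpoints
    k = np.arange(1, m + 1)
    # b_k = 2 * int_0^1 r(x) sin(k pi x) dx, via the composite trapezoidal rule
    w = np.full_like(x, x[1] - x[0])
    w[0] *= 0.5
    w[-1] *= 0.5
    b = 2.0 * (np.sin(np.pi * np.outer(k, x)) * r) @ w

    def approx(t):
        t = np.asarray(t, dtype=float)
        linear = g(0.0) + (g(1.0) - g(0.0)) * t
        return linear + np.sin(np.pi * np.outer(t, k)) @ b

    return approx

# Usage: a smooth but non-periodic function; the boundary correction removes
# the Gibbs effect that a plain truncated Fourier series of g would show
g = lambda x: np.exp(x)
gm = reconstruct_1d(g, 8)
t = np.linspace(0.0, 1.0, 1001)
print("max error:", np.abs(g(t) - gm(t)).max())
```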


Author(s):  
D. Dung
V. K. Nguyen
M. X. Thao
...

We investigate the computational complexity of deep ReLU neural networks for approximating functions in Hölder-Nikol'skii spaces of mixed smoothness $H_\infty^\alpha(\mathbb{I}^d)$ on the unit cube $\mathbb{I}^d:=[0,1]^d$. For any function $f\in H_\infty^\alpha(\mathbb{I}^d)$, we explicitly construct nonadaptive and adaptive deep ReLU neural networks whose output approximates $f$ with a prescribed accuracy $\varepsilon$, and prove dimension-dependent bounds for the computational complexity of this approximation, characterized by the size and depth of the network, explicitly in $d$ and $\varepsilon$. Our results show the advantage of the adaptive method of approximation by deep ReLU neural networks over the nonadaptive one.
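
The claimed advantage of adaptivity can already be seen in one dimension: a greedy refinement that bisects the cell with the largest local error concentrates breakpoints (equivalently, ReLU units) near a singularity, whereas a nonadaptive uniform grid must refine everywhere. The sketch below is a hypothetical illustration, not the authors' algorithm.

```python
import numpy as np

def adaptive_knots(f, eps, max_cells=100000):
    """Greedy adaptive grid on [0, 1]: repeatedly bisect the cell where f
    deviates most from its chord (measured at the midpoint) until the
    estimated error is at most eps."""
    t = np.array([0.0, 1.0])
    while t.size < max_cells:
        mid = 0.5 * (t[:-1] + t[1:])
        # midpoint deviation from the chord: a standard local error indicator
        err = np.abs(f(mid) - 0.5 * (f(t[:-1]) + f(t[1:])))
        i = np.argmax(err)
        if err[i] <= eps:
            return t
        t = np.sort(np.append(t, mid[i]))
    return t

# Usage: a function with one localized singularity; the adaptive grid piles
# knots near x = 0.3, while a uniform grid needs on the order of eps^(-2)
# cells to reach the same accuracy for this alpha = 1/2 function
f = lambda x: np.sqrt(np.abs(x - 0.3))
knots = adaptive_knots(f, 1e-3)
print("adaptive knots used:", knots.size)
```

The resulting knot vector converts to ReLU units exactly as in the nonadaptive sketch given after the earlier listing of this paper.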


1966
Vol 25
pp. 227-229
Author(s):  
D. Brouwer

The paper presents a summary of the results obtained by C. J. Cohen and E. C. Hubbard, who established by numerical integration that a resonance relation exists between the orbits of Neptune and Pluto. The problem may be explored further by approximating the motion of Pluto by that of a particle with negligible mass in the three-dimensional circular restricted three-body problem. The mass of Pluto and the eccentricity of Neptune's orbit are ignored in this approximation. Significant features of the problem appear to be the presence of two critical arguments and the possibility that the orbit may be related to a periodic orbit of the third kind.
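
For readers who want to experiment with the model described above, here is a minimal SciPy sketch of the planar (rather than three-dimensional) circular restricted three-body problem in the rotating frame; the mass ratio is roughly Sun-Neptune, and the initial state is illustrative, not Pluto's.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 5.15e-5  # rough Sun-Neptune mass ratio (illustrative value)

def cr3bp(t, s, mu=MU):
    """Planar circular restricted three-body problem in the rotating frame.
    State s = (x, y, vx, vy); the primaries sit at (-mu, 0) and (1 - mu, 0)."""
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y)            # distance to the Sun
    r2 = np.hypot(x - 1.0 + mu, y)      # distance to Neptune
    ax = x + 2.0 * vy - (1.0 - mu) * (x + mu) / r1**3 - mu * (x - 1.0 + mu) / r2**3
    ay = y - 2.0 * vx - (1.0 - mu) * y / r1**3 - mu * y / r2**3
    return [vx, vy, ax, ay]

# Usage: integrate a hypothetical exterior orbit (NOT Pluto's actual state)
# for several revolutions; units are normalized so that the Sun-Neptune
# distance and Neptune's angular velocity are both 1
s0 = [1.3, 0.0, 0.0, -0.42]
sol = solve_ivp(cr3bp, (0.0, 50.0), s0, rtol=1e-10, atol=1e-10)
print("final state:", sol.y[:, -1])
```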

