Monte Carlo Strategies for Exploiting Fairness in N-player Ultimatum Games

Author(s): Garrison W. Greenwood, Daniel Ashlock


1974 · Vol 22 · pp. 307
Author(s): Zdenek Sekanina

Abstract: It is suggested that the outbursts of Periodic Comet Schwassmann-Wachmann 1 are triggered by impacts of interplanetary boulders on the surface of the comet’s nucleus. The existence of a cloud of such boulders in interplanetary space was predicted by Harwit (1967). We have used this hypothesis to calculate the characteristics of the outbursts – such as their mean rate, optically important dimensions of ejected debris, expansion velocity of the ejecta, maximum diameter of the expanding cloud before it fades out, and the magnitude of the accompanying orbital impulse – and found them reasonably consistent with observations, if the solid constituent of the comet is assumed to take the form of a porous matrix of low-strength meteoric material. A Monte Carlo method was applied to simulate the distributions of impacts, their directions, and impact velocities.
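The Monte Carlo step described above, drawing impact directions and speeds at random, can be sketched as follows. The isotropic direction law, the exponential speed distribution, and the 10 km/s mean speed are illustrative assumptions for this sketch, not the paper's actual input distributions.

```python
import math
import random

def sample_impacts(n, v_mean=10.0, seed=0):
    """Sample hypothetical boulder impacts on a comet nucleus.

    Each impact gets an isotropic direction over the exposed hemisphere
    (uniform azimuth, cos(zenith) uniform on [0, 1]) and a speed drawn
    from an exponential distribution with mean v_mean km/s.
    """
    rng = random.Random(seed)
    impacts = []
    for _ in range(n):
        phi = rng.uniform(0.0, 2.0 * math.pi)       # azimuth
        theta = math.acos(rng.uniform(0.0, 1.0))    # zenith angle
        speed = rng.expovariate(1.0 / v_mean)       # km/s, hypothetical
        impacts.append((phi, theta, speed))
    return impacts

impacts = sample_impacts(1000)
mean_speed = sum(v for _, _, v in impacts) / len(impacts)
```

From such a sample one can tabulate the quantities the abstract mentions (mean impact rate, velocity statistics, and so on) once physical input distributions are chosen.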


1988 · Vol 102 · pp. 79-81
Author(s): A. Goldberg, S.D. Bloom

Abstract: Closed expressions for the first, second, and (in some cases) the third moment of atomic transition arrays now exist. Recently a method has been developed for reaching very high moments (up to the 12th and beyond) in cases where a “collective” state-vector (i.e. a state-vector containing the entire electric dipole strength) can be created from each eigenstate in the parent configuration. Both of these approaches give exact results. Herein we describe a statistical (or Monte Carlo) approach which requires only one representative state-vector |RV> for the entire parent manifold to get estimates of transition moments of high order. The representation is achieved through the random amplitudes associated with each basis vector making up |RV>. This also gives rise to the dispersion characterizing the method, which has been applied to a system (in the M shell) with ≈250,000 lines, for which we have calculated up to the 5th moment. It turns out that the dispersion in the moments decreases with the size of the manifold, making application to very big systems statistically advantageous. A discussion of the method and these dispersion characteristics will be presented.
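The core statistical idea, one random-amplitude vector standing in for an entire manifold, is the same trick as Hutchinson-style stochastic trace estimation. The sketch below estimates a moment tr(A^k)/N of a Hermitian operator from random ±1 amplitudes; the 200×200 random symmetric matrix is a stand-in, not the atomic transition operator of the paper.

```python
import numpy as np

def moment_estimate(A, k, n_vectors=1, seed=0):
    """Estimate the k-th moment tr(A^k)/N of a Hermitian matrix A using
    random-amplitude vectors (Hutchinson-style stochastic trace estimation):
    E[v^T A^k v] = tr(A^k) when the amplitudes v_i are independent +/-1."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    est = 0.0
    for _ in range(n_vectors):
        v = rng.choice([-1.0, 1.0], size=N)  # random-amplitude vector |RV>
        w = v.copy()
        for _ in range(k):                   # apply A k times
            w = A @ w
        est += (v @ w) / N
    return est / n_vectors

# Tiny check against the exact 2nd moment of a random symmetric matrix:
# for symmetric A, tr(A^2) equals the sum of squared entries.
rng = np.random.default_rng(1)
B = rng.standard_normal((200, 200))
A = (B + B.T) / 2
exact = float(np.sum(A * A)) / 200
approx = moment_estimate(A, 2, n_vectors=50)
```

Consistent with the dispersion behaviour the abstract reports, the variance of this estimator shrinks as the dimension of the manifold grows.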


Author(s): Ryuichi Shimizu, Ze-Jun Ding

Monte Carlo simulation has become the most powerful tool for describing electron scattering in solids, leading to a more comprehensive understanding of the complicated mechanisms that generate the various types of signals used in microbeam analysis.

The present paper proposes a practical model for the Monte Carlo simulation of the scattering processes of a penetrating electron and the generation of slow secondaries in solids. The model is based on the combined use of Gryzinski’s inner-shell electron excitation function and the dielectric function, which accounts for the valence-electron contribution to inelastic scattering, while cross-sections derived by the partial-wave expansion method are used to describe elastic scattering. One improvement gained from this elastic scattering cross-section is the successful description of the anisotropic angular distribution of electrons elastically backscattered from Au in the low-energy region, shown in Fig. 1. Fig. 1(a) shows the elastic cross-section of a 600 eV electron for a single Au atom, clearly indicating that the angular distribution is no longer smooth, as the Rutherford scattering formula would predict, but has so-called lobes appearing at large scattering angles.
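For contrast with the partial-wave results, the smooth Rutherford baseline is easy to sample in a Monte Carlo step. The sketch below uses the standard screened-Rutherford sampling formula (with Joy's parameterization of the screening factor, α = 3.4×10⁻³ Z^0.67/E); it reproduces the featureless angular law, not the lobed partial-wave cross-section of the paper.

```python
import math
import random

def rutherford_angle(E_keV, Z, rng):
    """Sample an elastic scattering angle (radians) from the screened
    Rutherford cross-section via the standard inversion formula
    cos(theta) = 1 - 2*alpha*R / (1 + alpha - R), R uniform on [0, 1)."""
    alpha = 3.4e-3 * Z ** 0.67 / E_keV   # screening parameter (Joy)
    R = rng.random()
    cos_t = 1.0 - 2.0 * alpha * R / (1.0 + alpha - R)
    return math.acos(max(-1.0, min(1.0, cos_t)))

rng = random.Random(42)
# 600 eV electrons on Au (Z = 79), the case of Fig. 1(a)
angles = [rutherford_angle(0.6, 79, rng) for _ in range(10000)]
frac_large = sum(a > math.pi / 2 for a in angles) / len(angles)
```

A histogram of `angles` falls off smoothly with angle; the large-angle lobes seen for Au at low energy appear only with the partial-wave cross-sections.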


Author(s): D. R. Liu, S. S. Shinozaki, R. J. Baird

The epitaxially grown (GaAs)Ge thin film has attracted much interest because it is one of the metastable alloys of III-V compound semiconductors with germanium and a possible candidate for optoelectronic applications. It is important to be able to determine the composition of the film accurately, in particular whether or not the GaAs component is stoichiometric, but x-ray energy dispersive spectrometry (EDS) cannot readily meet this need. The thickness of the film is usually about 0.5-1.5 μm. If Kα peaks are used for quantification, the accelerating voltage must be more than 10 kV for these peaks to be excited. At this voltage the generation depth of x-ray photons approaches 1 μm, as evidenced by a Monte Carlo simulation and by actual x-ray intensity measurements discussed below. If a lower voltage is used to reduce the generation depth, the L peaks must be used instead. These L peaks, however, merge into one broad hump, simply because the atomic numbers of the three elements are relatively small and close together and the EDS energy resolution is limited.
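The claim that the generation depth approaches 1 μm at 10 kV can be cross-checked with the simple Kanaya-Okayama range formula, an analytic estimate rather than the Monte Carlo simulation the authors ran. The mean atomic weight, mean atomic number, and density for GaAs below are the standard values, used here for illustration.

```python
def kanaya_okayama_range(A, Z, rho, E_keV):
    """Kanaya-Okayama electron range in micrometres:
    R = 0.0276 * A * E^1.67 / (Z^0.89 * rho),
    with A in g/mol, rho in g/cm^3, and E in keV."""
    return 0.0276 * A * E_keV ** 1.67 / (Z ** 0.89 * rho)

# GaAs: mean A ~72.3 g/mol, mean Z ~32, density 5.32 g/cm^3
r_10kV = kanaya_okayama_range(72.3, 32, 5.32, 10.0)  # ~0.8 um
r_5kV = kanaya_okayama_range(72.3, 32, 5.32, 5.0)
```

The ~0.8 μm range at 10 kV is consistent with the quoted generation depth; the steep E^1.67 dependence shows why lowering the voltage shrinks the sampled depth so quickly.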


Author(s): Makoto Shiojiri, Toshiyuki Isshiki, Tetsuya Fudaba, Yoshihiro Hirota

In a hexagonal Se crystal each atom is covalently bound to two others to form an endless spiral chain, and in an Sb crystal each atom is bound to three others to form an extended puckered sheet. Such chains and sheets may be regarded as one- and two-dimensional molecules, respectively. In this paper we investigate the amorphous-state structures of these elements and their crystallization.

HRTEM and ED images of vacuum-deposited amorphous Se and Sb films were taken with a JEM-200CX electron microscope (Cs = 1.2 mm). The structure models of the amorphous films were constructed on a computer by a Monte Carlo method. Generated atoms were deposited one after another on a 2 nm × 2 nm area, accepted only when they fulfilled the binding condition, to form a film 5 nm thick (Fig. 1a-1c). A previous computer program was improved so as to model the actual film formation more realistically. Radial distribution function (RDF) curves, ED intensities, and HRTEM images were calculated for the constructed structure models and compared with the observed ones.
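A toy version of this accept/reject film growth can be sketched as follows. The "binding condition" here is a simple distance window (within one bond length of an existing atom, no overlaps); the 0.22-0.26 nm window is an illustrative choice near the Se-Se bond length, not the paper's actual criterion, which also encodes the chain coordination.

```python
import math
import random

def grow_film(n_atoms, box=(2.0, 2.0, 5.0), r_min=0.22, r_max=0.26,
              seed=0, max_tries=200_000):
    """Toy Monte Carlo film growth: propose atoms at random positions in a
    box (nm) and accept only those whose nearest neighbour lies between
    r_min and r_max, i.e. bonded but not overlapping."""
    rng = random.Random(seed)
    # Seed atom on the substrate plane z = 0
    atoms = [(rng.uniform(0, box[0]), rng.uniform(0, box[1]), 0.0)]
    tries = 0
    while len(atoms) < n_atoms and tries < max_tries:
        tries += 1
        p = (rng.uniform(0, box[0]), rng.uniform(0, box[1]),
             rng.uniform(0, box[2]))
        dists = [math.dist(p, a) for a in atoms]
        if min(dists) >= r_min and min(dists) <= r_max:
            atoms.append(p)
    return atoms

film = grow_film(100)
```

RDF curves and simulated images would then be computed from the accepted coordinates, as in the paper's comparison with the observed data.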


Author(s): Matthew T. Johnson, Ian M. Anderson, Jim Bentley, C. Barry Carter

Energy-dispersive X-ray spectrometry (EDS) performed at low (≤ 5 kV) accelerating voltages in the SEM has the potential to provide quantitative microanalytical information with a spatial resolution of ∼100 nm. In the present work, EDS analyses were performed on magnesium ferrite spinel [(MgxFe1−x)Fe2O4] dendrites embedded in a MgO matrix, as shown in Fig. 1. The spatial resolution of X-ray microanalysis at conventional accelerating voltages is insufficient for the quantitative analysis of these dendrites, which have widths of the order of a few hundred nanometers, without deconvolution of contributions from the MgO matrix. However, Monte Carlo simulations indicate that the interaction volume for MgFe2O4 is ∼150 nm at 3 kV accelerating voltage, and is therefore small enough to analyze the dendrites without matrix contributions.

Single-crystal {001}-oriented MgO was reacted with hematite (Fe2O3) powder for 6 h at 1450°C in air and furnace cooled. The specimen was then cleaved to expose a clean cross-section suitable for microanalysis.
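The ∼150 nm interaction volume can be sanity-checked analytically with the Kanaya-Okayama range formula; the atom-averaged A and Z and the density for MgFe2O4 below are approximate values chosen for illustration. The analytic estimate comes out around 0.1 μm, the same order as the Monte Carlo figure quoted above.

```python
def ko_range_um(A, Z, rho, E_keV):
    """Kanaya-Okayama electron range (um): 0.0276*A*E^1.67/(Z^0.89*rho),
    with A in g/mol, rho in g/cm^3, and E in keV."""
    return 0.0276 * A * E_keV ** 1.67 / (Z ** 0.89 * rho)

# MgFe2O4: atom-averaged A ~28.6 g/mol, atom-averaged Z ~13.7,
# density ~4.5 g/cm^3 (approximate values)
r_3kV = ko_range_um(28.6, 13.7, 4.5, 3.0)  # ~0.1 um
```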


Author(s): John C. Russ

Monte Carlo programs are well recognized for their ability to model electron beam interactions with samples and to incorporate boundary conditions, such as compositional or surface variations, that are difficult to handle analytically. These methods have been especially successful for modelling X-ray emission and the backscattering of high-energy electrons. Secondary electron emission has proven somewhat more difficult, since the diffusion of the generated secondaries to the surface is strongly geometry dependent and requires analytical calculations as well as material parameters. Modelling of secondary electron yield within a Monte Carlo framework has been done using multiple-scattering programs, but these are not readily adapted to the moderately complex geometries of samples such as microelectronic devices.

This paper reports results using a different approach, in which simplifying assumptions are made to permit direct and easy estimation of the secondary electron signal from samples of arbitrary complexity. The single-scattering program which performs the basic Monte Carlo simulation (and is also used for backscattered-electron and EBIC simulation) allows multiple regions to be defined within the sample, each bounded by a polygon of any number of sides. Each region may be given any elemental composition in atomic percent. In addition to the regions comprising the primary structure of the sample, a series of thin regions is defined along the surface(s), in which the total energy loss of the primary electrons is summed. This energy loss is assumed to be proportional to the secondary electron signal that would be emitted from the sample. The only adjustable variable is the thickness of the region, which plays the same role as the mean free path of the secondary electrons in an analytical calculation. It is treated as an empirical factor, similar in many respects to the λ and ε parameters in the Joy model.
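The essential simplification, sum the primary electrons' energy loss inside a thin surface layer and take that as proportional to the SE signal, can be sketched with a deliberately crude random walk. The step length, deflection law, and Bethe-like stopping power below are schematic stand-ins, not the single-scattering cross-sections of the actual program.

```python
import random

def se_signal(n_primaries=400, E0=5.0, t_se=0.005, seed=0):
    """Toy estimate of the secondary electron signal: follow primaries
    through a semi-infinite solid with a crude random walk and sum the
    energy they lose while inside a surface layer of thickness t_se (um).
    That summed loss is taken as proportional to the emitted SE signal."""
    rng = random.Random(seed)
    surface_loss = 0.0
    for _ in range(n_primaries):
        E, z, cz = E0, 0.0, 1.0             # energy (keV), depth (um), dir cosine
        while E > 0.2 and z >= 0.0:          # stop when slow or backscattered out
            step = 0.02 * (E / E0) ** 1.5    # schematic mean free path (um)
            dE = 0.5 * step * E0 / E         # schematic dE/ds ~ 1/E stopping
            if z < t_se:
                surface_loss += min(dE, E)   # loss deposited in the SE layer
            z += cz * step
            E -= dE
            # crude angular deflection of the direction cosine at each step
            cz = max(-1.0, min(1.0, cz + rng.uniform(-0.5, 0.5)))
    return surface_loss

signal = se_signal()
```

As in the paper, the layer thickness `t_se` is the one adjustable variable: thickening it monotonically increases the summed loss, playing the role of the secondary-electron mean free path.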


2019 · Vol 62 (3) · pp. 577-586
Author(s): Garnett P. McMillan, John B. Cannon

Purpose: This article presents a basic exploration of Bayesian inference to inform researchers unfamiliar with this type of analysis of the many advantages this readily available approach provides.

Method: First, we demonstrate the development of Bayes' theorem, the cornerstone of Bayesian statistics, into an iterative process of updating priors. Working with a few assumptions, including normality and conjugacy of the prior distribution, we show how one would calculate the posterior distribution using the prior distribution and the likelihood of the parameter. Next, we move to an example from auditory research by considering the effect of sound therapy on reducing the perceived loudness of tinnitus. In this case, as in most real-world settings, we turn to Markov chain simulations because the assumptions allowing for easy calculations no longer hold. Using Markov chain Monte Carlo methods, we illustrate several analysis solutions given by a straightforward Bayesian approach.

Conclusion: Bayesian methods are widely applicable and can help scientists overcome analysis problems, including how to incorporate existing information, run interim analyses, achieve consensus through measurement, and, most importantly, interpret results correctly.

Supplemental Material: https://doi.org/10.23641/asha.7822592
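Both regimes the abstract contrasts, the closed-form conjugate update and the Markov chain Monte Carlo fallback, can be sketched on one toy example. The prior, variances, and sample size below are invented for illustration (a hypothetical tinnitus-loudness change in dB), not data from the article.

```python
import math
import random

def normal_update(prior_mean, prior_var, data_mean, data_var, n):
    """Conjugate normal-normal update with known data variance:
    the posterior is Normal with precision-weighted mean."""
    prior_prec = 1.0 / prior_var
    data_prec = n / data_var
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * data_mean)
    return post_mean, post_var

def metropolis(logpost, x0, n_samples, step=3.0, seed=0):
    """Minimal random-walk Metropolis sampler (the MCMC workhorse)."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(n_samples):
        y = x + rng.gauss(0.0, step)
        lq = logpost(y)
        if math.log(rng.random()) < lq - lp:   # accept with prob min(1, ratio)
            x, lp = y, lq
        samples.append(x)
    return samples

# Hypothetical numbers: Normal(0, 25) prior on the mean loudness change,
# observed mean of -6 dB with per-observation variance 100 over n = 25 patients
post_mean, post_var = normal_update(0.0, 25.0, -6.0, 100.0, 25)

def logpost(theta):
    # log prior + log likelihood of the sample mean (up to a constant)
    return -theta ** 2 / (2 * 25.0) - 25 * (theta + 6.0) ** 2 / (2 * 100.0)

draws = metropolis(logpost, 0.0, 20000)
mcmc_mean = sum(draws[2000:]) / len(draws[2000:])  # discard burn-in
```

When conjugacy holds, the sampler merely reproduces the closed-form posterior (`mcmc_mean` ≈ `post_mean`); its value, as the article notes, is that it still works when the easy-calculation assumptions fail.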

