Development of Grid Depression Model for GalaxyCosmo-S

Author(s):  
Hiroki Koike ◽  
Kazuki Kirimura ◽  
Kazuya Yamaji ◽  
Shinya Kosaka ◽  
Hideki Matsumoto

An efficient grid depression reconstruction model for the axial assembly power distribution was developed for the MHI nuclear design code system GalaxyCosmo-S. The objective of this paper is to present the background, methodology, and application of the new model in GalaxyCosmo-S. To account for the grid depression effect on the homogeneous axial power distribution obtained from the 3D nodal core calculation, the new model employs the concept of the pin-power reconstruction model widely used in modern core design codes. In the new model, the heterogeneous axial assembly power distribution is calculated by synthesizing a grid form function with the homogeneous axial power distribution from the nodal calculation. The form function is prepared in advance by fitting local grid depression data derived from the measured axial thimble reaction rates at the grid positions. By incorporating the measured data, the form function reflects precise grid depression information. The present study shows that the depth of the form function depends on burnup, so the function is prepared for each fuel type and axial grid position. To confirm the applicability of the present method to existing PWRs, the axial power distribution predicted by GalaxyCosmo-S was compared with data measured by the movable detector (M/D). Good agreement was confirmed, with no specific trend with respect to burnup. In addition, the difference between predicted and measured axial power distributions was statistically analyzed for multiple plants, cycles, and burnup conditions. The results confirm that the systematic over- or under-estimation of the power distribution observed with the grid-homogenized model is reduced by the grid depression model, making it suitable for 3D power distribution analysis and FQ uncertainty evaluation.
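As a minimal illustration of the synthesis step described above (not the actual GalaxyCosmo-S implementation), the sketch below multiplies a homogeneous nodal axial power shape by a burnup-dependent grid form function at the grid elevations and renormalizes. The function names, the Gaussian depression shape, the exponential burnup fit, and all numerical values are assumptions for illustration only.

```python
import numpy as np

def grid_form_function(z, grid_z, width, depth):
    """Hypothetical local depression factor around one grid elevation (1.0 far from the grid)."""
    return 1.0 - depth * np.exp(-0.5 * ((z - grid_z) / width) ** 2)

def reconstruct_axial_power(z, p_hom, grid_positions, burnup, width=2.0):
    """Synthesize a heterogeneous axial power shape from the homogeneous nodal shape.

    The depth(burnup) relation below is an assumed fit; in the paper the form function
    is fitted per fuel type and axial grid position from measured thimble reaction rates.
    """
    depth = 0.06 * np.exp(-burnup / 30.0) + 0.02    # assumed burnup dependency of the depression depth
    p_het = p_hom.copy()
    for gz in grid_positions:
        p_het *= grid_form_function(z, gz, width, depth)
    # Renormalize so the assembly-integrated power of the nodal solution is preserved.
    return p_het * p_hom.sum() / p_het.sum()

z = np.linspace(0.0, 366.0, 200)                    # axial mesh [cm], illustrative core height
p_hom = 1.0 + 0.3 * np.sin(np.pi * z / 366.0)       # smooth homogeneous nodal shape (illustrative)
p_het = reconstruct_axial_power(z, p_hom, grid_positions=[61, 122, 183, 244, 305], burnup=10.0)
```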

2021 ◽  
pp. 174498712110161
Author(s):  
Ann-Marie Cannaby ◽  
Vanda Carter ◽  
Thomas Hoe ◽  
Stephenson Strobel ◽  
Elena Ashtari Tafti ◽  
...  

Background The association between the nurse-to-patient ratio and patient outcomes has been extensively investigated. Real time location systems can potentially measure the actual amount of bedside contact patients receive. Aims This study aimed to determine the feasibility and accuracy of real time location systems as a measure of the amount of contact time that nurses spend in the patients’ bed space. Methods An exploratory, observational, feasibility study was designed to compare the accuracy of data collection between manual observation performed by a researcher and real time location system data capture. Four nurses participated in the study, which took place in 2019 on two hospital wards. They were observed by a researcher while carrying out their work activities for a total of 230 minutes. The amount of time the nurses spent in the patients’ bed space was recorded in 10-minute blocks, and the real time location system data were extracted for the same nurse at the time of observation. Data were then analysed descriptively and graphically, using a kernel density plot and a scatter plot, for the level of agreement between the observed and the real time location system measurements. Results The difference (in minutes) between the researcher-observed and real time location system measurements for the 23 10-minute observation blocks ranged from zero (complete agreement) to 5 minutes. The mean difference between the researcher-observed time and the real time location system time in the patients’ bed space was one minute (10% of the block time). On average, the real time location system measured a longer time in the bed space than the researcher observed. Conclusions There were good levels of agreement between researcher observation and real time location system data on the time nurses spend at the bedside. This study confirms that it is feasible to use real time location systems as an accurate measure of the amount of time nurses spend at the patients’ bedside.
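A minimal sketch of the kind of block-by-block agreement analysis described above, with entirely hypothetical data (the study's dataset and tooling are not specified in the abstract):

```python
import numpy as np

# Hypothetical minutes in the bed space per 10-minute observation block
observed = np.array([8, 5, 10, 2, 7, 9, 4, 6])   # researcher observation
rtls     = np.array([9, 5, 10, 4, 8, 9, 5, 6])   # real time location system

diff = rtls - observed
print("absolute difference range (min):", np.abs(diff).min(), "to", np.abs(diff).max())
print("mean difference (min):", diff.mean())      # positive => RTLS records longer time than observed
```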


2020 ◽  
Vol 10 (23) ◽  
pp. 8660
Author(s):  
Lu Wang ◽  
Dongkai Zhang ◽  
Jiahao Guo ◽  
Yuexing Han

Detecting image anomalies automatically in industrial scenarios can improve economic efficiency, but the scarcity of anomalous samples increases the challenge of the task. Recently, autoencoders have been widely used for image anomaly detection without using anomalous images during training. However, it is hard to determine the proper dimensionality of the latent space, which often leads to unwanted reconstruction of the anomalous parts. To solve this problem, we propose a novel method based on the autoencoder. In this method, the latent space of the autoencoder is estimated using a discrete probability model. With the estimated probability model, the anomalous components in the latent space can be well excluded and undesirable reconstruction of the anomalous parts can be avoided. Specifically, we first adopt VQ-VAE as the reconstruction model to obtain a discrete latent space of normal samples. Then, PixelSNAIL, a deep autoregressive model, is used to estimate the probability model of the discrete latent space. In the detection stage, the autoregressive model identifies the parts of the input latent space that deviate from the distribution of normal data. The deviating codes are then resampled from that distribution and decoded to yield a restored image that is closest to the anomalous input. The anomaly is then detected by comparing the difference between the restored image and the anomalous image. Our proposed method is evaluated on the high-resolution industrial inspection image dataset MVTec AD, which consists of 15 categories. The results show that the AUROC of the model improves by 15% over the autoencoder, and the method also yields competitive performance compared with state-of-the-art methods.
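A schematic sketch of the detection stage described above (not the authors' code): the model handles, thresholds, and tensor shapes are assumed, and a trained VQ-VAE and autoregressive prior are taken as given.

```python
import torch

def restore_and_score(image, vqvae, prior, p_threshold=0.01):
    """Latent-space resampling sketch (assumed interfaces, not the authors' implementation).

    Assumed interfaces:
      vqvae.encode(image) -> (H, W) LongTensor of codebook indices
      vqvae.decode(codes) -> reconstructed image tensor with the same shape as `image`
      prior.logits(codes) -> (H, W, K) logits modelling the distribution of codes in normal data
    """
    codes = vqvae.encode(image)                                   # discrete latent map
    probs = torch.softmax(prior.logits(codes), dim=-1)            # per-position code probabilities
    p_code = probs.gather(-1, codes.unsqueeze(-1)).squeeze(-1)    # likelihood of each observed code

    # Codes that are unlikely under the normal-data prior are treated as anomalous,
    # resampled from the prior, and decoded into an anomaly-free restoration.
    anomalous = p_code < p_threshold
    resampled = torch.distributions.Categorical(probs=probs).sample()
    restored_codes = torch.where(anomalous, resampled, codes)

    restored = vqvae.decode(restored_codes)
    anomaly_map = (restored - image).abs().mean(dim=0)            # per-pixel difference as the anomaly score
    return restored, anomaly_map
```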


2002 ◽  
Vol 93 (1) ◽  
pp. 233-241 ◽  
Author(s):  
Jeff K. Trimmer ◽  
Jean-Marc Schwarz ◽  
Gretchen A. Casazza ◽  
Michael A. Horning ◽  
Nestor Rodriguez ◽  
...  

We evaluated the hypothesis that coordinated adjustments in absolute rates of gluconeogenesis (GNGab) and hepatic glycogenolysis (Gly) would maintain euglycemia and match glucose production (GP) to peripheral utilization during rest and exercise. Specifically, we evaluated the extent to which gradations in exercise power output would affect the contribution of GNGab to GP. For these purposes, we employed mass isotopomer distribution analysis (MIDA) and isotope-dilution techniques on eight postabsorptive (PA) endurance-trained men during 90 min of leg cycle ergometry at 45 and 65% of peak O2 consumption (V̇O2 peak; moderate and hard intensities, respectively) and the preceding rest period. GP was constant in resting subjects, whereas the fraction from GNG (fGNG) increased over time during rest (22.3 ± 0.9% at 11.25 h PA vs. 25.6 ± 0.9% at 12.0 h PA, P < 0.05). In the transition from rest to exercise, GP increased in an intensity-dependent manner (rest, 2.0 ± 0.1; 45%, 4.0 ± 0.4; 65%, 5.84 ± 0.64 mg·kg⁻¹·min⁻¹, P < 0.05), although the glucose rate of disappearance exceeded the rate of appearance during the last 30 min of exercise at 65% V̇O2 peak. Compared with rest, increases in GP were sustained by 92 and 135% increments in GNGab during moderate- and hard-intensity exercise, respectively. Correspondingly, Gly (calculated as the difference between GP and MIDA-measured GNGab) increased 100 and 203% over rest during the two exercise intensities. During moderate-intensity exercise, fGNG was the same as at rest; however, during the harder exercise fGNG decreased significantly to account for only 21% of GP. The highest sustained GNGab observed in these trials on PA men was 1.24 ± 0.3 mg·kg⁻¹·min⁻¹. We conclude that, after an overnight fast, 1) absolute GNG rates increased with intensity of effort despite a reduced fGNG at 65% V̇O2 peak, 2) during exercise Gly is more responsible than GNGab for maintaining GP, and 3) in 12-h fasted men, neither increased Gly nor increased GNGab, nor their combination, was able to maintain euglycemia during prolonged hard (65% V̇O2 peak) exercise.
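As a quick arithmetic check of the relationships reported above (values taken from the abstract, rounded; purely illustrative):

```python
# Gly is reported as GP minus MIDA-measured gluconeogenesis, and fGNG as the
# gluconeogenic fraction of GP. Numbers below are the 65% VO2 peak values quoted above.
GP_hard = 5.84            # glucose production, mg·kg⁻¹·min⁻¹
fGNG_hard = 0.21          # fraction of GP from gluconeogenesis

GNGab_hard = fGNG_hard * GP_hard      # ≈ 1.23 mg·kg⁻¹·min⁻¹, consistent with the ~1.24 quoted
Gly_hard = GP_hard - GNGab_hard       # glycogenolysis ≈ 4.6 mg·kg⁻¹·min⁻¹
print(round(GNGab_hard, 2), round(Gly_hard, 2))
```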


Author(s):  
Jian Zhang ◽  
Dick Beetham ◽  
Grant Dellow ◽  
John X. Zhao ◽  
Graeme H. McVerry

A new empirical model has been developed for predicting liquefaction-induced lateral spreading displacement as a function of response spectral displacements and geotechnical parameters. Unlike the earlier model of Zhang and Zhao (2005), whose application was limited to Japan and California, the new model can potentially be applied anywhere ground shaking can be estimated (by using local strong-motion attenuation relations). The new model is applied in New Zealand, where the response spectral displacement is estimated using New Zealand strong-motion attenuation relations (McVerry et al. 2006). The accuracy of the new model is evaluated by comparing predicted lateral displacements with those measured from aerial photos or from the width of ground cracks at the Landing Road bridge, the James Street loop, the Whakatane Pony Club and the Edgecumbe road and rail bridges sites after the 1987 Edgecumbe earthquake. Results show that most prediction errors (defined as the ratio of the difference between the measured and predicted lateral displacements to the measured displacement) from the new model are less than 40%. When compared with earlier models (Youd et al. 2002, Zhang and Zhao 2005), the new model provides the lowest mean errors.
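For clarity, the error measure used in the comparison above reduces to a simple relative difference; a minimal sketch with hypothetical displacement values:

```python
def prediction_error(measured, predicted):
    """Relative error as defined in the abstract: (measured - predicted) / measured."""
    return (measured - predicted) / measured

# Hypothetical lateral-spreading displacements in metres
print(abs(prediction_error(measured=1.5, predicted=1.1)))   # ~0.27, i.e. within the 40% band
```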


2000 ◽  
Vol 84 (2) ◽  
pp. 233-245 ◽  
Author(s):  
Ole Lammert ◽  
Niels Grunnet ◽  
Peter Faber ◽  
Kirsten Schroll Bjørnsbo ◽  
John Dich ◽  
...  

Ten pairs of normal men were overfed by 5 MJ/d for 21 d with either a carbohydrate-rich or a fat-rich diet (C- and F-group). The two subjects in each pair were requested to follow each other throughout the day to ensure similar physical activity and were otherwise allowed to maintain normal daily life. The increase in body weight, fat free mass and fat mass showed great variation, the mean increases being 1·5 kg, 0·6 kg and 0·9 kg respectively. No significant differences between the C- and F-group were observed. Heat production during sleep did not change during overfeeding. The RQ during sleep was 0·86 and 0·78 in the C- and F-group respectively. The accumulated faecal loss of energy, DM, carbohydrate and protein was significantly higher in the C- compared with the F-group (30, 44, 69 and 51 % higher respectively), whereas the fat loss was the same in the two groups. N balance was not different between the C- and F-group and was positive. Fractional contribution from hepatic de novo lipogenesis, as measured by mass isotopomer distribution analysis after administration of [1-13C]acetate, was 0·20 and 0·03 in the C-group and the F-group respectively. Absolute hepatic de novo lipogenesis in the C-group was on average 211 g per 21 d. Whole-body de novo lipogenesis, as obtained by the difference between fat mass increase and dietary fat available for storage, was positive in six of the ten subjects in the C-group (mean 332 (SEM 191) g per 21 d). The change in plasma leptin concentration was positively correlated with the change in fat mass. Thus, fat storage during overfeeding of isoenergetic amounts of diets rich in carbohydrate or in fat was not significantly different, and carbohydrates seemed to be converted to fat by both hepatic and extrahepatic lipogenesis.
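A minimal numerical reading of the whole-body de novo lipogenesis definition used above (the figures are hypothetical and chosen only to illustrate the subtraction):

```python
# Whole-body de novo lipogenesis as defined in the abstract:
# fat-mass gain minus dietary fat available for storage (grams per 21 d).
fat_mass_gain = 900            # illustrative fat-mass increase (0.9 kg)
dietary_fat_available = 600    # hypothetical dietary fat available for storage

whole_body_dnl = fat_mass_gain - dietary_fat_available   # positive => net fat synthesis from non-fat substrates
print(whole_body_dnl)
```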


Author(s):  
Ville Valtavirta ◽  
Antti Rintala ◽  
Unna Lauranto

The Serpent Monte Carlo code and the Serpent-Ants two-step calculation chain are used to model the hot zero power physics tests described in the BEAVRS benchmark. The predicted critical boron concentrations, control rod group worths and isothermal temperature coefficients are compared between Serpent and Serpent-Ants as well as against the experimental measurements. Furthermore, radial power distributions in the unrodded and rodded core configurations are compared between Serpent and Serpent-Ants. In addition to providing results using a best-practices calculation chain, the effects of several simplifications or omissions in the group constant generation process on the results are estimated. Both the direct and two-step neutronics solutions provide results close to the measured values. Comparison between the measured data and the direct Serpent Monte Carlo solution yields RMS differences of 12.1 mg/kg, 25.1 × 10⁻⁵ and 0.67 × 10⁻⁵ K⁻¹ for boron, control rod worths and temperature coefficients, respectively. The two-step Serpent-Ants solution reaches a similar level of accuracy, with RMS differences of 17.4 mg/kg, 23.6 × 10⁻⁵ and 0.29 × 10⁻⁵ K⁻¹. The radial power distributions from Serpent and Serpent-Ants matched very well, with RMS and maximum pin power errors of 1.31% and 4.99%, respectively, in the unrodded core and 1.67% (RMS) and 8.39% (MAX) in the rodded core.
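For reference, the RMS differences quoted above are the usual root-mean-square of elementwise differences between two sets of values; a minimal sketch with hypothetical numbers (not benchmark data):

```python
import numpy as np

def rms_difference(predicted, measured):
    """Root mean square of the elementwise differences."""
    return float(np.sqrt(np.mean((np.asarray(predicted) - np.asarray(measured)) ** 2)))

# Hypothetical critical boron concentrations (mg/kg) for a few core states
serpent  = [975.0, 902.0, 850.0]
measured = [966.0, 915.0, 861.0]
print(rms_difference(serpent, measured))
```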


Author(s):  
Brice Jardiné ◽  
Olivier Bougeant ◽  
Maxime Pfeiffer

The EPR™ reactor features fixed in-core instrumentation, composed of 72 Self-Powered Neutron Detectors (SPNDs), that provides the online reconstruction of the core maximum Linear Power Density (LPD) and minimum Departure from Nucleate Boiling Ratio (DNBR). The Instrumentation and Control (I&C) systems of the EPR™ reactor use this online reconstruction in surveillance and protection functions. The on-site thresholds of those I&C functions have to take into account all the uncertainties affecting the online reconstruction of the core power distribution measured by the SPNDs. One of these uncertainties is the so-called Loss Of Representativeness (LOR). This uncertainty is defined as the difference between the physical LPD (respectively DNBR) value and the LPD (respectively DNBR) value computed from the SPND signals. The LOR parameter is mostly linked to the difference between the core power distribution at the time when the SPNDs are calibrated and the core power distribution at the time when their signals are used. For the DNBR, LOR also takes into account the use of a simplified on-line DNBR calculation algorithm. A statistical approach is used to define this uncertainty. The analysis is based on the evaluation of different sets of core power distributions generated through random drawings of the plant state parameters (including power level, core inlet temperature, pressure, control rod insertion and xenon distribution). The sets of core configurations representative of normal plant operation are used to define the surveillance thresholds. The sets representative of accidental transients (for which the LPD and DNBR protections are claimed) are used to define the protection thresholds. The analysis of LOR values provides an envelope probability law covering a minimum of 95% of the LOR values. In order to derive the on-site thresholds for LPD and DNBR, a Monte Carlo method is used to propagate the LOR probability law and the other uncertainties. Sensitivity calculations have been performed in order to cover a large spectrum of fuel loading patterns and to take into account SPND failures. In conclusion, this approach allows an optimized and robust set of thresholds to be defined for the on-line surveillance and protection system of the EPR™ reactor.
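A highly simplified sketch of the statistical flow described above; the sampling ranges, the LOR response model, and the retained percentile are all assumptions for illustration, not the EPR™ methodology:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical random drawings of plant state parameters
power   = rng.uniform(0.3, 1.0, n)          # relative power level
t_inlet = rng.normal(290.0, 2.0, n)         # core inlet temperature, °C
rod_ins = rng.uniform(0.0, 0.4, n)          # control rod insertion fraction
xenon   = rng.normal(0.0, 0.05, n)          # axial xenon imbalance

# Hypothetical LOR response: gap between the "physical" LPD and the SPND-reconstructed LPD
lor = (0.02 * rod_ins + 0.5 * np.abs(xenon) + 0.01 * (1.0 - power)
       + 0.002 * np.abs(t_inlet - 290.0) + rng.normal(0.0, 0.01, n))

# Envelope value covering at least 95% of the sampled LOR values
lor_95 = np.quantile(lor, 0.95)

# Monte Carlo combination of LOR with another (hypothetical) measurement uncertainty
other_uncertainty = rng.normal(0.0, 0.015, n)
combined_95 = np.quantile(lor + other_uncertainty, 0.95)
print(lor_95, combined_95)
```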


2013 ◽  
Vol 860-863 ◽  
pp. 2007-2012 ◽  
Author(s):  
Xiao Meng ◽  
Neng Ling Tai ◽  
Yan Hu ◽  
Xia Yang

The fault current in a resonant grounded power distribution system is small, so it is difficult to identify the faulty feeder. This paper presents the equivalent circuit of a resonant grounded system and discusses the difference in electrical characteristics between the faulty feeder and the sound feeders when shunt resistors are used. To reduce the influence of the shunt resistors on the system and improve detection sensitivity, a method of shunting multi-level resistors is presented, and its sensitivity and reliability are verified by EMTP simulation.
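A schematic sketch of the selection logic implied above, which flags the feeder whose zero-sequence current changes most when the shunt resistor is switched in (feeder names and current values are hypothetical; the paper's actual criterion and EMTP model are not reproduced here):

```python
# Zero-sequence current magnitudes (A) before and after switching in the shunt resistor
before = {"feeder_1": 0.8, "feeder_2": 1.1, "feeder_3": 0.9}
after  = {"feeder_1": 0.9, "feeder_2": 1.2, "feeder_3": 6.5}   # the faulty feeder carries the added resistive current

def select_faulty_feeder(before, after):
    """Pick the feeder with the largest change in zero-sequence current."""
    return max(before, key=lambda f: abs(after[f] - before[f]))

print(select_faulty_feeder(before, after))   # -> feeder_3
```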

