Joint Imaging Platform for Federated Clinical Data Analytics

2020 ◽  
pp. 1027-1038
Author(s):  
Jonas Scherer ◽  
Marco Nolden ◽  
Jens Kleesiek ◽  
Jasmin Metzger ◽  
Klaus Kades ◽  
...  

PURPOSE: Image analysis is one of the most promising applications of artificial intelligence (AI) in health care, potentially improving prediction, diagnosis, and treatment of diseases. Although scientific advances in this area critically depend on the accessibility of large-volume and high-quality data, sharing data between institutions faces various ethical and legal constraints as well as organizational and technical obstacles.

METHODS: The Joint Imaging Platform (JIP) of the German Cancer Consortium (DKTK) addresses these issues by providing federated data analysis technology in a secure and compliant way. Using the JIP, medical image data remain in the originator institutions, but analysis and AI algorithms are shared and jointly used. Common standards and interfaces to local systems ensure permanent data sovereignty of participating institutions.

RESULTS: The JIP is established in the radiology and nuclear medicine departments of 10 university hospitals in Germany (DKTK partner sites). In multiple complementary use cases, we show that the platform fulfills all relevant requirements to serve as a foundation for multicenter medical imaging trials and research on large cohorts, including the harmonization and integration of data, interactive analysis, automatic analysis, federated machine learning, and extensibility and maintenance processes, which are elementary for the sustainability of such a platform.

CONCLUSION: The results demonstrate the feasibility of using the JIP as a federated data analytics platform in heterogeneous clinical information technology and software landscapes, solving an important bottleneck for the application of AI to large-scale clinical imaging data.
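
To make the federated workflow concrete, the following is a minimal federated-averaging sketch in Python (an illustration of the general technique under simplified assumptions, not the JIP implementation): each site trains on its own data, and only model parameters, never images, leave the institution.

    import numpy as np

    # Minimal federated-averaging sketch (illustrative; not JIP code).
    # Each site updates a shared model locally; only weights are exchanged.

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One site's local training: logistic regression by gradient descent."""
        w = weights.copy()
        for _ in range(epochs):
            preds = 1.0 / (1.0 + np.exp(-X @ w))
            w -= lr * X.T @ (preds - y) / len(y)
        return w

    def federated_round(global_w, site_data):
        """Average locally updated weights, weighted by site sample counts."""
        updates = [local_update(global_w, X, y) for X, y in site_data]
        sizes = np.array([len(y) for _, y in site_data], dtype=float)
        return np.average(updates, axis=0, weights=sizes)

    rng = np.random.default_rng(0)
    sites = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100)) for _ in range(3)]
    w = np.zeros(5)
    for _ in range(10):  # ten communication rounds; data never moves
        w = federated_round(w, sites)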

Author(s):  
Minna Silver ◽  
Fulvio Rinaudo ◽  
Emanuele Morezzi ◽  
Francesca Quenda ◽  
Maria Laura Moretti

CIPA is contributing its technical knowledge to saving the heritage of Syria by constructing an open-access database of the data that CIPA members collected during various projects in Syria in the years before the civil war broke out in 2011. In this way we wish to support the protection and preservation of the environment, sites, monuments, and artefacts, and the memory of a cultural region that has been crucial for the human past and the emergence of civilizations. Alongside the countless human atrocities and losses, damage, destruction, and looting of cultural heritage have taken place on a large scale. CIPA's initiative is one of various international projects set up after the conflict started. The Directorate-General of Antiquities and Museums (DGAM) of Syria, as well as UNESCO with its various sub-organizations, have been central in facing the challenges during the war. Digital data capture, storage, use, and dissemination are at the heart of CIPA's strategies for recording and documenting cultural heritage, in Syria as elsewhere. For conservation and restoration work, high-quality data providing metric information are of utmost importance.


Processes ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. 649
Author(s):  
Yifeng Liu ◽  
Wei Zhang ◽  
Wenhao Du

Deep learning based on large volumes of high-quality data plays an important role in many industries. However, deep learning is hard to embed directly in real-time systems: data accumulation in such systems depends on real-time acquisition, while their analysis tasks must also be carried out in real time, which makes it impossible to complete the analysis by accumulating data over a long period. To address the problems of high-quality data accumulation, the high timeliness required of data analysis, and the difficulty of embedding deep-learning algorithms directly in real-time systems, this paper proposes a new progressive deep-learning framework and evaluates it on image recognition. The experimental results show that the proposed framework is effective, performs well, and reaches conclusions similar to those of a deep-learning framework trained on large-scale data.
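
The progressive idea, training as data arrive rather than after long accumulation, can be sketched with scikit-learn's partial_fit interface (a generic incremental-learning illustration, not the authors' framework):

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    # Sketch of progressive (incremental) learning on a real-time stream:
    # the model is updated batch by batch instead of waiting for a full dataset.
    rng = np.random.default_rng(42)
    model = SGDClassifier(loss="log_loss")
    classes = np.array([0, 1])  # must be declared up front for partial_fit

    for step in range(50):                      # 50 small batches arriving over time
        X = rng.normal(size=(32, 10))           # a freshly acquired mini-batch
        y = (X[:, 0] + 0.1 * rng.normal(size=32) > 0).astype(int)
        model.partial_fit(X, y, classes=classes)

    # The model is usable for analysis at any point during accumulation.
    X_test = rng.normal(size=(5, 10))
    print(model.predict(X_test))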


2000 ◽  
Vol 34 (2) ◽  
pp. 214-220 ◽  
Author(s):  
Michael G. Sawyer ◽  
Robert J. Kosky ◽  
Brian W. Graetz ◽  
Fiona Arney ◽  
Stephen R. Zubrick ◽  
...  

Objective: This paper describes the Child and Adolescent Component of the National Survey of Mental Health and Wellbeing. Method: The aims of the study, critical decisions in planning for the study, progress to date and key issues which influenced the course of the study are described. Results: The Child and Adolescent Component of the National Survey of Mental Health and Wellbeing is the largest study of child and adolescent mental health conducted in Australia and one of the few national studies to be conducted in the world. Results from the study will provide the first national picture of child and adolescent mental health in Australia. Conclusions: Large-scale epidemiological studies have the potential to provide considerable information about the mental health of children and adolescents. However, having a clear set of aims, ensuring that the scope of the study remains within manageable proportions and paying careful attention to the details of fieldwork are essential to ensure that high-quality data is obtained in such studies.


Biostatistics ◽  
2021 ◽  
Author(s):  
Tien Vo ◽  
Akshay Mishra ◽  
Vamsi Ithapu ◽  
Vikas Singh ◽  
Michael A Newton

Summary For large-scale testing with graph-associated data, we present an empirical Bayes mixture technique to score local false-discovery rates (FDRs). Compared to procedures that ignore the graph, the proposed Graph-based Mixture Model (GraphMM) method gains power in settings where non-null cases form connected subgraphs, and it does so by regularizing parameter contrasts between testing units. Simulations show that GraphMM controls the FDR in a variety of settings, though it may lose control with excessive regularization. On magnetic resonance imaging data from a study of brain changes associated with the onset of Alzheimer’s disease, GraphMM produces greater yield than conventional large-scale testing procedures.
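
For orientation, the graph-free baseline that GraphMM improves on is the classic two-group empirical Bayes local FDR; a minimal EM sketch on synthetic z-scores follows (not the GraphMM algorithm, which additionally regularizes parameter contrasts across graph edges):

    import numpy as np
    from scipy import stats

    # Two-group local-FDR sketch: z ~ pi0 * N(0,1) + (1 - pi0) * N(mu1, sigma1^2).
    rng = np.random.default_rng(1)
    z = np.concatenate([rng.normal(0, 1, 900),    # nulls
                        rng.normal(3, 1, 100)])   # non-nulls

    pi0, mu1, sigma1 = 0.9, 3.0, 1.0              # crude starting values
    for _ in range(100):                          # EM iterations
        f0 = pi0 * stats.norm.pdf(z, 0, 1)
        f1 = (1 - pi0) * stats.norm.pdf(z, mu1, sigma1)
        resp = f1 / (f0 + f1)                     # posterior non-null probability
        pi0 = 1 - resp.mean()
        mu1 = np.sum(resp * z) / resp.sum()
        sigma1 = np.sqrt(np.sum(resp * (z - mu1) ** 2) / resp.sum())

    # Local FDR: posterior probability of being null given the observed score.
    lfdr = pi0 * stats.norm.pdf(z, 0, 1) / (
        pi0 * stats.norm.pdf(z, 0, 1) + (1 - pi0) * stats.norm.pdf(z, mu1, sigma1))
    discoveries = np.where(lfdr < 0.2)[0]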


Galaxies ◽  
2018 ◽  
Vol 6 (4) ◽  
pp. 142 ◽  
Author(s):  
Valentina Vacca ◽  
Matteo Murgia ◽  
Federica Govoni ◽  
Torsten Enßlin ◽  
Niels Oppermann ◽  
...  

The formation and history of cosmic magnetism are still largely unknown. Significant progress can be made through the study of magnetic field properties in the large-scale structure of the Universe: galaxy clusters, filaments, and voids of the cosmic web. Radio observations of diffuse synchrotron sources and of background or embedded radio galaxies are a powerful tool for studying the magnetization of these environments. To draw a detailed picture of cosmic magnetism, high-quality data on these sources need to be combined with sophisticated analysis tools.
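
One standard analysis of such data is the Faraday rotation measure (RM): the polarization angle of a background radio source rotates linearly with the squared observing wavelength, and the slope encodes the line-of-sight magnetic field. A minimal sketch on synthetic numbers (illustrative only; real analyses must also resolve the n*pi angle ambiguity):

    import numpy as np

    # Recover an RM from multi-frequency polarization angles:
    # chi(lambda^2) = chi_0 + RM * lambda^2.
    c = 2.998e8                                      # speed of light, m/s
    freqs = np.array([1.4e9, 1.7e9, 2.3e9, 3.0e9])   # observing frequencies, Hz
    lam2 = (c / freqs) ** 2                          # wavelength squared, m^2

    true_rm, chi0 = 50.0, 0.3                        # rad/m^2, rad (synthetic)
    rng = np.random.default_rng(7)
    chi = chi0 + true_rm * lam2 + rng.normal(0, 0.01, lam2.size)

    rm_fit, chi0_fit = np.polyfit(lam2, chi, 1)      # slope of the linear fit = RM
    print(f"RM = {rm_fit:.1f} rad/m^2")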


2018 ◽  
Author(s):  
Tsubasa Ito ◽  
Keisuke Ota ◽  
Kanako Ueno ◽  
Yasuhiro Oisi ◽  
Chie Matsubara ◽  
...  

The rapid progress of calcium imaging has reached a point where the activity of tens of thousands of cells can be recorded simultaneously. However, the huge amount of data in such recordings makes it difficult to carry out cell detection manually. Consequently, because cell detection is the first step of multicellular data analysis, there is a pressing need for automatic cell detection methods for large-scale image data. Automatic cell detection algorithms have been pioneered by a handful of research groups. Such algorithms, however, assume a conventional field of view (FOV) (i.e., 512 × 512 pixels) and need significantly higher computational power to process a wider FOV within a practical period of time. To overcome this issue, we propose a method called low computational-cost cell detection (LCCD), which can complete its processing even on the latest ultra-large FOV data within a practical period of time. We compared it with two previously proposed methods, constrained non-negative matrix factorization (CNMF) and Suite2P. We found that LCCD makes it possible to detect cells from a huge amount of high-density imaging data within a shorter period of time and with an accuracy comparable to or better than that of CNMF and Suite2P.
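
A generic low-cost illustration of the first step such pipelines perform (this is not LCCD, CNMF, or Suite2P, just the common idea of finding cell candidates in a temporal summary image):

    import numpy as np
    from scipy import ndimage

    # Find bright local maxima in a mean-intensity projection of the movie.
    rng = np.random.default_rng(3)
    movie = rng.normal(0, 1, size=(200, 128, 128))   # frames x height x width
    for y, x in [(30, 40), (80, 90), (100, 20)]:     # plant three fake cells
        movie[:, y-2:y+3, x-2:x+3] += rng.gamma(2.0, 1.0, size=(200, 5, 5))

    summary = movie.mean(axis=0)                     # temporal summary image
    smooth = ndimage.gaussian_filter(summary, sigma=1.5)

    # A pixel is a candidate cell centre if it is the maximum of its
    # neighbourhood and exceeds an intensity threshold.
    local_max = smooth == ndimage.maximum_filter(smooth, size=7)
    threshold = np.median(smooth) + 4 * smooth.std()
    centres = np.argwhere(local_max & (smooth > threshold))
    print(centres)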


2021 ◽  
Author(s):  
Olivier J. M. Béquignon ◽  
Brandon J. Bongers ◽  
Willem Jespers ◽  
Ad P. IJzerman ◽  
Bob van de Water ◽  
...  

With the recent rapid growth of publicly available ligand-protein bioactivity data, there is a trove of viable data that can be used to train machine learning algorithms. However, not all data are equal in size and quality, and a significant portion of researchers' time is needed to adapt the data to their needs. On top of that, finding the right data for a research question can often be a challenge in its own right. In answer to this, we have constructed the Papyrus dataset (DOI: 10.4121/16896406), comprising around 60 million data points. This dataset combines multiple large publicly available datasets, such as ChEMBL and ExCAPE-DB, with several smaller datasets containing high-quality data. The aggregated data have been standardised and normalised in a manner suitable for machine learning. We show how the data can be filtered in a variety of ways, and also perform some baseline quantitative structure-activity relationship analyses and proteochemometric modeling. Our ambition is that this pruned data collection will constitute a benchmark set that can be used for constructing predictive models, while also providing a solid baseline for related research.
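
The kind of filtering described can be sketched with pandas; note that the column names below are hypothetical placeholders, not the actual Papyrus schema:

    import pandas as pd

    # Load a local extract of the dataset (file name and columns are assumed
    # for illustration; consult the Papyrus documentation for the real schema).
    df = pd.read_csv("papyrus_subset.tsv", sep="\t")

    # Keep high-quality, measured activities for one target family.
    filtered = df[
        (df["quality"] == "high")
        & (df["activity_value"].notna())
        & (df["target_family"] == "GPCR")
    ]

    # A minimal QSAR-style selection: keep proteins with enough data to model.
    counts = filtered.groupby("target_id").size()
    modelable = filtered[filtered["target_id"].isin(counts[counts >= 100].index)]
    print(modelable.shape)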


2020 ◽  
Author(s):  
James McDonagh ◽  
William Swope ◽  
Richard L. Anderson ◽  
Michael Johnston ◽  
David J. Bray

Digitization offers significant opportunities for the formulated product industry to transform the way it works and to develop new methods of business. R&D is one area of operation in which taking advantage of these technologies is challenging, owing to its high level of domain specialisation and creativity, but the benefits could be significant. Recent developments in base-level technologies such as artificial intelligence (AI)/machine learning (ML), robotics, and high performance computing (HPC), to name a few, present disruptive and transformative capabilities which could offer new insights, discovery methods, and enhanced chemical control when combined in a digital ecosystem of connectivity, distributed services, and decentralisation. At the fundamental level, research in these technologies has shown that new physical and chemical insights can be gained, which in turn can augment experimental R&D approaches through physics-based chemical simulation, data-driven models, and hybrid approaches. In all of these cases, high-quality data are required to build and validate models, in addition to the skills and expertise to exploit such methods. In this article we give an overview of some of the digital technology demonstrators we have developed for formulated product R&D, and we discuss the challenges in building and deploying these demonstrators.

