analysis schema
Recently Published Documents

TOTAL DOCUMENTS: 18 (five years: 4)
H-INDEX: 4 (five years: 0)

2021 ◽ Vol 4
Author(s): Ashish Rajendra Sai, Jim Buckley, Andrew Le Gear

Cryptocurrencies typically maintain a publicly accessible ledger of all transactions. This open nature of the transactional ledger allows us to gain macroeconomic insight into the USD 1 trillion crypto economy. In this paper, we explore the free-market-based economy of eight major cryptocurrencies: Bitcoin, Ethereum, Bitcoin Cash, Dash, Litecoin, ZCash, Dogecoin, and Ethereum Classic. We focus specifically on wealth distribution within these cryptocurrencies, as understanding wealth concentration allows us to highlight the potential information-security implications associated with it. We also draw a parallel between the crypto economies and real-world economies. To address these two points, we devise a generic econometric analysis schema for cryptocurrencies. Through this schema, we report on two primary econometric measures: the Gini value and the Nakamoto index, which capture wealth inequality and 51% wealth concentration, respectively. Our analysis shows that, despite the heavy emphasis on decentralization in cryptocurrencies, the wealth distribution remains in line with that of real-world economies, with the exception of Dash. We also find that three of the observed cryptocurrencies (Dogecoin, ZCash, and Ethereum Classic) violate the honest-majority assumption, with fewer than 100 participants controlling over 51% of the wealth in the ecosystem, potentially indicating a security threat. This suggests that the free-market fundamentalism doctrine may be inadequate for countering wealth inequality in a crypto-economic context: the algorithmically driven free-market implementation of these cryptocurrencies may eventually lead to wealth inequality similar to that observed in real-world economies.
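Both measures the schema reports can be computed from a snapshot of address balances. The sketch below is illustrative only: the ledger values are hypothetical, not data from the paper, and it uses the abstract's 51% threshold for the Nakamoto index.

```python
# Minimal sketch of the two econometric measures: the Gini coefficient
# (wealth inequality, 0 = perfect equality, 1 = maximal inequality) and
# the Nakamoto index (fewest holders jointly controlling over 51% of wealth).

def gini(balances):
    """Gini coefficient of a list of non-negative balances."""
    xs = sorted(balances)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula on the ascending-ordered values.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

def nakamoto_index(balances, threshold=0.51):
    """Fewest participants whose combined share exceeds the threshold."""
    total = sum(balances)
    acc = 0.0
    for count, x in enumerate(sorted(balances, reverse=True), start=1):
        acc += x
        if acc / total > threshold:
            return count
    return len(balances)

ledger = [100, 50, 20, 10, 5, 5, 5, 5]  # hypothetical address balances
print("Gini:", round(gini(ledger), 3))
print("Nakamoto index:", nakamoto_index(ledger))
```

A low Nakamoto index is the abstract's security signal: if fewer than 100 holders clear the 51% line, the honest-majority assumption is at risk.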


2021
Author(s): Nathan Paldor, Ofer Shamir, Andreas Münchow, Albert D. Kirwan Jr.

Abstract. Here we use a new analysis schema, the Freshening Length, to study the transport in the Irminger Current on the east and west sides of Greenland. The Freshening Length schema relates the transports on either side of Greenland to the corresponding surface salinity gradients by analyzing climatological data from a data-assimilating global ocean model. Surprisingly, the warm and salty waters of the Current are clearly identified by a salinity maximum that varies nearly linearly with distance along the Current’s axis. Our analysis of the climatological salinity data based on the Freshening Length schema shows that only about 20 % of the transport east of Greenland navigates the southern tip of Greenland to enter the Labrador Sea in the west; the other 80 % disperses into the ambient ocean. This independent quantitative estimate, based on a 37-year-long record, complements the seasonal-to-annual field campaigns that studied the connection between the seas east and west of Greenland more synoptically. A temperature-salinity analysis shows that the Irminger Current east of Greenland is characterized by a compensating isopycnal exchange of temperature and salinity, while west of Greenland the horizontal convergence of less dense surface water is accompanied by downwelling/subduction.


2021
Author(s): Caroline Haythornthwaite, Priya Kumar, Anatoliy Gruzd, Sarah Gilbert, Marc Esteve Del Valle, et al.

Learning on and through social media is becoming a cornerstone of lifelong learning, creating places not only for accessing information, but also for finding other self-motivated learners. Such is the case for Reddit, the online news-sharing site that is also a forum for asking and answering questions. We studied learning practices found in the ‘Ask’ subreddits AskScience, Ask_Politics, AskAcademia, and AskHistorians to develop a coding schema for informal learning. This paper describes the process of evaluating and defining a workable coding schema, one that started with attention to the learning processes associated with discourse, exploratory talk, and conversational dialogue, and ended with the inclusion of norms and practices on Reddit and the support of communities of inquiry. Our ‘learning in the wild’ coding schema contributes a content analysis schema for learning through social media, and an understanding of how knowledge, ideas, and resources are shared in open, online learning forums.
Keywords: informal learning, social media, coding, content analysis, Reddit


2018 ◽ Vol 48 ◽ pp. 01041
Author(s): Rüveyda Yavuz, Funda Savaşcı-Açıkalın

The purpose of this study is to examine how seventh-grade students visualize atomic structure and atomic models in their minds. The research question is: “How do seventh-grade students visualize atomic structure and models?” The study was conducted with 25 seventh-grade students in a state school in Bursa. A qualitative research methodology was adopted. As data collection tools, worksheets from four different activities were collected by the researcher. The data collection process took two weeks (eight lessons). The worksheets consist of different questions about the atom, its structure, and atomic models. To ensure the validity and objective evaluation of the worksheets, an analysis schema was prepared for the four activities by a subject instructor and two science teachers with two to four years of teaching experience. The analysis schema and worksheets were then re-evaluated by a science teacher with three years of teaching experience. The results show that students’ visualizations of atomic structure do not match the scientific models, and that students confuse basic concepts about atomic structure.


2017 ◽ Vol 28 (1) ◽ pp. 155-191
Author(s): Sara Sowers-Wills

Abstract. Early child phonological acquisition data typically contain exceptional phonetic forms that defy segment-based rules and have long challenged traditional theoretical frameworks. The templatic approach to phonological acquisition claims that whole-word phonotactic patterns emerge as the first primary units of representation, later giving way to segmental knowledge. This approach places importance on the relationships among a child’s forms in addition to those between child forms and their corresponding adult targets. Inscribed within dynamic systems theory, the templatic approach assumes a developing phonological system to be self-organizing and driven by general cognitive processes in response to patterns in the ambient language. This paper analyzes data from a diary study of one monolingual child acquiring American English. Data collected during the first six months of word production were put to templatic analysis, then examined for evidence of schematic structure. Incorporating the chronology of utterances the child produced, the analysis revealed varying degrees of abstraction as early patterns integrated with newer patterns. The results reveal schema theory to be an informative supplementary framework for templatic analysis. Schema theory provides a structured way to trace the emergence and interaction of whole-word patterns a child uses to facilitate the production of first words.


2015 ◽ Vol 2015 ◽ pp. 1-16
Author(s): A. Mesut Erzurumluoglu, Santiago Rodriguez, Hashem A. Shihab, Denis Baird, Tom G. Richardson, et al.

Recent technological advances have created challenges for geneticists and a need to adapt to a wide range of new bioinformatics tools and an expanding wealth of publicly available data (e.g., mutation databases and software). This wide range of methods and the diversity of file formats used in sequence analysis pose a significant issue, with a considerable amount of time spent before anyone can even attempt to analyse the genetic basis of human disorders. Another point to consider is that although many possess “just enough” knowledge to analyse their data, they do not make full use of the tools and databases that are available and also do not fully understand how their data was created. The primary aim of this review is to document some of the key approaches and provide an analysis schema to make the analysis process more efficient and reliable in the context of discovering highly penetrant causal mutations/genes. This review will also compare the methods used to identify highly penetrant variants when data is obtained from consanguineous individuals as opposed to nonconsanguineous, and when Mendelian disorders are analysed as opposed to common-complex disorders.
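A central step in any such analysis schema is filtering a candidate list down to plausible highly penetrant variants. The sketch below is a hedged illustration of that idea, not the review's actual pipeline: the field names, the frequency threshold, and the homozygosity rule for consanguineous cases are illustrative assumptions.

```python
# Illustrative filter for highly penetrant causal-variant candidates.
# Rationale: a highly penetrant causal allele should be rare in the
# population; under consanguinity it is expected to be homozygous
# (autozygous) in the affected individual.

def filter_candidates(variants, consanguineous=True, max_pop_freq=0.001):
    """Keep rare variants; for consanguineous cases, require homozygosity."""
    candidates = []
    for v in variants:
        if v["pop_freq"] > max_pop_freq:
            continue  # too common to be a highly penetrant causal variant
        if consanguineous and v["genotype"] != "hom":
            continue  # autozygosity expected under consanguinity
        candidates.append(v)
    return candidates

# Hypothetical annotated variants (not real data):
variants = [
    {"gene": "GENE1", "pop_freq": 0.0001, "genotype": "hom"},
    {"gene": "GENE2", "pop_freq": 0.05,   "genotype": "hom"},
    {"gene": "GENE3", "pop_freq": 0.0002, "genotype": "het"},
]
print(filter_candidates(variants))  # only GENE1 survives both filters
```

Relaxing the homozygosity requirement (the nonconsanguineous case) would also retain rare heterozygous candidates such as GENE3, which mirrors the review's point that the filtering strategy must change with the study design.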


2014
Author(s): Mesut Erzurumluoglu

Recent technological advances have created challenges for geneticists and a need to adapt to a wide range of new bioinformatics tools and an expanding wealth of publicly available data (e.g. mutation databases, software). This wide range of methods and a diversity of file formats used in sequence analysis is a significant issue, with a considerable amount of time spent before anyone can even attempt to analyse the genetic basis of human disorders. Another point to consider is that although many possess "just enough" knowledge to analyse their data, they do not make full use of the tools and databases that are available and also do not know how their data was created. The primary aim of this review is to document some of the key approaches and provide an analysis schema to make the analysis process more efficient and reliable in the context of discovering highly penetrant causal mutations/genes. This review will also compare the methods used to identify highly penetrant variants when data is obtained from consanguineous individuals as opposed to non-consanguineous; and when Mendelian disorders are analysed as opposed to common-complex disorders.

