rating format
Recently Published Documents


TOTAL DOCUMENTS: 22 (five years: 0)
H-INDEX: 10 (five years: 0)

2012 · Vol 7 (1) · pp. 88-107 · Author(s): José Borges, António C. Real, J. Sarsfield Cabral, Gregory V. Jones

Abstract
An impartial assessment of the quality of the wine produced over the years in a region (vintage quality) is an essential tool for producers, consumers, investors, and wine researchers to understand the factors influencing quality and to make purchasing or investing decisions. However, scoring overall wine quality across years does not necessarily produce a consensus on which year or years are best. Several critics, magazines, and organizations publish vintage charts that assign a score to each vintage, representing their perception of the wine quality. Often, the scores given by different institutions reveal little consensus with respect to the relative quality of the vintages.

In this work, we propose using a rank aggregation method to combine a collection of vintage charts for a region into a ranking of the vintages that represents the consensus of the input charts. As a result, we obtain an impartial ranking of the vintages that represents the consensus of an arbitrary number of independent vintage charts. We illustrate the method with scores from three wine regions.

The proposed method produces a ranking of vintage-to-vintage quality that represents an impartial consensus of a collection of independent sources, each using a different rating format, scale, or classification. Such a ranking has the potential to be useful for the research community, which needs a relative measure of wine production quality over the years. Therefore, we make publicly available a software tool that implements the method (Borges, 2011). (JEL Classification: C38, C61, C88)
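The abstract does not spell out which rank aggregation algorithm is used, so the sketch below is only an illustration of the general idea, not the authors' method: a simple Borda-count consensus over hypothetical vintage charts that use different rating scales.

from collections import defaultdict

def borda_consensus(charts):
    """Combine vintage charts (dicts of vintage -> score, each on its own
    scale) into one consensus ranking using Borda counts."""
    points = defaultdict(float)
    for chart in charts:
        # Rank vintages within this chart by score, best first; only the
        # ordering matters, so differing rating scales do not interfere.
        ranked = sorted(chart, key=chart.get, reverse=True)
        for position, vintage in enumerate(ranked):
            points[vintage] += len(ranked) - position
    # Consensus ranking: vintages ordered by total Borda points.
    return sorted(points, key=points.get, reverse=True)

# Hypothetical charts on different scales (100-point vs. 5-point).
chart_a = {2005: 95, 2006: 88, 2007: 91}
chart_b = {2005: 4.5, 2006: 3.0, 2007: 4.0}
print(borda_consensus([chart_a, chart_b]))  # [2005, 2007, 2006]

Because each chart is reduced to an ordering before aggregation, the different rating formats and scales of the input charts never need to be converted to a common scale.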


2005 · Vol 13 (2) · pp. 97-107 · Author(s): Gunna J. Yun, Lisa M. Donahue, Nicole M. Dudley, Lynn A. McFarland

2004 · Vol 18 (2) · pp. 127-141 · Author(s): Nanja J. Kolk, Marise Ph. Born, Henk van der Flier

In general, correlations between assessment centre (AC) ratings and personality inventories are low. In this paper, we examine three method factors that may be responsible for these low correlations: differences in (i) rating source (other versus self), (ii) rating domain (general versus specific), and (iii) rating format (multi-item versus single-item). This study tests whether these three factors diminish correlations between AC exercise ratings and external indicators of similar dimensions. Ratings of personality and performance were combined in an analytical framework following a 2 × 2 × 2 (source, domain, format) completely crossed, within-subjects design. Results showed partial support for the influence of each of the three method factors. Implications for future research are discussed.
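As a rough illustration of the analytical framework described above (not the authors' analysis), the sketch below tabulates Pearson correlations between AC ratings and external indicators for each cell of a 2 × 2 × 2 (source, domain, format) design. All data and labels are simulated placeholders.

from itertools import product
import numpy as np

rng = np.random.default_rng(0)
ratings = {}  # (source, domain, format) -> paired AC / external scores
for cell in product(("other", "self"), ("general", "specific"), ("multi", "single")):
    ac = rng.normal(size=50)                    # simulated AC exercise ratings
    external = 0.3 * ac + rng.normal(size=50)   # simulated external indicator
    ratings[cell] = (ac, external)

for (source, domain, fmt), (ac, external) in ratings.items():
    r = np.corrcoef(ac, external)[0, 1]  # Pearson correlation for this cell
    print(f"source={source:5s} domain={domain:8s} format={fmt:6s} r={r:+.2f}")

Comparing the correlations across the eight cells is the basic logic of the design: if a method factor matters, correlations should differ systematically between its two levels.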


2002 · Vol 51 (3) · pp. 479-503 · Author(s): Aharon Tziner, Richard E. Kopelman

2002 · Vol 180 (1) · pp. 67-70 · Author(s): Ashok Roy, Helen Matthews, Paul Clifford, Vanessa Fowler, David M. Martin

Summary
Rating instructions: (a) Complete the front sheet, including ICD-10 diagnoses and the subjective rating. (b) Rate each item in order, from item 1 to 18. (c) Do not include information rated in an earlier item. (d) Rate the person over the previous 4 weeks. (e) Rate the most severe problem that occurred during the period rated. (f) All items follow the five-point rating format similar to other HoNOS instruments: 0 = no problem during the period rated; 1 = mild problem; 2 = moderate problem; 3 = severe problem; 4 = very severe problem.
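As a small illustrative sketch (not part of the instrument itself), the five-point format quoted above could be encoded and validated as follows; the item label is hypothetical.

# Encodes the five-point rating format quoted in the summary
# (0 = no problem ... 4 = very severe problem).
SCALE = {
    0: "no problem during the period rated",
    1: "mild problem",
    2: "moderate problem",
    3: "severe problem",
    4: "very severe problem",
}

def rate_item(item: str, score: int) -> str:
    """Validate a single item rating against the 0-4 format."""
    if score not in SCALE:
        raise ValueError(f"{item}: score must be 0-4, got {score}")
    return f"{item}: {score} ({SCALE[score]})"

print(rate_item("item 1", 2))  # item 1: 2 (moderate problem)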


2001 · Vol 5 (2) · pp. 49-59 · Author(s): Marilyn A Campbell, Ronald M Rapee, Susan H Spence
