canmeds role
Recently Published Documents

TOTAL DOCUMENTS: 6 (five years: 2)
H-INDEX: 3 (five years: 0)

Author(s): Victor Do, Jerry M Maniate, Lyn K Sonnenberg

One skill set identified within the CanMEDS Framework (CanMEDS) as essential to training future physicians is the Leader role. Arguably, however, the term Leader carries connotations that are inconsistent with the abilities CanMEDS outlines as necessary for physicians. For example, the term Leader may connote hierarchical authority and formalized responsibilities while de-emphasizing informal, day-to-day influencing. This CanMEDS role was first labelled Manager and was renamed Leader in 2015. Perhaps the focus of this CanMEDS role should be further refined by adopting a more representative term that reflects the concept of intentional influence. Through this lens, learners can discern significant opportunities to positively influence each of the clinical and non-clinical environments they encounter. We suggest that re-framing the Leader role as an Influencer role would be more comprehensive and inclusive of its full scope and potential. Accordingly, given the potential for broader applicability and resonance with learners, collaborators, and the populations we serve, consideration should be given to re-characterizing the CanMEDS role of Leader as that of Influencer.


CJEM, 2019, Vol 21 (S1), pp. S108
Author(s): S. Segeren, L. Shepherd, R. Pack

Introduction: For many years, Emergency Medicine (EM) educators have used narrative comments to assess their learners on each shift, either in isolation or combined with some type of Likert scale ranking. Competency based medical education (CBME), soon to be fully implemented throughout Canadian EM educational programs, encourages this type of frequent, low-stakes narrative assessment. It is important to understand what information is currently garnered from existing narrative assessments in order to transition successfully and smoothly to the CBME system. The purpose of this study was to explore how one Canadian undergraduate EM program's narrative assessment comments mapped to two competency frameworks: one traditional CanMEDS-based and one competency-based, built on entrustable professional activities (EPAs). Methods: A qualitative and quantitative content analysis of 1,925 retrospective narrative assessments was conducted for the 2015/2016 and 2016/2017 academic years. The unprompted comments were mapped to the Royal College CanMEDS framework and the Association of Faculties of Medicine of Canada EPA Framework. Using an iterative coding process as per accepted qualitative methodologies, additional codes were generated to classify comments and identify themes that were not captured by either framework. Results: 93% and 85% of the unprompted narrative assessments contained comments that mapped to at least one CanMEDS role or EPA competency, respectively. The most common CanMEDS role commented upon was Medical Expert (86%), followed by Communicator, Collaborator, and Scholar (all at 23%). The most common EPA competency mentioned related to history and physical findings (62%), followed by management plan (33%) and differential diagnosis (33%). However, 75% of the narrative comments included ideas that did not fall into either framework but were repeated frequently enough to suggest importance. The experiential characteristics of working with a learner were commented upon by 22% of preceptors. Other unmapped themes included contextual information, generalities and platitudes, and directed feedback on next steps for improvement. Conclusion: While much of the currently captured data can be mapped to established frameworks, important information for both learner and assessor may be lost by limiting comments to the competencies described within a particular framework, suggesting caution when transitioning to a CBME assessment program.
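For illustration only, the kind of percentage tallies reported above (share of comments mapping to at least one role, and per-role frequencies) can be reproduced from coded comments with a simple count. The role tags and comments below are invented, not the study's data:

```python
from collections import Counter

# Hypothetical coding results: each narrative comment is tagged with
# zero or more CanMEDS roles (an empty set = unmapped comment).
coded_comments = [
    {"Medical Expert"},
    {"Medical Expert", "Communicator"},
    set(),                                  # comment mapping to no role
    {"Collaborator"},
    {"Medical Expert", "Scholar"},
]

n = len(coded_comments)
# Percentage of comments mapping to at least one role.
pct_mapped = 100 * sum(1 for tags in coded_comments if tags) / n
# Percentage of comments mentioning each role.
role_counts = Counter(role for tags in coded_comments for role in tags)
pct_by_role = {role: 100 * c / n for role, c in role_counts.items()}
```

With these invented tags, 80% of comments map to at least one role and Medical Expert is the most frequently coded role, mirroring the structure (though not the values) of the study's results.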


CJEM, 2014, Vol 16 (02), pp. 144-150
Author(s): Aliya Kassam, Tyrone Donnon, Ian Rigby

Background: There is a question of whether a single assessment tool can assess the key competencies of residents as mandated by the Royal College of Physicians and Surgeons of Canada CanMEDS roles framework. Objective: The objective of the present study was to investigate the reliability and validity of an emergency medicine (EM) in-training evaluation report (ITER). Method: ITER data from 2009 to 2011 were combined for residents across the 5 years of the EM residency training program. An exploratory factor analysis with varimax rotation was used to explore the construct validity of the ITER. A total of 172 ITERs were completed on residents across their first to fifth year of training. Results: A combined, 24-item ITER yielded a five-factor solution measuring the CanMEDS Medical Expert/Scholar, Communicator/Collaborator, Professional, Health Advocate, and Manager subscales. The factor solution accounted for 79% of the variance, and reliability coefficients (Cronbach alpha) ranged from α = 0.90 to 0.95 for each subscale, with α = 0.97 overall. The combined, 24-item ITER used to assess residents' competencies in the EM residency program showed strong reliability and evidence of construct validity for assessment of the CanMEDS roles. Conclusions: Further research is needed to develop and test ITER items that will differentiate each CanMEDS role exclusively.
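As a minimal sketch of the reliability statistic used above, Cronbach's alpha can be computed directly from a matrix of per-item scores. The resident scores below are invented for illustration and are not the study's data:

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for rows of per-item scores (one row per resident)."""
    k = len(rows[0])                       # number of items
    totals = [sum(r) for r in rows]        # each resident's total score
    item_var = sum(variance(col) for col in zip(*rows))  # sum of item variances
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical data: 6 residents rated on 4 ITER items (1-5 scale).
scores = [
    [4, 4, 5, 4],
    [3, 3, 3, 2],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
    [3, 2, 3, 3],
]
alpha = cronbach_alpha(scores)  # ≈ 0.94: high internal consistency
```

Values near the 0.90-0.97 range reported above indicate that the items move together, i.e., they appear to measure a common underlying construct.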


2013, Vol 4 (1), pp. e81-e85
Author(s): Aliya Kassam, Tyrone Donnon, Michele Cowan, Joanne Todesco

Background: In this brief report, we describe two ways in which we assessed the Scholar CanMEDS role using a method to measure residents' ability to complete a critical appraisal. These were incorporated into a modified OSCE format, where two stations consisted of 1) critically appraising an article and 2) critiquing an abstract. Method: Residents were invited to participate in the CanMEDS In-Training Exam (CITE) through the Office of Postgraduate Medical Education. Mean scores for the two Scholar stations were calculated using the number of correct responses out of 10. The global score represented the examiner's overall impression of the resident's knowledge and effort. Correlations between the scores of the two Scholar stations are presented, and a paired-sample t-test comparing the global mean scores of the two stations was performed. Results: Sixty-three of the 64 registered residents completed the CanMEDS In-Training Exam, including the two Scholar stations. There were no significant differences between the global scores of the Scholar stations, showing that the overall knowledge and effort of the residents was similar across both stations (3.8 vs. 3.5, p = 0.13). The correlation between the total mean scores of both stations (inter-station reliability) was also non-significant (r = 0.05, p = 0.67). No significant differences were detected between senior and junior residents, or between internal medicine and non-internal medicine residents. Conclusion: Further testing of these stations is needed, and other novel ways of assessing the Scholar role competencies should also be investigated.
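The two statistics used above, a paired-sample t statistic and a Pearson inter-station correlation, can be sketched from first principles. The paired station scores below are invented for illustration, not the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-sample t statistic (df = n - 1) for matched score lists."""
    diffs = [a - b for a, b in zip(x, y)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

def pearson_r(x, y):
    """Pearson correlation between two paired score lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical global scores (out of 5) for 8 residents on each station.
article_station = [4, 3, 4, 5, 3, 4, 3, 4]
abstract_station = [3, 3, 4, 4, 3, 4, 3, 3]

t = paired_t(article_station, abstract_station)   # compare station means
r = pearson_r(article_station, abstract_station)  # inter-station reliability
```

The t statistic would then be compared against a t distribution with n - 1 degrees of freedom to obtain a p-value, as in the study's comparison of the two stations' global means.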


2012, Vol 17 (10), pp. 557-560
Author(s): Elizabeth Berger, Ming-Ka Chan, Ayelet Kuper, Mathieu Albert, Deirdre Jenkins, ...

2003, Vol 48 (4), pp. 222-224
Author(s): Isolda Tuhan

Postgraduate trainees in psychiatry are being evaluated on their proficiency at the competencies that comprise the physician roles identified by the CanMEDS 2000 Project. This paper provides an overview of each CanMEDS role and its associated competencies and suggests strategies to help residents prepare for the new format of the Royal College of Physicians and Surgeons of Canada (RCPSC) certification examination in psychiatry.

