The process of licensing within the medical profession rests on the practical work of defining which competences constitute the profession, determining how these competences can be acquired, and specifying how they can be assessed. To obtain a medical license in the United States, medical students are evaluated on the basis of clinical encounters with standardized patients. By showing how medical students' performances in such encounters are assessed, this study illustrates how the assessment of professional competence is practically accomplished. Framing assessment as situated, practical, and interactional work, the study focuses on how three faculty members, across a series of panel meetings, carried out the practical work of rating candidates' performances. The data come from a corpus of audio-recorded panel meetings in which the physician-raters first watched the video-recorded student performances, rated them individually, and then collectively reached a consensus for each clinical encounter. The analysis of how the panelists came to agreement on the examinees' performances shows that, through different local strategies, they made themselves accountable as both competent physicians and efficient raters. It also shows that panelists distinguished between being accountable as efficient raters and being accountable as competent physicians when they were expected to concede their ratings.