Feedback
Dear iGEMers,
The iGEM judges are trying to do a better job of assessing iGEM team projects. This year, we moved to a system that lets judges vote on individual aspects of a team's project and lets us tally those votes to determine awards. We tried the new system out at the Regional Jamborees, fixed some weaknesses, and used it at this year's World Championship Jamboree. We still have work to do and hope to make it even better for iGEM next year.
You can now get feedback on your team's performance directly from the judges. You can find this feedback on the Judging Feedback page. Teams were assessed by judges who studied the wikis, examined parts in the Registry, attended the presentations, and spoke with teams at their posters.
This year we introduced a computer-based ‘Rubric’ that encompasses the values we feel are important to iGEM teams and their projects. The Rubric is represented in an online ballot for the judges. We have made a lot of changes to the way the judging works in 2012 and we’re proud of our new system! This document outlines how the new system was devised over the last 6 months with the help of the head judging committee.
First, our new rubric-assisted judging system reflects the same values that iGEM judges have embraced in previous years: originality, hard work, scientific rigor, usefulness, societal impact, and creativity to name a few. Second, scores are recorded in the newly redesigned judges’ ballot system.
The new Rubric includes a standard grading language that lets judges easily express what they think about the quality of each aspect of a project. For example, a judge is asked 'Did you find the presentation engaging?' and can choose one of seven responses, ranging from 'Kept me on the edge of my seat' to 'Put me to sleep'. These responses correspond to scores from 6 (best) down to 1 (worst), plus 0 (doesn't apply). We created a Rubric for both the Regional Competitions and the World Championship Jamboree.
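As a rough sketch of how such a scale can be encoded (only the top and bottom labels come from the example above; the intermediate wording and the Python representation are our own illustration, not the actual ballot software):

```python
# Hypothetical encoding of one rubric aspect's response scale.
# Only "Kept me on the edge of my seat" and "Put me to sleep" are quoted
# from the text; the intermediate labels are invented for illustration.
ENGAGEMENT_SCALE = {
    "Kept me on the edge of my seat": 6,  # best
    "Very engaging": 5,
    "Engaging": 4,
    "Somewhat engaging": 3,
    "Barely engaging": 2,
    "Put me to sleep": 1,                 # worst
    "Doesn't apply": 0,                   # excluded from scoring
}

def score_for(response):
    """Map a judge's chosen response to its numeric score."""
    return ENGAGEMENT_SCALE[response]
```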
The Rubric organizes key aspects of iGEM projects under the traditional categories, including the Presentation, the Wiki, the Poster, and the Special Prizes. A judge evaluates each aspect by selecting one response (ranging from strongly positive to negative, or 'doesn't apply') from a simple list.
Once every aspect has been voted on, the votes are tallied and presented as team rankings for each award. Every judge who evaluated any aspect of a team's project therefore contributed directly to that team's score and ranking. This new system and the theory behind it are based on Balinski and Laraki's "Majority Judgment" book [http://www.amazon.com/Majority-Judgment-Measuring-Ranking-Electing/dp/0262015137/ref=sr_1_1?ie=UTF8&qid=1352999536&sr=8-1&keywords=majority+judgement].
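As an informal illustration of the idea (not the actual tallying code), a Majority Judgment-style tally ranks each team by the median grade it received, ignoring 'doesn't apply' votes. The team names and grades below are invented, and the real ballot system may break ties and combine aspects differently:

```python
from statistics import median

# Minimal sketch of a Majority Judgment-style tally (Balinski & Laraki):
# each team's votes for an award are the judges' grades (1-6); 0 votes
# ("doesn't apply") are dropped, and teams are ranked by median grade.
def rank_teams(votes_by_team):
    """votes_by_team: {team_name: [grade, ...]} -> list of (team, median)."""
    medians = {}
    for team, grades in votes_by_team.items():
        counted = [g for g in grades if g > 0]   # ignore "doesn't apply"
        medians[team] = median(counted) if counted else 0
    # Higher median grade ranks first.
    return sorted(medians.items(), key=lambda item: item[1], reverse=True)

example_votes = {
    "Team Alpha": [6, 5, 5, 4, 0],   # one judge marked "doesn't apply"
    "Team Beta":  [5, 5, 4, 4, 3],
}
print(rank_teams(example_votes))
# [('Team Alpha', 5.0), ('Team Beta', 4)]
```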
In the Regional Competitions, the medal criteria were included at the beginning of the rubric, both as an introduction to the team and as a way to see how each team self-designated its project. The rubric let judges evaluate every iGEM project with the same metric, so scores, rankings, and the various awards are now more consistent across all regions.
Because every judge votes on some or most of the aspects, we can provide you with the actual scores. This gives you direct feedback from all the judges on every aspect of your project, something we have not been able to provide so completely in the past.
2012 is the first year this system has been used in iGEM, and the Regional Jamborees served as trials. The system was changed and improved for the World Championship: we removed sections for medals and awards that were not being handed out at the WCJ, and we refocused aspects and rewrote the response language to better reflect the values we think are most important to iGEM.
This system may not be perfect. We will continue to work on it in the coming years so we can better evaluate all the hard work you, the teams, put into your projects. Thank you for your patience during this transition.

iGEM 2012 Judging Committee: