Scientific impact of iGEM
"Most influential synthetic biology competition" vs. "Just some kids playing"?
Introduction
We wanted to determine just how relevant the iGEM competition is for the greater SynBio community. So we investigated the scientific attention garnered by both iGEM and the Registry of Standard Parts. We chose a data-driven approach: we extracted data from search results for various queries (such as `("iGEM" OR "International Genetically Engineered Machine") AND ("synthetic biology" OR "genetic engineering")`) on several publication search engines. We searched Web of Knowledge, Scopus, PubMed and Google Scholar. Google Scholar was chosen for the more detailed data analysis, as we found the alternatives to have various shortcomings.
We found that our data are consistent with our initial hypothesis: iGEM is an important contributor to the SynBio community. These findings have some implications for the iGEM competition, which we discuss.
In order to quantify these results further, we analyzed how exactly iGEM and the Registry have been cited. Looking at around 50 papers in closer detail, we found a standard way of referencing the iGEM resources. All in all, we recommend that this citation standard be adopted by all papers related to iGEM and the Registry.
Data collection
Data and where to find it
Query summary
Here's a quick breakdown of what we queried for on Google Scholar and what sort of data was returned. (The ID matches the name of the data set in our data tables). The h- and g-indexes are explained just below!
Dataset ID | Plain English query | Query | Nº Papers | Nº Citations | h-index | g-index | Query date
---|---|---|---|---|---|---|---
1 | Papers mentioning iGEM in context of synbio | ("synthetic biology" OR "genetic engineering") AND ("iGEM" OR "International Genetically Engineered Machine") | 770 | 3253 | 26 | 45 | 17/7/2012
2 | All synthetic biology | synthetic biology | 1000 | 68482 | 127 | 214 | 17/7/2012
3 | Papers mentioning Registry of Parts | "Registry of Standard Biological Parts" OR "partsregistry.org" OR "parts.mit.edu" | 751 | 6442 | 39 | 69 | 17/7/2012
4 | Papers citing a particular Part | partsregistry.org/Part: | 54 | 263 | 5 | 16 | 17/7/2012
5 | Papers mentioning iGEM | iGEM OR "International Genetically Engineered Machine" | 1000 | 9095 | 36 | 64 | 17/7/2012
6 | Papers mentioning iGEM and Registry | ("iGEM" OR "International Genetically Engineered Machine") AND ("Registry of Standard Biological Parts" OR "partsregistry.org" OR "parts.mit.edu") | 330 | 2208 | 23 | 42 | 17/7/2012
Note: searches were capped at a maximum of 1000 results, so getting 1000 results for a query implies that more exist! Those first 1000 are only the ones the search engine judged most relevant.
Why we used Google Scholar
All in all, we found Google Scholar to most closely meet our analytic needs.
As WoK, Scopus and PubMed are strictly curated databases and limited in scope, they missed many obviously relevant publications. We also found their search options unsuitable: Many of them did not support either full text search (they looked at titles, keywords and abstracts only) or boolean operators. But for the following reasons, we needed a search to include both:
- iGEM cannot be expected to always be the main subject of a paper, hence full text search.
- There are many relevant terms floating about iGEM, hence boolean operators like "OR" to allow treating papers that contain "International Genetically Engineered Machine" or the acronym "iGEM" equally.
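To make this concrete, here is a minimal Python sketch of what a full-text boolean match for Query 1 amounts to. This is our own illustration, not part of the analysis pipeline, and the function names are ours:

```python
import re

def mentions_any(text, terms):
    """True if the full text contains any of the given terms (case-insensitive)."""
    return any(re.search(re.escape(term), text, re.IGNORECASE) for term in terms)

def matches_query_1(full_text):
    """Full-text equivalent of Query 1: ("synthetic biology" OR "genetic
    engineering") AND ("iGEM" OR "International Genetically Engineered Machine")."""
    in_context = mentions_any(full_text, ["synthetic biology", "genetic engineering"])
    names_igem = mentions_any(full_text, ["iGEM", "International Genetically Engineered Machine"])
    return in_context and names_igem
```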
Just how much wider is Google's search scope? Here is an example: PubMed gave so few results (16 for `iGEM genetic*`) that we quickly discarded it. Manually merging the Web of Knowledge and Scopus results for the query `iGEM AND genetic*` (discarding obviously irrelevant results) gave 43 results. Then we queried Google Scholar: it gave us 770 results for `("synthetic biology" OR "genetic engineering") AND ("iGEM" OR "International Genetically Engineered Machine")`.
Of course, Google Scholar too is but a bronze bullet: it brings its own drawbacks. It is engineered to pick up things that only seem like scholarly articles. Like Google's search results in general, the results are not curated by a human. This has been criticised in the literature (Péter J., 2006), and we found the occasional hilarious total miss ourselves. Google Scholar is also known to somewhat overestimate citation counts (Iselid, 2006). However, manual examination of random samples convinced us that the majority of the results are plausible and (most importantly) far greater in scope than searches in curated databases. Taking these aspects into consideration, we found Google Scholar best fulfilled our requirements.
Caveat! We only want statistics to identify trends. For this, large and coarse pieces of data suffice. We would discourage using this method for obtaining precise values!
Browse the data
Our data sets are online in a nifty and very usable Google Docs folder.
An introduction is included in case you get lost or want more information.
On extraction tools
We made extensive use of Harzing Publish or Perish (Harzing, A.W., 2007.) to scrape Google Scholar results. The tool has many limitations. However, in our experience it is the best available resource for managing the mess that scientific publication data scraping tends to become.
We did try other things: we quickly found manual methods too slow. Various Firefox browser plugins simply failed outright, were extremely awkward to use, or produced clearly erroneous results. The Mac OS program Papers was easy to use and found huge numbers of papers (as it could access many sources), but had unacceptably high error rates, struggled with duplicates, and could not export the results into a form we could easily process. Hence Publish or Perish.
Metrics
Quantifying scientific impact
Once we had scraped our data from Google Scholar, we needed a method to quantify the relevance of a given scientific article. There are many ways to quantify the success of a paper. Here are a few we investigated:
Plain citation count
It's good scientific practice to cite (i.e. credit work carried out by others). A high citation count can generally be taken as an indicator of a high-quality or high-impact paper. This is the most traditional method of ranking the influence of papers.

The main disadvantage of the citation-count method is its lack of consistent standards: even papers within a single scientific field will have differing referencing approaches and citation counts. It is also significant that old papers have an edge over newer papers, as they have had more time to be cited.
h-index
The h-index is a single integer computed from a set of papers. It is used to measure the output and influence of a set of scientists: a greater h-index implies more productive and more influential authors. It was invented by physicist J.E. Hirsch (2005) and has since been calculated automatically by many citation databases. Here is its definition: "A set of papers has h-index h if h papers out of that set have been cited at least h times." An image ("Ael 2" and "Vulpecula", 2012) clarifies the idea.
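A minimal sketch of the computation, assuming a plain list of per-paper citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times: four papers have >= 4 citations.
assert h_index([10, 8, 5, 4, 3]) == 4
```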
g-index
The g-index is a citation index meant to quantify the influence of papers. It was proposed by Leo Egghe (2006) as a variation on the h-index. It puts more emphasis on the most-cited papers, and Egghe argues that it ranks highly cited authors more fairly. He gives the following definition: "A set of papers has a g-index g if g is the highest rank such that the top g papers have, together, at least g² citations." A clarifying image by our Polish friend ("Ael 2", 2012) illustrates this too.
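The same toy data shows the difference; again a minimal sketch over a list of citation counts:

```python
def g_index(citations):
    """Largest g such that the top g papers together have >= g**2 citations."""
    ranked = sorted(citations, reverse=True)
    running_total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g

# Same five papers: cumulative citations 10, 18, 23, 27, 30 beat 1, 4, 9, 16, 25,
# so the g-index is 5, one higher than the h-index, rewarding the heavy hitters.
assert g_index([10, 8, 5, 4, 3]) == 5
```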
Algorithmic methods
It's worth noting that there are many other ways of quantifying the productivity and impact of a set of papers or scientists. For example, Zhou et al. (2012) propose a more complete method for "distinguishing prestige from popularity". In their algorithm, the weight a citation contributes to a paper's influence also depends on the (already calculated) influence of the citing papers and their authors. This requires running a recursive algorithm on a sufficiently complete bipartite network of papers and their authors.
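As a very rough illustration of the recursive idea (this is emphatically not Zhou et al.'s actual algorithm, and for brevity it propagates influence only through papers before aggregating per author; the data structures are hypothetical):

```python
def toy_prestige(cites, authors, rounds=50, damping=0.85):
    """Toy 'prestige' scoring: a citation from an influential paper is worth
    more than one from an obscure paper. `cites` maps each paper to the list
    of papers it cites; `authors` maps each paper to its author names."""
    papers = list(cites)
    score = {p: 1.0 for p in papers}
    for _ in range(rounds):
        # Each paper's new score is fed by the (old) scores of its citers,
        # split evenly over everything each citer references.
        score = {
            p: (1 - damping) + damping * sum(
                score[q] / len(cites[q]) for q in papers if p in cites[q])
            for p in papers
        }
    # Aggregate an author's prestige as the sum of their papers' scores.
    author_score = {}
    for paper, names in authors.items():
        for name in names:
            author_score[name] = author_score.get(name, 0.0) + score[paper]
    return score, author_score
```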
Though the alternatives look enticing, we ended up looking mainly at citation count.
The algorithmic methods were beyond the reach of our data: we would have had to find the names of every person involved in every iGEM team and all the papers they have written, filtering out large numbers of false matches. This was impracticable.
The h- and g-indexes don't actually show more than the raw citation counts when it comes to tendencies over time. Also, we had relatively few search queries to compare against each other, given that two were discarded for various flaws (discussed just below). This meant that the h- and g-index, while valuable metrics, were not suitable for our particular data analysis.
Data analysis
What does this data actually mean?
Is our data usable?
Yes. Mostly...
Given our doubts over the accuracy of Google Scholar's data, we considered it a priority to exercise caution with our search query results. This paid off: the data compiled using the queries with IDs 5 and 2 had fatal flaws and were rejected from further analysis (discussed below). The other data sets were found to be suitable.
Our method for examining data suitability was empirical: random samples of each data set were passed under human eyes. For most queries, this inspection of random subsets showed acceptably low levels of "background static" (i.e. results that Google Scholar had automatically matched to the query but which were not actually relevant). These would form only a drop of error in the ocean of relevant data.
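A minimal sketch of this spot check, assuming one query's results have been exported to CSV (the file name and column names are hypothetical):

```python
import csv
import random

# Draw a random sample of scraped results for manual relevance inspection.
with open("query1_results.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

for row in random.sample(rows, k=min(30, len(rows))):
    print(row["Title"], "|", row["Year"])  # eyeball each hit: relevant or static?
```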
...but only mostly.
Query 5 (`iGEM OR "International Genetically Engineered Machine"`) was found to have an unacceptably large level of static. The reason was quickly identified: because the two quoted terms in Query 5 ("iGEM", "International Genetically Engineered Machine") were separated by a disjunction (OR), the query would easily match anything that contained just the acronym "IGEM"! This let in acronyms from economics such as "Inter-temporal General Equilibrium Model (IGEM)", the British "Institution of Gas Engineers & Managers (IGEM)", and various medical terms and chemical names. The entire data set with ID 5 was dismissed from further analysis.
The lesson we took from this is not to search for short acronyms by themselves. Query 1 (`("synthetic biology" OR "genetic engineering") AND ("iGEM" OR "International Genetically Engineered Machine")`) can be thought of as "version two" of the problematic Query 5. It searches for the same terms, but adds a conjunction (AND) with either synthetic biology or genetic engineering, which bends the results toward our iGEM. This tunes the static down to an acceptable level.
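Reusing the hypothetical matchers sketched earlier, the difference is easy to demonstrate:

```python
def matches_query_5(full_text):
    """The flawed disjunction-only query: any 'IGEM' acronym matches."""
    return mentions_any(full_text, ["iGEM", "International Genetically Engineered Machine"])

economics = "We extend the Intertemporal General Equilibrium Model (IGEM) of taxation."
synbio = "Our iGEM team characterised new promoters for synthetic biology."

assert matches_query_5(economics)      # false positive sneaks into Query 5
assert not matches_query_1(economics)  # the conjunction in Query 1 filters it out
assert matches_query_1(synbio)         # genuine hits still match
```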
Query 2 (`synthetic biology`) had a different problem: it was too big. The query was an attempt to capture statistics for the entire field of synthetic biology, so we could statistically determine the relative influence of the iGEM competition. However, we had forgotten the 1000-result cap imposed by Google Scholar: it is impossible to retrieve results beyond this 1k "event horizon". Google does not publish how the order of results is determined, so these first 1000 results (out of what are likely tens or hundreds of thousands of papers) are all biased by some unknown force. Were more-cited papers favoured? Were more recently published papers favoured? No conclusions can be drawn from a biased and small subset of the full data, so we also discounted data set 2 from further analysis.
To emphasize: Data sets 2 and 5 are not included in any further analysis.
Is iGEM getting attention?
What does "attention" mean within a scientific context? We will operate under the assumption that getting attention correlates strongly with being mentioned in scholarly articles. Hence, we quantify the attention that a term is getting, by searching for that term, and summing up result counts.
Here is a chart summarising how many papers mention various terms floating about iGEM over time:
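The per-year tallies behind a chart like this can be produced from a Publish or Perish CSV export with a few lines; a minimal sketch, with hypothetical column names:

```python
import csv
from collections import Counter

papers_per_year = Counter()
cites_per_year = Counter()
with open("query1_results.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        year = row["Year"].strip()
        if year.isdigit():  # skip results with no usable publication year
            papers_per_year[int(year)] += 1
            cites_per_year[int(year)] += int(row["Cites"] or 0)

for year in sorted(papers_per_year):
    print(year, papers_per_year[year], cites_per_year[year])
```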
And the answer is...
Things are looking good for iGEM! Since the first iGEM competition in 2003, more teams are participating each year and more parts are being submitted. The effort to expose iGEM to the community has paid off: the number of scientific papers mentioning iGEM and the Registry of Standard Parts has risen in proportion to the growth of the competition itself.
Papers mentioning a specific Registry BioBrick™ have only begun to appear in recent years, but the numbers show growth. We hypothesize that the Registry's contents are only now reaching the critical mass needed to become a useful research tool. The founders' dream (iGEM Foundation, 2012b) of "making genetics modular" is becoming reality.
Where this data comes from
Data about participating teams and the number of submitted BioBricks™ come from the iGEM Foundation (2012a). The other data sets come from the results of the Google Scholar queries with IDs 1, 3 and 4 (see the query summary table above).
Is the relationship between iGEM and the Registry clear?
(Each chart plots the papers published in that year and the times those papers were cited.)
Chart 2: "iGEM" attention
Chart 3: "iGEM and Registry" attention
Chart 4: "Registry" attention
These three graphs show the number of papers published each year that contain certain search queries, as well as the number of times those papers were cited. All graphs show positive tendencies: the competition is becoming more widespread, and more iGEM-related papers are being published and recognized. The search queries were chosen to show which part of iGEM is usually cited: the iGEM competition, the Registry of Standard Parts, or both. The data show that only around half of the papers cite both elements. It should be noted that some outlying data points were ignored as obvious mis-searches.
Expectations
One of the iGEM competition's important goals is to build up well-characterized BioBrick™ content in the Registry; the competition and the Registry are thus inherently linked. We would therefore expect publications that mention iGEM to also refer to the Registry. However, as the Registry is not only used by iGEM, we also expected to find a large number of papers that mention the Registry without mentioning iGEM.
In other words, we expected:
- a large proportion of papers that mention iGEM and the Registry,
- a large proportion that mention only the Registry and
- only a very small proportion that mention only iGEM.
Yes and No
The expected proportion of papers mentioning only the Registry was found. However, of the papers that mention iGEM, about half do not mention the Registry. This was a greater proportion than we expected.
We theorise that this may be because the iGEM Foundation and iGEM teams under-emphasise iGEM's importance to part standardisation. Indeed, iGEM is usually presented by the Foundation and teams as "a synthetic biology competition", when really that's just half the picture. We're not just competing; we also exist for a greater good: to make the Registry better and do our part in helping organise synthetic biology!
Citations
How is iGEM cited and what can we learn from it?
Sourcing data
In order to analyse how exactly iGEM and the Registry are being cited, we decided to manually examine a set of the papers in our results. We had discarded Scopus and Web of Knowledge earlier when carrying out wide-range data collection. However, for this focused search, the small but certain selection of papers that Scopus and Web of Knowledge gave us was perfect!
We manually combined the results obtained from Scopus and Web of Knowledge. We then deleted duplicates and the few remaining irrelevant publications.
The keyword “iGEM” gave 41 publications combined from Scopus and WoK. Of these, we discarded 16 publications for various reasons:
- 5 texts were non-research articles, such as magazine articles
- 2 were articles on bioethics that mentioned iGEM only in passing
- 1 article cited an article with "iGEM" in the title but was itself unrelated
- 6 were articles we couldn’t locate or access despite the University of St Andrews having subscriptions to various publishers
- 2 were in French.
This left 25 articles.
Discussion
We considered citations sufficient when they cited the Registry or named the specific BioBricks™ used (when appropriate) and cited either the Knight or the Endy paper about BioBrick™ assembly.
We examined those 25 articles for how they cited iGEM and the Registry. We found that 11 articles cited iGEM and/or the Registry of Standard Biological Parts satisfactorily.
There were two common forms of citation:
- Registry of Standard Biological Parts [http://www.partsregistry.org].
- Knight, T. F. (2003). Idempotent Vector Design for Standard Assembly of Biobricks. DOI: 1721.1/21168.
5 articles did not cite the Registry or BioBricks™ sufficiently by our standards. For one of them, we question the relevancy of its content. The remaining 4 show examples of a bare in-text citation of the Registry, with or without a hyperlink, with no mention of the specific BioBrick™ used, or with no mention of BioBricks™ at all. Surprisingly, each of these papers was associated with an iGEM team.
We found 9 additional papers in the journal IET Synthetic Biology, which show vast variation in content, research, and citation quality. The journal seems to have been published only once, and seems to have asked all 2006 iGEM finalists to submit a paper based on their research. This resulted in some sub-par articles. Because of their generally somewhat unprofessional form and strong affiliation with iGEM, we excluded them from our overall data set.
Our recommendations
Reviewing the data, we have concluded that there are standard methods of citation being used by the scientific community to refer to the Parts Registry. In order for the Registry to uphold referencing standards as well as Parts standards within synthetic biology, we think this method of referencing should be officially recommended on the Registry and iGEM website.
A clear and standard citation method would support teams who are attempting to publish, and let them set an example of citation style to the rest of the scientific community. Additionally, clearly stating a standard method of citation would make citing the Registry easier and so encourage its citation in general. Of course, more citations mean more attention and adoption within the field of synthetic biology.
Teams participating in iGEM should be encouraged to cite properly and to try to publish their work. A tutorial of some kind hosted on the iGEM website would help. Such things (among many other bright ideas) were proposed by Cowell (2008), but haven't been implemented. We see a standardised citation method as a high priority for maximising iGEM's scientific influence.
Conclusion
Our Human Practices in a nutshell...
We found that the iGEM competition is making a positive impact. The competition is growing in size and scope, and both iGEM and the Registry are netting a proportionally increasing amount of attention from the scientific community. We are doing well!
However, we also found that quite a number of discussions of iGEM miss the important connection between our iGEM competition and the Parts Registry. We recommend that the iGEM Foundation and future teams emphasise the iGEM competition's raison d'être clearly in the future.
We also noticed that some papers do not give sufficient or clear credit to the Registry. We interpret this as confusion over how the Registry should be cited. We recommend that standardised referencing be introduced. This would support the publishing process for inexperienced undergraduates involved in the competition, gather greater attention for the iGEM Foundation and Parts Registry, and hence further our aims of synthetic biology standardisation. ∎
References
"Ael 2" and "Vulpecula", 2012. h-index (Hirsch). Wikipedia. [image online] Available at: <http://en.wikipedia.org/wiki/File:H-index-en.svg> [Accessed Jul 27, 2012].
"Ael 2", 2012. Illustrated example for the g-index proposed by Egghe. Wikipedia [image online] Available at: <http://en.wikipedia.org/wiki/File:Gindex1.jpg> [Accessed Jul 27, 2012].
Cowell, M.L., 2008. Making iGEM Better. [web page] Available at <http://openwetware.org/wiki/User:Macowell/Making_iGEM_Better> [Accessed Jul 19, 2012].
Egghe, L., 2006. Theory and practise of the g-index. Scientometrics [online], Volume 69 (Issue 1), p.131-152. Available at: <www.springerlink.com/content/4119257t25h0852w/?MUD=MP> [Accessed Jun 7, 2012].
Harzing, A.W., 2007. Publish or Perish. [computer program] Available from <http://www.harzing.com/pop.htm>
Hirsch, J.E., 2005. An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, Volume 102 (Issue 46). [online] Available at: <http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1283832/?tool=pmcentrez> [Accessed Jul 5, 2012]
iGEM Foundation, 2012a. Previous iGEM Competitions. [web page] Available at: <https://igem.org/Previous_iGEM_Competitions> [Accessed Jul 30, 2012]
iGEM Foundation, 2012b. Press Kit. [web page] Available at: <https://igem.org/Press_Kit> [Accessed Aug 3, 2012]
Iselid, L., 2006. Research on citation search in Web of Science, Scopus and Google Scholar. One Entry to Research [blog] Available at: <http://oneentry.wordpress.com/2006/08/11/research-on-citation-search-in-web-of-science-scopus-and-google-scholar/> [Accessed Jun 20, 2012].
Péter J., 2006. Dubious hit counts and cuckoo's eggs. Online Information Review [online] Volume 30 (Issue 2) p.188-193. Available at: <http://www.emeraldinsight.com/journals.htm?articleid=1550726&show=abstract> [Accessed Jun 20, 2012].
Zhou Y., Liyan L. and Menghui L., 2012. Quantifying the influence of scientists and their publications: distinguishing between prestige and popularity. New Journal of Physics, [online] Volume 14 (March 2012) Available at: <http://iopscience.iop.org/1367-2630/14/3/033033/> [Accessed Jun 7, 2012].