Team:St Andrews/Human-practices

Revision as of 10:33, 9 August 2012 by Aku

Scientific impact of iGEM

"Most influential synthetic biology competition" vs. "Just some kids playing"?

We wanted to determine just how relevant the iGEM competition is to the greater SynBio community. So we investigated the scientific attention garnered by both iGEM and the Registry of Standard Parts. We chose a data-driven approach: we extracted data from the search results of various queries (such as ("iGEM" OR "International Genetically Engineered Machine") AND ("synthetic biology" OR "genetic engineering")) across several publication search engines: Web of Knowledge, Scopus, PubMed and Google Scholar. Google Scholar was chosen for the more detailed analysis, as we found the alternatives to have various shortcomings.

We found that our data are consistent with our initial hypothesis: iGEM is an important contributor to the SynBio community. These findings have some implications for the iGEM competition, which we discuss.

To quantify these results further, we analyzed how exactly iGEM and the Registry have been cited. Looking at around 50 papers in closer detail, we found a standard way of referencing the iGEM resources. All in all, we recommend that this standard citation be adopted by all papers related to iGEM and the Registry.

Overall, we found that the iGEM competition is making a positive impact. The competition is growing both in size and scope, and it is netting a proportionally increasing amount of attention from the scientific community. However, we found some examples in which iGEM was not given sufficient credit. This might be due to the involved parties not fully grasping the aims of the iGEM competition. We think that the iGEM Foundation and the competing teams could increase promotion of the competition's goals. We feel that our citation recommendation would be a viable method to achieve this.

Query summary

Here's a quick breakdown of what we queried for on Google Scholar and what sort of data was returned. (The ID matches the name of the data set in our data tables). The h- and g-indexes are explained just below!

Dataset ID | Plain English query | Query | Nº Papers | Nº Citations | h-index | g-index | Query date
1 | Papers mentioning iGEM in context of synbio | ("synthetic biology" OR "genetic engineering") AND ("iGEM" OR "International Genetically Engineered Machine") | 770 | 3253 | 26 | 45 | 17/7/2012
2 | All synthetic biology | synthetic biology | 1000 | 68482 | 127 | 214 | 17/7/2012
3 | Papers mentioning Registry of Parts | "Registry of Standard Biological Parts" OR "partsregistry.org" OR "parts.mit.edu" | 751 | 6442 | 39 | 69 | 17/7/2012
4 | Papers citing a particular Part | partsregistry.org/Part: | 54 | 263 | 5 | 16 | 17/7/2012
5 | Papers mentioning iGEM | iGEM OR "International Genetically Engineered Machine" | 1000 | 9095 | 36 | 64 | 17/7/2012
6 | Papers mentioning iGEM and Registry | ("iGEM" OR "International Genetically Engineered Machine") AND ("Registry of Standard Biological Parts" OR "partsregistry.org" OR "parts.mit.edu") | 330 | 2208 | 23 | 42 | 17/7/2012

Note Searches were capped at a maximum of 1000 results. Hence getting 1000 results for a query implies that more exist! Those first 1000 are only the ones the search engine judged most relevant.

Why we used Google Scholar

All in all, we found Google Scholar to most closely meet our analytic needs.

As WoK, Scopus and PubMed are strictly curated databases and limited in scope, they missed many obviously relevant publications. We also found their search options unsuitable: Many of them did not support either full text search (they looked at titles, keywords and abstracts only) or boolean operators. But for the following reasons, we needed a search to include both:

  • iGEM cannot be expected to always be the main subject of a paper, hence full text search.
  • There are many relevant terms floating about iGEM, hence boolean operators like "OR" to allow treating papers that contain "International Genetically Engineered Machine" or the acronym "iGEM" equally.

Just how much wider is Google's search scope? Here is an example: PubMed gave so few results (16 for iGEM genetic*) that we quickly discarded it. Manually merging the Web of Knowledge and Scopus results for the query iGEM AND genetic* (discarding obviously irrelevant results) gave 43 results. Then we queried Google Scholar. It gave us 770 for (“synthetic biology” OR "genetic engineering") AND (“iGEM” OR “International Genetically Engineered Machine").
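The boolean queries in the summary table can be composed programmatically, which keeps the phrasing consistent across search engines. Here is a minimal sketch (the helper names any_of and all_of are our own invention) that reproduces Query 1:

```python
def any_of(*terms):
    """Join terms with OR, quoting each so multi-word phrases match exactly."""
    return "(" + " OR ".join('"{}"'.format(t) for t in terms) + ")"

def all_of(*groups):
    """Join already-parenthesised groups with AND."""
    return " AND ".join(groups)

# Query 1 from the summary table above:
query_1 = all_of(
    any_of("synthetic biology", "genetic engineering"),
    any_of("iGEM", "International Genetically Engineered Machine"),
)
```

Quoting every term matters: without the quotes, a phrase like International Genetically Engineered Machine would match each word independently.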

Of course, Google Scholar too is but a bronze bullet: It brings its own drawbacks. It is engineered to pick up things that only seem like scholarly articles. Like Google's search results in general, the results are not curated by a human. This has been criticised in the literature (Péter, 2006). We found the occasional hilarious total miss. Google Scholar is also known to somewhat overestimate citation counts (Iselid, 2006). However, we concluded from manual examination of a random sample that the majority of the results are plausible and (most importantly) far greater in scope than searches in curated databases. Taking these aspects into consideration, we found Google Scholar best fulfilled our requirements.

Caveat! We only want statistics to identify trends. For this, large and coarse pieces of data suffice. We would discourage using this method for obtaining precise values!

Browse the data

Our data sets are online in a nifty and very usable Google Docs folder.

An introduction is included in case you get lost or want more information.

On extraction tools

We made extensive use of Harzing Publish or Perish (Harzing, A.W., 2007.) to scrape Google Scholar results. The tool has many limitations. However, in our experience it is the best available resource for managing the mess that scientific publication data scraping tends to become.

We did try other things: We quickly found manual methods too slow. Various Firefox browser plugins simply failed outright, were extremely awkward to use or produced clearly erroneous results. The Mac OS program Papers was easy to use and found huge numbers of papers (as it could access many sources), but had unacceptably high rates of error, problems with duplicates and could not export the results into a form we could easily process. Hence Publish or Perish.

Once we had scraped our data from Google Scholar, we needed a method to quantify the relevance of a given scientific article. There are many ways to quantify the success of a paper. Here are a few we investigated:

Plain citation count

It's good scientific practice to cite (i.e. credit work carried out by others). High citation count can generally be taken as an indicator of a high-quality or high-impact paper. This is the most traditional method of ranking the influence of papers.

The main disadvantage of the citation-count method is its lack of consistent standards. Even papers within a single scientific field will have differing referencing conventions and citation counts. It is also significant that older papers have an edge over newer ones, as they have had more time to be cited.

h-index

The h-index is an integer computed from a set of papers. It is used to measure the output and influence of a set of scientists: a greater h-index implies more productive and more influential authors. It was invented by physicist J.E. Hirsch (2005) and has since been automatically calculated by many citation databases. Here is its definition: "A set of papers has h-index h if h papers out of that set have been cited at least h times." An illustration is available on Wikipedia ("Ael 2" and "Vulpecula", 2012).
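The definition above translates directly into code. A minimal sketch: sort the citation counts in descending order and find the last rank at which the count still matches the rank.

```python
def h_index(citations):
    """h-index: the largest h such that h papers each have at least h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank  # still h papers with >= h citations each
        else:
            break     # counts are sorted, so no later rank can qualify
    return h

# Five papers cited 10, 8, 5, 4 and 3 times: four papers have >= 4 citations,
# but not five papers with >= 5 citations, so h = 4.
```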

g-index

The g-index is a citation index meant to quantify the influence of a set of papers. It was proposed by Leo Egghe (2006) as a variation on the h-index. It puts more emphasis on the most cited papers, and Egghe argues that it ranks highly cited authors more fairly. He gives the following definition: "A set of papers has a g-index g if g is the highest rank such that the top g papers have, together, at least g² citations." A clarifying illustration is available on Wikipedia ("Ael 2", 2012).
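This definition can likewise be sketched in a few lines (here capped at the number of papers, a common simplification of Egghe's definition):

```python
def g_index(citations):
    """g-index: largest rank g whose top g papers together have >= g^2 citations."""
    total, g = 0, 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        total += c               # cumulative citations of the top `rank` papers
        if total >= rank * rank:
            g = rank
    return g

# Five papers cited 10, 8, 5, 4 and 3 times have 30 citations in total,
# and 30 >= 5^2, so g = 5 (whereas their h-index is only 4).
```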

Algorithmic methods

It's worth noting that there are many other ways of quantifying the productivity and impact of a set of papers or scientists. For example, Y.B. Zhou et al (2012) propose a more complete method for "distinguishing prestige from popularity". In their algorithm, the weight a citation contributes to a paper's influence also depends on the (already calculated) influences of the citing papers and their authors. This requires running a recursive algorithm on a sufficiently complete bipartite network of papers and their authors.
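To make the mutual-reinforcement idea concrete, here is a toy sketch of the general principle only, not Zhou et al.'s actual algorithm: paper scores and author scores feed into each other over an authorship network until the ranking stabilises. All weights and the toy network are our own illustrative choices.

```python
def prestige_scores(authorship, cites, iterations=30):
    """authorship: dict paper -> list of authors
       cites: dict paper -> set of papers that paper cites"""
    papers = list(authorship)
    authors = sorted({a for alist in authorship.values() for a in alist})
    p_score = {p: 1.0 for p in papers}
    a_score = {a: 1.0 for a in authors}
    for _ in range(iterations):
        # A paper's new score: a base of 1, plus half the scores of the
        # papers citing it, plus half the mean score of its authors.
        new_p = {}
        for p in papers:
            citing = [q for q in papers if p in cites.get(q, set())]
            author_part = sum(a_score[a] for a in authorship[p]) / len(authorship[p])
            new_p[p] = 1.0 + 0.5 * sum(p_score[q] for q in citing) + 0.5 * author_part
        # An author's new score: the mean score of their papers.
        new_a = {}
        for a in authors:
            mine = [new_p[p] for p in papers if a in authorship[p]]
            new_a[a] = sum(mine) / len(mine)
        # Normalise both sides so the iteration stays bounded.
        p_max, a_max = max(new_p.values()), max(new_a.values())
        p_score = {p: s / p_max for p, s in new_p.items()}
        a_score = {a: s / a_max for a, s in new_a.items()}
    return p_score, a_score
```

In a tiny network where paper A is cited by both B and C, A ends up with the top score, and A's author outranks the author who only wrote uncited papers.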

Though the alternatives look enticing, we ended up looking mainly at citation count.

The algorithmic methods were beyond our reach in terms of data availability: We would have had to find the names of every person involved in every iGEM team and all the papers they have written, while filtering out large numbers of false matches. This was impracticable.

The h- and g-indexes don't actually show more than raw citation counts when it comes to trends over time. Also, we had relatively few search queries to compare against each other, given that two were discarded for various flaws (discussed just below). This meant that the h- and g-index, while valuable measures, were not suitable for our particular analysis.

Is our data usable?

Yes. Mostly...

Given our doubts over the accuracy of Google Scholar's data, we considered it a priority to exercise caution with our search query results. This paid off: the data we compiled using queries 2 and 5 had fatal flaws and were rejected from further analysis (discussed below). The other data sets were found to be suitable.

Our method to examine data suitability was empirical: Random samplings of each data set were passed under human eyes. For most queries, this observation of random subsets showed acceptably low levels of "background static" (i.e. results that Google Scholar had automatically matched to the query, which were not actually relevant). These would form only a drop of error in the ocean of relevant data.
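The spot check itself is simple to reproduce. A minimal sketch, assuming scraped results are held in a list (the sample size of 30 and the seed are illustrative; the seed makes the sample reproducible, so two reviewers see the same records):

```python
import random

def review_sample(records, k=30, seed=2012):
    """Draw a reproducible random sample of scraped records for manual review."""
    rng = random.Random(seed)       # fixed seed => same sample every run
    records = list(records)
    return rng.sample(records, min(k, len(records)))
```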

...but only mostly.

Query 5 (iGEM OR "International Genetically Engineered Machine") was found to have an unacceptably large level of static. The reason was quickly identified: Because the two quoted terms in Query ID 5 ("iGEM", "International Genetically Engineered Machine") were separated by a disjunction (OR), the query would easily match anything that contained just the acronym "IGEM"! This meant acronyms in economics such as "Inter-temporal General Equilibrium Model (IGEM)" or the British "Institution of Gas Engineers & Managers (IGEM)" and various medical terms and chemical names snuck in. The entire data set with ID 5 was dismissed from further analysis.

The lesson we took from this is not to search for short acronyms by themselves. Query 1 (("synthetic biology" OR "genetic engineering") AND ("iGEM" OR "International Genetically Engineered Machine")) can be thought of as the "Version Two" of the problematic Query 5. It searches for the same terms, but adds a conjunction (AND) with either synthetic biology or genetic engineering, which biases the results toward the iGEM we mean. This tunes the static down to an acceptable level.
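The effect of the added conjunction can be sketched as a post-hoc filter over scraped record text (the function and its inputs are hypothetical, for illustration only):

```python
# Context terms that disambiguate our iGEM from unrelated "IGEM" acronyms.
CONTEXT_TERMS = ("synthetic biology", "genetic engineering")

def mentions_our_igem(record_text):
    """True if the text mentions iGEM together with a synbio context term."""
    text = record_text.lower()
    has_igem = ("igem" in text
                or "international genetically engineered machine" in text)
    has_context = any(term in text for term in CONTEXT_TERMS)
    return has_igem and has_context
```

An economics paper on the "Inter-temporal General Equilibrium Model (IGEM)" matches the acronym but lacks the context terms, so it is filtered out.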

Query 2 (synthetic biology) had a different problem: It was too big. The query was an attempt to capture stats for the entire field of synthetic biology, so we could statistically determine the relative influence of the iGEM competition. However, we had forgotten the 1000-result cap imposed by Google Scholar. It is impossible to retrieve results beyond this 1k "event horizon". Google does not publish information about how the order of results is determined. Hence these first 1000 results (out of what are likely tens or hundreds of thousands of papers) are all selected by some unknown criterion. Were more-cited papers favoured? Were more recently published papers favoured? No conclusions can be drawn from a biased and small subset of the full data. We also discounted data set 2 from any further analysis.

To emphasize: Data sets 2 and 5 are not included in any further analysis.

Is iGEM getting attention?

What does "attention" mean in a scientific context? We will operate under the assumption that getting attention correlates strongly with being mentioned in scholarly articles. Hence, we quantify the attention that a term is getting by searching for that term and summing up result counts.
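Turning the scraped results into a trend is then just a matter of counting papers per year. A minimal sketch, assuming records are (title, year) pairs (the field layout is hypothetical; Publish or Perish exports similar columns):

```python
from collections import Counter

def mentions_per_year(records):
    """Count how many scraped papers fall in each publication year."""
    return Counter(year for _title, year in records if year is not None)

# Hypothetical scraped records; undated results are simply skipped.
trend = mentions_per_year([
    ("Paper one", 2008),
    ("Paper two", 2009),
    ("Paper three", 2009),
    ("Undated report", None),
])
```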

Here is a chart summarising how many papers mention various terms floating about iGEM over time:

[Legend: Number of teams | Parts submitted (10s) | Papers mentioning iGEM | Papers mentioning Registry of Parts | Papers mentioning specific Registry Parts]

Chart 1: Summary of iGEM-related attention over time

This chart shows the amount of "attention" the iGEM competition has received over time, relative to the growth of the competition itself (quantified by the number of registered teams). "Attention" is measured by the number of papers mentioning certain keywords deemed relevant to iGEM or the Registry of Standard Parts. An overall positive trend, in proportion with the growth of the competition, can be seen.

And the answer is...

Things are looking good for iGEM! Since the first iGEM competition in 2003, more teams have participated each year and more parts have been submitted. The efforts to expose iGEM to the community have paid off: the number of scientific papers mentioning iGEM and the Registry of Standard Parts has risen in proportion to the growth of the competition itself.

Papers mentioning a specific Registry BioBrick have only begun to appear in recent years, but the numbers show growth. We hypothesize that the Registry's contents are only now reaching the critical mass needed to make it a useful research tool. The founders' dream (iGEM Foundation, 2012b) of "making genetics modular" is becoming reality.

Where this data comes from

Data about participating teams and numbers of submitted BioBricks come from the iGEM Foundation (2012a). The other data sets come from the results of the Google Scholar queries with IDs 1, 3 and 4 (see the query summary table above).

Is the relationship between iGEM and the Registry clear?

Chart 2: iGEM attention
Chart 3: "iGEM and Registry" attention
Chart 4: Registry attention

These three charts plot, over time, the number of papers mentioning iGEM in a synbio context (data set 1), the number mentioning both iGEM and the Registry (data set 6), and the number mentioning the Registry (data set 3). As explained above, data sets 2 and 5 were excluded from this comparison.

Expectations

One of the iGEM competition's important goals is to build up well-characterized BioBrick content in the Registry. Thus, the iGEM competition and the Registry are inherently linked. We would expect publications that mention iGEM to also refer to the Registry. However, as the Registry is not only used by iGEM, we expected to also find a large number of papers that mention the Registry without mentioning iGEM.

In other words, we expected:

  • a large proportion of papers that mention iGEM and the Registry,
  • a large proportion that mention only the Registry and
  • only a very small proportion that mention only iGEM.

Yes and No

The expected proportion of papers mentioning only the Registry was found. However, of the papers that mention iGEM, about half do not also mention the Registry. This was a greater proportion than we expected.

We theorise that this may be due to the iGEM Foundation and iGEM teams undercommunicating the importance of part standardisation and their contribution to this global effort. Indeed iGEM is usually presented by the Foundation and teams as "a synthetic biology competition", when really that's just half the picture. We should make a bigger deal of this!

[Legend: Just iGEM | iGEM and Parts Registry | Just Parts Registry]

Chart 5: Proportional mentions of iGEM, Registry and Both

This chart shows what proportion of papers mention only iGEM, both iGEM and the Registry, or only the Registry.

References

"Ael 2" and "Vulpecula", 2012. h-index (Hirsch). Wikipedia. [image online] Available at: <http://en.wikipedia.org/wiki/File:H-index-en.svg> [Accessed Jul 27, 2012].

"Ael 2", 2012. Illustrated example for the g-index proposed by Egghe. Wikipedia [image online] Available at: <http://en.wikipedia.org/wiki/File:Gindex1.jpg> [Accessed Jul 27, 2012].

Egghe, L., 2006. Theory and practise of the g-index. Scientometrics [online], Volume 69 (Issue 1), p.131-152. Available at: <www.springerlink.com/content/4119257t25h0852w/?MUD=MP> [Accessed Jun 7, 2012].

Harzing, A.W., 2007. Publish or Perish. [computer program] Available from <http://www.harzing.com/pop.htm>

Hirsch, J.E., 2005. An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, Volume 102 (Issue 46). [online] Available at: <http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1283832/?tool=pmcentrez> [Accessed Jul 5, 2012].

iGEM Foundation, 2012a. Previous iGEM Competitions. [web page] Available at: <https://igem.org/Previous_iGEM_Competitions> [Accessed Jul 30, 2012]

iGEM Foundation, 2012b. Press Kit. [web page] Available at: <https://igem.org/Press_Kit> [Accessed Aug 3, 2012]

Iselid, L., 2006. Research on citation search in Web of Science, Scopus and Google Scholar. One Entry to Research [blog] Available at: <http://oneentry.wordpress.com/2006/08/11/research-on-citation-search-in-web-of-science-scopus-and-google-scholar/> [Accessed Jun 20, 2012].

Péter J., 2006. Dubious hit counts and cuckoo's eggs. Online Information Review [online] Volume 30 (Issue 2) p.188-193. Available at: <http://www.emeraldinsight.com/journals.htm?articleid=1550726&show=abstract> [Accessed Jun 20, 2012].

Zhou Y.B., Lü L. and Li M., 2012. Quantifying the influence of scientists and their publications: distinguishing between prestige and popularity. New Journal of Physics, [online] Volume 14 (March 2012) Available at: <http://iopscience.iop.org/1367-2630/14/3/033033/> [Accessed Jun 7, 2012].


University of St Andrews, 2012.

Contact us: igem2012@st-andrews.ac.uk, Twitter, Facebook

This iGEM team has been funded by the MSD Scottish Life Sciences Fund. The opinions expressed by this iGEM team are those of the team members and do not necessarily represent those of Merck Sharp & Dohme Limited, nor its Affiliates.