Bioinformatics: the next big thing in life sciences
Bioinformatics is starting to show its strength as a tool to interpret and analyse vast amounts of complex data in the search for new drugs.
While the field of bioinformatics has existed since the 1970s, this may be the decade in which it finally comes of age and demonstrates its value to the life sciences industry. With the explosion of biological data from developments such as the Human Genome Project, traditional biological research methods can no longer cope with the quantity and complexity of the data. As a result, new approaches have been developed, which could lead to novel and improved treatments for a variety of diseases, including some of the world’s biggest killers, such as cancer and heart disease.
It has taken some time for bioinformatics to reach the spotlight and to be seen as a priority by many of the world’s biggest pharma companies. Only now are many realising the true potential of data intensive experiments and, as a result, bioinformatics is now a rapidly expanding sector. Its development has been fuelled by technological advances such as increased computing power, innovative algorithms and novel data management techniques, alongside some brilliant researchers.
Genomics, proteomics and metabolomics are only now starting to be used to their full potential. However, exploiting these ‘omics’ technologies effectively depends on our ability to interpret the data they generate, and this is where bioinformatics plays a key role.
As next-generation sequencing is applied in more areas than ever before, it has become a business-critical technology. With the identification of new drug targets dependent on new approaches, and with the growing importance of areas such as companion diagnostics, bioinformatics has grown significantly in importance. However, the sheer volume of data now being generated cannot be analysed using traditional techniques. Bioinformatics has therefore become crucial to managing the data and identifying the meaningful connections that can later be turned into new treatments or vaccines.
There are a number of different models for funding omics studies, and particularly genomics. Traditional academic sources are being supplemented by healthcare systems and private organisations. In the UK, the NHS and the creation of Genomics England will see 100,000 genomes sequenced as part of the 100,000 Genomes Project, with the data made publicly available. At the same time in the US, a private model is emerging, epitomised by biologist and entrepreneur Craig Venter’s latest venture, Human Longevity Inc.
After a period of losing ground to other countries in running clinical trials, the UK government has signalled its commitment to the sector by creating a new role, Minister for Life Sciences. The 100,000 Genomes Project appears to be part of this strategy: by taking the data generated from these cohorts, selecting the most appropriate groups and applying the latest bioinformatics technologies, we will be able to identify the causes of disease far more effectively and develop new and improved treatments. This could also have a significant impact on how clinical trials are run, attracting trials back to the UK on the strength of the quality of the available data.
With the inexorable march of genomics towards the clinic, the accuracy and precision of the resulting measurements become of paramount importance. An improved canonical reference human genome is currently under development: a gold standard against which genomics technologies can be compared objectively, enabling thorough quality assurance and comparison of methods, and allowing researchers to identify the most appropriate for a given project.
Two approaches to this are underway. First, there is the public Genome in a Bottle initiative from the US National Institute of Standards and Technology (NIST), which will see a single genome measured by a plethora of techniques. By contrast, the commercial Platinum Genomes project applies the market-leading Illumina platform to a family of genomes. Through these projects we will be able to set a benchmark for comparing the effectiveness of the increasing range of bioinformatics techniques and approaches.
However, even with all of this progress, the bioinformatics industry still faces a massive challenge in ensuring quality. Multi-omics data integration remains incredibly difficult, and understanding how these datasets correlate is far from solved. The technology is improving all the time, but technology alone may not overcome our one major limitation: bioinformatics still suffers from serious skill shortages, particularly in the increasingly vital field of bio-curation. While we can develop new techniques and approaches to improve metadata analysis, the shortage of experts who can effectively manage and curate all of this information remains a major roadblock.
However, despite all of these challenges, things have never looked better for bioinformatics. From being a niche and underused area of science just a decade ago, bioinformatics is now at the very forefront of science. Big pharma is now waking up to its potential impact in identifying possible new therapies, and in the rapidly expanding world of companion diagnostics. At the same time, we are also seeing significant commercial interest in bioinformatics with global organisations such as Qiagen and Illumina acquiring companies to improve their capabilities in this area.
Quite simply, without the expertise of the bioinformatics community, it is going to be increasingly difficult to develop new treatments or vaccines, making bioinformatics the next big thing in life sciences.
About the author:
Will Spooner is Chief Science Officer of Eagle Genomics, an expert provider of bioinformatics software and services across the life science and other sectors. www.eaglegenomics.com