Illumina acquired Solexa, and shipped the first Genome Analyzer (the ancestor of the HiSeq). Clive Brown was a central figure at Solexa during their rise and acquisition, and has been at Oxford Nanopore for the last few years (he's the one who gave the talk).
There's a lot of hype in genomics, but Brown is one of the few who has delivered, which is part of the reason why MinION is getting the benefit of the doubt. Very exciting announcement.
If I remember correctly, around 2007 the Sanger method (laborious, nucleotide-by-nucleotide, gel-based sequencing) was quickly displaced by cheaper methods. We started using laser-based sequencers, along with statistics, to determine longer sequences more quickly, using computers instead of legions of grad students.
There are various versions of this graph floating around, most of which start in 2005 with the publication of the first next-generation (multiplexed) sequencing methods: pyrosequencing (454) and sequencing by ligation (George Church) for complete bacterial genomes. Costs came down very quickly after the commercial release in 2006 of 454, ABI SOLiD (SbL), and Solexa (SbS, sequencing by synthesis). This graph in particular refers to the cost of sequencing a human genome, which was first done with these next-generation technologies on James Watson's genome by 454 in late 2007, if I remember correctly.
That is so cool, and the mechanics they used to read the molecules and miniaturize the equipment look incredibly clever to me, as someone ignorant of chemistry. Biology is getting more and more "hackable". It seems we are seeing the same kind of evolution that once happened with computers: what was once unreachable and reserved for just a few people (big mainframes) becomes increasingly cheap and ubiquitous.
They claim 96% accuracy with the potential to get even better, which is even cooler given the long read length.
However, any error can be pretty upsetting and most next gen sequencers overcome this by doing many reads per base. I'm guessing that the current behemoth machines would be kept around for the time being to assemble the final, error free sequences?
The current behemoth machines have errors too. Often the best way to get a really polished sequence is to sequence on machines with different types of errors and interlace the data appropriately.
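The "many reads per base" idea above boils down to a consensus call across redundant reads. A toy sketch, assuming pre-aligned, equal-length reads (real pipelines weight votes by base quality and handle indels properly):

```python
from collections import Counter

def consensus(aligned_reads):
    """Toy consensus: majority vote at each position across aligned reads.

    aligned_reads: equal-length strings over A/C/G/T ('-' for a gap).
    """
    calls = []
    for column in zip(*aligned_reads):
        bases = Counter(b for b in column if b != '-')
        calls.append(bases.most_common(1)[0][0] if bases else '-')
    return ''.join(calls)

# Three noisy reads of the same region; independent errors get outvoted.
reads = [
    "ACGTACGT",
    "ACGTACCT",   # error at position 6
    "ACCTACGT",   # error at position 2
]
print(consensus(reads))  # -> "ACGTACGT"
```

This is also why mixing machines with different error profiles helps: errors that are independent across platforms rarely win the vote at the same position.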
I don't understand the downvotes, because it seems to be an interesting question, and the article is not very clear.
Is $900 the price of the gadget alone? The gadget plus a one-time enzyme kit? Or the price of each run (enzymes plus disposables), with an additional up-front cost for the gadget? Is the thing reusable, or must you buy a whole new kit each time?
Ars had its own article on this saying that you do not need enzymes. It's forcing the molecule through a tiny pore and reading the change in an electrical signal. That said, one problem mentioned is that the error rate is quite high.
"Both GridION and MinION operate using the same technology: DNA is added to a solution containing enzymes that bind to the end of each strand. When a current is applied across the solution these enzymes and DNA are drawn to hundreds of wells in a membrane at the bottom of the solution, each just 10 micrometres in diameter."
Perhaps this method uses fewer enzymes than the old ones.
10kbp reads would also be very useful for de novo assembly of larger genomes, particularly in repetitive regions. One company (PacBio) currently claims reads this long, but the error rate is around 15%.
Yeah, the 10 (100?) kb length is the huge thing here. An awful lot of sequencing bioinformatics is dealing with the implications of short read length. This problem is possibly going to go away now.
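Why long reads matter for repeats, as a toy sketch: with idealized error-free reads, two genomes that differ only in the order of unique blocks around a repeat produce identical read sets when reads are shorter than the repeat (the sequences below are made up for illustration):

```python
def read_set(genome, read_len):
    """Idealized, error-free reads: every substring of length read_len."""
    return {genome[i:i + read_len] for i in range(len(genome) - read_len + 1)}

# A 20-base repeat separates unique markers; the two genomes differ only
# in the order of the middle markers TTG and GCA.
R = "A" * 20
g1 = "CCC" + R + "TTG" + R + "GCA" + R + "GGG"
g2 = "CCC" + R + "GCA" + R + "TTG" + R + "GGG"

# Reads shorter than the repeat give identical read sets: assembly is ambiguous.
print(read_set(g1, 10) == read_set(g2, 10))  # -> True
# Reads longer than the repeat span it and distinguish the genomes.
print(read_set(g1, 30) == read_set(g2, 30))  # -> False
```

Human genomes are full of repeats much longer than a 100 bp read, which is why 10 kbp reads would simplify de novo assembly so much.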
The biggest problem has been and will continue to be interpretation of the results, particularly in human clinical sequencing. Especially as more data becomes available with advances like this.
The original human genome project started in 1990 and took a decade.
Now you can do it in two to six hours for around a grand?
I wonder if we'll see more people put their genomes in the public domain?
EDIT: And that's just the "compare to two decades ago" position.
The real craziness comes when you realize that this will inevitably get cheaper and smaller, setting the stage for more advancements in this field. (I think at this point we're all expecting to be able to actually turn genomes into living things before 2050.)
Meh, it's more than just synthesis. Synthesis costs are falling faster than Moore's law too, but you don't get things like epigenetic modifications or packing for free, and that tech basically doesn't exist yet.
http://www.genome.gov/images/content/cost_per_genome.jpg
It's surpassing Moore's law. Looks like pretty soon we'll reach the mythical $1000 mark.
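A rough back-of-the-envelope comparison, using approximate numbers eyeballed from the linked NHGRI graph (roughly $10M per genome in mid-2007 and $10K in late 2011; these are assumptions, not exact data):

```python
import math

# Eyeballed from the NHGRI cost-per-genome graph (approximate, not exact):
cost_start, cost_end, years = 10_000_000, 10_000, 4.5

halvings = math.log2(cost_start / cost_end)   # ~10 halvings over the period
months_per_halving = years * 12 / halvings    # ~5.4 months per halving
print(round(months_per_halving, 1))
```

Moore's law halves cost roughly every 18 to 24 months, so by this estimate sequencing costs fell several times faster over that stretch.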