Large Scalability - Papers and implementations

In recent years the Googles and Amazons of this world have released papers on how to scale computing and processing to terabytes of data. These publications have led to various open source projects that build on that knowledge. However, mapping each open source project back to the original paper, and to the tasks the project solves, is not always easy.

With no guarantee of completeness, this list provides a short mapping from open source project to publication.

There are further overviews available online as well as a set of slides from the NOSQL debrief.

Map Reduce → Hadoop Core Map Reduce: distributed programming framework (further reading: "Distributed programming on rails", "5 Hadoop questions", "10 Map Reduce Tips")

GFS → HDFS (Hadoop Distributed File System): distributed file system for unstructured data

Bigtable → HBase, Hypertable: distributed storage for structured data (see also "When to use HBase")

Chubby → ZooKeeper: distributed lock and naming service

Sawzall → Pig, Cascading, JAQL, Hive: higher-level languages for writing map reduce jobs

Protocol Buffers → Protocol Buffers, Thrift, Avro (more traditional: Hessian, Java serialization): data serialization (see also early benchmarks)

Some NoSQL storage solutions → CouchDB, MongoDB: document databases

Dynamo → Dynomite, Voldemort, Cassandra: distributed key-value stores

Index → Lucene: search index

Index distribution → Katta, Solr, Nutch: distributed Lucene indexes

Crawling → Nutch, Heritrix, Droids, Grub, Aperture: crawling linked pages
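To give a feel for the first entry above, here is a minimal sketch of the map/shuffle/reduce phases using the classic word-count example. This is a single-process illustration of the programming model only, not Hadoop's API; Hadoop runs the same three phases distributed across a cluster.

```python
from collections import defaultdict

def map_phase(documents):
    # The "map" step: emit (word, 1) pairs for every word seen.
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    # Between map and reduce, the framework groups all values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # The "reduce" step: aggregate the grouped values per key.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```

Because map emits independent pairs and reduce only sees values grouped by key, both phases parallelize trivially across machines, which is the whole point of the model.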
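The Dynamo-style stores in the list partition keys across nodes with consistent hashing. The sketch below shows the core idea, hashing nodes and keys onto a ring and walking clockwise to the first node; the node names and the number of virtual nodes are illustrative assumptions, not taken from any of the projects above.

```python
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=8):
        # Sorted list of (hash, node); each node appears vnodes times
        # ("virtual nodes") to smooth out the key distribution.
        self._ring = []
        for node in nodes:
            for i in range(vnodes):
                bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise: the first virtual node at or after hash(key)
        # owns the key; wrap around at the end of the ring.
        idx = bisect.bisect(self._ring, (self._hash(key), ""))
        if idx == len(self._ring):
            idx = 0
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("user:42")  # stable across calls
```

The property that makes this attractive for Dynamo-style systems: adding or removing one node only remaps the keys adjacent to its ring positions, rather than reshuffling the whole keyspace as a plain `hash(key) % n` scheme would.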