Apache Hadoop Get Together - February 2012

2012-02-23 00:14
Today the first Hadoop Get Together Berlin of 2012 took place - David got the event hosted by and at Axel Springer, who also kindly paid for the (soon to be published) videos. Thanks also to The unbelievable Machine Company for the tasty buffet after the meetup, and another thanks to Open Source Press for donating three of their Hadoop books.



Today's selection of talks was quite diverse: The event started with a presentation by Markus Andrezak, who gave an overview of Kanban and how it helped him change the development workflow over at eBay/mobile.de. Being well suited for environments that require flexibility, Kanban decreases the risk associated with any single release by bringing the number of features per release down to an absolute minimum. At mobile.de his team got release cycles down to once a day; more than ten releases a day are not unheard of either. The general goal was to reduce the risk associated with releases by shipping fewer features per release, thereby reducing the number of moving parts and, as a result, the number of potential sources of problems: If anything goes wrong, rolling back is no issue - nor is narrowing down which of the changes in that particular release introduced a bug.

This development- and output-focused part of the process is complemented by an input-focused Kanban cycle for product design: Products move from idea to vision to a more detailed backlog to development and finally to live, just as issues in development itself move from todo to in progress, under review and done.

With both cycles the main goal is to keep the number of items in progress as low as possible. This results in more focus for each developer and greatly reduces overhead: Don't do more than one or two things at a time. The only catch: Most companies are focused on keeping development busy at all times - their goal is 100% utilization. That, however, is in no way correlated with actual efficiency: At 100% utilization there is no way to deal with problems along the way, there is no buffer. The idea should instead be to concentrate on a constant flow of released and live features.



Now what is the link between all of that and Hadoop? (Hint: No, this is not a pun on the project's slow release cycle.) The Kanban process allows for frequent releases and frequent feedback. That enables a style of development that starts out from a model of your business case (no matter how coarse that may be), builds some code, measures its performance based on actual usage data and adjusts the model accordingly. Kanban lets you iterate on that loop very quickly, eventually getting you ahead of your competitors. On the technology side, one strong tool in their toolbox for doing analytics on all that incoming data is Hadoop, which lets them scale up the analysis of their business data.

In the second talk Martin Scholl started out by drawing a comparison between printed music sheets and the actual performance of musicians in a concert: The former represents static, factual data; the latter represents a process that may be recorded but cannot itself be copied, as it lives through the interaction with the audience. The same holds true for social networks: Their current state and the way you look at them are deeply influenced by the way you interact with the system in real time.

So in addition to storage solutions for static data, he argues, we also need a way to process streaming data in an efficient and fault-tolerant way. The system he uses for that purpose is Storm, which was open-sourced by Twitter late last year. Built on top of ZeroMQ, it allows for flexible and fault-tolerant messaging. Example applications mentioned were event analysis (filtering, aggregation, counting, monitoring) and parallel distributed RPC based on message passing.
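
To give an idea of what the counting use case looks like in code, here is a minimal sketch of a Storm topology, assuming the 0.x "backtype.storm" Java API that was current at the time; the random event spout and the field names are made up for illustration and are not from the talk:

```java
import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;
import backtype.storm.utils.Utils;

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class EventCountTopology {

    /** Hypothetical spout standing in for the real event source (firehose, queue, ...). */
    public static class RandomEventSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private final String[] types = {"click", "signup", "message"};
        private final Random random = new Random();

        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }

        public void nextTuple() {
            Utils.sleep(100); // throttle the toy event source
            collector.emit(new Values(types[random.nextInt(types.length)]));
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("type"));
        }
    }

    /** Bolt keeping a running count per event type - the "counting" use case mentioned above. */
    public static class EventCountBolt extends BaseBasicBolt {
        private final Map<String, Long> counts = new HashMap<String, Long>();

        public void execute(Tuple tuple, BasicOutputCollector collector) {
            String type = tuple.getStringByField("type");
            Long current = counts.get(type);
            long updated = (current == null ? 0L : current) + 1;
            counts.put(type, updated);
            collector.emit(new Values(type, updated));
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("type", "count"));
        }
    }

    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("events", new RandomEventSpout());
        // fieldsGrouping routes all tuples of one event type to the same bolt instance
        builder.setBolt("counts", new EventCountBolt(), 4).fieldsGrouping("events", new Fields("type"));
        new LocalCluster().submitTopology("event-analysis", new Config(), builder.createTopology());
    }
}
```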

Two concrete examples included setting up a live A/B testing environment that is dynamically reconfigurable based on its input, and event handling in a social network environment where an interaction might trigger messages being sent by mail and instant message but also trigger updates to a recommendation model.
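
For the first example, a small sketch of what deterministic A/B bucketing with a traffic split that can be changed at runtime might look like; the class, variant names and percentages are invented for illustration, not code from the talk:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

/** Hypothetical sketch: assign users to experiment variants deterministically,
 *  with a traffic split that can be reconfigured while the system is running
 *  (e.g. driven by events arriving from the stream). */
public class AbAssigner {

    // variant name -> share of traffic in percent, kept sorted for deterministic iteration
    private final Map<String, Integer> shares = new ConcurrentSkipListMap<String, Integer>();

    public AbAssigner() {
        shares.put("A", 90);
        shares.put("B", 10);
    }

    /** The same user always lands in the same bucket as long as the split is unchanged. */
    public String variantFor(String userId) {
        int bucket = (userId.hashCode() & 0x7fffffff) % 100;
        int cumulative = 0;
        for (Map.Entry<String, Integer> entry : shares.entrySet()) {
            cumulative += entry.getValue();
            if (bucket < cumulative) {
                return entry.getKey();
            }
        }
        return "A"; // fallback if the shares do not sum to 100
    }

    /** Live reconfiguration, e.g. when a monitoring component reports a problem with a variant. */
    public void setShare(String variant, int percent) {
        shares.put(variant, percent);
    }
}
```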

In the last talk Fabian Hüske from TU Berlin introduced Stratosphere - an EU-funded research project that is working on an extended computational model on top of HDFS, providing more flexibility and better performance. Since it was developed before the rise of Apache Hadoop YARN, they unfortunately essentially had to re-implement the whole map/reduce computation layer and build their system into that. It would be interesting to see how a port to YARN performs and what sort of advantages it brings in production.

Looking forward to seeing you all in June for Berlin Buzzwords - make sure to submit your presentation soon, the call for presentations won't be extended this year.

February 2012 Apache Hadoop Get Together Berlin

2012-01-31 20:34
The upcoming Apache Hadoop Get Together is scheduled for February 22nd, 6 p.m., taking place at Axel Springer, Axel-Springer-Str. 65, 10888 Berlin. Thanks to Axel Springer for sponsoring the location!

Note: It is important to indicate attendance. Due to security restrictions at the venue only registered visitors will be permitted. Get your ticket here: https://www.xing.com/events/hadoop-22-02-859807

Talks scheduled thus far:

Markus Andrezak: "Queue Management in Product Development with Kanban - enabling flow and fast feedback along the value chain" - It's a truism today that fast feedback from your market is a key advantage. This talk is about how you can deliver the smallest possible product increments or MVPs (minimum viable products) quickly to your market to get the fastest possible feedback on cause and effect of your product changes. To achieve that, it helps to provide a continuous deployment infrastructure as well as everything you need for A/B testing and other feedback instruments. To make the most of these capabilities, Kanban helps to limit work in progress and thus manage queues and speed up lead times (the time from order to delivery, or from concept to cash). This helps us speed through the OODA loop, i.e. Eric Ries' (The Lean Startup) Model -> Build -> Code -> Measure -> Data -> Validate -> Model cycle. The more often we can go through the loop, the better our chances to fine-tune and validate our model of the business and finally make the right decisions.

Markus is one of Germany's leading Kanban practitioners, writing about it in numerous publications and presenting talks at conferences. He will provide a brief view into how he is achieving fast feedback in diverse contexts.
Currently he is Head of Mobile Commerce at mobile.de.

Martin Scholl: "On Firehoses and Storms: Event Thinking, Event Processing" - The SQL doctrine is still in full effect and still fundamentally affects the way software is designed, the way its state is stored, as well as the system architecture. With the NoSQL movement people have started to realize that the manner in which data is stored affects the full stack - and that reducing the impedance mismatch is a good thing(TM). "Thinking in events" follows this tradition of questioning what is state of the art. Modeling a system not in terms of mutable entities (as with data stores) but as a stream of immutable events that incrementally modify state yields results that will exceed your expectations. This talk will be about event thinking, event-based software modeling and how Twitter's Storm can help you process events at large scale.
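
For readers unfamiliar with the idea, a very small conceptual sketch (not from the talk) of what modeling state as a fold over immutable events can look like:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Conceptual sketch only: instead of mutating an entity in place, state is
 *  derived by incrementally applying a stream of immutable events. */
public class FollowerState {

    /** An immutable event: someone starts or stops following a user. */
    public static final class FollowEvent {
        public final String follower;
        public final boolean follow; // true = follow, false = unfollow

        public FollowEvent(String follower, boolean follow) {
            this.follower = follower;
            this.follow = follow;
        }
    }

    private final Set<String> followers = new HashSet<String>();

    /** Incrementally apply a single event to the current state. */
    public void apply(FollowEvent event) {
        if (event.follow) {
            followers.add(event.follower);
        } else {
            followers.remove(event.follower);
        }
    }

    /** Replaying the same event log always yields the same state. */
    public static FollowerState replay(List<FollowEvent> log) {
        FollowerState state = new FollowerState();
        for (FollowEvent event : log) {
            state.apply(event);
        }
        return state;
    }
}
```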

Martin Scholl is interested in data management systems. He is also a Founder of infinipool GmbH.


Fabian Hüske: "Large-Scale Data Analysis Beyond Map/Reduce" - Stratosphere is a joint project by TU Berlin, HU Berlin, and HPI Potsdam researching "Information Management on the Cloud". In the course of the project, a massively parallel data processing system is being built. The current version of the system consists of the parallel PACT programming model, a database-inspired optimizer, and the parallel dataflow processing engine Nephele. Stratosphere has been released as open source. This talk will focus on the PACT programming model, which is a generalization of Map/Reduce, and show how PACT eases the specification of complex data analysis tasks. At the end of the talk, an overview of Stratosphere's upcoming release will be given.
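
To give a rough idea of what "generalization of Map/Reduce" means, here is a conceptual, single-machine illustration of one additional second-order function, a Match-style contract that pairs records from two inputs by key. This is explicitly not the Stratosphere/PACT API, just a sketch of the kind of guarantee such a contract gives the user function:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Conceptual illustration only, not the Stratosphere API: a Match-style
 *  contract hands every pair of records that share a key - one from each
 *  input - to a user-defined function, making joins a first-class primitive
 *  instead of something emulated on top of plain Map/Reduce. */
public class MatchContractSketch {

    /** The first-order user function invoked once per matching pair. */
    public interface MatchFunction<L, R, O> {
        O match(L left, R right);
    }

    /** Naive in-memory version of the guarantee the contract describes. */
    public static <L, R, O> List<O> match(Map<String, List<L>> leftByKey,
                                          Map<String, List<R>> rightByKey,
                                          MatchFunction<L, R, O> udf) {
        List<O> results = new ArrayList<O>();
        for (Map.Entry<String, List<L>> entry : leftByKey.entrySet()) {
            List<R> candidates = rightByKey.get(entry.getKey());
            if (candidates == null) {
                continue; // no partner records with this key
            }
            for (L left : entry.getValue()) {
                for (R right : candidates) {
                    results.add(udf.match(left, right));
                }
            }
        }
        return results;
    }
}
```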

Fabian has been a research associate at the Database Systems and Information Management (DIMA) group at Technische Universität Berlin since June 2008. He is working on the Stratosphere research project, focusing on parallel programming models, parallel data processing, and query optimization. Fabian started his studies at the University of Cooperative Education, Stuttgart, in cooperation with IBM Germany in 2003. During that course, he visited the IBM Almaden Research Center in San Jose, USA, twice and finished in 2006. Fabian continued his studies at Universität Ulm and earned a master's degree in 2008. His research interests include distributed information management, query processing, and query optimization.


A big Thank You goes to Axel Springer for providing the venue for our event at no cost and for paying for the presentations to be taped. A huge thanks also to David Obermann for organising the event.

Looking forward to seeing you in Berlin.

Video up: Douwe Osinga

2011-12-09 22:01

Video: Max Jakob on Pig for NLP

2011-12-09 21:26

Slides online

2011-12-09 06:55
Slides of this week's Apache Hadoop Get Together Berlin are online at:



Overall a great event, well organised - looking forward to seeing you at the next Get Together. If you want to get in touch with other participants, learn about new events or simply chat between meetups, join our Apache Hadoop Get Together LinkedIn group.

Apache Hadoop Get Together Berlin December 2011

2011-12-08 01:50
First of all a huge Thank You to David Obermann for organising today's Apache Hadoop Get Together Berlin: After a successful Berlin Buzzwords and a rather long pause following it, a Christmas meetup finally took place today at Smarthouse, kindly sponsored by Axel Springer and organised by David Obermann from idealo. About 40 guests from Neofonie, Nokia, Amen, StudiVZ, Gameduell, TU Berlin, nurago, Soundcloud, nugg.ad and many others made it to the event.



In the first presentation Douwe Osinga from Triposo went into some detail on what Triposo is all about, how development there differs in scope and focus from larger corporations, and what patterns they use to get their data crawled, cleaned and served to users.

The goal of Triposo is to build travel guides in a fully automated way. Rather than simply creating a catalog of places to go to, the aim is an application that is competitive with Lonely Planet books: offering tours and detailed background information, recommending places to visit based on weather and seasonal signals, and allowing users to create their own travel books.

Having joined Triposo from Google, Douwe gave a rather interesting perspective on what makes a startup attractive for innovative ideas. According to his talk, four aspects of application development matter for Google projects: The first is embracing failure. Not only can single hard disks fail, servers might be switched off automatically for maintenance, and even entire datacenters going offline must not affect your application. The second is a strong focus on speed: Dynamic languages like Python that allow for rapid prototyping at the expense of slower runtime are generally frowned upon. The third building block is the focus on search that is ingrained in every piece of architecture and thinking. The fourth and last is a strong build-it-yourself mentality, which may lead to great software but leaves developers on an isolated island of proprietary software that limits, or at least shapes, their way of thinking.

He gave YouTube as an example: Though built on top of MySQL, implemented in Python and certainly not failure-proof in every respect, they succeeded by concentrating on users' needs, time to market and iteratively improving their software with a frequent (as in one week) develop-release-deploy cycle. When entering new markets and providing innovative applications it is often crucial to be able to move quickly, even at the expense of runtime performance and stability. It certainly is important to consider different architectures and choose the one that is appropriate for the problem at hand. The same reasoning applies to Apache Hadoop: Do not try to solve problems with it that it was not made to solve. Instead, first think about what the right tool for your job is.

Triposo itself is built on top of 12 data sources, most of them freely available, which are integrated to build a usable and valuable travel guide application for iOS and Android. The features available in Triposo can be phrased as search and information retrieval problems and as such lend themselves well to working with integrated sources. With offerings from Amazon, Google itself, Dropbox and the like it has become easy to deploy applications in an elastic way and scale with your user base and with the demand for broader country coverage. For them it proved advantageous to go with an implementation based on dynamic languages for pure development speed.

When it comes to QA they take a semi-manual approach: There are scripts checking recall (Brandenburger Tor must be found in the Berlin guide) as well as precision (there must be only one Brandenburger Tor in Berlin). Those rules need to be tuned manually.
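
A tiny sketch of what such rules might look like; the class and method names are assumptions for illustration, not Triposo's actual scripts:

```java
import java.util.List;

/** Hypothetical QA checks of the kind described: a recall rule asserts that an
 *  expected place shows up in a guide, a precision rule that it shows up only once. */
public class GuideQaChecks {

    /** Recall: "Brandenburger Tor" must be found in the Berlin guide. */
    public static boolean recallCheck(List<String> guidePlaces, String expectedPlace) {
        return guidePlaces.contains(expectedPlace);
    }

    /** Precision: there must be exactly one "Brandenburger Tor" in the Berlin guide. */
    public static boolean precisionCheck(List<String> guidePlaces, String expectedPlace) {
        int occurrences = 0;
        for (String place : guidePlaces) {
            if (place.equals(expectedPlace)) {
                occurrences++;
            }
        }
        return occurrences == 1;
    }
}
```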

When integrating different sources you quickly run into a duplicate detection problem. Their approach is pretty pragmatic: Merge anything you are confident enough about to call a duplicate. Drop everything that is likely a duplicate but that you are not confident enough about to merge. The general guideline is to rather miss a place than list it twice.
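
Roughly sketched, the policy boils down to something like the following; the scoring and the thresholds are invented for illustration:

```java
/** Hypothetical sketch of the merge-or-drop policy described above: merge when
 *  confident it is a duplicate, drop the record when it is probably a duplicate
 *  but not certain, keep it otherwise. Thresholds are made up. */
public class DeduplicationPolicy {

    public enum Action { MERGE, DROP, KEEP }

    private static final double MERGE_THRESHOLD = 0.9; // confident duplicate -> merge
    private static final double DROP_THRESHOLD = 0.6;  // likely duplicate -> rather miss a place than list it twice

    public static Action decide(double duplicateScore) {
        if (duplicateScore >= MERGE_THRESHOLD) {
            return Action.MERGE;
        }
        if (duplicateScore >= DROP_THRESHOLD) {
            return Action.DROP;
        }
        return Action.KEEP;
    }
}
```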

For the Wikipedia source they are so far only parsing the English version. There are plans to also support other languages - in particular for parsing, to increase data quality: For some places geo coordinates may be available in the German article but not in the English one.

Though it did not go into too many technical details, the talk gave some nice insights into the strengths and weaknesses of different company sizes and mindsets when it comes to innovation as well as stabilisation. Certainly a startup to watch - and good to hear that, though incorporated in the US, most of the developers actually live in Berlin now.

The second talk was given by Max Jakob from Neofonie GmbH, who is working on the EU-funded research project Dicode. He gave an overview of their pipeline for named entity extraction and disambiguation based on a language model extracted from the raw German Wikipedia dump. Using Pig they brought the pipeline's runtime down from about a week to 1.5 hours without much development overhead: Quite some logic could be re-used from the open source project pignlproc initiated by Olivier Grisel. That project already features a Wikipedia loader, a UDF for extracting information from Wikipedia documents and additional scripts for training and building corpora.



Based on that they defined the maximum likelihood probability of a surface form being a named entity. The script itself is not very magical: The whole process can be expressed as a few steps of grouping and counting tuples. The effect in terms of runtime vs. development time, however, is impressive.
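
Conceptually the counting boils down to something like the following plain Java sketch of the estimate - not the actual Pig script, and approximating "appears as a named entity" by "appears as an annotated link anchor" is an assumption made here for illustration:

```java
import java.util.HashMap;
import java.util.Map;

/** Conceptual sketch, not the actual Pig script: the maximum likelihood
 *  probability of a surface form being a named entity is estimated as
 *  count(form seen annotated as an entity) / count(form seen at all). */
public class SurfaceFormProbability {

    private final Map<String, Long> annotatedCounts = new HashMap<String, Long>();
    private final Map<String, Long> totalCounts = new HashMap<String, Long>();

    public void countOccurrence(String surfaceForm, boolean annotatedAsEntity) {
        increment(totalCounts, surfaceForm);
        if (annotatedAsEntity) {
            increment(annotatedCounts, surfaceForm);
        }
    }

    public double probability(String surfaceForm) {
        Long total = totalCounts.get(surfaceForm);
        if (total == null || total == 0L) {
            return 0.0;
        }
        Long annotated = annotatedCounts.get(surfaceForm);
        return (annotated == null ? 0.0 : annotated.doubleValue()) / total;
    }

    private static void increment(Map<String, Long> counts, String key) {
        Long current = counts.get(key);
        counts.put(key, current == null ? 1L : current + 1L);
    }
}
```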

Check out their Dicode GitHub project for further details on the code itself.

After the meetup about 20 attendees followed David to a nearby bar. It is always great to get a chance to talk to the speakers after the event, exchange experiences with others and learn more about what people are actually working on with Hadoop.

Slides of all talks are going to be posted soon; videos will go online as soon as they are post-processed, so stay tuned for further information.

Looking forward to seeing you again at the next meetup. If you could not make it this time, there is a very easy way to keep that from happening again: The first speaker to submit a talk proposal to David sets the date and time of the meetup (taking into account any constraints with the venue and video taping, of course).

December Apache Hadoop Get Together Berlin

2011-11-24 20:14
First of all please note that meetup organisation is being transitioned over to our Xing meetup group, so in order to be notified of future meetings make sure to join that group. Please also make sure to register for the December event: In contrast to past meetups, space will be limited this time, so grab a ticket early. If you cannot make it after all, please let the organiser know so he can issue additional tickets.

For those of you currently following this blog only for announcements:

When: December 7th 2011, 7 p.m.

Where: Smarthouse GmbH, Erich-Weinert-Str. 145, 10409 Berlin

Speaker: Martin Scholl
Title: On Firehoses and Storms: Event Thinking, Event Processing


Speaker: Douwe Osinga
Title: Overview of the Data Processing Pipeline at Triposo

Looking forward to seeing you at the next Apache Hadoop Get Together Berlin in December.

Cloudera in Berlin

2011-11-14 20:24
Cloudera is hosting another round of trainings in Berlin this November. In addition to the trainings on Apache Hadoop, this time there will also be trainings on Apache HBase.

Register online via:



Apache Hadoop Get Together - Hand over

2011-11-02 16:20
Apache Hadoop receives lots of attention from large US corporations that use the project to scale their data processing pipelines:

“Facebook uses Hadoop and Hive extensively to process large data sets. [...]” (Ashish Thusoo, Engineering Manager at Facebook), "Hadoop is a key ingredient in allowing LinkedIn to build many of our most computationally difficult features [...]" (Jay Kreps, Principal Engineer, LinkedIn), "Hadoop enables [Twitter] to store, process, and derive insights from our data in ways that wouldn't otherwise be possible. [...]" (Kevin Weil, Analytics Lead, Twitter). Found on the Yahoo developer blog.

However, its use is not limited to large corporations: With 101tec, Zanox, nugg.ad and nurago, local German players are also using the project to enable new applications. Add components like Lucene, Redis, CouchDB, HBase and UIMA to the mix and you end up with a set of major open source components that allow developers to rapidly build systems that until a few years ago were possible only in Google-like companies or in research.

The Berlin Apache Hadoop Get Together, started in 2008, made it possible to learn more about how the average local company leverages this software. It is a platform to get in touch informally, exchange knowledge and share best practices across corporate boundaries.

After three years of organising that event it is time to hand it over into new caring hands: David Obermann from idealo kindly volunteered to take over the organisation. He is a long-term attendee of the event and will continue it in roughly the same spirit as before: technical talks on success stories by users and new features by developers - not restricted solely to Hadoop but also taking related projects into account.

A huge Thank You goes to David for taking up the work of co-ordinating, finding a venue and finding a sponsor for the videos! If any of you attending the event think you have an interesting story to share, would like to support the event financially or simply want to help out, please get in touch with David.

Looking forward to the next Apache Hadoop Get Together Berlin. Watch this space for updates on when and where it will take place.

Video is up - Simon Willnauer on Lucene 4 Performance improvements

2011-02-22 21:21