Teddy in Atlanta

2010-11-28 23:24
While I was happily attending Apache Con US in Atlanta/GA, my teddy had a closer look at the city: He first went to Centennial Olympic Park and took a picture of the World of Coca-Cola (wondering what strange kinds of museums there are in the US).



After that he headed over to Midtown, having a quiet time in Piedmont Park. Finally he had a closer look at the private houses still decorated for Halloween. Seems like it was squirrel day: he told me he met more than ten squirrels.



I found quite a few impressive pictures of the art museum on my camera after his trip – as well as several images taken on the campus of Georgia Tech. It's amazing to see what facilities are available to students – especially compared to the equipment of German universities.

Apache Con – last day

2010-11-27 23:23
Day three of Apache Con started with interesting talks on Tomcat 7, including an introduction to the new features of that release. Those include better memory leak prevention and detection capabilities – implementing these has led to the discovery of various leaks that appear under more or less weird circumstances in famous open source libraries and in the JVM itself. Better management and reporting facilities are also part of the new release.

As I started the third day over at the Tomcat track, I unfortunately missed the Tika and Nutch presentations by Chris Mattmann – so I am happy that at least the slides were published online: $LINK. The development of Nutch was especially interesting for me, as it was the first Apache project I got involved with back in 2004. Nutch started out as a project with the goal of providing an open source alternative for internet-scale search engines. Based on Lucene as its indexing core, it also provides crawling, content extraction and link analysis.

With the focus on building an internet-scale search engine, the need for a distributed processing environment quickly became apparent. Initial implementations of a Nutch distributed file system and a map reduce engine led to the creation of the Apache Hadoop project.

In recent years it was comparatively quiet around Nutch. Besides Hadoop, content extraction was also factored out of the project, into Apache Tika. At the moment development is gaining momentum again. Future work is supposed to focus on building an efficient crawling engine: Apache HBase is to be used as the storage backend, Tika for content extraction and Solr as the indexing backend.

I loved the presentation by Geoffrey Young on how Ticketmaster used Solr to replace their old MySQL-based search system for better performance and more features. Indexing documents representing music CDs presents some special challenges when it comes to domain modelling: There are bands with names like “!!!”. In addition, users are very likely to misspell certain artists' names. In contrast to large search providers like Google, such businesses usually have neither the human resources nor enough log data to provide comparable coverage, e.g. when implementing spell checking. A very promising and agile approach taken instead was to parse the log files for the most common failing queries and learn from those which features users need: There were many queries including geo information, coming from users looking for an event at one specific location. As a result geo information was added to the index, leading to happier users.
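To give an idea of what such a geo-aware query can look like, here is a minimal SolrJ sketch. This is not Ticketmaster's actual setup: the core name and the fields venue_location and event_name are made up for illustration and assume a schema with a spatial field.

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class EventSearch {
    public static void main(String[] args) throws Exception {
        // Hypothetical Solr core; field names are made up for illustration.
        HttpSolrClient solr =
                new HttpSolrClient.Builder("http://localhost:8983/solr/events").build();

        // Assumes the schema's analysis keeps a band name like "!!!" searchable.
        SolrQuery query = new SolrQuery("artist_name:\"!!!\"");
        // Restrict results to events within 50 km of downtown Atlanta,
        // assuming a spatial field called venue_location.
        query.addFilterQuery("{!geofilt sfield=venue_location pt=33.75,-84.39 d=50}");
        query.setRows(10);

        QueryResponse response = solr.query(query);
        response.getResults()
                .forEach(doc -> System.out.println(doc.getFieldValue("event_name")));
        solr.close();
    }
}
```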

Apache Con – Mahout, commons and Lucene

2010-11-26 23:21
On the second day, the track most interesting to me provided an overview of some of the Apache Commons projects. Seemingly small in scope and light-weight in implementation and dependencies, these projects provide vital features not yet well supported by the Sun JVM. There is Commons Math, featuring a fair amount of algebraic, numeric and trigonometric functions (among others), and the Commons Exec framework for executing processes external to the JVM without running into the danger of creating deadlocks or wasting resources.
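As a small illustration of the Commons Exec idea, here is a hedged sketch of running an external command with a watchdog so a hanging child process cannot block the JVM; the command itself is just an example.

```java
import org.apache.commons.exec.CommandLine;
import org.apache.commons.exec.DefaultExecutor;
import org.apache.commons.exec.ExecuteWatchdog;

public class ExternalProcess {
    public static void main(String[] args) throws Exception {
        // Build the command line for the external process (example command).
        CommandLine cmd = CommandLine.parse("ping -c 4 example.org");

        DefaultExecutor executor = new DefaultExecutor();
        // Kill the process if it runs for more than 30 seconds,
        // so a hanging child cannot block the JVM forever.
        executor.setWatchdog(new ExecuteWatchdog(30_000));
        // Treat exit code 0 as success; anything else throws ExecuteException.
        executor.setExitValue(0);

        int exitValue = executor.execute(cmd);
        System.out.println("Process finished with exit value " + exitValue);
    }
}
```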

After that the Mahout and Lucene presentations were up. Grant gave a great overview of various use cases of machine learning in the wild, rightly claiming that anyone using the internet today makes use of some machine learning powered application every day – be it e-mail spam filtering, the Gmail priority inbox, recommended articles on news sites, recommended items to buy at shopping sites or targeted advertisements shown when browsing. The talk concluded with a more detailed presentation of how to successfully combine the features of Mahout and Lucene/Solr to build next generation web services that integrate user feedback into their user experience.
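For a taste of what integrating user feedback with Mahout can look like, here is a minimal sketch of a user-based recommender built on Mahout's Taste collaborative filtering API (the 0.x-era classes); the input file and user ID are hypothetical, and this is not the specific pipeline shown in the talk.

```java
import java.io.File;
import java.util.List;

import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class FeedbackRecommender {
    public static void main(String[] args) throws Exception {
        // ratings.csv contains lines of the form userID,itemID,preference.
        DataModel model = new FileDataModel(new File("ratings.csv"));

        UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
        UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, model);
        Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);

        // Recommend three items for user 42 based on similar users' feedback.
        List<RecommendedItem> recommendations = recommender.recommend(42, 3);
        for (RecommendedItem item : recommendations) {
            System.out.println(item.getItemID() + " -> " + item.getValue());
        }
    }
}
```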

ApacheCon - Keynotes

2010-11-25 23:20
The first keynote was given by Dana Blankenhorn – a journalist and blogger regularly publishing tech articles with a clear focus on open source projects. His talk focussed on the evolution of open source projects, with a special focus on Apache.

Coming from a research background myself, I found the keynote given by Daniel Crichton from NASA very interesting: According to the speaker, scientists are facing challenges that are all too well known to large and distributed corporations. Most areas of science are currently becoming more and more dependent on data intensive experiments. Examples include but are not limited to

  • The field of biology where huge numbers of experiments are needed to decipher the internal workings of proteins, or to be able to understand the fundamental concepts underlying data encoded in DNA.
  • In physics, hadron collider experiments generate huge amounts of data with each run. Since the facilities for running such experiments are expensive to build and the amount of data generated is far too large to be analysed by just one team, groups of scientists suddenly face the issue of exchanging data with remote research groups. They run into the requirement of integrating their systems with those of other groups. All of a sudden data formats and interfaces have to be standardised somehow.
  • Running space missions used to be limited to just a very small number of research institutions in a very tiny number of countries. However this is about to change as more countries are gaining the knowledge and facilities to run space missions. Again this leads to the need to be able to collaborate towards one common goal.

Not only are the software systems distinct and incompatible so far, even the data formats used are usually incompatible. The result is that scientists spend most of their time re-formatting, converting and importing datasets before being able to get any real work done. At the moment research groups are not used to working collaboratively in distributed teams. Usually experiments are run on specially crafted, one-off software that cannot easily be re-used, that does not adhere to any standards and that is re-written over and over again by every research group. Re-using existing libraries is oftentimes a huge cultural shift, as researchers seemingly are afraid of external dependencies, afraid of giving up control over part of their system.

One step in the right direction was taken by NASA earlier this year: They released their decision-making support system OODT under a free software license (namely the Apache Software License) and put the project into incubation at Apache. The project is currently about to graduate to its own top level Apache project. This step is especially remarkable as successfully going through the Incubator also means having established a healthy community that is not only diverse but also open to accepting incoming patches and changes to the software. That means not only giving up control over your external dependencies but also having the project run in a meritocratic, community driven model. For the contributing organisation, this boils down to no longer having total control over the future roadmap of the project. In return this usually leads to higher community participation and higher adoption in the wild.

Apache Con – Hadoop, HBase, Httpd

2010-11-24 23:19
The first Apache Con day featured several presentations on NoSQL databases (a track sponsored by Day Software), a Hadoop track, as well as presentations on Httpd and an open source business track.

Since its inception, Hadoop was always intended to be run in trusted environments, firewalled from hostile users or even attackers. As such it never really supported any security features. This is about to change with the new Hadoop release, which includes better Kerberos based security.

When creating files in Hadoop, a long awaited feature was append support. Basically, up to now writing to Hadoop was a one-off job: Open a file, write your data, close it and be done. Re-opening and appending data was not possible. This situation is especially bad for HBase, as its design relies on being able to append data to an existing file. There have been earlier efforts to add append support to HDFS, as well as integration of such patches by third party vendors. However, only with a current Hadoop version is append officially supported by HDFS.
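For illustration, here is a minimal sketch of what appending to an existing HDFS file looks like through the Java FileSystem API; the file path is hypothetical, and the dfs.support.append flag reflects the configuration switch used in Hadoop releases of that era.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAppend {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Append has to be enabled on the cluster for this to work
        // (flag name from the Hadoop releases of that era).
        conf.setBoolean("dfs.support.append", true);

        FileSystem fs = FileSystem.get(conf);
        Path logFile = new Path("/logs/events.log");

        // Re-open the existing file and add data at the end
        // instead of rewriting it from scratch.
        try (FSDataOutputStream out = fs.append(logFile)) {
            out.writeBytes("another log line\n");
        }
        fs.close();
    }
}
```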

A very interesting use case of the Hadoop stack was presented by $name-here from $name. They are using a Hadoop cluster to provide a repository of code released under free software licenses. The business case is to enable corporations to check their source code against existing code and spot license infringements. This does not only include linking to free software under incompatible licenses but also developers copying pieces of free code, e.g. copying entire classes or functions that originally were available only under copyleft licenses into internal projects.

The speaker went into some detail explaining the initial problems they ran into: Obviously it's not a good idea to mix and match Hadoop and HBase versions freely. Instead it is best practice to use only versions claimed to be compatible by the developers. Another common mistake is to leave the parameters of both projects at their defaults. The default parameters are supposed to be fool-proof; however they are optimised to work well for Hadoop newbies who want to try out the system on a single-node cluster, and in a distributed setting they obviously need more attention. Other anti-patterns include storing lots of tiny files in the cluster and thus quickly running out of memory on the namenode (which stores all file information, including block mappings, in main memory for faster access).
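One common mitigation for the small-files problem is to pack many tiny files into a single container file. Here is a hedged sketch using Hadoop's SequenceFile writer (the older FileSystem-based createWriter signature of that era); the local input directory and target path are made up for illustration.

```java
import java.io.File;
import java.nio.file.Files;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SmallFilePacker {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // One large SequenceFile instead of thousands of tiny HDFS files,
        // keeping the namenode's in-memory metadata small.
        Path target = new Path("/data/packed.seq");
        SequenceFile.Writer writer =
                SequenceFile.createWriter(fs, conf, target, Text.class, BytesWritable.class);
        try {
            for (File source : new File("local-input").listFiles()) {
                byte[] content = Files.readAllBytes(source.toPath());
                // Key: original file name, value: raw file content.
                writer.append(new Text(source.getName()), new BytesWritable(content));
            }
        } finally {
            writer.close();
        }
        fs.close();
    }
}
```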

In the NoSQL track, Jonathan Gray from Facebook gave a very interesting overview of the current state of HBase. As it turns out, only a few days later Facebook announced their internal use of HBase for the newly launched Facebook messaging feature.

HBase has adopted a release cycle comprising development and production releases to get the system into interested users' hands more quickly: Users willing to try out new experimental features can use the development releases of HBase; those who are not should go for the stable releases.

After focussing on improving performance in the past months, the project is currently focussing on stability: Data loss is to be avoided by all means. Zookeeper is to be integrated more tightly for storing configuration information, thus enabling live reconfiguration (at least to some extent). In addition, HBase is aiming to integrate stored-procedure-like behaviour: As explained in Google's Percolator paper $LINK, batch-oriented processing only gets you so far. If data gets added constantly, it makes sense to give up some of the throughput batch-based systems provide and instead optimise for shorter processing cycles by implementing event-triggered processing.
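This kind of event-triggered processing is what HBase's coprocessor work aims at. Purely as an illustration of the idea – not something presented in the talk – here is a minimal region observer sketch assuming the 0.92/0.94-era coprocessor API; the class and its behaviour are hypothetical.

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

// Hypothetical observer: reacts to every Put on a table instead of waiting
// for the next batch run, in the spirit of Percolator-style incremental processing.
public class IndexUpdateObserver extends BaseRegionObserver {

    @Override
    public void postPut(ObserverContext<RegionCoprocessorEnvironment> ctx,
                        Put put, WALEdit edit, boolean writeToWAL) throws IOException {
        // Placeholder for incremental work, e.g. pushing the changed row
        // to a downstream index or queue.
        byte[] row = put.getRow();
        System.out.println("Row updated: " + new String(row));
    }
}
```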

On the recommendation of one of neofonie's sysops I visited some of the httpd talks: First Rich Bowen gave an overview of unusual tasks one can solve with httpd. The list included things like automatically re-writing http response content to match your application. There is even a spell checker for request URLs: If marketing has handed a flyer to the press with a typo in the URL, chances are that the spell checking module can fix such requests automatically: Common mistakes covered are switched letters, numbers replaced by letters and so on. The performance cost only has to be paid in case no match is found – so instead of returning a 404 right away, the server first tries to find the document by taking common misspellings into account.

Apache Con – Hackathon days

2010-11-23 23:17
This year on Halloween I left for a trip to Atlanta/GA. Apache Con US was taking place there, featuring two presentations on Apache Mahout – one by Grant Ingersoll explaining how to use Mahout to provide better search features in Solr, and one by myself giving a general introduction to the features Mahout provides, with a bit more detailed information on how to use Mahout for classification.

I spent most of Monday in Sally Khudairi's media training. In the morning session she explained the ins and outs of successfully marketing your open source project: One of the most important points is being able to provide a dense but still accessible explanation of what your project is all about and how it differs from other projects potentially in the same space. As a first exercise attendees met in pairs, interviewing each other about their respective projects. When I summarised the information I had gotten, Sally quickly pointed out additional pieces of valuable information I had totally forgotten to ask about:


  • First of all the full name of the interviewee, including the surname.
  • Second the background of the person with respect to the project. It seemed all too natural that someone you meet at Apache Con in a media training is almost certainly either a founder of or a core committer to the project. Still it is interesting to know how long they have been contributing and whether they maybe even co-founded the project.


After that first exercise we went into detail on various publication formats. When releasing project information, the first format that comes to mind is the press release. For software projects at the ASF these are created in a semi-standardised format containing

  • Background on the foundation itself.
  • Some general background on the project.
  • A few paragraphs on the news to be published on the project in an easily digestible format.
  • Contact information for more details.


Some of these parts can be re-used across different publications and occasions. It does make sense to keep these building blocks as a set of boilerplates ready to use when needed.

After lunch Michael Coté from RedMonk visited us. Michael has a development background; currently he works as a business analyst for RedMonk. It is fairly simple to explain technical projects to fellow developers. To get some experience in explaining our projects to non-technical people as well, Sally invited Michael to interview us. At the end of the interview Michael asked each of us whether we had any questions for him. As understanding what machine learning can do for the average Joe programmer is not all that trivial, I simply asked him for strategies for better explaining or showcasing our project. One option that came to his mind was to come up with one – or a few – example show cases where Mahout is applied to freely available datasets. Currently most data analysis systems are rather simple or based on only a very limited set of data. Showing with a few selected use cases what can be done with Mahout should be a good way to get some media attention for the project.

During the remaining time of the afternoon I started working on a short explanation of Mahout and our latest release. The text was reviewed by the Mahout community and published by Sally on the blog of the Apache Software Foundation. I also used it as the basis for an article on heise open that got published that same day.

The second day was reserved for a mixture of attending the Barcamp sessions and hacking away at the Hackathon. Ross had talked me into giving an overview of various Hadoop use cases, as that had been requested by one of the attendees. However it turned out the guy wasn't really interested in specific use cases: The discussion quickly turned into the more fundamental question of how far the ASF should go in promoting its projects. Should there be a budget for case studies? Should there even be some marketing department? Well, clearly that is out of scope for the foundation. In addition it would run contrary to the ASF being neutral ground for vendors to collaborate towards common goals while still separately making money providing consulting services, selling case studies and the like.

During the Hackathon I became a mentor for Stanbol, a new project entering incubation just now. In addition I spent some time finally catching up with the Mahout mailing list.

Travelling

2010-11-22 23:17
I am currently on my way back from a series of conferences, sitting in the IC from Schiphol. After three weeks of conferences, lots of new input and lots of interesting projects to learn about, it is finally time to head back and put the stuff I have learned to good use.



As seems normal with open source conferences, I got far more input on interesting projects than I can ever expect to apply on a daily basis. Still it is always inspiring to meet other developers in the same field – or even in quite different fields – and learn more about the projects they are working on and how they solve various problems.

A huge Thank You goes to the DICODE EU research project for sponsoring the Apache Con and Devoxx trips, another Thanks to Sapo.pt for inviting me to Lisbon and covering travel expenses. A special thank you to the assistant at neofonie who made the travel arrangements for Atlanta and Antwerp: It all worked without problems, right down to me having a power outlet in the train that is finally taking me back.

Apache Con Europe 2009 - part 3

2009-03-29 19:56
Friday was the last conference day. I enjoyed the Apache pioneers panel with a brief history of the Apache Software Foundation as well as lots of stories on how people first got in contact with the ASF.

After lunch I went to the testing and cloud session. I enjoyed the talk on Continuum and its uses by Wendy Smoak. She gave a basic overview of why one would want a CI system and provided a brief introduction to Continuum. After that Carlos Sanchez showed how to use the cloud to automate interface tests with Selenium: The basic idea is to automatically start up AMIs on EC2 (initiated through Maven), each configured with a different operating system, and run Selenium tests against the application under development on them. A really nice system for running automated interface tests.
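Just to illustrate the Selenium side of such a setup – not the exact stack Carlos showed, and using today's WebDriver API rather than the Selenium RC of that time – a remote interface test might look roughly like this; the host names and page details are placeholders.

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class RemoteInterfaceTest {
    public static void main(String[] args) throws Exception {
        // Address of a Selenium server running on a freshly started EC2 instance
        // (hostname is a placeholder).
        URL seleniumServer =
                new URL("http://ec2-203-0-113-10.compute-1.amazonaws.com:4444/wd/hub");

        WebDriver driver = new RemoteWebDriver(seleniumServer, new FirefoxOptions());
        try {
            // Exercise the application under development from the remote browser.
            driver.get("http://staging.example.org/login");
            if (!driver.getTitle().contains("Login")) {
                throw new AssertionError("Unexpected page title: " + driver.getTitle());
            }
        } finally {
            driver.quit();
        }
    }
}
```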

The final session for me was the talk by Chris Anderson and Jan Lehnardt on CouchDB deployments.

The day ended with the Closing Event and Raffle. A big Thank You to Ross Gardler for including the Berlin Apache Hadoop Get Together at newthinking store in the announcements! Will send the CfP to concom really soon, as promised. Finally, I won a package of caffeinated sweets at the Raffle - does that mean less sleep for me in the coming weeks?

Now I am finally back home and had some time to do a quick writeup. If you are interested in the complete notes, go to http://fotos.isabel-drost.de (default login is published in the login window). Looking forward to the Hadoop User Group UK on 14th of April. If you have not signed up yet - do so now: http://huguk.org

Apache Con Europe 2009 - part 2

2009-03-29 19:42
Thursday morning started with an interesting talk on open source collaboration tools and how they can help resolve some of the collaboration overhead in commercial software projects. Four goals can be reached with the help of the right tools: sharing the project vision, tracking the current status of the project, finding places to help the project, and documenting the project history as well as the reasons for decisions along the way. The exact tool used is irrelevant as long as it helps to solve these four tasks.

The second talk, by Steve Loughran, was on cloud architectures. He explained what reasons there are to go into the cloud and what a typical cloud architecture looks like. Steve described Amazon's offer, mentioned other cloud service providers and highlighted some options for a private cloud. However his main interest is in building a standardised cloud stack. Currently, choosing one of the cloud providers means vendor lock-in: Your application uses a special API, your data is stored on special servers. Quite a few tools necessary for building a cloud stack are available at Apache (Hadoop, HBase, CouchDB, Pig, Zookeeper...). The question that remains is how to integrate the various pieces and extend them where necessary to arrive at a solution that can compete with AppEngine or Azure.

After lunch I went to the Solr case study by JTeam. Basically one great commercial for Solr. They even brought the happy customer to Apache Con to talk about the experience with Solr from his point of view. Great work, really!

The Lightning Talk session ended the day - with a "Happy birthday to you" from the community.

After having spent the last 4 days from 8a.m. to 12p.m. at Apache Con I really did need some rest on Thursday and went to bed pretty early: At 11p.m. ...

Apache Con Europe 2009 - part 1

2009-03-29 18:41
The past week members, committers and users of Apache software projects gathered in Amsterdam for another Apache Con EU - and to celebrate the 10th birthday of the ASF. One week dedicated to the development and use of Free Software and the Apache Way.

Monday was BarCamp day for me, the first BarCamp I ever attended. Unfortunately not all participants proposed talks, so some of the atmosphere of an unconference was missing. The first talk, by Danese Cooper, was on "HowTo: Amsterdam Coffee Shops". She explained the ins and outs of going to coffee shops in Amsterdam and gave both legal and practical advice. There were presentations of the OpenStreetMap project and several Apache projects. One talk discussed transferring the ideas of Free Software to other parts of life. Ross Gardler started a discussion on how to advocate contributions to Free Software projects in science and education.

Tuesday for me meant having some time for Mahout during the Hackathon. Specifically, I looked into enhancing matrices with meta information. In the evening there were quite a few interesting talks at the Lucene Meetup: Jukka gave an overview of Tika, Grant introduced Solr. After Grant's talk some of the participants shared numbers on their Solr installations (number of documents per index, query volume, machine setup). To me it was extremely interesting to gain some insight into what people actually accomplish with Solr. The final talk was on Apache Droids, a still-incubating crawling framework.

The Wednesday tracks were a little unfair: The Hadoop track (videos available online for a small fee) was right in parallel to the Lucene track. The day started with a very interesting keynote by Raghu from Yahoo! on their storage system PNUTS. He went into quite some technical detail. Obviously there is interest in publishing the underlying code under an open source license.

After the Mahout introduction by Grant Ingersoll I changed rooms to the Hadoop track. Arun Murthy shared his experience on tuning and debugging Hadoop applications. After lunch Olga Natkovich gave an introduction to Pig – a higher-level language on top of Hadoop that allows filter operations, joins and the basic control flow of map reduce jobs to be specified in just a few lines of Pig Latin code. Tom White gave an overview of what it means to run Hadoop in the EC2 cloud. He compared several options for storing the data to process. It is very likely that there will soon be quite a few more providers of cloud services in addition to Amazon.

Allen Wittenauer gave an overview of Hadoop from the operations point of view. Steve Loughran finally covered the topic of running Hadoop on dynamically allocated servers.

The day finished with a pretty interesting BOF on Hadoop. There still are people who do not clearly see the differences between Hadoop-based systems and database-backed applications. The best way to find out whether the model fits: Set up a trial cluster and experiment yourself. No one can tell which solution is best for you except yourself (and maybe Cloudera setting up the cluster for you :) ).

After that the Mahout/UIMA BOF was scheduled – there were quite a few interesting discussions on what UIMA can be used for and how it integrates with Mahout. One major take-home message: We need more examples integrating both. We developers do see the clear connections, but users often do not realise that many Apache projects should be used together to get the biggest value out of them.