You can pre-order it on Amazon and O'Reilly. You can also get the Rough Cuts version from O'Reilly to read today, although it hasn't yet been refreshed with my latest draft (I hope that will happen in the next few days).
Here's the final chapter listing. Readers of earlier drafts will notice that the number of chapters has grown: this is because the elephantine MapReduce chapter has been split into three (chapters 6, 7, and 8) to make things more digestible.
- Meet Hadoop
- MapReduce
- The Hadoop Distributed Filesystem
- Hadoop I/O
- Developing a MapReduce Application
- How MapReduce Works
- MapReduce Types and Formats
- MapReduce Features
- Setting Up a Hadoop Cluster
- Administering Hadoop
- Pig
- HBase
- ZooKeeper
- Case Studies
9 comments:
Congratulations, Tom!
Having worked with a few other authors, I have some appreciation for how much effort it can be.
I look forward to buying my copy.
Congratulations on finishing it! I shall put my UK order in and hope to be in there with support calls shortly afterwards!
Congrats Tom, I'm looking forward to reading it.
Thanks everyone!
@Steve - that's helping out with the support calls, right :)
Tom, was it really only a couple of months? It took us over 12 months to write Lucene in Action! :)
Congrats Tom!
Congratulations Tom!
Hi...
I bought your book, but I didn't find any info about how to set up subclusters.
In my research I want to specialize some regions of my Hadoop cluster: for example, one region only for sorting, another only for joins, another only for group-bys, and so on.
I don't know if you know what I mean.
Where can I post a problem like this?
Thanks for everything.
Hi Harold,
Most people run their jobs across the whole cluster; that way the work is multiplexed over all the machines, which gives better overall utilization. There are a number of job schedulers available now (e.g. the fair scheduler) which can be used to segregate jobs and give them guaranteed resource allocations. See http://hadoop.apache.org/common/docs/r0.20.0/fair_scheduler.html for more details.
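To give a flavour of what that looks like, here is a minimal sketch assuming the contrib fair scheduler in Hadoop 0.20; the pool names, file path, and slot counts are made up for illustration and only loosely mirror the "regions" Harold describes.

```xml
<!-- mapred-site.xml (excerpt): enable the fair scheduler, which ships in
     contrib/fairscheduler, and point it at an allocation file.
     The file path below is an illustrative choice. -->
<configuration>
  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.FairScheduler</value>
  </property>
  <property>
    <name>mapred.fairscheduler.allocation.file</name>
    <value>/etc/hadoop/conf/pools.xml</value>
  </property>
  <property>
    <!-- read each job's pool from a "pool.name" property in its job conf -->
    <name>mapred.fairscheduler.poolnameproperty</name>
    <value>pool.name</value>
  </property>
</configuration>

<!-- pools.xml (the allocation file): one pool per workload "region";
     the pool names and slot counts here are hypothetical -->
<allocations>
  <pool name="sorting">
    <minMaps>20</minMaps>       <!-- map slots guaranteed to sort jobs -->
    <minReduces>10</minReduces> <!-- reduce slots guaranteed to sort jobs -->
  </pool>
  <pool name="joins">
    <minMaps>10</minMaps>
    <minReduces>5</minReduces>
    <maxRunningJobs>5</maxRunningJobs> <!-- cap concurrent join jobs -->
  </pool>
</allocations>
```

A job would then opt into a pool by setting the pool.name property on its job configuration before submission (e.g. conf.set("pool.name", "sorting")); jobs that don't set it fall into the default pool, so the whole cluster is still shared rather than physically partitioned.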
Regarding where to ask questions like this, see the mailing lists at http://hadoop.apache.org/common/mailing_lists.html.
Cheers,
Tom