Ready to unleash the power of your massive dataset? With the latest edition of this comprehensive resource, you'll learn how to use Apache Hadoop to build and maintain reliable, scalable, distributed systems. It's ideal for programmers looking to analyze datasets of any size, and for administrators who want to set up and run Hadoop clusters.

Store large datasets with the Hadoop Distributed File System (HDFS), then run distributed computations with MapReduce
Use Hadoop's data and I/O building blocks for compression, data integrity, serialization (including Avro), and persistence
Discover common pitfalls and advanced features for writing real-world MapReduce programs
Design, build, and administer a dedicated Hadoop cluster, or run Hadoop in the cloud
Use Pig, a high-level query language for large-scale data processing
Analyze datasets with Hive, Hadoop's data warehousing system
Load data from relational databases into HDFS, using Sqoop
Take advantage of HBase, the database for structured and semi-structured data
Use ZooKeeper, the toolkit for building distributed systems
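To give a flavor of the MapReduce programming model mentioned above, here is a minimal sketch in plain Python. It does not use Hadoop itself, and the function names are illustrative rather than Hadoop's actual API; it only shows the map, shuffle, and reduce phases that a real Hadoop job goes through, using the canonical word-count example.

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the user's map function to every input record."""
    for record in records:
        yield from mapper(record)

def shuffle(pairs):
    """Group intermediate (key, value) pairs by key, as Hadoop's shuffle does."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the user's reduce function to each key and its grouped values."""
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count: map emits (word, 1) for each word; reduce sums the counts.
def wc_map(line):
    for word in line.split():
        yield (word, 1)

def wc_reduce(word, counts):
    return sum(counts)

lines = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle(map_phase(lines, wc_map)), wc_reduce)
print(counts["the"])  # 2
```

In a real Hadoop job the same three phases run across a cluster, with HDFS supplying the input splits and YARN (in Hadoop 2) scheduling the map and reduce tasks.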
This third edition covers recent changes to Hadoop, including new material on the new MapReduce API, as well as version 2 of the MapReduce runtime (YARN) and its more flexible execution model. You'll also find illuminating case studies that demonstrate how Hadoop is used to solve specific problems.
Page Count (est.): 657
Pub Date: 5/26/2012