Handling Big Data with HBase Part 4: The Java API
This is the fourth in an introductory series of blogs on Apache HBase. In the third part, we saw a high-level view of HBase architecture. In this part, we'll use the HBase Java API to create tables, insert new data, and retrieve data by row key. We'll also see how to set up a basic table scan which restricts the columns retrieved and uses a filter to page the results.
Having just learned about HBase's high-level architecture, let's now look at the Java client API, since it is how your applications interact with HBase. As mentioned earlier, you can also interact with HBase via several flavors of RPC technologies such as Apache Thrift, plus a REST gateway, but we're going to concentrate on the native Java API. The client API provides both DDL (data definition language) and DML (data manipulation language) semantics, very much like what you find in SQL for relational databases. Suppose we are going to store information about people in HBase, and we want to start by creating a new table. The following listing shows how to create a new table using the HBaseAdmin class.
Configuration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf("people"));
tableDescriptor.addFamily(new HColumnDescriptor("personal"));
tableDescriptor.addFamily(new HColumnDescriptor("contactinfo"));
tableDescriptor.addFamily(new HColumnDescriptor("creditcard"));
admin.createTable(tableDescriptor);
admin.close();
The people table defined in the preceding listing contains three column families: personal, contactinfo, and creditcard. To create a table you create an HTableDescriptor and add one or more column families by adding HColumnDescriptor objects. You then call createTable to create the table. Now we have a table, so let's add some data. The next listing shows how to use the Put class to insert data on John Doe, specifically his name and email address (omitting proper error handling for brevity).
Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "people");
Put put = new Put(Bytes.toBytes("doe-john-m-12345"));
put.add(Bytes.toBytes("personal"), Bytes.toBytes("givenName"), Bytes.toBytes("John"));
put.add(Bytes.toBytes("personal"), Bytes.toBytes("mi"), Bytes.toBytes("M"));
put.add(Bytes.toBytes("personal"), Bytes.toBytes("surname"), Bytes.toBytes("Doe"));
put.add(Bytes.toBytes("contactinfo"), Bytes.toBytes("email"), Bytes.toBytes("john.m.doe@gmail.com"));
table.put(put);
table.flushCommits();
table.close();
In the above listing we instantiate a Put, providing the unique row key to the constructor. We then add values, which must include the column family, column qualifier, and the value, all as byte arrays. As you probably noticed, the HBase API's utility Bytes class is used a lot; it provides methods to convert to and from byte[] for primitive types and strings. (Adding a static import for the toBytes() method would cut out a lot of boilerplate code.) We then put the data into the table, flush the commits to ensure locally buffered changes take effect, and finally close the table. Updating data is also done via the Put class in exactly the same manner as just shown. Unlike relational databases, in which updates must update entire rows even if only one column changed, if you only need to update a single column then that's all you specify in the Put and HBase will only update that column. There is also a checkAndPut operation which is essentially a form of optimistic concurrency control: the operation will only put the new data if the current values are what the client says they should be.
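As a sketch of how checkAndPut might be used (continuing the people table example above, and following the article's convention of omitting imports and setup boilerplate), the client supplies the value it believes is current, and the put is applied only if that value still matches:

```java
// Hypothetical example: change John's email only if it still has the
// value we last read; if another client changed it first, nothing happens.
Put put = new Put(Bytes.toBytes("doe-john-m-12345"));
put.add(Bytes.toBytes("contactinfo"), Bytes.toBytes("email"),
        Bytes.toBytes("john.doe@example.com"));

// The put is applied only if contactinfo:email currently equals the
// expected value; the boolean return tells us whether it succeeded.
boolean updated = table.checkAndPut(
        Bytes.toBytes("doe-john-m-12345"),
        Bytes.toBytes("contactinfo"),
        Bytes.toBytes("email"),
        Bytes.toBytes("john.m.doe@gmail.com"),  // expected current value
        put);
```

If updated comes back false, the client would typically re-read the row and retry, which is the essence of the optimistic concurrency pattern.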
Retrieving the row we just created is accomplished using the Get class, as shown in the next listing. (From this point forward, listings will omit the boilerplate code to create a configuration, instantiate the HTable, and the flush and close calls.)
Get get = new Get(Bytes.toBytes("doe-john-m-12345"));
get.addFamily(Bytes.toBytes("personal"));
get.setMaxVersions(3);
Result result = table.get(get);
The code in the previous listing instantiates a Get instance, supplying the row key we want to find. Next we use addFamily to instruct HBase that we only need data from the personal column family, which also cuts down the amount of work HBase must do when reading information from disk. We also specify that we'd like up to three versions of each column in our result, perhaps so we can list historical values of each column. Finally, calling get returns a Result instance which can then be used to inspect all the column values returned.
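A sketch of pulling values back out of the Result (method names here are from the classic 0.9x-era client API the article uses; getValue returns the newest version, while getColumn returns the stored versions as KeyValue objects):

```java
// Extract the most recent value of personal:givenName as a String.
byte[] nameBytes = result.getValue(Bytes.toBytes("personal"),
                                   Bytes.toBytes("givenName"));
String givenName = Bytes.toString(nameBytes);

// List the (up to three) stored versions of personal:givenName,
// newest first, along with the timestamp of each version.
List<KeyValue> versions = result.getColumn(Bytes.toBytes("personal"),
                                           Bytes.toBytes("givenName"));
for (KeyValue kv : versions) {
    System.out.println(kv.getTimestamp() + " -> "
            + Bytes.toString(kv.getValue()));
}
```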
In many cases you need to find more than one row. HBase lets you do this by scanning rows, as shown in the second part's HBase shell session. The corresponding class is the Scan class. You can specify various options, such as the start and ending row keys to scan, which columns and column families to include, and the maximum number of versions to retrieve. You can also add filters, which allow you to implement custom filtering logic to further restrict which rows and columns are returned. A common use case for filters is pagination. For example, we might want to scan through all people whose last name is Smith one page (e.g. 25 people) at a time. The next listing shows how to perform a basic scan.
Scan scan = new Scan(Bytes.toBytes("smith-"));
scan.addColumn(Bytes.toBytes("personal"), Bytes.toBytes("givenName"));
scan.addColumn(Bytes.toBytes("contactinfo"), Bytes.toBytes("email"));
scan.setFilter(new PageFilter(25));
ResultScanner scanner = table.getScanner(scan);
try {
    for (Result result : scanner) {
        // ...
    }
} finally {
    scanner.close();
}
In the above listing we create a new Scan that starts from the row key smith- and we then use addColumn to restrict the columns returned (thus reducing the amount of disk transfer HBase must perform) to personal:givenName and contactinfo:email. A PageFilter is set on the scan to limit the number of rows scanned to 25. (An alternative to using the page filter would be to specify a stop row key when constructing the Scan.) We then get a ResultScanner for the Scan just created, and loop through the results performing whatever actions are necessary. Since the only way in HBase to retrieve multiple rows of data is scanning by sorted row keys, how you design the row key values is very important. We'll come back to this topic later.
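To fetch the next page, one common pattern (an illustrative technique, not a built-in API feature) is to remember the last row key seen on the previous page and start the next scan just past it, by appending a zero byte to that key:

```java
// Hypothetical continuation: scan the next page of up to 25 "smith-"
// rows, starting just after lastRowKey from the previous page.
// Appending a zero byte yields the smallest row key greater than
// lastRowKey, so the previous page's final row is not repeated.
byte[] startRow = Bytes.add(lastRowKey, new byte[] { 0x00 });
Scan scan = new Scan(startRow, Bytes.toBytes("smith.")); // "." sorts just after "-"
scan.setFilter(new PageFilter(25));

ResultScanner scanner = table.getScanner(scan);
try {
    for (Result result : scanner) {
        lastRowKey = result.getRow();  // remember for the page after this one
        // process the row ...
    }
} finally {
    scanner.close();
}
```

The stop row "smith." works here only because '.' is the byte immediately after '-' in ASCII, so it excludes everything beyond the smith- prefix; a production prefix scan would compute the stop key from the prefix bytes rather than hard-coding it.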
You can also delete data in HBase using the Delete class, analogous to the Put class. You can delete all columns in a row (thus deleting the row itself), delete column families, delete individual columns, or some combination of those.
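A sketch of deleting with the Delete class (method names from the classic client API the article uses; note that a Delete with none of the narrowing calls removes the entire row):

```java
Delete delete = new Delete(Bytes.toBytes("doe-john-m-12345"));

// Narrow the delete: drop the whole creditcard family, plus the most
// recent version of contactinfo:email. Omitting both calls would
// delete the entire row.
delete.deleteFamily(Bytes.toBytes("creditcard"));
delete.deleteColumn(Bytes.toBytes("contactinfo"), Bytes.toBytes("email"));

table.delete(delete);
```

The related deleteColumns (plural) removes all stored versions of a column rather than just the latest one.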
Connection Handling
In the above examples not much attention was paid to connection handling and RPCs (remote procedure calls). HBase provides the HConnection class, which provides functionality similar to connection pool classes for sharing connections; for example, you use its getTable() method to get a reference to an HTable instance. There is also an HConnectionManager class, which is how you get instances of HConnection. Similar to avoiding network round trips in web applications, effectively managing the number of RPCs and the amount of data returned when using HBase is important, and something to consider when writing HBase applications.
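A sketch of sharing one connection across table handles (HConnectionManager.createConnection appeared in later 0.9x clients, so treat the exact factory method as an assumption for your HBase version):

```java
Configuration conf = HBaseConfiguration.create();
HConnection connection = HConnectionManager.createConnection(conf);
try {
    // Table handles obtained from a shared connection are lightweight;
    // create and close them per unit of work while reusing the
    // underlying connection and its pooled resources.
    HTableInterface table = connection.getTable("people");
    try {
        Result result = table.get(new Get(Bytes.toBytes("doe-john-m-12345")));
        // use result ...
    } finally {
        table.close();
    }
} finally {
    connection.close();
}
```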
Conclusion to Part 4
In this part we used the HBase Java API to create a people table, insert a new person, and find the newly inserted person's information. We also used the Scan class to scan the people table for people with last name "Smith", and showed how to restrict the data retrieved and finally how to use a filter to limit the number of results.
In the next part, we'll learn how to deal with the absence of SQL and relations when modeling schemas in HBase.