Gerd Heber, The HDF Group

Editor’s Note: Since this post was written in 2015, The HDF Group has developed the HDF5 Connector for Apache Spark™, a new product that addresses the challenges of adapting large-scale array-based computing to the cloud and object storage while intelligently handling the full data management life cycle. If this is something that interests you, we’d love to hear from you.

“I would like to do something with all the datasets in all the HDF5 files in this directory, but I’m not sure how to proceed.”

If this sounds all too familiar, then reading this article might be worth your while. The generally accepted answer is to write a Python script (and use h5py [1]), but I am not going to repeat here what you know already. Instead, I will show you how to hot-wire one of the shiny new engines, Apache Spark [2], and make a few suggestions on how to reduce the coding on your part while opening the door to new opportunities. For the record, the script everyone writes looks something like the minimal sketch below (the file extension check and the per-dataset action are placeholders for your own logic):
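```python
# The conventional, single-process approach: walk the current directory,
# open each HDF5 file with h5py, and visit every dataset.
import os
import h5py

def process(name, obj):
    # visititems() also passes groups; act on datasets only.
    if isinstance(obj, h5py.Dataset):
        print(name, obj.shape, obj.dtype)  # replace with your "something"

for fname in os.listdir("."):
    if fname.endswith(".h5"):              # placeholder extension check
        with h5py.File(fname, "r") as f:
            f.visititems(process)
```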

But what about Hadoop? There is no out-of-the-box interoperability between HDF5 and Hadoop. See our BigHDF FAQs [3] for a few glimmers of hope. Major points of contention remain, such as HDFS’s “blocked” worldview and its aversion to relatively small objects; and then there is HDF5’s determination to keep its smarts away from prying eyes. Spark is more relaxed and works happily with HDFS, Amazon S3, and, yes, a local file system or NFS. More importantly, with its Resilient Distributed Datasets (RDD) [4] it raises the level of abstraction and overcomes several Hadoop/MapReduce shortcomings when dealing with iterative methods. See reference [5] for an in-depth discussion.
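To make the RDD idea concrete, here is a minimal PySpark sketch of my own, not the article’s code: it parallelizes a list of HDF5 file paths into an RDD and lets each worker open its files locally with h5py. It assumes h5py is installed on every worker and that the files are visible to all of them (e.g., over NFS); the /data path is hypothetical.

```python
# Sketch: distribute HDF5 file paths as an RDD; workers read files with h5py.
import glob
import h5py
from pyspark import SparkContext

def dataset_names(path):
    # Collect (file, dataset name, shape) for every dataset in one file.
    out = []
    def visit(name, obj):
        if isinstance(obj, h5py.Dataset):
            out.append((path, name, obj.shape))
    with h5py.File(path, "r") as f:
        f.visititems(visit)
    return out

sc = SparkContext(appName="HDF5RDDSketch")
paths = sc.parallelize(glob.glob("/data/*.h5"))  # hypothetical directory
for rec in paths.flatMap(dataset_names).collect():
    print(rec)
```

Note that only the file paths travel through Spark here; the bytes themselves never leave the worker that reads them, which sidesteps the HDFS “blocked” worldview entirely.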

Figure 1. A simple HDF5/Spark scenario

As our model problem (see Figure 1), consider the following scenario: