HDF Server is a new product from The HDF Group that enables HDF5 resources to be accessed and modified using the Hypertext Transfer Protocol (HTTP).
HDF Server, released in February 2015, was first developed as a proof of concept that enabled remote access to HDF5 content through a RESTful API. Version 0.1.0 was not yet intended for use in a production environment, since it did not initially provide security features and access controls. Following its successful debut, The HDF Group incorporated the additional planned features, and the newest version of HDF Server provides exciting capabilities for accessing HDF5 data in an easy and secure way.
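To give a flavor of what REST-style access to HDF5 looks like, here is a minimal client-side sketch in Python. The endpoint address, dataset UUID, and the shape of the JSON response below are illustrative assumptions, not the exact routes of any particular HDF Server deployment; consult the server's REST API documentation for the real paths.

```python
import json
from urllib.parse import urlencode

def value_url(endpoint, dset_uuid, domain, select=None):
    """Build a REST-style URL for reading (a slice of) a dataset's values.

    All names here are hypothetical: a real deployment defines its own
    endpoint, identifies datasets by its own UUIDs, and exposes HDF5
    files as "domains".
    """
    query = {"host": domain}
    if select:
        query["select"] = select  # e.g. "[0:2,0:2]" for a 2-D hyperslab
    return f"{endpoint}/datasets/{dset_uuid}/value?{urlencode(query)}"

url = value_url("http://127.0.0.1:5000",
                "abc123",                   # hypothetical dataset UUID
                "tall.data.hdfgroup.org",   # hypothetical domain name
                select="[0:2,0:2]")

# An HTTP GET on such a URL would return JSON; a response might look like:
sample_response = '{"value": [[0, 1], [2, 3]]}'
values = json.loads(sample_response)["value"]
print(values[1][1])   # -> 3
```

The point of the sketch is that any HTTP-capable client (a browser, curl, a script) can read a slice of a dataset without downloading the whole file.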
We’re pleased to announce that The HDF Group is now a member of the Open Commons Consortium (formerly the Open Cloud Consortium), a not-for-profit that manages and operates cloud computing and data commons infrastructure to support scientific, medical, health care, and environmental research.
The HDF Group will participate in the NOAA Data Alliance Working Group (WG), serving on the committee that will determine the datasets to be hosted in the NOAA data commons as well as the tools to be used in the computational ecosystem surrounding it.
“The Open Commons Consortium (OCC) is a truly innovative concept for supporting scientific computing,” said Mike Folk, The HDF Group’s President. “Their cloud computing and data commons infrastructure supports a wide range of research, and OCC’s membership spans government, academia, and the private sector. This is a good opportunity for us to learn about how we can best serve these communities.”
The HDF Group will also participate in the Open Science Data Cloud working group and receive resource allocations on the OSDC Griffin resource. The HDF Group’s John Readey is working with the OCC and others to investigate ways to use Griffin effectively. Readey says, “Griffin is a great testbed for cloud-based systems. With access to object storage (using the AWS/S3 API) and the ability to programmatically create VMs, we will explore new methods for the analysis of scientific datasets.”
The 2015 HDF workshop held during the ESIP Summer Meeting was a great success thanks to more than 40 participants throughout the four sessions. The workshop was an excellent opportunity for us to interact with HDF community members to better understand their needs and introduce them to new technologies. You can view the slide presentations from the workshop here.
From my perspective, the highlight of the workshop was the Vendors and Tools session, where we heard Ellen Johnson (MathWorks), Christine White (Esri), Brian Tisdale (NASA), and Gerd Heber (The HDF Group) talk about new and improved applications of HDF technologies.
Please join us to learn about new HDF tools, projects and perspectives.
The HDF Group will be hosting a one-day workshop at the upcoming Federation for Earth Science Information Partners (ESIP) Summer Meeting in Asilomar, CA, on Tuesday, July 14th.
There will also be an HDF Town Hall meeting on Wednesday afternoon, July 15th.
Please join us for any and all of the events. If you are unable to join us in person, you may participate through remote access. Remote access details will be made available through the ESIP meeting website. Questions? Contact Lindsay at email@example.com.
Before the recent release of our PyHexad Excel add-in for HDF5, the title might have sounded like the slogan of a global coffee and baked goods chain. That was then. Today, it is an expression of hope for the spreadsheet users who run this country and who either felt neglected by the HDF5 community or who might suffer from a medical condition known as data-bulging workbook stress disorder. In this article, I would like to give you a quick overview of the novel PyHexad therapy and invite you to get involved (after consulting with your doctor).
Access to the data in HDF5 files from Excel is a frontrunner among the all-time top 10 most frequently requested features. A spreadsheet tool might be a convenient window into, and user interface for, certain data stored in HDF5 files. Such a tool could help overcome Excel storage and performance limitations, and allow data to be freely “shuttled” between worksheets and HDF5 data containers. PyHexad is an attempt to further explore this concept.
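Behind a worksheet function, “shuttling” a range between a spreadsheet and an HDF5 file comes down to ordinary HDF5 reads and writes. The sketch below uses h5py to stand in for the add-in; the file name and dataset path are made up for illustration.

```python
import numpy as np
import h5py

# Write a small rectangular dataset, the kind a worksheet range maps to.
# (File and dataset names here are invented for the example.)
with h5py.File("workbook_demo.h5", "w") as f:
    f.create_dataset("/sales/q1", data=np.arange(12).reshape(3, 4))

# Pulling a rectangular range into a worksheet is just a slice read:
with h5py.File("workbook_demo.h5", "r") as f:
    block = f["/sales/q1"][0:2, 0:2]   # top-left 2x2 corner

print(block.tolist())   # -> [[0, 1], [4, 5]]
```

Because HDF5 reads can be partial, only the requested corner of the dataset is touched, which is what lets a spreadsheet front-end sidestep Excel’s own size limits.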
Building a well-designed data standard that incorporates the needs of a science community has a long-lasting value to that community (and beyond).
It vastly outweighs the momentary benefits of particular hardware or software choices at any individual experimental site – the science data lifecycle involves more than just “speeds & feeds” during production. Creating a standard that captures the metadata required to characterize experimental and simulation data, while accommodating future expansion and providing flexibility for the special needs of individual researchers, is a challenging but worthwhile endeavor.
Community data standards have taken root in many domains, giving researchers the ability to collaborate on larger science projects than previously possible.
HDF5 is a great way to store large data collections, but size can pose its own challenges. As a thought experiment, imagine this scenario:
You write an application that creates the ultimate Monte Carlo simulation of the Monopoly game. The application plays through thousands of simulated games for a hundred different strategies and saves its results to an HDF5 file. Since we want to capture all the data from each simulation, let’s suppose the resulting HDF5 file is over a gigabyte in size.
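A file like that might be produced with h5py along the lines of the sketch below. The group layout, dataset names, and per-game record format are invented for this example; random integers stand in for actual simulation results.

```python
import numpy as np
import h5py

# One group per strategy; one row per simulated game.
# (Layout and names are illustrative, not a prescribed schema.)
rng = np.random.default_rng(seed=42)
with h5py.File("monopoly_results.h5", "w") as f:
    for strategy in range(100):
        grp = f.create_group(f"strategies/{strategy:03d}")
        # Each row records a made-up pair: (turns played, final net worth)
        games = rng.integers(0, 5000, size=(1000, 2))
        grp.create_dataset("games", data=games, compression="gzip")
        grp.attrs["description"] = f"Strategy #{strategy}"

# Reading back the results for strategy #89 touches only that group:
with h5py.File("monopoly_results.h5", "r") as f:
    shape = f["strategies/089/games"].shape
print(shape)   # -> (1000, 2)
```

With real payloads per game, a hundred groups like these can easily push the file past a gigabyte, which is exactly the sharing problem the next paragraph raises.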
Naturally, you’d like to share these results with all your Monopoly-playing, statistically minded friends, but herein lies the problem: How can you make this data accessible? Your file is too large to put on Dropbox, and even if you did use an online storage provider, interested parties would need to download the entire file when perhaps they are only interested in the results for “Strategy #89: Buy just Park Place and Boardwalk.” If we could store the data in one place, but enable access to it over the web using all the typical HDF5 operations (listing links, getting type information, reading dataset slices, etc.), that would be the answer to our conundrum.