Elena Pourmal and Quincey Koziol – The HDF Group
UPDATE Wednesday, March 23, 2016: The HDF5-1.10.0-pre2 release is now available, featuring:
– Concurrent Access to an HDF5 File: Single Writer / Multiple Reader (SWMR)
– Virtual Dataset (VDS)
– Scalable Chunk Indexing
– Persistent Free Filespace Tracking
– Collective Metadata I/O
– Integration of Java HDF5 JNI into HDF5
– Many changes have been made to the HDF5 configuration
– Unfortunately, a planned parallel HDF5 enhancement has been postponed
This version contains a fix for an issue that occurred when building HDF5 within the source code directory.
Check our downloads page for more information. We are still on target for releasing HDF5-1.10.0 next week; let us know if you have any comments!
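The SWMR feature listed above lets a single writing process and any number of reading processes access the same HDF5 file concurrently, without inter-process coordination. A minimal sketch of the writer side, assuming the HDF5 1.10 C API and a hypothetical file name `swmr.h5` (error checking omitted for brevity):

```c
#include <hdf5.h>

int main(void)
{
    /* SWMR requires the latest (1.10) file format, so bound the
       library version on the file access property list. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_libver_bounds(fapl, H5F_LIBVER_LATEST, H5F_LIBVER_LATEST);

    hid_t file = H5Fcreate("swmr.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* ... create groups and datasets here, BEFORE enabling SWMR;
       object creation is not allowed once SWMR writing starts ... */

    /* Switch the file into SWMR-write mode; reader processes may now
       open it while this process continues writing. */
    H5Fstart_swmr_write(file);

    /* ... append data, calling H5Dflush() on each dataset so that
       readers see the newly written elements ... */

    H5Fclose(file);
    H5Pclose(fapl);
    return 0;
}
```

A reader would open the same file with `H5Fopen("swmr.h5", H5F_ACC_RDONLY | H5F_ACC_SWMR_READ, H5P_DEFAULT)` and poll the dataset extents to pick up new data as it arrives.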
The HDF Group is committed to meeting our users’ needs and expectations for managing data in today’s fast-evolving computational environment. We are pleased to report that the upcoming major release of HDF5 (HDF5 1.10.0) will have new capabilities that address important data challenges faced by our community. In this blog post we introduce you to some of these exciting new features and capabilities.
More powerful than ever before and packed with new features, the release is scheduled for March 2016. Among its many enhancements, HDF5 1.10.0 addresses the capabilities listed above.
If you have encountered challenges in any of these areas, then we are certain that the upcoming HDF5 1.10.0 will be of interest to you.
Joe Lee, The HDF Group
Sprint has recently hit the airwaves with a promotion claiming that they will cut your data bill in half. But there’s no free lunch in this connected world we live in. Unlimited data plans always come with a steep price tag.
While the internet has been around for a while, there has recently been an explosion of data: email, the World Wide Web, social media, cloud computing, mobile apps for everything, and Big Data. At the same time, the global population of people using the internet has skyrocketed, as has the “Internet of Things.” Moving data around can be a challenge.
The overcrowded and congested internet will continue to throw more data at us, so getting the right amount of the right data can be a great challenge. When data is delivered over the internet, receiving only what you need dramatically shortens delivery time and minimizes delivery costs.
Gerd Heber and Quincey Koziol, The HDF Group
It Takes a Village to Publish a Paper
Building a well-designed data standard that incorporates the needs of a science community has long-lasting value to that community (and beyond).
It vastly outweighs the momentary benefits of particular hardware or software choices at any individual experimental site – the science data lifecycle involves more than just “speeds & feeds” during production. Creating a standard that captures the metadata required to characterize experimental and simulation data, while accommodating future expansion and providing flexibility for the special needs of individual researchers, is a challenging but worthwhile endeavor.
Community data standards have taken root in many domains, giving researchers the ability to collaborate on larger science projects than previously possible.