ESIP Summer Meeting – HDF Workshop and Town Hall

The HDF Group is hosting a one-day workshop at the upcoming Federation of Earth Science Information Partners (ESIP) Summer Meeting in Asilomar, CA on July 14th. Please join us to learn about new HDF tools, projects, and perspectives. There will also be an HDF Town Hall meeting on the afternoon of Wednesday, July 15th.

Get your Bearings with HDF Compass

John Readey, The HDF Group – We’ve recently announced a new viewer application for HDF5 files: HDF Compass. In this blog post we’ll explore the motivations for providing this tool, review its features, and speculate a bit about future directions for Compass. HDF Compass is a desktop viewer application for HDF5 and other file formats.

Letter to the HDF User Community

Lindsay Powers, The HDF Group – The HDF Group provides free, open-source software that is widely used in government, academia, and industry. The goal of The HDF Group is to ensure the sustainable development of HDF (Hierarchical Data Format) technologies and the ongoing accessibility of HDF-stored data, because users and organizations have mission-critical systems and …

America Runs on Excel and HDF5*

* With Python’s Help. Gerd Heber, The HDF Group – Before the recent release of our PyHexad Excel add-in for HDF5[1], the title might have sounded like the slogan of a global coffee and baked goods chain. That was then. Today, it is an expression of hope for the spreadsheet users who run this country and …

HDF5 Data Compression Demystified #1

Elena Pourmal, The HDF Group – What happened to my compression? One of the most powerful features of HDF5 is the ability to compress or otherwise modify, or “filter,” your data during I/O. By far, the most common user-defined filters are ones that perform data compression. As you know, there are many compression options. There are …
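As a quick illustration of turning on such a filter (a minimal sketch, not taken from the post itself; the file name, dataset name, and use of h5py are assumptions, and h5py is only one of several ways to do this), a gzip-compressed dataset can be created like so:

    import h5py
    import numpy as np

    data = np.random.random((1000, 1000))

    with h5py.File("example.h5", "w") as f:
        # Filters operate on chunks, so chunking must be enabled.
        f.create_dataset(
            "pressure",
            data=data,
            chunks=(100, 100),
            compression="gzip",   # built-in DEFLATE filter
            compression_opts=4,   # compression level 1-9
            shuffle=True,         # byte-shuffle filter often improves ratios
        )

    # Confirm which filters were actually applied to the stored dataset.
    with h5py.File("example.h5", "r") as f:
        dset = f["pressure"]
        print(dset.compression, dset.compression_opts, dset.shuffle)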

Putting some Spark into HDF-EOS

…we focus on how far we can push our personal computing devices with Spark. The file collection consists of 7,850 HDF-EOS5 files covering 27 years and totals about 120 GB. We use a driver script that reads a dataset of interest from each file in the collection, computes per-file quantities of interest, and gathers them in a CSV file for visualization. The processing time on our reference tablet machine for 3.5 years of data using 4 logical processors was about 10 seconds.
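A minimal sketch of that kind of driver script, under stated assumptions (the directory, dataset path, and the choice of h5py for reading the HDF-EOS5 files are all hypothetical, not from the post), might look like this:

    import csv
    import glob

    import h5py
    import numpy as np
    from pyspark import SparkContext

    def per_file_stats(path):
        # Read one dataset from a single HDF-EOS5 file and reduce it to a
        # per-file quantity of interest (here: the mean value).
        with h5py.File(path, "r") as f:
            data = f["/HDFEOS/GRIDS/Grid/Data Fields/Temperature"][...]
        return (path, float(np.nanmean(data)))

    sc = SparkContext(appName="hdf-eos5-summary")
    files = sorted(glob.glob("/data/hdfeos5/*.he5"))  # hypothetical location

    # Spread the per-file work across the available logical processors,
    # then gather the small per-file results back on the driver.
    results = sc.parallelize(files, numSlices=4).map(per_file_stats).collect()

    with open("summary.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["file", "mean_temperature"])
        writer.writerows(results)

The heavy lifting (opening each file and reducing its data) happens in the Spark tasks; only the small per-file results travel back to the driver to be written as CSV.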

Parallel I/O – Why, How, and Where to?

Mohamad Chaarawi, The HDF Group – First in a series on parallel HDF5. What costs applications a lot of time and resources that could otherwise go toward actual computation? Slow I/O. It is well known that I/O subsystems are very slow compared to other parts of a computing system. Applications use I/O to store simulation output for future use …
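To give a flavor of parallel HDF5 (a minimal sketch, not the approach from the series itself; it assumes an MPI-enabled build of HDF5 and h5py, and the file and dataset names are made up), each MPI rank can write its own slice of a shared dataset:

    from mpi4py import MPI
    import h5py
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    nprocs = comm.Get_size()

    # Every rank opens the same file collectively via the MPI-IO driver.
    with h5py.File("simulation.h5", "w", driver="mpio", comm=comm) as f:
        # The dataset is created collectively by all ranks...
        dset = f.create_dataset("output", (nprocs, 1000), dtype="f8")
        # ...but each rank writes only its own row, in parallel.
        dset[rank, :] = np.full(1000, rank, dtype="f8")

    # Run with, e.g.:  mpiexec -n 4 python parallel_write.py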
