Efficiently creating and writing to 20,000 datasets

Hi,

I'm helping a user at NERSC modify an out-of-core matrix calculation
code to use HDF5 for temporary storage. Each of his 30 MPI tasks is
writing to its own file using the MPI-IO VFD in independent mode with
the MPI_COMM_SELF communicator. He is creating about 20,000 datasets
and writing anywhere from 4 KB to 32 MB to each one. In I/O profiles, we
are seeing a huge spike in <1 KB writes (about 100,000). My questions
are:

* Are these small writes we are seeing associated with dataset metadata?

* Is there a "best practice" for handling this number of datasets? For
instance, is it better to pre-allocate the datasets before writing to
them?

Thanks
Mark

Hi Mark,

Since you didn't explicitly describe the H5Dcreate/H5Dwrite calls, I'll
probably wind up asking some silly questions, but...

How big are the dataspaces being written in H5Dwrite?

Are the datasets being created with chunked or contiguous storage?

Why are you even bothering with MPI-IO in this case? Since each
processor is writing to its own file, why not use the sec2 VFD, or
maybe even the stdio or mpiposix VFD? Or, you could try the split VFD,
using the 'core' VFD for metadata and sec2, stdio, or mpiposix for raw
data. That results in two actual 'files' on disk for every 'file' a
task creates, but since this is for out-of-core storage, you'll soon be
deleting them anyway. Using the split VFD this way means that all
metadata is held in memory (in the core VFD) until the file is closed,
and then it's written in one large I/O request. Raw data is handled as
usual.

Well, those are some options to try, at least.

Good luck.

Mark

What version of HDF5 is this?

_______________________________________________
Hdf-forum is for HDF software users discussion.
Hdf-forum@hdfgroup.org
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org

--
Mark C. Miller, Lawrence Livermore National Laboratory
================!!LLNL BUSINESS ONLY!!================
miller86@llnl.gov urgent: miller86@pager.llnl.gov
T:8-6 (925)-423-5901 M/W/Th:7-12,2-7 (530)-753-8511

Hi Mark,

All dataspaces are 1D. Currently, the datasets are contiguous. The
size of each dataset is available before the writes occur.

There is a phase later where a large MPI communicator performs
parallel reads of the data, which is why we are using the parallel
version of the library. I think that the VFDs you are suggesting are
only available in the serial library, but I could be mistaken.

Thanks,
Mark


Hi Mark,


Well, for any given libhdf5.a, the other VFDs are generally always
available. I think the direct and MPI-related VFDs are the only ones
that might not be available, depending on how HDF5 was configured prior
to installation. So, if they are suitable for your needs, you should be
able to use those other VFDs, even from a parallel application.

Mark


Hi Mark & Mark, :)


  Yes, parallel HDF5 is a superset of serial HDF5 and all the VFDs are available.

  Is each individual file created in the first phase accessed in parallel later? If so, it might be reasonable to use the core VFD for creating the files, then close all the files and re-open them with the MPI-IO VFD.

  Quincey


Yes, that is exactly how we are accessing the files. So if we use the
core VFD for the file writes, the metadata will be cached in memory
and only written at file close, like Mark described? And it will be
written in one large, contiguous piece?

Thanks
Mark


Hi Mark,


  Yes & yes.

    Quincey

Hi Quincey,
My understanding of parallel HDF5 is that it depends on the availability of a parallel file system, e.g. GPFS. For instance, I am out of luck whether I am using Windows XP/7 or Windows Server (2008), right?
As for Linux (kernel > 2.4), according to
ftp://ftp.hdfgroup.org/HDF5/current/src/unpacked/release_docs/INSTALL_parallel
even on a multi-core laptop I should be able to access PHDF5 functionality.
Is this correct? Thanks a lot.

Best,
xunlei


Hi Xunlei,

Hi Quincey,
My understanding on parallel HDF5 is that it depends on the availability of parallel file system, i.e. GPFS. For instance, I am out of luck whether I am using Windows XP/7 or Windows server (2008), right?

  Yes - we don't support the parallel I/O VFDs (MPI-IO and MPI-POSIX) on Windows currently.

As for Linux (kernel > 2.4), according to
ftp://ftp.hdfgroup.org/HDF5/current/src/unpacked/release_docs/INSTALL_parallel
even on a multi-core laptop, I should be able to access PHDF5 functionalities.
Is this correct? Thanks a lot.

  Yes, I test parallel I/O on my MacBookPro all the time. :)

  Quincey


Hi Quincey,

  Yes - we don't support the parallel I/O VFDs (MPI-IO and MPI-POSIX) on Windows currently.

Is Windows support on the roadmap?

As for Linux (kernel > 2.4), according to
ftp://ftp.hdfgroup.org/HDF5/current/src/unpacked/release_docs/INSTALL_parallel
even on a multi-core laptop, I should be able to access PHDF5 functionalities.
Is this correct? Thanks a lot.

  Yes, I test parallel I/O on my MacBookPro all the time. :)

This is wonderful! Wait, do you mean PHDF5 works on Mac OS X, which has a UNIX kernel? Or do you Boot Camp an Ubuntu...?

Best,
x

No, I'm running OS X. But it should work fine on a Linux laptop also.

    Quincey


Hi All,
I have a bunch of HDF4 files that I would like to convert to HDF5, so I went to the site to get "H4/H5 Conversion Library: 2.1.1":
http://www.hdfgroup.org/ftp/HDF5/h4toh5/bin/H4H5Tools-2.1.1-win64.zip
Then I also downloaded HDF5 and HDF4 from
http://www.hdfgroup.org/ftp/HDF5/hdf5-1.8.5/bin/windows/HDF5-1.8.5-win64.zip
http://www.hdfgroup.org/ftp/HDF/HDF_Current/bin/windows/HDF4.2.5-win64-vs2005-ivf101.zip
After putting the directories into my PATH environment variable and trying to run "h4toh5.exe" from a command line, I got the message
"The program can't start because szlibdll.dll is missing from your computer..." So I made copies of szip.dll and szip.lib and renamed them szlibdll.dll and szlibdll.lib. Then I tried "h4toh5.exe" again; everything looked fine, with the usage message printed.
However, when I tried it with an HDF4 data file, as "h4toh5.exe hdf4_file hdf5_file", I got
"h4toh5.exe has stopped working. A problem caused the program to stop working correctly..." The program crashed.

HDFView has no problem viewing that file, and I tested h4toh5.exe with the out.hdf in the ChunkBinary example at
http://www.hdfgroup.org/training/hdf4_chunking/ChunkBinary.tar
h4toh5.exe worked smoothly. I have uploaded the troubled HDF4 file at
ftp://ftp.renci.org/outgoing/dbltrbl100m.hdfgrdbas
Would you please take a look? I couldn't tell whether it is a limitation of h4toh5.

Thanks a lot.

Best,
xunlei

Hi Xunlei,

Binaries for h4toh5 were built with the 1.8.4-patch1 release. Could you please use the appropriate binary distribution from the http://www.hdfgroup.org/ftp/HDF5/prev-releases/hdf5-1.8.4-patch1/ directory?

Your file is very simple. The conversion tools should work. Please make sure you have zlib installed too since some of the datasets use gzip compression.

Please let us know if you still have problems.

Thank you!

Elena


Well, considering the release schedules, I believe that you will need to use a previous release of HDF5. I will look into this current external library transition and figure out how best to proceed. There seem to be two choices: release a version of HDF5 with the old szip libraries, or release versions of HDF4 and H4H5Tools with the new szip library.

Allen


Hi Allen and Elena,
Thanks for the help.
I've tried with
http://www.hdfgroup.org/ftp/HDF5/prev-releases/hdf5-1.8.4-patch1/bin/win64-vs2005/hdf5-1.8.4-patch1-win64-vs2005-ivf91-enc.zip
http://www.hdfgroup.org/ftp/HDF5/prev-releases/hdf5-1.8.4-patch1/bin/win64-vs2005/szip-2.1-win64-vs2005-enc.zip
http://www.hdfgroup.org/ftp/HDF5/prev-releases/hdf5-1.8.4-patch1/bin/win64-vs2005/zlib-1.2.3-win64-vs2005.zip

and

http://www.hdfgroup.org/ftp/HDF5/prev-releases/hdf5-1.8.4-patch1/bin/win64-vs2008/hdf5-1.8.4-patch1-win64-vs2008-ivf101-enc.zip
http://www.hdfgroup.org/ftp/HDF5/prev-releases/hdf5-1.8.4-patch1/bin/win64-vs2008/szip-2.1-win64-vs2008-enc.zip
http://www.hdfgroup.org/ftp/HDF5/prev-releases/hdf5-1.8.4-patch1/bin/win64-vs2008/zlib-1.2.3-win64-vs2008.zip

Of course I did the same trick of copying and renaming szip.* to
szlibdll.*, but I got the same error. My HDF4 distribution bundles no
external szip or zlib, and I'm not sure whether they are included in
hdf425.dll...

Best,
x

···

On 6/29/2010 6:04 PM, Elena Pourmal wrote:

Hi Xunlei,

Binaries for h4toh5 were built with the 1.8.4-patch1 release. Could you please use the appropriate binary distribution from the http://www.hdfgroup.org/ftp/HDF5/prev-releases/hdf5-1.8.4-patch1/ directory?

Your file is very simple, so the conversion tool should work. Please make sure you have zlib installed too, since some of the datasets use gzip compression.

Please let us know if you still have problems.

Thank you!

Elena

We will regenerate the hdf4 and h4h5tools binaries to use the same szip/zlib versions as hdf5 1.8.5, and we will include the szip/zlib/jpeg dlls with hdf4. In addition, we will include the hdf4 and hdf5 dlls with the h4h5tools binary so that everything uses the same version of the libraries. Hopefully this will prevent problems like this in the future.

Allen


You guys are wonderful!
Thanks so much.

Best,
x


Hi Allen,
Is the h4h5tools rebuild close to being finished? Thanks a lot for the help.
Best,
x


Hi Xunlei, Allen and Elena,

I just verified the file on Linux, and it causes problems there as well, so
this is not only a Windows-related bug. We will investigate and address it
soon, probably within the week.

Kent


I have rebuilt all the Windows binaries to include szip and zlib. Everything is based on the source of the last released versions. Also note that H4H5Tools includes the hdf4 and hdf5 libraries, so those binaries are not needed for that product. In fact, H4H5Tools is statically compiled, and the utilities should be usable without any other libraries (other than system ones).

By all means, let me know if practice does not match theory.
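One way to check that in practice is to inspect the binary's import table.
On Windows that would be something like `dumpbin /dependents h4toh5.exe`;
the sketch below shows the POSIX analogue with `ldd`, using `/bin/sh` as a
stand-in for the executable under test:

```shell
# List the shared libraries a binary depends on. A statically linked
# h4toh5 build should show only system libraries.
#   Windows: dumpbin /dependents h4toh5.exe
#   Linux:   ldd <binary>
# /bin/sh stands in here for the executable under test.
if command -v ldd >/dev/null 2>&1; then
  ldd /bin/sh
else
  echo "ldd unavailable; use 'otool -L' (macOS) or dumpbin (Windows)"
fi
```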

Allen
