How do I install the HDF5 man-pages?

Running

    configure --help

I see the parameter

    --mandir=DIR

which, if I understand the instructions, ought to be set by default.
But I'm not getting any man-pages under --prefix=$DIR, and explicitly setting --mandir=$DIR/share/man didn't help either.
Is there some trick to this, like does the destination directory have to have already been created?
I don't see any man-pages in the 1.8.14 directory when I scan it.
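
For reference, here's roughly the sequence I'm running ($DIR is a placeholder for my install prefix):

    ./configure --prefix=$DIR --mandir=$DIR/share/man
    make
    make install
    ls $DIR/share/man        # comes up empty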
Thanks,

                     Carl


Hi Carl,

Thank you for reporting the problem!

HDF5 doesn't ship any man pages. configure's --help output lists several standard Autoconf options that HDF5 doesn't actually support, and --mandir is one of them. I added an issue to our JIRA database; for your reference, the issue number is HDFFV-9278.
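
If you want to confirm this from a shell, a quick check like the following (assuming the unpacked source directory is named hdf5-1.8.14) turns up nothing:

    # search the unpacked source tree for man directories or troff pages
    find hdf5-1.8.14 -type d -name man
    find hdf5-1.8.14 -name '*.1' -o -name '*.3'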

Elena


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Elena Pourmal The HDF Group http://hdfgroup.org
1800 So. Oak St., Suite 203, Champaign IL 61820
217.531.6112
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


I'm running "make check" with the 1.10.0 release of HDF5, and seeing these failures:

    ./use_append_chunk *FAILED*
    ./use_append_chunk -z 256 *FAILED*
    ./use_append_chunk -z 256 -y 5 *FAILED*
    ./use_append_mchunks -z 256 *FAILED*

I listed the details below for one case; the others are similar.
The errors happen with the GCC, Intel, and PGI compilers, using either MVAPICH2 or OpenMPI, so if it's an issue with my software stack, it would have to be deeper than the compiler or MPI layer.
The other cases passed:

    ./use_append_chunk -f /tmp/datatfile.1160 PASSED
    ./use_append_chunk -l w PASSED
    ./use_append_chunk -l r PASSED
    ./use_append_mchunks PASSED
    ./use_append_mchunks -f /tmp/datatfile.1160 PASSED
    ./use_append_mchunks -l w PASSED
    ./use_append_mchunks -l r PASSED
    ./use_append_mchunks -z 256 -y 5 PASSED
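
In case it's useful, each failing case also reproduces for me when run by hand from the test directory of the build tree, e.g.:

    cd test
    ./use_append_chunk -z 256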

I don't see these tests being run with HDF5 version 1.8.16, though, so is it possible that they are not formulated correctly?
Thanks,

                     Carl Ponder


------------------------------------------------------------------------

./use_append_chunk -z 256 *FAILED*
     ===Parameters used:===
     chunk dims=(1, 256, 256)
     dataset max dims=(18446744073709551615, 256, 256)
     number of planes to write=256
     using SWMR mode=yes(1)
     data filename=use_append_chunk.h5
     launch part=Reader/Writer
     number of iterations=1 (not used yet)
     ===Parameters shown===
     Creating skeleton data file for test...
     File created.
     1559: launch reader process
     ===Parameters used:===
     chunk dims=(1, 256, 256)
     dataset max dims=(18446744073709551615, 256, 256)
     number of planes to write=256
     using SWMR mode=yes(1)
     data filename=use_append_chunk.h5
     launch part=Reader/Writer
     number of iterations=1 (not used yet)
     ===Parameters shown===
     Creating skeleton data file for test...
     File created.
     1545: continue as the writer process
     dataset rank 3, dimensions 0 x 256 x 256
     1545: child process exited with non-zero code (1)
     Error(s) encountered
     HDF5-DIAG: Error detected in HDF5 (1.10.0) thread 0:
       #000: H5F.c line 579 in H5Fopen(): unable to open file
         major: File accessibilty
         minor: Unable to open file
       #001: H5Fint.c line 1208 in H5F_open(): unable to read superblock
         major: File accessibilty
         minor: Read failed
       #002: H5Fsuper.c line 443 in H5F__super_read(): truncated file: eof = 526815, sblock->base_addr = 0, stored_eof = 33559007
         major: File accessibilty
         minor: File has been truncated
     H5Fopen failed
     read_uc_file encountered error


When I built HDF5 1.10.0 with MVAPICH2 2.2b (the GPUDirect version) and PGI 16.4, I got this hang during "make check":

    Testing t_cache

I made the hang go away with this setting:

    export MV2_ENABLE_AFFINITY=0

I'd seen a similar problem (and fix) with the make check for the Score-P profiler, built under the same set of conditions.
What I *think* is going on is that the configure step recognizes the MVAPICH2 libraries as MPICH-compatible and tries to use MPICH environment settings to manage the process-to-core bindings.
But MVAPICH2 uses different names for those settings, so the test falls back to default behavior, which could oversubscribe the processor cores and cause a hang like this one.
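
One way to check this hypothesis (assuming pgrep and taskset are available): while the test is hung, list each rank's CPU affinity; every rank pinned to the same core would support the oversubscription theory.

    # while "make check" is hung on t_cache, print each rank's CPU affinity
    for pid in $(pgrep t_cache); do taskset -cp $pid; done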
Can you confirm this for me?
Thanks,

                     Carl Ponder


Hi Carl,

What file system are you testing on? Is it a network file system like NFS, AFS, or SMB?

That test was added in HDF5 1.10.0 and tests single-writer/multiple-readers (SWMR) functionality. Since that is a new feature for 1.10.0, the test is not a part of the HDF5 1.8 release.
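
If you're not sure, something like this will show the file system type (assuming GNU coreutils; the path is a placeholder for your build directory):

    df -T /path/to/build/dir    # the "Type" column shows the file system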

Dana Robinson
Software Engineer
The HDF Group



*On 04/24/2016 09:42 AM, Dana Robinson wrote:*

    What file system are you testing on? Is it a network file system
    like NFS, AFS, or SMB?
    That test was added in HDF5 1.10.0 and tests
    single-writer/multiple-readers (SWMR) functionality. Since that is
    a new feature for 1.10.0, the test is not a part of the HDF5 1.8
    release.

Dana -- here's what I get from the "mount" command:

    cmpool on /cm type zfs (rw,relatime,xattr,noacl)

I don't know how robust our filesystem/fileserver is.
I know we're not running out of space.
Thanks,

                     Carl Ponder


Carl and All,

SWMR can run only on a file system that guarantees the ordering of write operations (NFS, for example, does not guarantee it). HDF5 1.10.0 was never tested on ZFS, and my guess is that operation ordering may be the problem.

Please see the two documents that come with the HDF5 1.10.0 source (they can also be found in our repository), including

https://svn.hdfgroup.org/hdf5/tags/hdf5-1_10_0/test/SWMR_POSIX_Order_UG.txt

for more details.

We also provide a "twriteorder" test, built by the "make" command, to check whether ordering is a problem. The test's success doesn't guarantee that the system supports ordering, but a failure definitely indicates that SWMR will not work on the system. If someone on this forum comes up with a better test design and contributes the code, we would be more than happy to accept it.
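
Roughly, from the test directory of the build tree (a sketch; the default arguments should suffice):

    cd test
    make twriteorder
    ./twriteorder    # failure means SWMR will not work on this file system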

We will separate the SWMR tests from the other C tests in the 1.10.1 release, and we will try to detect whether the SWMR tests are being run on an appropriate file system. For users who are not interested in the SWMR feature, or who do not have the right file system, the SWMR tests will no longer run automatically as they do now.

Thank you!

Elena


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Elena Pourmal The HDF Group http://hdfgroup.org
1800 So. Oak St., Suite 203, Champaign IL 61820
217.531.6112
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
