possible bug in h5ltread_dataset_string_f

Hello,

I use hdf5 1.8.3 compiled with:
gcc (Ubuntu 4.3.3-5ubuntu4) 4.3.3
and
GNU Fortran (Ubuntu 4.3.3-5ubuntu4) 4.3.3

and I'm trying to use the "h5ltread_dataset_string_f" subroutine:

subroutine h5ltread_dataset_string_f(loc_id, dset_name, type_id, buf, &
                                     dims, errcode)
  implicit none
  integer(HID_T), intent(IN) :: loc_id ! file or group identifier
  character(LEN=*), intent(IN) :: dset_name ! name of the dataset
  integer(HID_T), intent(IN) :: type_id ! datatype identifier
  integer(HSIZE_T), dimension(*), intent(IN) :: dims
                                                  ! size of the buffer buf
  character(LEN=*), intent(INOUT), dimension(*) :: buf
                                                  ! data buffer
  integer :: errcode ! error code
end subroutine h5ltread_dataset_string_f

However, the implementation in H5LTff.f90 does not take into account
"dims" and reads a single string.

Is this a known bug, corrected in a snapshot release?

Thanks a lot,

Cyril.

Cyril,

Thank you for reporting the problem. I entered a bug report and we will take a look.

Elena

···

On Sep 11, 2009, at 6:00 AM, cyril giraudon wrote:

[clip]

I have an HDF5-based system which consists of two applications. The first is a data collector which harvests data from several thousand surface observing stations and stores them in a single HDF5 file. The second is a web service which provides data to requestors from this same HDF5 file. These are separate applications which run independently of each other.

What I'm observing is that, if the data collector is running and the web service receives a request, HDF5 errors often result: sometimes a diagnostic backtrace (see below), sometimes a segmentation fault. If the data collector is not running, the web service performs flawlessly.

I suspect, but can't really determine, that the errors occur when station data is being requested while that same station data is being written into the HDF5 file. I am using Python 2.6.2, PyTables 2.1.1, NumPy 1.3.0, and HDF5 1.8.1.

Below is a sample backtrace. I should mention that this problem did not arise under earlier versions of these modules, but of course it could be that I just got lucky. Any insight is greatly appreciated.

tables.exceptions.HDF5ExtError: Can't open the group: '/'.
HDF5-DIAG: Error detected in HDF5 (1.8.1) thread 3086555808:
   #000: H5G.c line 699 in H5Gclose(): not a group
     major: Invalid arguments to routine
     minor: Inappropriate type
Exception tables.exceptions.HDF5ExtError: HDF5ExtError('Problems closing the Group /',) in ignored
HDF5-DIAG: Error detected in HDF5 (1.8.1) thread 3086555808:
   #000: H5F.c line 1560 in H5Fopen(): unable to open file
     major: File accessability
     minor: Unable to open file
   #001: H5F.c line 1337 in H5F_open(): unable to read superblock
     major: File accessability
     minor: Read failed
   #002: H5Fsuper.c line 542 in H5F_super_read(): truncated file
     major: File accessability
     minor: File has been truncated
HDF5-DIAG: Error detected in HDF5 (1.8.1) thread 3086555808:
   #000: H5Gdeprec.c line 293 in H5Gopen1(): not a location
     major: Invalid arguments to routine
     minor: Inappropriate type
   #001: H5Gloc.c line 241 in H5G_loc(): invalid object ID
     major: Invalid arguments to routine
     minor: Bad value
HDF5-DIAG: Error detected in HDF5 (1.8.1) thread 3086555808:
   #000: H5Adeprec.c line 208 in H5Aopen_name(): not a location
     major: Invalid arguments to routine
     minor: Inappropriate type
   #001: H5Gloc.c line 241 in H5G_loc(): invalid object ID
     major: Invalid arguments to routine
     minor: Bad value
HDF5-DIAG: Error detected in HDF5 (1.8.1) thread 3086555808:
   #000: H5G.c line 699 in H5Gclose(): not a group
     major: Invalid arguments to routine
     minor: Inappropriate type
HDF5-DIAG: Error detected in HDF5 (1.8.1) thread 3086555808:
   #000: H5F.c line 2005 in H5Fclose(): not a file ID
     major: Invalid arguments to routine
     minor: Inappropriate type
HDF5-DIAG: Error detected in HDF5 (1.8.1) thread 3086555808:
   #000: H5F.c line 1560 in H5Fopen(): unable to open file
     major: File accessability
     minor: Unable to open file
   #001: H5F.c line 1337 in H5F_open(): unable to read superblock
     major: File accessability
     minor: Read failed
   #002: H5Fsuper.c line 542 in H5F_super_read(): truncated file
     major: File accessability
     minor: File has been truncated
HDF5-DIAG: Error detected in HDF5 (1.8.1) thread 3086555808:
   #000: H5F.c line 3287 in H5Fget_mdc_config(): not a file ID
     major: Invalid arguments to routine
     minor: Bad value
HDF5-DIAG: Error detected in HDF5 (1.8.1) thread 3086555808:
   #000: H5F.c line 3346 in H5Fset_mdc_config(): not a file ID
     major: Invalid arguments to routine
     minor: Bad value
HDF5-DIAG: Error detected in HDF5 (1.8.1) thread 3086555808:
   #000: H5Gdeprec.c line 293 in H5Gopen1(): not a location
     major: Invalid arguments to routine
     minor: Inappropriate type
   #001: H5Gloc.c line 241 in H5G_loc(): invalid object ID
     major: Invalid arguments to routine
     minor: Bad value
HDF5-DIAG: Error detected in HDF5 (1.8.1) thread 3086555808:
   #000: H5Adeprec.c line 208 in H5Aopen_name(): not a location
     major: Invalid arguments to routine
     minor: Inappropriate type
   #001: H5Gloc.c line 241 in H5G_loc(): invalid object ID
     major: Invalid arguments to routine
     minor: Bad value
HDF5-DIAG: Error detected in HDF5 (1.8.1) thread 3086555808:
   #000: H5G.c line 699 in H5Gclose(): not a group
     major: Invalid arguments to routine
     minor: Inappropriate type
HDF5-DIAG: Error detected in HDF5 (1.8.1) thread 3086555808:
   #000: H5Gdeprec.c line 293 in H5Gopen1(): not a location
     major: Invalid arguments to routine
     minor: Inappropriate type
   #001: H5Gloc.c line 241 in H5G_loc(): invalid object ID
     major: Invalid arguments to routine
     minor: Bad value
Traceback (most recent call last):
   File "/usr/local/lib/python2.6/site-packages/twisted/protocols/basic.py", line 231, in dataReceived
     why = self.lineReceived(line)
   File "/usr/local/lib/python2.6/site-packages/twisted/web/http.py", line 1325, in lineReceived
     self.allContentReceived()
   File "/usr/local/lib/python2.6/site-packages/twisted/web/http.py", line 1391, in allContentReceived
     req.requestReceived(command, path, version)
   File "/usr/local/lib/python2.6/site-packages/twisted/web/http.py", line 714, in requestReceived
     self.process()
--- <exception caught here> ---
   File "/usr/local/lib/python2.6/site-packages/twisted/web/server.py", line 150, in process
     self.render(resrc)
   File "/usr/local/lib/python2.6/site-packages/twisted/web/server.py", line 157, in render
     body = resrc.render(self)
   File "/home/dsallis/sps/sps_syndication/SyndicationFeed.py", line 423, in render
     boundingBox=boundingBox).renderItem()))
   File "/home/dsallis/sps/sps_syndication/syndicators.py", line 1034, in renderItem
     fp = tables.openFile(SPS_DATA_FILE, 'r')
   File "/usr/local/lib/python2.6/site-packages/tables/file.py", line 230, in openFile
     return File(filename, mode, title, rootUEP, filters, **kwargs)
   File "/usr/local/lib/python2.6/site-packages/tables/file.py", line 520, in __init__
     self.root = root = self.__getRootGroup(rootUEP, title, filters)
   File "/usr/local/lib/python2.6/site-packages/tables/file.py", line 565, in __getRootGroup
     return RootGroup(self, rootUEP, title=title, new=new, filters=filters)
   File "/usr/local/lib/python2.6/site-packages/tables/group.py", line 1134, in __init__
     self._v_objectID = self._g_open()
   File "hdf5Extension.pyx", line 615, in tables.hdf5Extension.Group._g_open

···

--
David E. Sallis, Senior Principal Engineer, Software
General Dynamics Information Technology
NOAA Coastal Data Development Center
Stennis Space Center, Mississippi
228.688.3805
david.sallis@gdit.com
david.sallis@noaa.gov
--------------------------------------------
"Better Living Through Software Engineering"
--------------------------------------------

A Friday 11 September 2009 19:30:05 David E. Sallis escrigué:

I have an HDF5-based system which consists of two applications. The first
is a data collector which harvests data from several thousand surface
observing stations and stores them in a single HDF5 file. The second is a
web service which provides data to requestors from this same HDF5 file.
These are separate applications which run independently of each other.

[clip]

In general it is a bad idea to run several writer/reader processes over the same
HDF5 file: while one process is reading the file, another may be updating it,
producing weird errors like the ones you are seeing.

The solution would be to lock the file while writing and unlock it only after a
flush over the file has been performed. Also, to avoid cache (HDF5,
PyTables) problems in the reading apps, you would need to re-open your files
whenever you are going to issue a read operation. If a re-opening operation
is unacceptable in terms of speed, you may want to do all your I/O operations
in one single process (or thread) and communicate the results via sockets,
Queue objects (when using threads) or whatever, with the client
process/thread.
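A minimal sketch of this locking pattern, assuming POSIX fcntl.flock and a
separate companion lock file; the paths and the table node name below are
illustrative, not taken from the original posts:

import fcntl
import tables  # PyTables 2.x API: openFile (renamed open_file in 3.x)

DATA_FILE = 'stations.h5'        # illustrative data file
LOCK_FILE = DATA_FILE + '.lock'  # separate lock file; carries no data

def write_update(rows):
    # Writer: take an exclusive lock, write, flush, then release.
    with open(LOCK_FILE, 'w') as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # blocks while anyone holds the lock
        try:
            h5 = tables.openFile(DATA_FILE, 'a')
            try:
                h5.root.stations.append(rows)  # illustrative table node
                h5.flush()                     # make the update visible on disk
            finally:
                h5.close()
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)

def read_snapshot():
    # Reader: take a shared lock and re-open the file for every request.
    with open(LOCK_FILE, 'w') as lock:
        fcntl.flock(lock, fcntl.LOCK_SH)  # readers may overlap; writers may not
        try:
            h5 = tables.openFile(DATA_FILE, 'r')  # fresh open avoids stale caches
            try:
                return h5.root.stations.read()
            finally:
                h5.close()
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)

Note that flock locks are advisory: the scheme only works if every process
that touches the file follows the same protocol.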

HTH,

···

--
Francesc Alted

Hello,

I'd like to know if there is any news about this problem.
Is this a real bug?

Thanks a lot,

Cyril.

···

[clip]

Francesc Alted said the following on 9/12/2009 3:20 AM:

A Friday 11 September 2009 19:30:05 David E. Sallis escrigué:

[clip]

In general it is a bad idea to run several writer/reader processes over the same HDF5 file: while one process is reading the file, another may be updating it, producing weird errors like the ones you are seeing.

The solution would be to lock the file while writing and unlock it only after a flush over the file has been performed.

Francesc, that does help. I suspected that my technique was the problem, and it's helpful to have it confirmed. I will implement your suggestion and see if it resolves the issue. Many thanks.

--David


Cyril,

It was a documentation bug.

Please see updated http://www.hdfgroup.org/HDF5/doc/HL/RM_H5LT.html#H5LTread_dataset_string

But I think the function name is misleading, and we do have a problem with this API since it behaves differently from the similar APIs for the numeric types.

It turns out that H5LTmake_dataset_string and its Fortran counterpart create a dataset with one element of type H5T_C_S1 whose size equals the size of the string passed by the application (aside: for numeric types, one can create/read n-dimensional datasets). This behavior is not documented and needs to be fixed.
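For illustration only, here is a rough reproduction of that layout using h5py rather than the H5LT API itself (the file and dataset names are made up):

import numpy as np
import h5py

s = "mystring"
with h5py.File('lt_string.h5', 'w') as f:
    # A scalar (one-element) dataset whose fixed-length string type is as
    # long as the whole string: the layout described above for
    # H5LTmake_dataset_string, reproduced here with h5py.
    f.create_dataset('note', data=np.array(s, dtype='S%d' % len(s)))

with h5py.File('lt_string.h5', 'r') as f:
    ds = f['note']
    print(ds.shape, ds.dtype)  # () |S8  : a scalar, not an array of strings
    print(ds[()])              # b'mystring'

Because the dataset is scalar, there is nothing for a dims argument to select, which matches the reported behavior of reading back a single string.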

Do you need an API (or APIs) that creates/reads back an array of fixed-length strings? The current workaround is to use the HDF5 Fortran library if you need this functionality.

Elena

···

On Sep 25, 2009, at 9:48 AM, cyril giraudon wrote:

Hello,

I'd like to know if there is any news about this problem.
Is this a real bug?

Thanks a lot,

Cyril.

[clip]

A Monday 14 September 2009 15:44:29 David E. Sallis escrigué:

Francesc Alted said the following on 9/12/2009 3:20 AM:

[clip]

Francesc, that does help. I suspected that my technique was the problem,
and it's helpful to have it confirmed. I will implement your suggestion
and see if it resolves the issue. Many thanks.

You are welcome. At any rate, the following ticket may be of interest to you:

http://pytables.org/trac/ticket/185

···

--
Francesc Alted

Hi David,

···

On Sep 14, 2009, at 8:44 AM, David E. Sallis wrote:

[clip]

Francesc, that does help. I suspected that my technique was the problem, and it's helpful to have it confirmed. I will implement your suggestion and see if it resolves the issue. Many thanks.

Francesc's evaluation and solution are indeed correct. We are working to enable "single-writer/multiple-reader" (SWMR) access to HDF5 files and will have a [possibly limited] version working in the next major release (1.10.0), but that's a while away, so you should use the current solution until SWMR access is released.

Quincey

Hello,

Thank you for your answer.

As for a new API for string dataset handling, I think it would be
very useful; it would be the real equivalent of the
h5ltread_dataset_<number> APIs.

Best regards,

Cyril.

----- Original Mail -----

···

De: "Elena Pourmal" <epourmal@hdfgroup.org>
À: hdf-forum@hdfgroup.org
Envoyé: Vendredi 25 Septembre 2009 20h59:02 GMT +01:00 Amsterdam / Berlin / Berne / Rome / Stockholm / Vienne
Objet: Re: [Hdf-forum] possible bug in h5ltread_dataset_string_f

Cyril,

[clip]

Do you need an API (or APIs) that creates/reads back an array of fixed-length
strings? The current workaround is to use the HDF5 Fortran library if
you need this functionality.

Elena

[clip]

A Monday 14 September 2009 17:56:04 Francesc Alted escrigué:

You are welcome. At any rate, the following ticket may be of interest to you:

http://pytables.org/trac/ticket/185

...which reminds me of the metadata issue in HDF5 itself. Given that this
ticket was filed a year ago, chances are that it has been addressed. Can anyone
from the HDF crew confirm?

Thanks,

···

--
Francesc Alted

Quincey Koziol said the following on 9/15/2009 8:55 AM:

Francesc's evaluation and solution are indeed correct. We are working to enable "single-writer/multiple-reader" (SWMR) access to HDF5 files and will have a [possibly limited] version working in the next major release (1.10.0), but that's a while away, so you should use the current solution until SWMR access is released.

Quincey, thank you. I can report that I have eliminated my problem using Francesc's recommendation. I look forward with eagerness to your SWMR implementation in a future release!

--David


Francesc Alted said the following on 9/14/2009 11:00 AM:

A Monday 14 September 2009 17:56:04 Francesc Alted escrigué:

You are welcome. At any rate, the following ticket may be of interest to you:

http://pytables.org/trac/ticket/185

...which reminds me of the metadata issue in HDF5 itself. Given that this ticket was filed a year ago, chances are that it has been addressed. Can anyone from the HDF crew confirm?

I submit that it has been addressed by someone. I'm using PyTables 2.1.1; the file object returned by openFile() has a fileno() method, and I'm using Python 2.6.2's fcntl module (specifically fcntl.flock()) on my HDF5 file with no complaints. The fcntl documentation states that its functions accept any file object that has a fileno() method.
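A minimal sketch of that approach, taking the lock through the descriptor
PyTables exposes (the path and the table node name are illustrative):

import fcntl
import tables  # PyTables 2.x: openFile() was renamed open_file() in 3.x

# Reader side: take a shared lock directly on the HDF5 file's descriptor.
fp = tables.openFile('sps_data.h5', 'r')  # illustrative path
fcntl.flock(fp.fileno(), fcntl.LOCK_SH)   # readers may overlap; writers are excluded
try:
    data = fp.root.stations.read()        # illustrative table node
finally:
    fcntl.flock(fp.fileno(), fcntl.LOCK_UN)
    fp.close()

A writer would take fcntl.LOCK_EX the same way and release it only after
flush(). Because this lock is acquired after openFile() has already read the
file's metadata, a separate lock file taken before opening (as sketched
earlier in the thread) remains the more robust variant.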


A Monday 14 September 2009 18:13:57 David E. Sallis escrigué:

Francesc Alted said the following on 9/14/2009 11:00 AM:
[clip]

I submit that it has been addressed by someone. I'm using PyTables 2.1.1;
the file object returned by openFile() has a fileno() method, and I'm using
Python 2.6.2's fcntl module (specifically fcntl.flock()) on my HDF5 file
with no complaints. The fcntl documentation states that its functions
accept any file object that has a fileno() method.

Oh sorry if I confused you. Yes, PyTables 2.1.1 does support the fileno()
method. What I meant is that the code sent by fullung in his note:

http://pytables.org/trac/ticket/185#comment:4

still crashes, and that this may be a problem with HDF5 itself. See:

http://pytables.org/trac/ticket/185#comment:5

Cheers,

···

--
Francesc Alted

A Monday 14 September 2009 18:00:13 Francesc Alted escrigué:

[clip]

...which reminds me of the metadata issue in HDF5 itself. Given that this
ticket was filed a year ago, chances are that it has been addressed. Can
anyone from the HDF crew confirm?

OK, I've just tried it, and the error is still present in HDF5 1.8.3 (log attached).

So, you may definitely want to follow the suggestion in:

http://pytables.org/trac/ticket/185#comment:2

of using a separate lock file.

HTH,

error.log (12.2 KB)

···

--
Francesc Alted