libhdf5 repeatable segfault when writing to packet table.

Hello,

I am using an HDF5 packet table to save streaming data from a
photon-counting detector that outputs data as UDP packets. The basic
operation is a one-time open and initialization of a .h5 file with a
packet table storing fixed-length compound types. The data processing
loop then runs and continually stores packets to the file by appending
each packet to the open packet table and calling H5Fflush().
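
For reference, here is a minimal sketch of that open/append pattern using
the high-level packet table (H5PT) API. The compound type, field names, and
file/table names below are made up for illustration and are not taken from
tmif_hdf5.c; the real code does the equivalent setup.

```c
/* Minimal sketch of the one-time open/init step described above.
 * photon_event_t and all names are hypothetical, not from tmif_hdf5.c.
 * Build with: gcc -O0 -g sketch.c -lhdf5_hl -lhdf5 */
#include <stdint.h>
#include "hdf5.h"
#include "hdf5_hl.h"

typedef struct {
    uint32_t timestamp;
    uint16_t x;
    uint16_t y;
    uint16_t pulse_height;
} photon_event_t;

static hid_t file_id  = -1;
static hid_t table_id = -1;

/* One-time open and initialization of the .h5 file and packet table. */
int init_h5(const char *path)
{
    hid_t type_id;

    file_id = H5Fcreate(path, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    if (file_id < 0)
        return -1;

    /* Fixed-length compound type describing one detector event. */
    type_id = H5Tcreate(H5T_COMPOUND, sizeof(photon_event_t));
    H5Tinsert(type_id, "timestamp",    HOFFSET(photon_event_t, timestamp),    H5T_NATIVE_UINT32);
    H5Tinsert(type_id, "x",            HOFFSET(photon_event_t, x),            H5T_NATIVE_UINT16);
    H5Tinsert(type_id, "y",            HOFFSET(photon_event_t, y),            H5T_NATIVE_UINT16);
    H5Tinsert(type_id, "pulse_height", HOFFSET(photon_event_t, pulse_height), H5T_NATIVE_UINT16);

    /* Fixed-length packet table, chunk size 512 records, no compression (-1). */
    table_id = H5PTcreate_fl(file_id, "events", type_id, (hsize_t)512, -1);
    H5Tclose(type_id);
    return (table_id < 0) ? -1 : 0;
}
```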

During steady, high-rate data taking I consistently get a segfault in
libhdf5 at around the same file size each time (roughly 70 to 85 megabytes).
I estimate that I'm writing approximately 1636352 bytes/second (about
1.6 MB/s) to the .h5 file during this process, so the crash comes less than
a minute into a run.

The process doing the work is running on Debian "squeeze" Linux 32-bit
(kernel 2.6.32-5). The process runs as root with the best (most favorable)
nice priority and realtime IO priority at a value of 1. The disk being
written to is an Apacer APS25P6B032G-DT industrial SSD with good write
specs. This problem occurs with both versions of HDF5 that I have tried.
First, I used the libhdf5-serial-1.8.4 that comes as a standard package on
Debian squeeze. I then compiled HDF5 1.8.11 (with CFLAGS="-O0 -g"). I see
the same problem with 1.8.11 and finally ran it under GDB to produce the
backtrace attached to this email.

I've now also compiled hdf5-1.8.4-patch1 (with CFLAGS=-O0 -g) and gotten a
backtrace which is attached as 1.8.4-backtrace.txt.

The source code for this program is located here:

The file tmif_hdf5.c contains all of my hdf5-related code. The function
called repeatedly to store data is save_packet().
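
Roughly, the per-packet path looks like the following sketch (again using
the hypothetical identifiers from the sketch above, not the actual
definitions in tmif_hdf5.c):

```c
/* Hedged sketch of the per-packet path: append one record to the open
 * packet table and flush the file. Uses the hypothetical file_id/table_id
 * and photon_event_t from the previous sketch. */
int save_packet(const photon_event_t *event)
{
    /* Append one record to the packet table. */
    if (H5PTappend(table_id, 1, event) < 0)
        return -1;

    /* Flush after every append so data survives a crash of the DAQ process. */
    if (H5Fflush(file_id, H5F_SCOPE_LOCAL) < 0)
        return -1;

    return 0;
}
```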

Does anyone have advice on further debugging, or ideas about why this
always crashes in the same way?

Thanks,
~Nick

backtrace.txt (1.48 KB)

1.8.4-backtrace.txt (8.2 KB)


--
Nicholas Nell
Professional Research Assistant
University of Colorado
nicholas.nell@colorado.edu
303-492-5661