Memory leak while reading a large number of arrays

Hi all,
This is my first post in this list. I work at a company specialized in
engineering applications, and we have been using HDF for several years now
and so far we have been really happy with it.

Lately, though, we have been trying to track down a memory leak while reading
some large datasets. We first found the memory leak while trying to execute
some simulations in our application. Tracking down the memory usage, we
narrowed it down to our in-house routines that read/write HDF files. To
verify whether it was a problem with our code (more likely) or a problem in
the HDF library (highly unlikely), we created some sample code that only uses
the HDF library routines and that reads a file similar to the one where we
originally found the problem. Unfortunately, the memory leak still occurs.
I'm writing here because we are out of ideas on how to figure this problem
out, so perhaps you can shed some light on the matter and point us in the
right direction.

The file layout we use in this case is (roughly) as follows:

/Timestep_00000
    /GridFunctions
        /GridFunction_00000
            /values
        ...
        ...
        /GridFunction_00016
            /values

/Timestep_00001
    ...
/Timestep_00500
    ...

Above, everything is a group, except for the member "values", which is a
50,000 x 1 dataset of doubles.

As you can see, we have 501 "Timestep" root groups, each containing 17
datasets. We try to read this as follows:

1. Pre-allocate 17 buffers, each one being able to accommodate an entire
dataset;
2. Go over each time-step, and read the 17 buffers.

Measuring the memory during each Timestep read (i.e., the reading of the 17
datasets inside that Timestep), we see the memory keep accumulating, until by
the end of the read of the last Timestep it has grown by over 100 MB. Since we
always use the same buffers, we have no idea what the problem is. The sample
routine we are using for reading is as follows:

    void Read( std::string p_name, void* buffer )
    {
        // error checking suppressed for simplicity
        // m_file_id is the id of the file, obtained by H5Fopen
        hid_t id = H5Dopen2( m_file_id, p_name.c_str(), H5P_DEFAULT );
        H5Dread( id, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, buffer );
        H5Dclose( id );
    }
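
The surrounding loop is essentially the sketch below (the ReadAllTimesteps
name, the path formatting, and the buffer handling are illustrative rather
than our production code; error handling is again omitted):

    // Illustrative sketch of steps 1 and 2 above. Assumes this function and
    // Read() are members of the same class that holds m_file_id.
    #include <cstdio>
    #include <string>
    #include <vector>

    void ReadAllTimesteps()
    {
        const int    num_timesteps     = 501;
        const int    num_gridfunctions = 17;
        const size_t dataset_size      = 50000;

        // 1. Pre-allocate one buffer per GridFunction dataset.
        std::vector< std::vector<double> > buffers(
            num_gridfunctions, std::vector<double>( dataset_size ) );

        // 2. Walk over every timestep and read its 17 datasets into the
        //    same pre-allocated buffers, reusing them for each timestep.
        for ( int t = 0; t < num_timesteps; ++t )
        {
            for ( int g = 0; g < num_gridfunctions; ++g )
            {
                char path[128];
                std::sprintf( path,
                    "/Timestep_%05d/GridFunctions/GridFunction_%05d/values",
                    t, g );
                Read( path, &buffers[g][0] );
            }
        }
    }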

We even tried using the same buffer for all datasets, but we still see the
same amount of leaked memory.

Does anyone have any idea what we may be doing wrong here?

Thanks and Cheers,

···

--
Bruno Oliveira
bruno.oliveira@esss.com.br
ESSS - Engineering Simulation and Scientific Software
http://www.esss.com.br

Hi Bruno

How do you measure the 'leak'?

Regards,

-- dimitris

···

2009/6/26 Bruno Oliveira <bruno.oliveira@esss.com.br>

[clip]

Hi Bruno,

On Thursday 25 June 2009 23:29:38, Bruno Oliveira wrote:

[clip]

The file layout we use in this case is (roughly) as follows:

[clip]

Above, everything is a group, except for the member "values", which is a
50,000 x 1 dataset of doubles.

As you can see, we have 501 "Timestep" root groups, each containing 17
datasets. We try to read this as follows:

Hmm, that accounts for 17*501 = 8517 datasets, plus the groups that contain
them. In my experience, you must be ready to see hundreds of MB consumed by
HDF5 when you walk over tens of thousands of groups/datasets.

1. Pre-allocate 17 buffers, each one being able to accommodate an entire
dataset;
2. Go over each time-step, and read the 17 buffers.

Measuring the memory during each Timestep read (i.e., the reading of the 17
datasets inside that Timestep), the memory keeps accumulating, until by the
end of the read of the last Timestep it is over 100Mb. Since we use always
the same buffers, we have no idea of what the problem is. The sample
routine we are using for reading is as follows:

As I see it, 100 MB can be expected for that many nodes, so it may well be
HDF5's 'fault'. Perhaps there is a way to instruct HDF5 not to consume so much
memory in these scenarios, but in general, I recommend not putting too many
groups in a single file.

At any rate, it always helps if you can submit a sample of the code
reproducing this behaviour.

···

--
Francesc Alted


Hi Dimitris, thanks for answering,
We are on Windows, so we use the GetProcessMemoryInfo function from the
Windows API, calling it before and after the aforementioned function to
check the memory consumption.
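
Concretely, the measurement is along these lines (a minimal sketch only; the
helper name is made up and error handling is reduced to returning zero):

    // Sketch of how we sample the process memory with GetProcessMemoryInfo;
    // we call this before and after the read and compare the two values.
    #include <windows.h>
    #include <psapi.h>   // GetProcessMemoryInfo; link against psapi.lib

    static SIZE_T CurrentWorkingSetBytes()
    {
        PROCESS_MEMORY_COUNTERS counters;
        if ( GetProcessMemoryInfo( GetCurrentProcess(),
                                   &counters, sizeof( counters ) ) )
            return counters.WorkingSetSize;   // bytes currently resident
        return 0;
    }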

Thanks,

···

On Thu, Jun 25, 2009 at 6:59 PM, Dimitris Servis <servisster@gmail.com> wrote:

Hi Bruno

How do you measure the 'leak'?

Regards,

-- dimitris

2009/6/26 Bruno Oliveira <bruno.oliveira@esss.com.br>

[clip]

--
Bruno Oliveira
bruno.oliveira@esss.com.br
ESSS - Engineering Simulation and Scientific Software
+55 (48) 3953-0010

Hi Francesc, thanks for answering,

On Thursday 25 June 2009 23:29:38, Bruno Oliveira wrote:
> The file layout we use in this case is (roughly) as follows:
[clip]
> Above, everything is a group, except for the member "values", which is a
> 50,000 x 1 dataset of doubles.
>
> As you can see, we have 501 "Timestep" root groups, each containing 17
> datasets. We try to read this as follows:

Hmm, that accounts for 17*501 = 8517 groups. In my experience, you must be
ready to see hundreds of MB consumed by the HDF5 when you walk over tens of
thousands of groups/datasets.

I understand it might lead to memory consumption while reading the groups,
but:

1) We open the dataset "directly", as opposed to opening each group along
the path until reaching the dataset. Does HDF5 open the intermediate groups
when accessing a dataset? (I assumed it didn't.)
2) While it is understandable that opening a group consumes memory,
shouldn't the allocated memory be released once we close it?

As I see it, 100 MB can be expected for that many nodes, so it may well be
HDF5's 'fault'. Perhaps there is a way to instruct HDF5 not to consume so much
memory in these scenarios, but in general, I recommend not putting too many
groups in a single file.

At any rate, it always helps if you can submit a sample of the code
reproducing this behaviour.

I will come up with a sample of the code later.

Thanks again!

Best Regards,

···

On Fri, Jun 26, 2009 at 12:02 PM, Francesc Alted <faltet@pytables.org> wrote:
--
Bruno Oliveira
bruno.oliveira@esss.com.br
ESSS - Engineering Simulation and Scientific Software
http://www.esss.com.br

Hello all,
Attached is a sample of code that tries to reproduce the behavior I
described previously (including a SCons script, but anyone can compile it
with their own tools). Unfortunately, I have not been able to reproduce it
fully. The reason is that I was using in-house Python bindings for HDF5,
which I didn't mention because I thought it was irrelevant, but apparently
it matters.

I have tried using h5py to see if I could get the same "leak", and indeed I
obtained the same behavior (around 50 MB of extra memory is allocated by
the end of the file reading). I have no idea why I get this leak with the
Python bindings (both our in-house bindings and h5py) but not with the
attached C++ program. Does anyone have any ideas?

Thanks!

Best Regards,

SConstruct (405 Bytes)

test_leak.cpp (5.5 KB)

···

On Fri, Jun 26, 2009 at 1:58 PM, Bruno Oliveira <bruno.oliveira@esss.com.br> wrote:

[clip]

--
Bruno Oliveira
bruno.oliveira@esss.com.br
ESSS - Engineering Simulation and Scientific Software
+55 (48) 3953-0010
http://www.esss.com.br

Hi Bruno,

Hello all,

Attached is a sample of code that tries to reproduce the behavior I described previously (including a SCons script, but anyone can compile it with their own tools). Unfortunately, I have not been able to reproduce it fully. The reason is that I was using in-house Python bindings for HDF5, which I didn't mention because I thought it was irrelevant, but apparently it matters.

I have tried using h5py to see if I could get the same "leak", and indeed I obtained the same behavior (around 50 MB of extra memory is allocated by the end of the file reading). I have no idea why I get this leak with the Python bindings (both our in-house bindings and h5py) but not with the attached C++ program. Does anyone have any ideas?

  I can't speak to possible Python issues, but I'll mention a possible HDF5 library-related one: we use internal "free list stores" to recycle commonly used data structures, etc. Generally, this is a performance win, at the expense of using more memory. However, it does tend to fool some memory leak tracking tools into thinking the memory is gone forever (we do release it all when our atexit() callback is called), or at least increasing indefinitely (it's not).

  You can compile the HDF5 library to avoid using the free list stores completely by defining the "H5_NO_FREE_LISTS" macro, or you can call H5garbage_collect() at runtime to release all the memory in the free lists back to the OS. Some memory checking tools also have other false positive "errors" that can be suppressed by defining the "H5_USING_MEMCHECKER" macro during compile time (which also disables the free list stores). For a UNIX build, this can be enabled with the "--enable-using-memchecker" configure option.
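
  For example, calling H5garbage_collect() after each timestep's reads should return the free-list blocks to the system. A minimal sketch (the surrounding function name is made up, and the actual dataset reads are elided):

    // Sketch: release HDF5's internal free-list memory back to the OS
    // between timesteps. H5garbage_collect() frees the blocks currently
    // held on the library's free lists; H5set_free_list_limits() can also
    // be used to cap how much the free lists are allowed to hold.
    #include "hdf5.h"

    void ReadTimestepAndCollect( /* ... timestep arguments ... */ )
    {
        // ... the H5Dopen2 / H5Dread / H5Dclose calls for the 17 datasets ...

        if ( H5garbage_collect() < 0 )
        {
            // A negative return value indicates the garbage collection failed.
        }
    }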

  Quincey

···

On Jun 30, 2009, at 9:12 AM, Bruno Oliveira wrote:

[clip]

Hi,

I have tried using h5py to see if I could get the same "leak", and indeed I
obtained the same behavior (around 50 MB of extra memory is allocated by
the end of the file reading). I have no idea why I get this leak with the
Python bindings (both our in-house bindings and h5py) but not with the
attached C++ program. Does anyone have any ideas?

I'm not able to replicate this from h5py using the file structure you
describe. How are you measuring the memory leak? In the attached
script (using Francesc's stat function) I see the memory level slowly
go up from 20M to 30M over the first few hundred timesteps, but then
it levels off at 31040 kB and stays there for the rest of the read.
Presumably this is just HDF5's (or Python's) internal memory
management. Once it plateaus at 30M I can't get it to go any higher,
even by increasing the number of timesteps to 1000.

If you can replicate the leaking behavior from h5py I'd be happy to
investigate further.

Andrew Collette

readin.py (2.15 KB)

On Tuesday 30 June 2009 16:12:22, Bruno Oliveira wrote:

[clip]

I have tried using h5py to see if I could get the same "leak", and indeed I
obtained the same behavior (around 50 MB of extra memory is allocated by
the end of the file reading). I have no idea why I get this leak with the
Python bindings (both our in-house bindings and h5py) but not with the
attached C++ program. Does anyone have any ideas?

No idea, but it seems that PyTables is not bitten by this. Here is the
output of my testbed (attached):

$ PYTHONPATH=. python read-many.py --create-file -s /tmp/read-many.h5
Creating file... /tmp/read-many.h5
WallClock time: 62.215638876
Memory usage: ******* File written *******
VmSize: 151944 kB VmRSS: 34004 kB
VmData: 29776 kB VmStk: 180 kB
Reading file... /tmp/read-many.h5
WallClock time: 68.3129489422
Memory usage: ******* Read iteration #0 *******
VmSize: 150976 kB VmRSS: 34280 kB
VmData: 28808 kB VmStk: 180 kB
WallClock time: 74.0996758938
[clip]
Memory usage: ******* Read iteration #9 *******
VmSize: 150976 kB VmRSS: 34280 kB
VmData: 28808 kB VmStk: 180 kB

So, at least Python does not seem to be the guilty party here (unless my
testbed does not match your intended use).
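
The VmSize/VmRSS figures above are presumably the fields of the same name
read from /proc/self/status on Linux; for anyone reproducing the measurement
in C++, a minimal sketch would be:

    // Minimal sketch: print the VmSize/VmRSS lines of /proc/self/status,
    // which on Linux report the process's virtual and resident memory.
    #include <fstream>
    #include <iostream>
    #include <string>

    void PrintMemoryStatus()
    {
        std::ifstream status( "/proc/self/status" );
        std::string line;
        while ( std::getline( status, line ) )
        {
            if ( line.compare( 0, 7, "VmSize:" ) == 0 ||
                 line.compare( 0, 6, "VmRSS:" ) == 0 )
                std::cout << line << std::endl;
        }
    }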

read-many.py (2.37 KB)

···

--
Francesc Alted

Andrew,
Sorry, I was incorrect: the memory difference is 30 MB with both our in-house
bindings and h5py, not 50 MB as I wrote previously, so your result is
consistent with mine.

I'm also arriving at the conclusion that this is an issue with Python's
memory management, or with HDF5's handling of the free lists as explained by
Quincey Koziol. :confused:

Thanks for the help.

Best Regards,

···

On Tue, Jun 30, 2009 at 4:42 PM, Andrew Collette <andrew.collette@gmail.com> wrote:

[clip]

--
Bruno Oliveira
bruno.oliveira@esss.com.br
ESSS - Engineering Simulation and Scientific Software
+55 (48) 3953-0010

On Tuesday 30 June 2009 21:59:42, Bruno Oliveira wrote:

Andrew,
Sorry, I was incorrect: the memory difference is 30 MB with both our in-house
bindings and h5py, not 50 MB as I wrote previously, so your result is
consistent with mine.

I'm also arriving at the conclusion that this is an issue with Python's
memory management, or with HDF5's handling of the free lists as explained by
Quincey Koziol. :confused:

Mmh, since your C++ program does not take so much memory (how much, BTW?), I
don't think this is going to be an HDF5 issue. I tend to think that it is
more a 'feature' of the Python interpreter caching objects internally.

At any rate, I've tried with 10 times more nodes in the HDF5 file and the
memory usage rose from 30 MB to 36 MB. So, yeah, the memory usage increases,
but I don't think that 36 MB is that much for handling tens of thousands of
nodes.

Cheers,

···

--
Francesc Alted


Francesc,

On Tuesday 30 June 2009 21:59:42, Bruno Oliveira wrote:
> I'm also arriving at the conclusion that this is an issue with Python's
> memory management, or with HDF5's handling of the free lists as explained
> by Quincey Koziol. :confused:

Mmh, since your C++ program does not take so much memory (how much, BTW?), I
don't think this is going to be an HDF5 issue. I tend to think that it is
more a 'feature' of the Python interpreter caching objects internally.

My C++ program finishes with only 4 KB of extra memory (exactly 4,096 bytes),
which is certainly negligible. I'm also inclined to think this is an issue
with the Python interpreter.

At any rate, I've tried with 10 times more nodes in the HDF5 file and the
memory usage rose from 30 MB to 36 MB. So, yeah, the memory usage increases,
but I don't think that 36 MB is that much for handling tens of thousands of
nodes.

Yeah, I also observed this behavior.

Thanks for replying!

Cheers,

···

On Wed, Jul 1, 2009 at 6:35 AM, Francesc Alted <faltet@pytables.org> wrote:
--
Bruno Oliveira
bruno.oliveira@esss.com.br
ESSS - Engineering Simulation and Scientific Software
+55 (48) 3953-0010
http://www.esss.com.br

Mmh, since your C++ program does not take so much memory (how much, BTW?), I
don't think this is going to be an HDF5 issue. I tend to think that it is
more a 'feature' of the Python interpreter caching objects internally.

My C++ program finishes with only 4 KB of extra memory (exactly 4,096 bytes),
which is certainly negligible. I'm also inclined to think this is an issue
with the Python interpreter.

It may simply be the default behavior of the Python memory-allocation
system. Like other high-level languages, Python manages its own pool
of memory in order to reduce overhead and fragmentation:

http://docs.python.org/whatsnew/2.3.html#pymalloc-a-specialized-object-allocator

The fact that memory use saturates at about 30M seems to support
this... I think this is reasonable behavior given the complex memory
demands of a system like Python. If you're trying to run 100 of these
scripts simultaneously you might want to investigate how to turn it
off and use pure malloc/free (I believe there is a compile-time switch
when building Python), but otherwise it seems harmless.

Andrew

···
