Dataset chunking overhead

Hello all

I am writing a 1D chunked dataset without compression on Windows XP x64. The
dataset is created with a chunk size of 1024 B and is 100 MB in total. The
memory overhead for writing it is a hefty 600 MB! If I make the chunk size
smaller, the overhead seems to grow roughly exponentially. Is this normal?
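For reference, the setup boils down to something like the sketch below. The
file name, dataset name, 1-byte element type and the single full-dataset
H5Dwrite are just my simplifications for illustration; the point is only the
100 MB dataset with a 1024 B chunk layout (error checking omitted):

#include <stdlib.h>
#include <string.h>
#include "hdf5.h"

#define NELEMS   100000000   /* 100 MB of 1-byte elements (assumed type) */
#define CHUNKLEN 1024        /* 1024 B chunks => 1024 elements per chunk */

int main(void)
{
    hsize_t dims[1]  = { NELEMS };
    hsize_t chunk[1] = { CHUNKLEN };

    unsigned char *buf = malloc(NELEMS);   /* the 100 MB of data */
    memset(buf, 0, NELEMS);

    hid_t file  = H5Fcreate("chunked.h5", H5F_ACC_TRUNC,
                            H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 1, chunk);          /* chunked layout, no filters */

    hid_t dset  = H5Dcreate2(file, "data", H5T_NATIVE_UCHAR, space,
                             H5P_DEFAULT, dcpl, H5P_DEFAULT);

    /* Writing the whole selection in one call is where the large
       chunk/memory map structures described below get allocated. */
    H5Dwrite(dset, H5T_NATIVE_UCHAR, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

    H5Dclose(dset); H5Pclose(dcpl); H5Sclose(space); H5Fclose(file);
    free(buf);
    return 0;
}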

Half of the memory seems to be allocated at H5Dchunk.c, line 705, in
H5D_create_chunk_file_map_hyper, which iterates over sel_points (100000000
here), striding by 1024, and creates lots of little objects amounting to
about 2888 B per chunk. So, in general, for every MB of data I want to write,
this function needs about 3 MB of memory (!).
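To put numbers on that (assuming the ~2888 B really is per chunk):

  100,000,000 points / 1024 per chunk  ->  ~97,700 chunks
  ~97,700 chunks * ~2888 B             ->  ~280 MB

which is indeed about half of the 600 MB.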

Another 273 MB are needed in H5D_create_chunk_mem_map_hyper a bit further
down. This one creates another batch of small objects, I guess around
2448 B per chunk again.
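Again, assuming that really is per chunk: ~97,700 chunks * ~2448 B comes to
roughly 240 MB, which is at least in the right ballpark for the 273 MB I see.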

So, even granting that writing 100 MB in 1024 B chunks is a perfectly valid
thing to want to do, I end up needing about 6 MB of supporting structures for
every MB I actually write! Isn't that a little too much?

Thanks

-- dimitris