chunking data of 'infinite' dataset to optimize read speed

Hi,

I'm considering using HDF5 in a data acquisition project -> total
bandwidth is 40 MB/s
(512 channels, 40k u16 samples per second per channel). As datasets
tend to be long (on the order of a few hours / a few hundred gigabytes),
I would like to give the data layout some thought.

On the 'writing' side I have some spare I/O and CPU cycles, so I may
realign the data when writing to speed up later analysis.
So far I have found that partial reads still require the full chunk to
be read back anyway...
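
(To put a number on that: with the 512x10000 u16 chunks from my first
try below, that means

512*10000*2   % = 10240000 bytes, roughly 10 MB per chunk

read back even when I only care about 64 of the 512 rows.)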

My 512 channels are physically 8x64 channels, and the dominant mode of
reading would be getting one or two of those groups for further
analysis, which is I/O bound, hence the need to optimize for reading.
(If we call the 'channels' axis vertical and the 'time' axis horizontal
and assume a 2D representation, I'm interested in the fastest way of
getting horizontal slices of an 'infinite' set of data.)
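
Concretely, the read I want to make fast looks roughly like this
(group g out of the 8, a ~1 s window starting at sample t0; the file
name is a placeholder, '/neuro' is the dataset name from my test file):

file = 'M:\test_data\test_layout.h5';
g  = 2;                          % which 64-channel group
t0 = 1 + 40000*120;              % e.g. two minutes into the recording
a  = h5read(file, '/neuro', [1+64*(g-1), t0], [64, 40000]);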

My first try was a 40-gigabyte dataset with 512x10000 chunking (four
chunks per second), exactly the type of array I get from the DAQ
system, processed with MATLAB (for the test I read 100 seconds' worth
of data, clearing the Windows working set between runs to force 'cold'
reads):

file = ('M:\test_data\flat_layout.h5')

file =

M:\test_data\flat_layout.h5

tic ; for i=1:400 ; a = h5read(file,'/neuro',[1,1+10000*(i-1)],[512,10000]); end ; toc

Elapsed time is 21.273919 seconds.

tic ; for i=1:400 ; a = h5read(file,'/neuro',[1,1+10000*(i-1)],[64,10000]); end ; toc

Elapsed time is 277.718335 seconds.

Am I doing the reads correctly, or am I reading the data in a VERY wrong way?
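
One thing I was not sure about is the chunk cache: a single 512x10000
u16 chunk is ~10 MB, while HDF5's default chunk cache is 1 MB per
dataset. I could open the dataset with a larger cache through MATLAB's
low-level interface, roughly like this (the cache numbers are just
guesses, and I'm not sure it even matters for cold reads from disk):

dapl = H5P.create('H5P_DATASET_ACCESS');
H5P.set_chunk_cache(dapl, 12421, 64*1024*1024, 0.75);   % nslots, bytes, w0
fid  = H5F.open(file, 'H5F_ACC_RDONLY', 'H5P_DEFAULT');
dset = H5D.open(fid, '/neuro', dapl);
fspace = H5D.get_space(dset);
% low-level dimensions are in C order, i.e. [time channel] here (I think)
H5S.select_hyperslab(fspace, 'H5S_SELECT_SET', [0 0], [], [10000 64], []);
mspace = H5S.create_simple(2, [10000 64], []);
a = H5D.read(dset, 'H5ML_DEFAULT', mspace, fspace, 'H5P_DEFAULT');
H5S.close(mspace); H5S.close(fspace);
H5D.close(dset); H5F.close(fid); H5P.close(dapl);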

The dataset will be too large to fit in memory anyway; I'm interested
in building the structure in a way that makes
'cutting 64/128 channels out of the 512, in reasonable chunks of
roughly 1 s worth of data' faster.
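
What I am considering is chunking by channel group instead, something
like one 64-channel group x ~1 s per chunk (the numbers below are a
guess, not a tuned value, and the file name is a placeholder):

fname = 'M:\test_data\group_layout.h5';
h5create(fname, '/neuro', [512 Inf], ...
         'Datatype', 'uint16', 'ChunkSize', [64 40000]);

% writer side: buffer ~1 s of DAQ blocks (4 x [512 x 10000]) and write
% them in one go, so every chunk gets written whole
buf = zeros(512, 40000, 'uint16');   % filled from the DAQ in practice
for s = 1:nseconds                   % nseconds = however long we record
    h5write(fname, '/neuro', buf, [1, 1+40000*(s-1)], size(buf));
end

With that layout, 'give me group 3 for second s' should map to exactly
one chunk:

a = h5read(fname, '/neuro', [129, 1+40000*(s-1)], [64, 40000]);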

Thanks in advance for pointing out my mistakes

Michal


--
Michal Dwuznik