Which layout shall I use?

Hi,

I am receiving about 100 types of messages, and each message is less than 8K. I may write over 10G of messages into an H5 file. Which layout gives the best performance: compact or chunked datasets?

Thanks,
Rodger

How do you plan to access those messages later? By time range? By message type? (=> 100 datasets?) Will you be reading blocks of messages or individual messages? Do you mean read or write performance, or both? What's your performance goal, and what is your hardware capable of delivering? What are your platform and environment?

G.

We are running it on Windows, and I thought the compact layout would give better performance for both writing and reading. We haven't gotten SWMR to work yet. At this point, we would like to sustain 50 Mb/s on writes.

Your plan is to store each message as an individual (compact) dataset, right? The problem with that approach is that for large numbers of messages the write/read performance is not constant. The more messages you have, the greater the metadata overhead. (Just run ls -ltR on a file system with a large number of files.) You won’t notice anything for 100 messages, but you’re talking about 10^10 messages.

Yes, chunked layout makes no sense for an 8K dataset, but compact layout is not going to save your bacon once you are looking at millions of these. Since you have a relatively small number of message types, why not have a dataset for each message type? Then you’re looking at about 100 datasets (plus auxiliary datasets for indexing, perhaps). Make them extendible (chunked + H5S_UNLIMITED dimension) and apply compression, if that makes sense.
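
A minimal sketch of that layout, using the HDF5 C API from C++ (the file name, dataset name, chunk size, and compression level are illustrative assumptions, and error checking is omitted):

```cpp
// Sketch: one extendible, chunked, compressed dataset per message type.
#include <hdf5.h>

int main() {
    hid_t file = H5Fcreate("messages.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

    // 1-D dataspace: start empty, allow unlimited growth along dimension 0.
    hsize_t dims[1]    = {0};
    hsize_t maxdims[1] = {H5S_UNLIMITED};
    hid_t space = H5Screate_simple(1, dims, maxdims);

    // H5S_UNLIMITED requires a chunked layout; the chunk size (in elements)
    // is a placeholder -- tune it so chunks are a few hundred KB to a few MB.
    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    hsize_t chunk[1] = {4096};
    H5Pset_chunk(dcpl, 1, chunk);
    H5Pset_deflate(dcpl, 6);  // optional gzip compression, "if that makes sense"

    // One such dataset per message type; a compound datatype describing the
    // message fields would replace H5T_NATIVE_CHAR here.
    hid_t dset = H5Dcreate2(file, "/msg_type_00", H5T_NATIVE_CHAR, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    H5Dclose(dset); H5Pclose(dcpl); H5Sclose(space); H5Fclose(file);
    return 0;
}
```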

OK?
G.

With large datasets the best performance comes from the chunked layout; the trick is to find a good representation where the data is not scattered into small pieces (directory-tree-style access will cost you). With the right strategy you should be able to attain 90%-98% of the storage system's throughput.

Four years ago it was 500 MB/s with direct chunk write on a Lenovo X250; on my recent Lenovo X1 it is 2-3 GB/s.
direct chunk write based packet table example
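
For reference, a hedged sketch of what direct chunk write looks like with H5Dwrite_chunk (HDF5 1.10.3 and later; the dataset name and chunk size are illustrative, and the dataset is assumed to carry no filters so the buffer can be raw bytes):

```cpp
// Direct chunk write bypasses datatype conversion and the filter pipeline
// and writes one pre-assembled chunk at a chunk-aligned logical offset.
#include <hdf5.h>
#include <vector>

int main() {
    hid_t file = H5Fcreate("direct.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

    hsize_t dims[1] = {0}, maxdims[1] = {H5S_UNLIMITED}, chunk[1] = {4096};
    hid_t space = H5Screate_simple(1, dims, maxdims);
    hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 1, chunk);          // chunked, no filters
    hid_t dset  = H5Dcreate2(file, "stream", H5T_NATIVE_CHAR, space,
                             H5P_DEFAULT, dcpl, H5P_DEFAULT);

    // Grow the dataset to cover the chunk we are about to write.
    hsize_t extent[1] = {4096};
    H5Dset_extent(dset, extent);

    std::vector<char> buf(4096, 'x');      // exactly one chunk of data
    hsize_t offset[1] = {0};               // chunk-aligned logical offset
    // filter_mask = 0: the buffer already matches the (empty) filter pipeline.
    H5Dwrite_chunk(dset, H5P_DEFAULT, 0, offset, buf.size(), buf.data());

    H5Dclose(dset); H5Pclose(dcpl); H5Sclose(space); H5Fclose(file);
    return 0;
}
```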

steve

Yes, that's how I am going to implement it: one dataset per message type, with a compound datatype.
I just saw the Packet Table API. Do you think a packet table would suit my case better? I didn't understand what benefits the Packet Table library provides.

Thanks G!
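
For illustration only (the record fields below are hypothetical, not from this thread), a compound datatype for a fixed-size message record could be declared like this:

```cpp
// Hypothetical message struct and the matching HDF5 compound type.
#include <hdf5.h>
#include <cstdint>

struct Message {
    std::int64_t timestamp;
    double       price;
    char         text[64];
};

int main() {
    hid_t str64 = H5Tcopy(H5T_C_S1);
    H5Tset_size(str64, 64);               // fixed-length 64-byte string field

    hid_t msg_t = H5Tcreate(H5T_COMPOUND, sizeof(Message));
    H5Tinsert(msg_t, "timestamp", HOFFSET(Message, timestamp), H5T_NATIVE_INT64);
    H5Tinsert(msg_t, "price",     HOFFSET(Message, price),     H5T_NATIVE_DOUBLE);
    H5Tinsert(msg_t, "text",      HOFFSET(Message, text),      str64);

    // ... pass msg_t as the datatype when creating the per-type dataset.
    H5Tclose(msg_t); H5Tclose(str64);
    return 0;
}
```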

The Packet Table API is OK for a proof-of-concept, but I wouldn’t recommend it for performance-sensitive scenarios. Look at @steven’s suggestion or take control via the C-API.

G.

If I use compound datatypes, the 100 message types will go into 100 datasets and the messages won't be scattered, right? 500 MB/s is impressive; was that with SWMR on? Thanks, Steve, for the sample code!!

In C or C++ you can easily scatter data in memory by using the wrong pattern. Using STL containers is a good start. Basically, you want your data contiguous in memory, of a homogeneous type, and grouped into HDF5 chunk-sized blocks before calling direct chunk write.
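
To make the "contiguous and homogeneous" point concrete, a tiny sketch (the Tick record and sizes are hypothetical):

```cpp
// A std::vector of a POD record type stores its elements back to back, so
// buf.data() can be handed to direct chunk write once buf holds exactly one
// chunk's worth of records.
#include <cstdint>
#include <vector>

struct Tick {            // hypothetical fixed-size message record
    std::int64_t ts;
    double       price;
};

int main() {
    constexpr std::size_t kRecordsPerChunk = 512;  // illustrative chunk size
    std::vector<Tick> buf;
    buf.reserve(kRecordsPerChunk);
    // ... fill buf; when buf.size() == kRecordsPerChunk, pass buf.data()
    // and buf.size() * sizeof(Tick) to H5Dwrite_chunk.
    return 0;
}
```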

If you want filters, you have to implement a blocking algorithm; h5::append does exactly that, so you don't have to write your own.
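
A sketch of the h5::append pattern, based on the published H5CPP examples (the file and dataset names are placeholders; this assumes H5CPP is installed):

```cpp
#include <h5cpp/all>

int main() {
    h5::fd_t fd = h5::create("stream.h5", H5F_ACC_TRUNC);
    // Extendible, chunked, compressed packet table; h5::append buffers
    // records internally and flushes full chunks through the filter chain.
    h5::pt_t pt = h5::create<double>(fd, "measurements",
            h5::max_dims{H5S_UNLIMITED}, h5::chunk{1024} | h5::gzip{9});
    for (double value = 0.0; value < 10.0; value += 1.0)
        h5::append(pt, value);
    return 0;  // RAII handles close the packet table and the file
}
```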

Instead of SWMR, use C++ thread support to serialise HDF5 writes; alternatively, a queue structure does the same (see the sketch below). ZeroMQ does a good job, or you can implement an SCTP-based robust solution with multi-node, multi-path features.
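
One way to serialise writes without SWMR, as an assumption-laden sketch (standard C++ only; the actual HDF5 calls are left as a comment): many producer threads enqueue messages, and a single thread owns the file handle and performs all writes.

```cpp
// Single-writer queue: only the consumer thread ever touches HDF5.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Message { std::vector<char> bytes; };

class WriterQueue {
    std::queue<Message> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
public:
    void push(Message msg) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(msg)); }
        cv_.notify_one();
    }
    void shutdown() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
    }
    // Single consumer: blocks until a message arrives or shutdown drains it.
    bool pop(Message& out) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return done_ || !q_.empty(); });
        if (q_.empty()) return false;
        out = std::move(q_.front()); q_.pop();
        return true;
    }
};

int main() {
    WriterQueue wq;
    std::thread writer([&] {
        Message msg;
        while (wq.pop(msg)) {
            // ... h5::append / H5Dwrite_chunk here; this thread alone
            // serialises all HDF5 access.
        }
    });
    wq.push(Message{{'h', 'i'}});  // producers call push() from any thread
    wq.shutdown();
    writer.join();
    return 0;
}
```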

best: steve