Rob,
Did you make any significant discoveries or progress regarding the GPFS tweaks on BG systems? Our machine will be open for use within the next week or so and I'd like to begin some profiling, so I'd be interested in any useful facts you've turned up that I ought to know about.
I'm concerned about how much the --enable-gpfs option is actually able to 'know' about the system (can we easily find out what the option does?). From my admittedly superficial understanding of the BG architecture, the compute nodes have their I/O calls forwarded off to the I/O nodes by kernel-level routines, so collective operations performed by HDF5 might actually reduce the effectiveness of the I/O by forcing the data to be shuffled around twice instead of once. Am I thinking along the right lines?
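One way I'm planning to check that is to time the same write twice, once with collective transfers and once with independent ones, and see which path the I/O forwarding layer favours. A minimal sketch (the file name, dataset shape, and per-rank sizes are just placeholders):

/* Sketch: time one H5Dwrite with collective vs. independent transfers.
 * File name, dataset shape, and per-rank size below are placeholders. */
#include <stdio.h>
#include <mpi.h>
#include <hdf5.h>

#define NCOLS 1024

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Open the file through the MPI-IO driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("probe.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* One row of NCOLS doubles per rank. */
    hsize_t dims[2]  = { (hsize_t)nprocs, NCOLS };
    hsize_t start[2] = { (hsize_t)rank, 0 };
    hsize_t count[2] = { 1, NCOLS };
    hid_t filespace = H5Screate_simple(2, dims, NULL);
    hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_DOUBLE, filespace,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
    hid_t memspace = H5Screate_simple(2, count, NULL);

    double buf[NCOLS];
    for (int i = 0; i < NCOLS; i++) buf[i] = (double)rank;

    /* Flip this between COLLECTIVE and INDEPENDENT between runs. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

    double t0 = MPI_Wtime();
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);
    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("write time: %f s\n", t1 - t0);

    H5Pclose(dxpl); H5Sclose(memspace); H5Sclose(filespace);
    H5Dclose(dset); H5Pclose(fapl); H5Fclose(file);
    MPI_Finalize();
    return 0;
}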
Ta
JB
We're exploring ways to get better MPI-IO performance out of our Blue
Gene systems running GPFS. HDF5 happens to have a nice collection of
GPFS-specific optimizations if you configure with --enable-gpfs.
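For context, those optimizations sit alongside the usual ROMIO hint machinery, which you can drive through HDF5's MPI-IO driver. A small sketch; the particular hint names and values below are illustrative assumptions, not tuned recommendations:

/* Sketch: handing ROMIO hints to HDF5's MPI-IO driver. The hint names and
 * values here are illustrative, not a recommendation for any one system. */
#include <mpi.h>
#include <hdf5.h>

hid_t create_with_hints(const char *name, MPI_Comm comm)
{
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "romio_cb_write", "enable");    /* collective buffering on writes */
    MPI_Info_set(info, "cb_buffer_size", "16777216");  /* 16 MiB aggregation buffer */

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, comm, info);
    hid_t file = H5Fcreate(name, H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    H5Pclose(fapl);
    MPI_Info_free(&info);
    return file;
}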
Before I spend much time experimenting with those options, I was
curious whether anyone has tried them with recent (gpfs-3.4 or gpfs-3.5)
versions of GPFS. I suspect they still work (the GPFS-specific
ioctls, I mean: I'm sure HDF5's implementation of them is fine), but
I would like to hear about others' experiences.
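To be concrete about which ioctls I mean: the hint path goes through gpfs_fcntl(). A rough sketch of issuing an access-range hint, with struct and constant names as I recall them from gpfs_fcntl.h (worth double-checking against your GPFS release):

/* Sketch: tell GPFS that [start, start+length) is about to be written, so it
 * can prefetch and acquire byte-range tokens ahead of the actual I/O.
 * Struct and constant names are from memory; verify against gpfs_fcntl.h. */
#include <stdio.h>
#include <string.h>
#include <gpfs_fcntl.h>

static int hint_write_range(int fd, long long start, long long length)
{
    struct {
        gpfsFcntlHeader_t hdr;
        gpfsAccessRange_t range;
    } arg;

    memset(&arg, 0, sizeof(arg));
    arg.hdr.totalLength  = sizeof(arg);
    arg.hdr.fcntlVersion = GPFS_FCNTL_CURRENT_VERSION;
    arg.range.structLen  = sizeof(arg.range);
    arg.range.structType = GPFS_ACCESS_RANGE;
    arg.range.start      = start;
    arg.range.length     = length;
    arg.range.isWrite    = 1;

    if (gpfs_fcntl(fd, &arg) != 0) {
        perror("gpfs_fcntl");
        return -1;
    }
    return 0;
}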
==rob
--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA