We are working on a NASA project that involves ICESat-2 data. We have chosen HSDS as a component of a proof of concept and hope to continue using it in a production deployment.
After successfully deploying in POSIX mode on a local machine, I am now attempting to deploy on AWS using Docker. The one caveat is that we are required to use AWS MFA.
To get around this, I have attached an IAM role to the EC2 instance to grant access to the S3 buckets. For this to work, there can be no .aws/credentials file, and the access keys must be unset in the environment after launching HSDS. I have also tried not setting the environment variables and instead putting the keys straight into the docker-compose files, but that doesn't seem to work.
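For reference, this is roughly how I clear the static credentials so the SDK has to fall back to the instance role (the variable names are the standard AWS ones; the check at the end is just a sanity test):

```shell
#!/bin/sh
# Clear any static AWS credentials so the SDK falls back to the
# EC2 instance profile for credentials.
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# Sanity check: confirm nothing is left in the environment.
if [ -z "${AWS_ACCESS_KEY_ID:-}" ] && [ -z "${AWS_SECRET_ACCESS_KEY:-}" ]; then
  echo "no static AWS credentials in environment"
fi
```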
HSDS launches fine (with a very standard override.yml, with just the AWS specifics altered), but after launching runall.sh, hsinfo errors out with HTTP 500 responses (after working for the first ~11 seconds). hstouch gives the exact same response, even before the 11-second mark.
In override.yml I have set aws_iam_role, which leads me to believe there should be some way to utilize this AWS option. I also tried altering runall.sh to not look for AWS access keys when using Docker and S3, but that doesn't seem to work either.
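For context, the AWS portion of my override.yml looks roughly like this (region, role name, and bucket name below are placeholders, not my actual values):

```yaml
# override.yml -- AWS-related settings only; everything else left at defaults
aws_region: us-east-1                            # placeholder region
aws_s3_gateway: http://s3.us-east-1.amazonaws.com
aws_iam_role: hsds_s3_access                     # placeholder; role attached to the EC2 instance
bucket_name: my-hsds-bucket                      # placeholder bucket
```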
Any ideas of what I should try, or pointers on what to look at, would be greatly appreciated. Right now I suspect this is related to the service containers not being able to access the S3 bucket. Does that sound right?