HSDS and MFA on AWS (S3 access issues?)

Hi,

We are working on a NASA project that involves ICESat-2 data. We have chosen HSDS as a component in a proof of concept and hope to continue using it when we deploy a working system.

After successfully deploying in POSIX mode on a local machine, I am now attempting to deploy on AWS using Docker. The one caveat is that we are required to use AWS MFA.

To get around this, I have attached an IAM role to the EC2 instance to grant access to the S3 buckets. For this to work, there can be no .aws/credentials file, and the access keys must be unset in the env after launching HSDS. I have also tried not setting the env variables and putting them straight into the docker-compose files. That doesn't seem to work.
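For what it's worth, here is roughly how I am clearing the credentials and confirming that the instance role is visible (a sketch; the unset list may differ for your shell, and the metadata URL is the standard EC2 one, IMDSv1 form shown — IMDSv2 requires a token):

unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
# should print the name of the role attached to the instance:
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/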

HSDS launches fine (with a very standard override.yml, just the AWS specifics altered), but after running runall.sh, hsinfo errors out with error 500 responses (after working for the first 11 seconds). hstouch gives the exact same response even before the 11-second mark.

In the override.yml I have set aws_iam_role, which leads me to believe that there should be some way to utilize this AWS option. So I also tried altering runall.sh to not look for AWS access keys when using Docker and S3. That doesn't seem to work either.
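For context, the AWS-specific part of my override.yml is roughly the following (a sketch; the bucket name and region are placeholders, and the key names follow the stock config.yml):

cat > admin/config/override.yml <<'EOF'
aws_iam_role: hsds_role   # name of the role attached to the EC2 instance
aws_region: us-west-2     # placeholder region
aws_s3_gateway: http://s3.us-west-2.amazonaws.com
bucket_name: my-hsds-bucket   # placeholder bucket
EOF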

Any ideas on what I should try, or pointers on what to look at, would be greatly appreciated. Right now I am thinking this is related to not being able to access the S3 bucket. Does that sound right?

Thanks.

-JB Williams

Hi JB,
It sounds like you are on the right track; it should be possible to run without AWS credentials as you describe.

I just tried setting up a new EC2 instance with an AWS_IAM_ROLE and HSDS, and it worked with no problems.

If you run: docker logs hsds_dn_1 | grep ERROR, what error messages are you getting?
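If nothing obvious turns up in the DN log, a quick loop over all the HSDS containers can help (this assumes the container names all contain "hsds", as they do with the stock docker-compose files):

for c in $(docker ps --format '{{.Names}}' | grep hsds); do
  echo "== $c =="
  docker logs "$c" 2>&1 | grep ERROR
done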

The code that uses the IAM role is here: https://github.com/HDFGroup/hsds/blob/master/hsds/util/s3Client.py#L91, so checking the info messages around there might be helpful.

The IAM role I used is just:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}

You can specify a resource to restrict the role to accessing just a specific S3 bucket, but you might want to start with the above and see if that works first.
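For reference, a bucket-scoped variant of the policy would look something like this (my-hsds-bucket is a placeholder; note that both the bucket ARN and the /* object ARN are needed, since list operations apply to the bucket and get/put operations apply to the objects):

cat > hsds-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-hsds-bucket",
        "arn:aws:s3:::my-hsds-bucket/*"
      ]
    }
  ]
}
EOF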

Let us know if you are still not able to get it working!

Hi,

Thanks for your response!

I was able to get it working. I guess I should have updated the thread.

I altered runall.sh to not check for credentials and unset the credentials in my env.

It launched with no problem after that. Also, thanks for the logs tip; that is useful.
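In case it helps anyone else, the change was along these lines (a sketch; the actual guard in runall.sh may read a bit differently):

# before: runall.sh exited if AWS_ACCESS_KEY_ID was not set
# after: accept either static keys or an instance role
if [[ -z "$AWS_ACCESS_KEY_ID" && -z "$AWS_IAM_ROLE" ]]; then
  echo "No AWS credentials or IAM role set"
  exit 1
fi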

-JBw

I’ve checked in an update to runall.sh so it should be ok with either AWS_ACCESS_KEY_ID or AWS_IAM_ROLE.

It might be worth pointing out that runall.sh is basically a convenience utility that selects a docker-compose file (AWS vs. Azure vs. POSIX) based on which environment variables are defined. You can just run "docker-compose -f <compose-file> up" directly if you'd prefer.
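For example, to bring up the AWS configuration directly (the compose file path here is an assumption based on the current repo layout and may vary by release):

docker-compose -f admin/docker/docker-compose.aws.yml up -d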
