Deployment to OpenShift shared cluster

We deploy our complete backend to an OpenShift cluster that is managed by a provider and shared with other users.
Our service provider is VSHN in Switzerland; their OpenShift hosting platform is called Appuio.
I was basically able to deploy HSDS using the yaml files provided in the HSDS repo. However, when I start it up, the head node tries to list all pods in the cluster, and this of course fails since we are only allowed to list the pods in our own project.

--> Do you see an easy option to fix this?
I tested this with MiniShift, a test environment for OpenShift, and I get the same result.

Hi,
I haven't used OpenShift myself, but my understanding was that OpenShift uses Kubernetes as the underlying container management platform. So my hope would be that the same orchestration logic in HSDS would just work for OpenShift (this might be optimistic!).

When you used the yaml deployment files, did you use one of the files in hsds/admin/kubernetes (e.g. https://github.com/HDFGroup/hsds/blob/master/admin/kubernetes/k8s_deployment_aws.yml)?

You can see that, unlike with the Docker deployments, there is no head node. With Docker, a head node is created and the other containers register by sending requests to the head node on port 5100 on localhost. The head node in turn assigns ids to each of the SN/DN nodes so that they can be organized for parallel operations.

With Kubernetes, each container may be running on a different host, so the trick of talking to http://localhost:5100 won't work. Instead, each pod calls the k8s list_pod_for_all_namespaces API to find the HSDS pods running in the cluster. After that, the pods are sorted by their internal IPs, and this ordering is used to assign an id to each pod. (For Kubernetes, one SN container and one DN container are deployed together as a pod.) You can see the code for this here: https://github.com/HDFGroup/hsds/blob/master/hsds/basenode.py:L302.
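Roughly, the idea is the following (a simplified sketch, not the actual basenode.py code; the "app=hsds" label selector is just illustrative):

from kubernetes import client, config

def discover_node_id(my_pod_ip):
    # runs inside a pod, using the pod's own service account credentials
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    # this is the cluster-scope call that needs the RBAC authorization mentioned below
    ret = v1.list_pod_for_all_namespaces(watch=False, label_selector="app=hsds")
    pod_ips = sorted(pod.status.pod_ip for pod in ret.items if pod.status.pod_ip)
    # each SN/DN pod takes its node id from the position of its own IP in the sorted list
    return pod_ips.index(my_pod_ip), len(pod_ips)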

Note that normally code inside a pod is not allowed to call the k8s API, so RBAC authorization needs to be applied as described in the documentation: https://github.com/HDFGroup/hsds/blob/master/admin/kubernetes/k8s_rbac.yml.
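A quick way to check from inside a pod whether that authorization is in place is to ask the API server directly (a sketch using the standard kubernetes Python client, not something HSDS does itself):

from kubernetes import client, config

def can_list_pods_cluster_wide():
    config.load_incluster_config()
    auth = client.AuthorizationV1Api()
    # leaving the namespace unset asks about cluster scope
    review = client.V1SelfSubjectAccessReview(
        spec=client.V1SelfSubjectAccessReviewSpec(
            resource_attributes=client.V1ResourceAttributes(verb="list", resource="pods")))
    resp = auth.create_self_subject_access_review(review)
    return resp.status.allowed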

I can imagine many reasons (say, increased security controls) why this approach wouldn't work on OpenShift, but first I wanted to see where exactly it is breaking down. If you apply the Kubernetes deployment on OpenShift, I'm sure there will be some useful log messages that will tell us what works or not. Please investigate and let me know what you find out.

Hi John
Yes, OpenShift builds on Kubernetes; they call it a Kubernetes distribution. To me it is a fancy web GUI on top of Kubernetes.

I indeed used the yaml files you referred to. The mechanism of listing nodes that you describe is exactly what fails. After applying and instantiating the deployment, I get the following error logs, indicating that the listing/registration of nodes fails because we are not allowed to list them at cluster level:

INFO> k8s_register
Task exception was never retrieved
future: <Task finished name='Task-3' coro=<healthCheck() done, defined at /usr/local/lib/python3.8/site-packages/hsds/basenode.py:413> exception=ApiException()>
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/hsds/basenode.py", line 429, in healthCheck
    await k8s_register(app)
  File "/usr/local/lib/python3.8/site-packages/hsds/basenode.py", line 313, in k8s_register
    ret = v1.list_pod_for_all_namespaces(watch=False)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 14098, in list_pod_for_all_namespaces
    (data) = self.list_pod_for_all_namespaces_with_http_info(**kwargs)  # noqa: E501
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 14179, in list_pod_for_all_namespaces_with_http_info
    return self.api_client.call_api(
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 340, in call_api
    return self.__call_api(resource_path, method,
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 172, in __call_api
    response_data = self.request(
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 362, in request
    return self.rest_client.GET(url,
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 237, in GET
    return self.request("GET", url,
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 231, in request
    raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-store', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'Date': 'Mon, 02 Nov 2020 05:08:35 GMT', 'Content-Length': '277'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:***************:default\" cannot list pods at the cluster scope: no RBAC policy matched","reason":"Forbidden","details":{"kind":"pods"},"code":403}

When trying to apply the RBAC, I get the following error. Talking to my provider, they confirmed that this is something we can't do on a shared Kubernetes cluster, as it would allow us to list pods of other customers, which of course is not going to happen. Here's the error log for reference:

Error from server (Forbidden): error when creating "k8s_rbacV1.yml": clusterroles.rbac.authorization.k8s.io is forbidden: User "*************" cannot create clusterroles.rbac.authorization.k8s.io at the cluster scope: no RBAC policy matched

If I run the command "oc get pods" in the terminal, I can list the pods, as this command only lists pods in the current project, which I do have the rights for:

NAME                         READY   STATUS    RESTARTS   AGE
vorn-hsds-5f5b55bb74-w57c9   2/2     Running   0          24m

Trying to list the pods at cluster level fails:
oc get pods --all-namespaces
No resources found.
Error from server (Forbidden): pods is forbidden: User "*************" cannot list pods at the cluster scope: no RBAC policy matched

Calling the same command with a bit more verbose output, I can see the pod's IP:
oc get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP            NODE                         NOMINATED NODE
vorn-hsds-5f5b55bb74-w57c9   2/2     Running   0          28m   10.********   node2854.******************

So I think we'd either have to adapt to a less greedy version of the API call that gets pods only from the current project, or find a different way of listing the pods.
It seems to me that the info I can get on the command line should be sufficient for the registration, so the same should be doable using the API.

Hey,

Thanks for the response - this makes sense. OpenShift looks to be more "locked down" than regular Kubernetes, so the approach we've been using will need some modifications...

Could you ask your provider if there is a way to get the information provided by "oc get pods -o wide" from within a pod? I.e. something like what we do now with v1.list_pod_for_all_namespaces(watch=False), but getting just the pods in your current project.
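Roughly, I'm thinking of something like the following (a sketch: the namespace is read from the standard service account mount, and the "app=hsds" label selector is just illustrative):

from kubernetes import client, config

def list_pods_in_own_namespace():
    config.load_incluster_config()
    # the pod's own namespace/project is mounted next to the service account token
    with open("/var/run/secrets/kubernetes.io/serviceaccount/namespace") as f:
        namespace = f.read().strip()
    v1 = client.CoreV1Api()
    ret = v1.list_namespaced_pod(namespace, watch=False, label_selector="app=hsds")
    return [(pod.metadata.name, pod.status.pod_ip) for pod in ret.items]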

Failing that, I think the best approach would be to create a "head" pod and service, and use an approach similar to what we do on Docker, with each SN/DN container registering via a head service. The difference in the Kubernetes environment is that the pods may be running on different hosts than the head node, but the SN/DN containers can include their internal IP along with the registration request. This looks to be available to the containers - see: https://stackoverflow.com/questions/30746888/how-to-know-a-pods-own-ip-address-from-inside-a-container-in-the-pod.
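As a rough sketch of what that registration could look like from the SN/DN side (the "hsds-head" service name, the /register endpoint, and the MY_POD_IP environment variable are all hypothetical; MY_POD_IP would have to be injected via the downward API fieldRef status.podIP in the deployment yaml):

import os
import requests

def register_with_head(node_type, port):
    pod_ip = os.environ["MY_POD_IP"]  # injected via the downward API (status.podIP)
    body = {"node_type": node_type, "ip": pod_ip, "port": port}
    resp = requests.post("http://hsds-head:5100/register", json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g. the node id assigned by the head service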

If this works, it would be best practice not to rely on the singleton pattern for the head node. So it would be best for the head node to use something like etcd, so that things still work if multiple instances of the head node are running.

Let me know what you find out about my first question and your thoughts on the alternative approach.

Hey

Yes, we can list the pods in the namespace with the API under OpenShift. I found this link: https://github.com/shaneboulden/openshift-client-demo and successfully tested it in our environment. I made you screenshots of the pod listing from the command line and from a container running in the pod itself; get them here: https://matterhorn.swisscloudhosting.ch/s/bjDTTL6P3KnFf5D

I'll talk to you about how to proceed via direct message, and we can post the results in this forum again later.

Hi,

Thanks for the link. I've used the API in this branch: https://github.com/HDFGroup/hsds/tree/openshift.
Could you give it a try?

I added an issue in github to track this: https://github.com/HDFGroup/hsds/issues/73.

Hi
Thanks, that worked like a charm; we successfully deployed HSDS to OpenShift.
Next will be to integrate KeyCloak user management via OpenID.

Kind regards
Ben