I've been experimenting with Kubernetes recently (an instance installed and configured via the Mantl.io project), and one of the more puzzling things I had to figure out was how to use kubectl with a token.  I'm still not quite sure why, but the built-in ServiceAccount token wasn't allowing me to authenticate against the cluster's public IP.  (Any suggestions are welcome!)  Keep in mind, I already had kubectl and a kubeconfig file established from running minikube on my local workstation.
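
For reference, here is roughly what I was attempting.  This is just a sketch; the apiserver address and port are placeholders, and the jsonpath plumbing assumes the default ServiceAccount's first secret holds the token:

```sh
# Pull the token out of the default ServiceAccount's secret
SECRET=$(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret $SECRET -o jsonpath='{.data.token}' | base64 --decode)

# Try the cluster's public endpoint, presenting the token as a bearer token
curl --insecure "https://<public-ip>:443/api" \
  --header "Authorization: Bearer $TOKEN"
```

This is the request that kept failing for me against the public IP.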

Here are the things that helped me figure out how to enable token authentication.  I had to wade through a fair amount of Kubernetes documentation to sort out how the system is configured.  RTFD, dude!  Many clues revealed themselves during this digging.

  • Kubernetes Authentication:
    • This is where I started.  While the documentation states, "The signed JWT can be used as a bearer token to authenticate as the given service account. See above for how the token is included in a request. Normally these secrets are mounted into pods for in-cluster access to the API server, but can be used from outside the cluster as well," I had no success authenticating with the service account token (roughly the attempt sketched above).
  • Accessing The Cluster:
    • In this document, I used the troubleshooting information titled "Without kubectl proxy (post v1.3.x)."  This led me to realize that the ServiceAccount token was not even authorizing via localhost (that approach is sketched after this list).  From there, I tried to work out how to update the "scope" for the ServiceAccount, but I could not quite figure out how to configure it.  If anyone knows, I would definitely appreciate a comment!
  • Creating A Custom Cluster:
    • After scanning through this document, I felt like I was finally on the right track...
    • Clue #1: "You will run docker, kubelet, and kube-proxy outside of a container, the same way you would run any system daemon, so you just need the bare binaries. For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler, we recommend that you run these as containers, so you need an image to be built."  Ah ha, so somehow I can configure kube-apiserver inside of Kubernetes.
    • Clue #2: "Your tokens and passwords need to be stored in a file for the apiserver to read. This guide uses /var/lib/kube-apiserver/known_tokens.csv. The format for this file is described in the authentication documentation."  Okay... makes sense (the file format is sketched after this list).
    • Clue #3: "While the basic node services (kubelet, kube-proxy, docker) are typically started and managed using traditional system administration/automation approaches, the remaining master components of Kubernetes are all configured and managed by Kubernetes:

      • their options are specified in a Pod spec (yaml or json) rather than an /etc/init.d file or systemd unit.

      • they are kept running by Kubernetes rather than by init."  Ah ha again... I need to configure this inside Kubernetes somehow.
    • Clue #4: "The apiserver, controller manager, and scheduler will each run as a pod on the master node."  Getting closer...
    • Clue #5: "Apiserver pod template ... --token-auth-file=/srv/kubernetes/known_tokens.csv"  Ok, so somehow I need to set that flag...
    • Clue #6: "Place each completed pod template into the kubelet config dir (whatever --config= argument of kubelet is set to, typically /etc/kubernetes/manifests). The order does not matter: scheduler and controller manager will retry reaching the apiserver until it is up."  And, bingo...gotta find the manifests directory on my system.
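
As mentioned above, the "Without kubectl proxy" troubleshooting from the Accessing The Cluster doc boils down to something like the following (paraphrased from that doc; the exact grep/cut plumbing may vary by version):

```sh
# Read the apiserver address and the default ServiceAccount token out of
# the local configuration, then call the API directly with a bearer token.
APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') \
  | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
```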
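
And per the authentication documentation referenced in Clue #2, the static token file is a CSV with columns of token, user name, and user uid, plus optional group names.  A single line like this is enough (the token, user, and group here are made-up examples, not values from my cluster):

```csv
31ada4fd-adec-460c-809a-9e56ceb75269,kubeadmin,1,"system:masters"
```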

In sum, I needed to create known_tokens.csv, put the right data in it, and then add that configuration to Kubernetes itself, since the apiserver runs as a pod and will automatically pick up the change.  That configuration turns out to live in the manifests directory, and surprise, surprise, there is a "kube-apiserver.yaml" in my /etc/kubernetes/manifests.
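
To illustrate, the change amounts to adding the --token-auth-file flag to the apiserver's pod spec and making sure the token file is visible inside the container.  A trimmed excerpt follows; this is not my exact manifest, and the image, command, and paths will vary by installation:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt, illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: <your-apiserver-image>   # placeholder
    command:
    - kube-apiserver
    - --token-auth-file=/srv/kubernetes/known_tokens.csv
    # ...the rest of the existing flags stay as-is...
    volumeMounts:
    - name: srvkube
      mountPath: /srv/kubernetes
      readOnly: true
  volumes:
  - name: srvkube
    hostPath:
      path: /srv/kubernetes
```

Since the kubelet watches the manifests directory, saving the file is enough to get the static apiserver pod restarted with the new flag.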

Once I created the known_tokens.csv and added that configuration to the apiserver manifest, bingo, I was able to log in using a token in my kubeconfig.
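
For completeness, this is roughly how the token gets wired into a kubeconfig (the user/cluster/context names and the address are placeholders, and the token matches the made-up known_tokens.csv line above):

```sh
# Add a credential that authenticates with the token
kubectl config set-credentials mantl-admin \
  --token=31ada4fd-adec-460c-809a-9e56ceb75269

# Point a cluster entry at the public IP and tie both together in a context
kubectl config set-cluster mantl --server="https://<public-ip>:443" \
  --insecure-skip-tls-verify=true
kubectl config set-context mantl --cluster=mantl --user=mantl-admin
kubectl config use-context mantl

# Sanity check: this should now authenticate with the token
kubectl get nodes
```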