Kubernetes Authn/Authz with Google OIDC and RBAC

AWS + Kops

Jessica G
Jan 13, 2018

Intro

Almost all communication in a Kubernetes (k8s) cluster goes through the k8s API server, and to access the API server you first need to authenticate. There are a number of different authentication methods to choose from. Here I will review the steps I took to authenticate with the OpenID Connect (OIDC) token method, with Google as the Identity Provider (IdP). After successful authentication, RBAC (role-based access control) is used for authorization, specifying which actions the user can perform.

Overview:

Here are the steps I took to get authentication set up with Google OIDC and RBAC authorization.

  1. Create Google OAuth clientSecret and clientID
  2. Configure k8s api server for Google OIDC
  3. Create RBAC roles and rolebindings for Users
  4. Authenticate via Google and get OIDC token
  5. Update kubeconfig with a new user (the Google-authenticated user), a new cluster (with its TLS cert), and a new context.

Versions:

kops v1.8.0, kubernetes v1.8.4, k8s-oidc-helper v0.1.0

First: Create Google OAuth clientSecret and clientID

The first thing to do is obtain OAuth 2.0 credentials from the Google API Console. Do this with the following steps:

  1. Go to the Google API Console.
  2. From the dropdown, create a new project.
  3. Click ‘Credentials’ in the side nav bar.
  4. Select ‘OAuth consent screen’, fill out the form for your project, and save.
  5. Navigate back to ‘Credentials’, click ‘Create credentials’, and select ‘OAuth client ID’.
  6. Select the ‘Other’ application type and create the clientID and clientSecret. Store those somewhere safe for later steps.

Next: Configure k8s api server for Google OIDC

I’m using kops to create the k8s cluster on AWS. Here is how I modified the kops cluster configuration to enable OIDC.

  1. Enable RBAC. When creating the cluster with the kops create cluster command, enable RBAC by passing in the additional option --authorization RBAC. The default kops sets for authorization is AlwaysAllow, which permits every request; keep that default only if you do not need authorization on your API requests.
  2. Edit the cluster config to enable OIDC. Once the cluster is created, run the following command to edit the cluster config, where the API server settings live:
# first create the k8s cluster with RBAC enabled
kops create cluster \
--authorization RBAC \
--name $CLUSTER \
--cloud aws \
--state $S3_STATE_STORE
# edit the cluster config and add OIDC data
kops edit cluster $CLUSTER --state $S3_STATE_STORE

Add the following content to the cluster config and save:

kubeAPIServer:
  authorizationRbacSuperUser: admin
  oidcIssuerURL: https://accounts.google.com
  oidcClientID: REDACTED.apps.googleusercontent.com
  oidcUsernameClaim: email
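
Because oidcUsernameClaim is set to email, the API server will see the Google account’s email address as the username, which is what the RBAC bindings below match on. Also, kops edit only changes the spec in the state store; the change still has to be applied and the masters rolled before the new API server flags take effect. Something like the following should do it (a sketch, using the same $CLUSTER and $S3_STATE_STORE variables as above; the kubectl line is just a sanity check that the RBAC API group is being served):

# apply the spec change and roll the masters so the new
# api server flags take effect
kops update cluster $CLUSTER --state $S3_STATE_STORE --yes
kops rolling-update cluster $CLUSTER --state $S3_STATE_STORE --yes
# sanity check: the RBAC API group should now be served
kubectl api-versions | grep rbac.authorization.k8s.io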

Next: Create RBAC Roles and Rolebindings for Users

First I needed to modify some of the system roles and rolebindings so that everything continued to work happily. I created this manifest file to update the system permissions; check out this Stack Overflow post as a reference.

# system.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:node--kubelet
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- kind: User
  name: kubelet
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-admin--kube-system:default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:node-proxier--kube-proxy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-proxier
subjects:
- kind: User
  name: kube-proxy
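
With the kops admin context still active, applying the manifest is straightforward:

# apply the system bindings using the existing admin credentials
kubectl apply -f system.yaml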

I then created a dev role and rolebinding that allow full access, but only in the development namespace, for testing purposes.

# dev.yaml
# Give devs full access to the development namespace.
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: development
  name: dev-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dev-role-binding
  namespace: development
subjects:
- kind: User
  name: jessica@gmail.com
roleRef:
  kind: Role
  name: dev-role
  apiGroup: rbac.authorization.k8s.io
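
After applying dev.yaml, the binding can be verified without switching users: kubectl auth can-i combined with the --as impersonation flag shows what the Google user is allowed to do (a quick sketch, assuming your current admin credentials permit impersonation):

# apply the dev role and binding
kubectl apply -f dev.yaml
# impersonate the google user to check the binding took effect
kubectl auth can-i create pods -n development --as jessica@gmail.com   # expect "yes"
kubectl auth can-i create pods -n kube-system --as jessica@gmail.com   # expect "no"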

Next: Authenticate via Google and get OIDC token

I chose to use the k8s-oidc-helper tool, which makes the necessary API requests to Google OAuth to get a login token, then formats the Google user data and writes it to your kubeconfig.

k8s-oidc-helper --client-id=REDACTED.apps.googleusercontent.com \
--client-secret=REDACTED \
--write=true

After running this command, if you check your kubeconfig file, you will see a new user with an OIDC token:

$ cat ~/.kube/config
apiVersion: v1
kind: Config
preferences: {}
users:
- name: jessica@gmail.com
  user:
    auth-provider:
      config:
        client-id: REDACTED.apps.googleusercontent.com
        client-secret: REDACTED
        id-token: REDACTED
        idp-issuer-url: https://accounts.google.com
        refresh-token: REDACTED
      name: oidc
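
If you would rather not depend on the helper, the same user entry can be written with kubectl itself once you have the tokens in hand (a sketch; the REDACTED values stand in for your real credentials):

# manually add the OIDC user to the kubeconfig
kubectl config set-credentials jessica@gmail.com \
--auth-provider=oidc \
--auth-provider-arg=idp-issuer-url=https://accounts.google.com \
--auth-provider-arg=client-id=REDACTED.apps.googleusercontent.com \
--auth-provider-arg=client-secret=REDACTED \
--auth-provider-arg=id-token=REDACTED \
--auth-provider-arg=refresh-token=REDACTED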

Next: Update kubeconfig with a new cluster (with TLS cert) and a new context.

The last thing to do was configure the kubeconfig file with the cluster and context to use. To use TLS, I pulled down the CA cert that kops stores in the state-store S3 bucket and referenced it in the kubeconfig. Then I configured the cluster in the kubeconfig and added a context for the new user and the cluster.

#!/bin/bash
# get the name of the ca cert that kops created in the
# state store s3 bucket
cert=$(aws s3 ls $S3_STATE_STORE/$CLUSTER/pki/issued/ca/ | awk '{print $4}')
# copy the ca cert locally for kubectl to reference
aws s3 cp $S3_STATE_STORE/$CLUSTER/pki/issued/ca/"$cert" ~/.kube/"$cert"
# create a cluster entry in the kubeconfig, pointing at the copied cert
kubectl config set-cluster $CLUSTER \
--certificate-authority=$HOME/.kube/"$cert" \
--server=https://api."$CLUSTER"
# create a context for the oidc user in the kubeconfig
kubectl config set-context $USER \
--cluster $CLUSTER \
--user $USER

Once the kubeconfig is set up with the authenticated user, the cluster (with its CA cert), and the context, switch to that context and confirm you can access the resources in the development namespace.

$ kubectl config use-context $USER
$ kubectl get all -n development
No resources found.
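
As one last sanity check that RBAC is actually scoping the user, a request outside the development namespace should now be rejected with something like the following (the exact wording varies by version):

$ kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "jessica@gmail.com" cannot list pods in the namespace "kube-system"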

Resources I like:

A good talk on Authn/Authz for K8s access

A presentation by MYOB on how to use OIDC with K8s
