Note This post assumes a basic understanding of Kubernetes and how it works.
Source Code All the code and configuration files for this post are available here.
One Simple App, Two Endpoints
One of the most basic needs of a new application deployed on Kubernetes is to expose a couple of web endpoints on the public internet under different URLs and secure them with SSL. For example, your app might want the following 2 endpoints publicly available: app.acme.org and api.acme.org, both secured with SSL certificates.
Regardless of what your app is written in (Rails, Node, Go, ...), this should be easy enough to do in Kubernetes.
The Application + API
For this example, I'm going to use a simple Node.js app, but you can replace it with your favourite language / framework.
My Application
const http = require('http')
const port = 3000 // let's host this on port 3000

const requestHandler = (request, response) => {
  console.log(request.url)
  response.end('Hello! This is my app!')
}

const server = http.createServer(requestHandler)
server.listen(port, (err) => {
  if (err) {
    return console.log('something bad happened', err)
  }
  console.log(`application server is listening on ${port}`)
})
I want this to be available at app.acme.org.
My API
const http = require('http')
const port = 4000 // I'm going to host this one on port 4000

const requestHandler = (request, response) => {
  console.log(request.url)
  response.end('Hello! This is my API!')
}

const server = http.createServer(requestHandler)
server.listen(port, (err) => {
  if (err) {
    return console.log('something bad happened', err)
  }
  console.log(`API server is listening on ${port}`)
})
I want this to be available at api.acme.org.
As you can see, the two are identical (for the sake of simplicity) apart from the port they serve on: the application is on port 3000 and the API is on port 4000. Obviously, when I roll this into production, I want both of them to be served over HTTPS on port 443.
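If you'd like to sanity-check the two servers locally before containerising them, you can run them side by side. This is just a quick sketch; the file names app.js and api.js are simply my choice for saving the snippets above:

$ node app.js &
$ node api.js &
$ curl http://localhost:3000
Hello! This is my app!
$ curl http://localhost:4000
Hello! This is my API!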
The Setup
If you would like to try the code in this post, make sure you have a running Kubernetes cluster on DigitalOcean and can communicate with it using kubectl.
Step 0: Creating the container images
If you're using Docker, you will need to build your applications into container images first. I'm including this step because the images are used in the steps that follow, which rely on 2 images: cloud66/k8s_secure_endpoint_app for the application and cloud66/k8s_secure_endpoint_api for the API. Both are already built and published on Docker Hub, so you can use the configuration as it is.
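If you'd rather build and host the images yourself, the usual Docker workflow applies. This is only a sketch: it assumes each server sits in its own directory next to a Dockerfile, and that you push to your own Docker Hub account (replace <your-dockerhub-user> accordingly) and update the image names in the Deployments below to match:

$ cd app
$ docker build -t <your-dockerhub-user>/k8s_secure_endpoint_app .
$ docker push <your-dockerhub-user>/k8s_secure_endpoint_app
$ cd ../api
$ docker build -t <your-dockerhub-user>/k8s_secure_endpoint_api .
$ docker push <your-dockerhub-user>/k8s_secure_endpoint_api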
Step 1: Creating Deployments
The first step to hosting our application on Kubernetes is to deploy them as Deployments:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - image: cloud66/k8s_secure_endpoint_app
          imagePullPolicy: Always
          name: app
          ports:
            - containerPort: 3000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - image: cloud66/k8s_secure_endpoint_api
          imagePullPolicy: Always
          name: api
          ports:
            - containerPort: 4000
Save this to deployment.yml, apply it to your cluster and see it come up:
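$ kubectl apply -f deployment.yml
$ kubectl get pods

After a few seconds, kubectl get pods should show an app pod and an api pod in the Running state.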
Step 2: Creating the Services
Now that you have the Deployments, you need to put Services in front of them so they can be reached; our Ingress will later use these Services to route traffic from outside the cluster:
apiVersion: v1
kind: Service
metadata:
  name: app
  labels:
    app: app
spec:
  type: ClusterIP
  ports:
    - port: 3000
      targetPort: 3000
  selector:
    app: app
---
apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    app: api
spec:
  type: ClusterIP
  ports:
    - port: 4000
      targetPort: 4000
  selector:
    app: api
Save this to services.yml and apply it to your cluster:
$ kubectl apply -f services.yml
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api ClusterIP 10.109.49.232 <none> 4000/TCP 2s
app ClusterIP 10.109.121.170 <none> 3000/TCP 2s
Step 3: Setting up the Ingress Controller
First, what's Ingress? Ingress is a Kubernetes feature that makes it possible to give access (usually over HTTP) to your Services from outside the cluster. You can think of Ingress as an nginx instance sitting in front of your services, dispatching traffic to the right one based on the incoming URL.
The following configuration files will set up the nginx Ingress on your cluster. This is written for a cluster with RBAC support (which is what you get from most managed providers, like DigitalOcean).
Before we jump into a wall of Yaml, let's see what we want to achieve:
- Create a load balancer on the cloud provider to sit in front of all of our incoming traffic and distribute it to the servers (nodes)
- Create a default backend to handle the traffic that doesn't belong to any of our services.
- Use nginx as an Ingress Controller on the cluster.
- Add good logging to our nginx to help with debugging
- Configure the cluster RBAC so the controller can change the needed settings when our services change.
So what's an Ingress Controller? If Ingress is like a routing table in front of our services, the Ingress Controller is like a daemon that sits on top of our cluster and watches services as they are added and removed. It then automatically configures the routing table, so we can add new services to the cluster without having to configure the routing table every time.
First, let's configure the RBAC. For the Ingress Controller to watch what's happening on the cluster and make the needed changes, it needs to have the right access rights to the cluster. Here we're going to create a Service Account that runs the controller (like a Linux user running a daemon) and give it the needed access rights:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: kube-system
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: kube-system
Now that we have the needed Service Account and roles, we can move on to creating the default HTTP handler.
Not all traffic that hits your cluster belongs to one of your services, and we also need something to answer when our services are down; serving a default response keeps the load balancer from removing nodes from its pool. Here I am using a stock default backend image as the default HTTP handler.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: gcr.io/google_containers/defaultbackend:1.0
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  namespace: kube-system
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    k8s-app: default-http-backend
You only need one of these on your cluster (hence replicas: 1).
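If you want to confirm the default backend is up before moving on, you can look it up by the k8s-app label we gave it:

$ kubectl get pods -n kube-system -l k8s-app=default-http-backend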
Now it's time to set up our Ingress Controller. There are multiple options for an Ingress Controller; here I am using the nginx Ingress Controller, which is basically a pod with nginx running inside of it, whose configuration file is rewritten whenever Ingress changes happen.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: kube-system
  labels:
    app: nginx-ingress-lb
data:
  log-format-upstream: '{ "time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr",
    "x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$request_id", "remote_user":
    "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status":
    $status, "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri",
    "request_query": "$args", "request_length": $request_length, "duration": $request_time,
    "method": "$request_method", "http_referrer": "$http_referer", "http_user_agent":
    "$http_user_agent" }'
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
spec:
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      terminationGracePeriodSeconds: 60
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
          name: nginx-ingress-controller
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 1
          ports:
            - containerPort: 80
              hostPort: 80
            - containerPort: 443
              hostPort: 443
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          args:
            - /nginx-ingress-controller
            - --default-backend-service=kube-system/default-http-backend
            - --publish-service=kube-system/nginx-ingress-lb
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
The first part configures how nginx should log. The second part creates the Ingress Controller. In this sample I've chosen to deploy the Ingress Controller as a DaemonSet, which means I will end up with one pod running nginx on each node of my cluster. You can choose a different strategy by using a Deployment with a replicas count, perhaps combined with affinity configuration to tell Kubernetes which servers should handle the incoming traffic.
The last 3 args are important, so let's go through what we are telling the nginx Ingress Controller:
- If any traffic comes in that doesn't belong to any of the upstream Ingresses, then default-http-backend is where it should go.
- publish-service=kube-system/nginx-ingress-lb tells the Controller to use the IP address of the physical load balancer for each one of the Ingresses. We don't have the load balancer yet (it's coming up next), but using this helps when we configure our DNS service.
- configmap=... tells our Controller where to get its configuration, which we created in the top part of the file. The ConfigMap and this line are optional and only improve the logging for better debugging.
Now we can create the inbound service and the load balancer.
apiVersion: v1
kind: Service
metadata:
  namespace: kube-system
  name: nginx-ingress-lb
  labels:
    app: nginx-ingress-lb
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 443
      name: https
      targetPort: 443
  selector:
    # Selects nginx-ingress-controller pods
    k8s-app: nginx-ingress-controller
This creates a physical load balancer on the cloud provider (DigitalOcean in this case) and points all traffic that comes through it to our Ingress Controller.
I've put all of this together here so you can get it set up with a single command:
$ kubectl apply -f https://raw.githubusercontent.com/cloud66-samples/k8s-secure-endpoints/master/k8s/nginx.yml
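Provisioning the DigitalOcean load balancer usually takes a minute or two. You can watch for its external IP to be assigned before moving on:

$ kubectl get svc nginx-ingress-lb -n kube-system -w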
Step 4: Setting up Ingress
We can now create Ingress objects on the cluster to tell our controller where to send traffic based on the URL.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
  labels:
    app: app
spec:
  rules:
    - host: app.acme.org
      http:
        paths:
          - backend:
              serviceName: app
              servicePort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
  labels:
    app: api
spec:
  rules:
    - host: api.acme.org
      http:
        paths:
          - backend:
              serviceName: api
              servicePort: 4000
Note You will need to change api.acme.org and app.acme.org to domains you actually have control over. Unless you own acme.org!
Here we are creating 2 Ingress objects, which set up our "routing table". The creation of these objects is observed by our Ingress Controller (which we set up in step 3) and leads to the nginx configuration being modified accordingly.
Save this file as ingress_01.yml and apply it to your cluster:
$ kubectl apply -f ingress_01.yml
Testing it so far
This could be a good point to stop and see how we're doing! Let's begin by testing the services directly. To do that, we're going to use port-forward
to connect to the pods directly from our dev machine and see if the server is responding.
$ kubectl port-forward app-9c5fbd865-8bl6n 3000:3000
Now in another window, try hitting the service:
$ curl http://localhost:3000
Hello! This is my app!
You can try the same with the API service as well.
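For completeness, this is what the same check looks like for the API; the pod name below is a placeholder, so grab the real one from kubectl get pods first:

$ kubectl port-forward <api-pod-name> 4000:4000
$ curl http://localhost:4000
Hello! This is my API!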
Let's see how our Ingress is doing:
$ kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
api-ingress api.acme.org 136.23.12.89 80 1h
app-ingress app.acme.org 136.23.12.89 80 1h
Here, 136.23.12.89 should be the IP address of your load balancer. You can tell that by checking the Ingress Controller service:
$ kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend ClusterIP 10.245.117.145 <none> 80/TCP 1h
nginx-ingress-lb LoadBalancer 10.245.208.224 136.23.12.89 80:31874/TCP,443:32004/TCP 1h
Let's check to see if our Ingress Controller is up and running (your kube-system namespace might have other pods running in it as well):
$ kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
default-http-backend-64c956bc67-95km7 1/1 Running 0 3h
nginx-ingress-controller-kgtzc 1/1 Running 0 3h
nginx-ingress-controller-n7dps 1/1 Running 0 3h
nginx-ingress-controller-t6vfv 1/1 Running 0 3h
Now that we know our service is running, our load balancer is in place and our Ingress Controller is up and running, we can test the flow.
First try hitting the default HTTP backend by targeting the load balancer directly:
$ curl http://136.23.12.89
default backend - 404
This looks good. Without sending traffic that belongs to any specific Ingress, this is the expected behaviour.
Now let's trick nginx into thinking we're hitting the endpoint through its URL by adding a Host HTTP header (make sure to replace the header with the appropriate domain you used in the Ingress above):
$ curl -H 'Host: app.acme.org' http://136.23.12.89
Hello! This is my app!
$ curl -H 'Host: api.acme.org' http://136.23.12.89
Hello! This is my API!
If this is what you're getting, then you're ready for the next step.
Step 5: Setting up DNS
Technically you could manually set up an A record that points to your load balancer and be done. Here, we're going to do more and automate the configuration of your DNS provider: as well as making sure your DNS record is updated if your load balancer is deleted and re-created, we are also automating the addition of any new Ingress records you might add in future. We only have 2 Ingress records, for api.acme.org and app.acme.org, at the moment, but if we add more Ingress records in future for more services, this method will create and update all of them automatically.
Also, you can use this to include any services you are hosting on your cluster that don't use Ingress, like TCP services.
To automate this part, we are using a useful little project called ExternalDNS. This tool sits on top of your cluster, listens for any new Services or Ingresses that are created, and creates the DNS records you need with your DNS provider. It supports a wide range of DNS providers to choose from. For this example, I'm going to use DigitalOcean's DNS service.
To use this, first you need to log in to your DigitalOcean account and create a new domain under Networking / Domains. Follow their guide to set it up. Once you have your domain set up, head to the DigitalOcean API section and create a read/write API key for your account. ExternalDNS needs this key to make changes to your DNS records.
First, let's add our API key as a Secret in the cluster (replace 'XXXX' with the key):
$ kubectl create secret generic api-keys -n kube-system --from-literal=do-api-key=XXXXXXX
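You can double-check that the Secret made it into the cluster before going further:

$ kubectl get secret api-keys -n kube-system

With the API key in place, the next file deploys ExternalDNS itself, along with the Service Account and RBAC bindings it needs: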
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "watch", "list"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.opensource.zalan.do/teapot/external-dns:latest
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=acme.org
            - --provider=digitalocean
            # - --dry-run
            # - --log-level=debug
          env:
            - name: DO_TOKEN
              valueFrom:
                secretKeyRef:
                  name: api-keys
                  key: do-api-key
Make sure to change acme.org to the domain name you added to your DigitalOcean Domains account.
Save this file as external_dns.yml and apply it to the cluster:
$ kubectl apply -f external_dns.yml
This will add ExternalDNS to your cluster. Once started, ExternalDNS will look at all the Ingress records in the cluster and create DNS records for the ones that have a spec.rules.host matching acme.org.
You can check the progress by tailing the ExternalDNS logs:
$ kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
...
external-dns-7c84f5c5b4-fk2bk 1/1 Running 0 4h
...
$ kubectl logs external-dns-7c84f5c5b4-fk2bk -n kube-system -f
You can also log into your DigitalOcean account and see the records created, or check from the terminal:
$ dig app.acme.org
; <<>> DiG 9.10.6 <<>> app.acme.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64362
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1452
;; QUESTION SECTION:
;app.acme.org. IN A
;; ANSWER SECTION:
app.acme.org. 300 IN A 136.23.12.89
;; Query time: 128 msec
;; SERVER: 1.1.1.1#53(1.1.1.1)
;; WHEN: Fri Mar 22 13:57:37 GMT 2019
;; MSG SIZE rcvd: 64
$ dig api.acme.org
; <<>> DiG 9.10.6 <<>> api.acme.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64362
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1452
;; QUESTION SECTION:
;api.acme.org. IN A
;; ANSWER SECTION:
api.acme.org. 300 IN A 136.23.12.89
;; Query time: 128 msec
;; SERVER: 1.1.1.1#53(1.1.1.1)
;; WHEN: Fri Mar 22 13:57:37 GMT 2019
;; MSG SIZE rcvd: 64
Once you have the records, you can try reaching the services again, this time using their domain names:
$ curl http://app.acme.org
Hello! This is my app!
$ curl http://api.acme.org
Hello! This is my API!
Step 6: Setting up SSL
To secure our services, we are going to use Let's Encrypt SSL certificates. There is a project called Cert Manager, which is based on or inspired by another project called Lego. Cert Manager can be installed on your cluster to watch any services and ingress records you might have on the cluster and request SSL certificates for them automatically. It also takes care of renewing the certificates for you.
Setting up Cert Manager is not very easy. I have included the file you need in the same repository that comes with this post, but I'd advise you to follow the steps on Cert Manager's website to install it. However, I would advise against the Helm chart approach, as it has caused me pain several times before. Firstly, the Helm chart approach is non-deterministic: depending on when you try it, it might break the rest of the configuration you need to do (this has happened to me before). Another scary part about the Helm approach (which is not limited to Cert Manager) is that it can install things across the cluster without telling you; you need to know what the chart (release) you are using does before trying it, or you might break the cluster for everyone. One of the biggest advantages of Kubernetes for me is immutable infrastructure: I can run the same code every time and get the same results back from fresh. Helm breaks this, firstly through loose package version management, and secondly by encouraging "copy/paste" infrastructure setup: copy a helm install and paste it to get results, which is very similar to configuring servers by copy/pasting apt install commands without using configuration management tools. That's a topic for another post, so for now you can use the following command to install Cert Manager on your cluster from this script:
Please note that in this section I am using a ClusterIssuer instead of an Issuer. This means the issuer you create is available to the entire cluster rather than being scoped to a single namespace. You can replace ClusterIssuer with Issuer if you like.
$ kubectl apply -f https://raw.githubusercontent.com/cloud66-samples/k8s-secure-endpoints/master/k8s/cert_manager.yml
This will setup Cert Manager on your cluster. You now need to setup the needed certificates:
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: foo@acme.org
    privateKeySecretRef:
      name: letsencrypt-production-key
    http01: {}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: app-acme-org-tls
  namespace: default
spec:
  secretName: app-acme-org-tls
  issuerRef:
    name: letsencrypt-production
    kind: ClusterIssuer
  commonName: app.acme.org
  dnsNames:
    - app.acme.org
  acme:
    config:
      - http01:
          ingressClass: nginx
        domains:
          - app.acme.org
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: api-acme-org-tls
  namespace: default
spec:
  secretName: api-acme-org-tls
  issuerRef:
    name: letsencrypt-production
    kind: ClusterIssuer
  commonName: api.acme.org
  dnsNames:
    - api.acme.org
  acme:
    config:
      - http01:
          ingressClass: nginx
        domains:
          - api.acme.org
Make sure to replace foo@acme.org, app.acme.org and api.acme.org with the correct values.
Save this file as certificates.yml and apply it to your cluster:
$ kubectl apply -f certificates.yml
This will get Cert Manager to contact Let's Encrypt and issue and install certificates for your Ingress records. However, we still have 1 more step before we can use our secure endpoints: we need to tell the Ingress Controller where to find the certificates. To do this, open the ingress_01.yml you created earlier and overwrite it with this one:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: "letsencrypt-production"
  labels:
    app: app
spec:
  rules:
    - host: app.acme.org
      http:
        paths:
          - backend:
              serviceName: app
              servicePort: 3000
  tls:
    - secretName: app-acme-org-tls
      hosts:
        - app.acme.org
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: "letsencrypt-production"
  labels:
    app: api
spec:
  rules:
    - host: api.acme.org
      http:
        paths:
          - backend:
              serviceName: api
              servicePort: 4000
  tls:
    - secretName: api-acme-org-tls
      hosts:
        - api.acme.org
Make sure to replace app.acme.org, api.acme.org, app-acme-org-tls and api-acme-org-tls with the correct names.
The changes we made to our Ingresses are in the annotations and the tls section. We added 2 annotations: kubernetes.io/tls-acme tells Cert Manager that we need a certificate for this endpoint, and certmanager.k8s.io/cluster-issuer specifies which certificate issuer we want to use for it. If you changed your issuer from ClusterIssuer to Issuer, make sure to reflect that here too.
The tls changes simply tell the Ingress Controller where to find the TLS certificates generated by Cert Manager, by pointing it at the right secret by name.
To check and see if the certificates have been issued (this may take some time), you can run this command:
$ kubectl get certificates --all-namespaces
NAMESPACE NAME AGE
default api-acme-org-tls 15h
default app-acme-org-tls 15h
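If the certificates don't show up, or stay pending for a long time, describing them usually points at the problem; the names below match the Certificate resources created above:

$ kubectl describe certificate app-acme-org-tls
$ kubectl describe certificate api-acme-org-tls

The Events section at the bottom of that output shows the ACME validation steps and any errors coming back from Let's Encrypt.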
See it work!
You can now try and hit your service with https:
$ curl https://app.acme.org
Hello! This is my app!
$ curl https://api.acme.org
Hello! This is my API!
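If you'd like to confirm that the certificates really were issued by Let's Encrypt, you can inspect one straight from the endpoint with openssl:

$ echo | openssl s_client -connect app.acme.org:443 -servername app.acme.org 2>/dev/null | openssl x509 -noout -issuer -dates

The issuer line should mention Let's Encrypt, and the dates should show roughly 90 days of validity.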
Congratulations! You just set up everything that was needed to run a service with SSL security on a Kubernetes cluster! Now you might want to check out our products, which make all of this much easier to deal with, or send me any questions you might have about these!