Traefik in Practice: TraefikService
Introduction
Traefik's routing rules can handle basic layer-4 and layer-7 load balancing using the IngressRoute, IngressRouteTCP, and IngressRouteUDP resources alone. For more advanced behavior such as weighted round robin and traffic mirroring, Traefik provides an additional abstraction: the TraefikService resource. The overall traffic flow then becomes: external traffic enters Traefik through an entryPoints port, is matched by an IngressRoute/IngressRouteTCP/IngressRouteUDP, and is handed to a TraefikService. Weighted round robin and traffic mirroring are implemented at the TraefikService layer, and the request is finally forwarded to a Kubernetes Service.
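Since TraefikService is a custom resource, it is worth quickly confirming that the Traefik CRDs are installed before going further. A minimal check, assuming Traefik v2 with the traefik.containo.us CRDs used throughout this article:
# The list should include traefikservices.traefik.containo.us
kubectl get crd | grep traefik.containo.us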
Create a demo application
app-v1.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-v1
  template:
    metadata:
      labels:
        app: app-v1
    spec:
      containers:
        - name: app-v1
          image: nginx:latest
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "echo Hello app-v1 > /usr/share/nginx/html/index.html"]
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: app-v1
spec:
  selector:
    app: app-v1
  ports:
    - name: http
      port: 80
      targetPort: 80
  type: ClusterIP
app-v2.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-v2
  template:
    metadata:
      labels:
        app: app-v2
    spec:
      containers:
        - name: app-v2
          image: nginx:latest
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "echo Hello app-v2 > /usr/share/nginx/html/index.html"]
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: app-v2
spec:
  selector:
    app: app-v2
  ports:
    - name: http
      port: 80
      targetPort: 80
  type: ClusterIP
Deploy
[root@localhost traefik]# kubectl apply -f app-v1.yaml
deployment.apps/app-v1 created
service/app-v1 created
[root@localhost traefik]# kubectl apply -f app-v2.yaml
deployment.apps/app-v2 created
service/app-v2 created
[root@localhost traefik]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/app-v1-579dbbb754-nwtzw 1/1 Running 0 2m23s
pod/app-v2-7f7844f7b9-grsdk 1/1 Running 0 2m19s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/app-v1 ClusterIP 10.100.10.94 <none> 80/TCP 2m23s
service/app-v2 ClusterIP 10.104.145.150 <none> 80/TCP 2m18s
Gray release (weighted round robin)
A grayscale release, also known as a canary release, lets a service that is about to go live receive a portion of production traffic first, so you can check whether it meets the requirements before rolling it out fully. This is implemented mainly through weighted round robin. Create TraefikService and IngressRoute resources to implement WRR (weighted round robin), app-traefikService-ingressroute-wrr.yaml:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: app-ingressroute-wrr
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`traefikservice-wrr.kubesre.lc`)
      kind: Rule
      services:
        - name: wrr
          namespace: default
          kind: TraefikService
---
apiVersion: traefik.containo.us/v1alpha1
kind: TraefikService
metadata:
  name: wrr
  namespace: default
spec:
  weighted:
    services:
      - name: app-v1
        port: 80
        weight: 1       # define the weight
        kind: Service   # optional; defaults to Service
      - name: app-v2
        port: 80
        weight: 2
Deploy
[root@localhost traefik]# kubectl apply -f app-traefikService-ingressroute-wrr.yaml
ingressroute.traefik.containo.us/app-ingressroute-wrr created
traefikservice.traefik.containo.us/wrr created
[root@localhost traefik]# kubectl get ingressroute
NAME AGE
app-ingressroute-wrr 6s
[root@localhost traefik]# kubectl get TraefikService
NAME AGE
wrr 3m42s
Add local hosts resolution
192.168.36.139 traefikservice-wrr.kubesre.lc
The test results are as follows:
[root@localhost traefik]# for i in {1..9}; do curl http://traefikservice-wrr.kubesre.lc && sleep 1; done
Hello app-v1
Hello app-v2
Hello app-v2
Hello app-v1
Hello app-v2
Hello app-v2
Hello app-v1
Hello app-v2
Hello app-v2
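The 1:2 weighting is easier to see with a larger sample. A quick tally, as a rough sketch (exact counts will drift a little from run to run):
# Send 30 requests and count how often each backend answers
for i in {1..30}; do curl -s http://traefikservice-wrr.kubesre.lc; done | sort | uniq -c
# Expect roughly 10 x "Hello app-v1" and 20 x "Hello app-v2"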
Session persistence (sticky sessions)
When Traefik load-balances across multiple Kubernetes Services, requests are cycled through the backends by default, so successive requests from the same user may be forwarded to different backend servers. Suppose a user's first request is assigned to server A and some information is stored in that server's session; if the next request is assigned to server B and there is no session sharing between A and B, server B cannot see the previously saved information, which causes problems. Traefik therefore supports sticky sessions, which keep all requests within a session going to the same backend server. Create TraefikService and IngressRoute resources to implement cookie-based session persistence, app-traefikService-ingressroute-cokie.yaml:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: app-ingressroute-cokie
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`traefikservice-cokie.kubesre.lc`)
      kind: Rule
      services:
        - name: cokie
          namespace: default
          kind: TraefikService
---
apiVersion: traefik.containo.us/v1alpha1
kind: TraefikService
metadata:
  name: cokie
  namespace: default
spec:
  weighted:
    services:
      - name: app-v1
        port: 80
        weight: 1      # define the weight
      - name: app-v2
        port: 80
        weight: 2
    sticky:            # enable sticky sessions
      cookie:          # distinguish clients by cookie
        name: cookie   # name of the cookie carried in client requests
Deploy
[root@localhost traefik]# kubectl apply -f app-traefikService-ingressroute-cokie.yaml
ingressroute.traefik.containo.us/app-ingressroute-cokie created
traefikservice.traefik.containo.us/cokie created
[root@localhost traefik]# kubectl get ingressroute
NAME AGE
app-ingressroute-cokie 5s
[root@localhost traefik]# kubectl get TraefikService
NAME AGE
cokie 8s
Add local hosts resolution
192.168.36.139 traefikservice-cokie.kubesre.lc
Client access test, carrying the cookie:
[root@localhost traefik]# for i in {1..5}; do curl -b "cookie=default-app-v1-80" http://traefikservice-cokie.kubesre.lc/; done
Hello app-v1
Hello app-v1
Hello app-v1
Hello app-v1
Hello app-v1
[root@localhost traefik]# for i in {1..5}; do curl -b "cookie=default-app-v2-80" http://traefikservice-cokie.kubesre.lc/; done
Hello app-v2
Hello app-v2
Hello app-v2
Hello app-v2
Hello app-v2
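To see where the cookie comes from in the first place, the first response can be inspected for the Set-Cookie header that Traefik issues. A quick check (the cookie value Traefik generates for each backend may not match the values typed above exactly):
# Traefik returns the sticky cookie with the configured name on the response
curl -sv http://traefikservice-cokie.kubesre.lc/ 2>&1 | grep -i 'set-cookie'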
Traffic replication
Traffic replication, also known as a mirroring service, copies incoming request traffic according to a rule and sends the copies to another service, while ignoring the responses to those mirrored requests. This is very useful for stress testing or reproducing problems. Create TraefikService and IngressRoute resources, app-traefikService-ingressroute-copy.yaml:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: app-ingressroute-copy
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`traefikservice-copy.kubesre.lc`)
      kind: Rule
      services:
        - name: copy
          namespace: default
          kind: TraefikService
---
apiVersion: traefik.containo.us/v1alpha1
kind: TraefikService
metadata:
  name: copy
  namespace: default
spec:
  mirroring:
    name: app-v1       # send 100% of requests to app-v1
    port: 80
    mirrors:
      - name: app-v2   # then mirror 10% of requests to app-v2
        port: 80
        percent: 10
Deploy
[root@localhost traefik]# kubectl apply -f app-traefikService-ingressroute-copy.yaml
ingressroute.traefik.containo.us/app-ingressroute-copy created
traefikservice.traefik.containo.us/copy created
[root@localhost traefik]# kubectl get ingressroute
NAME AGE
app-ingressroute-copy 7s
[root@localhost traefik]# kubectl get TraefikService
NAME AGE
copy 13s
Add local hosts resolution
192.168.36.139 traefikservice-copy.kubesre.lc
The test results are as follows; only responses from app-v1 are seen, since the responses to mirrored requests are discarded.
[root@localhost traefik]# for i in {1..9}; do curl http://traefikservice-copy.kubesre.lc && sleep 1; done
Hello app-v1
Hello app-v1
Hello app-v1
Hello app-v1
Hello app-v1
Hello app-v1
Hello app-v1
Hello app-v1
Hello app-v1
Check app-v2's pod log and you can see that about 10% of the requests come in.
[root@localhost traefik]# kubectl logs -f app-v2-7f7844f7b9-grsdk
...
10.244.0.5 - - [23/Aug/2023:02:54:36 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0" "10.24
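To make the 10% ratio more visible, send a larger batch of requests and count what reaches app-v2. A rough sketch (the log will also contain any requests sent earlier, and with small samples the percentage fluctuates):
# Send 100 requests through the mirror route, then count GETs logged by app-v2
for i in {1..100}; do curl -s http://traefikservice-copy.kubesre.lc > /dev/null; done
kubectl logs deploy/app-v2 | grep -c 'GET / HTTP'
# Roughly 10 of the 100 requests should show up here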