
How to Deploy Knative on Azure Kubernetes Service (AKS)

by Jun Kudo, April 2nd, 2019

Introduction



Getting Knative to work on AKS takes some extra steps beyond the official documentation, so I will explain how to do it. The overall flow is the same as in the documentation: start AKS, install Istio, and install Knative. However, it requires some settings that the documentation does not cover.



I will omit an explanation of Knative itself. Also, since the steps may depend on the exact versions I verified, there is no guarantee that everything will work exactly as described. It is assumed that the Azure CLI and kubectl are already available.
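If you want to double-check that these prerequisites are in place, the client versions can be inspected with the following commands (an optional check, not part of the original steps):

az --version
kubectl version --client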


Documentation: https://www.knative.dev/docs/install/knative-with-aks/

Starting AKS

This is basically the same as the documentation.

Set the names for the environment.

export LOCATION=eastus
export RESOURCE_GROUP=knative-group
export CLUSTER_NAME=knative-cluster

Create a resource group

az group create --name $RESOURCE_GROUP --location $LOCATION


Start AKS. The version used here is 1.11.8; a 1.12.x cluster will probably work as well.

az aks create --resource-group $RESOURCE_GROUP \
   --name $CLUSTER_NAME \
   --generate-ssh-keys \
   --kubernetes-version 1.11.8 \
   --enable-rbac \
   --node-vm-size Standard_DS3_v2

Configure kubectl so it can operate the cluster (--overwrite-existing overwrites any existing settings).

az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --admin --overwrite-existing
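To confirm that kubectl is now pointing at the new cluster, you can check the current context (optional):

kubectl config current-context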

Make sure the nodes have started without problems.

kubectl get node
NAME                       STATUS    ROLES     AGE       VERSION
aks-nodepool1-24002009-0   Ready     agent     3m        v1.11.8
aks-nodepool1-24002009-1   Ready     agent     3m        v1.11.8
aks-nodepool1-24002009-2   Ready     agent     4m        v1.11.8

Istio installation

Proceed as described in the documentation.

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.4.0/istio-crds.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.4.0/istio.yaml

Label the default namespace so that Istio sidecars are injected automatically.

kubectl label namespace default istio-injection=enabled
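To verify that the label was applied, you can list the labels on the namespace (an optional check of my own):

kubectl get namespace default --show-labels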

Check that the pods have started.

kubectl get pods --namespace istio-system

Wait until every pod is Running or Completed.

NAME                                        READY     STATUS      RESTARTS   AGE
cluster-local-gateway-76db55c785-wkjvh      1/1       Running     0          5m
istio-citadel-746c765786-d758c              1/1       Running     0          6m
istio-cleanup-secrets-cj8cf                 0/1       Completed   0          6m
istio-egressgateway-7b46794587-jbk2s        1/1       Running     0          6m
istio-galley-75c6976d79-z5hp4               1/1       Running     0          6m
istio-ingressgateway-57f76dc4db-xqx8l       1/1       Running     0          6m
istio-pilot-6495978c49-4wl8w                2/2       Running     0          5m
istio-pilot-6495978c49-csfxn                2/2       Running     0          5m
istio-pilot-6495978c49-llw97                2/2       Running     0          6m
istio-policy-6677c87b9f-7ff2g               2/2       Running     0          6m
istio-sidecar-injector-879fd9dfc-2dfkt      1/1       Running     0          5m
istio-statsd-prom-bridge-549d687fd9-8rbfw   1/1       Running     0          6m
istio-telemetry-7d46d668db-khglq            2/2       Running     0          6m
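Rather than re-running the get command by hand, you can also watch the pods until everything settles (press Ctrl+C to stop watching):

kubectl get pods --namespace istio-system --watch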

Knative installation


Apply the following manifests, checking that the pods of each component start up as you go. Note that after applying the serving manifest, some additional work is required, as described below.

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.4.0/serving.yaml
kubectl apply --filename https://github.com/knative/build/releases/download/v0.4.0/build.yaml
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.4.0/release.yaml
kubectl apply --filename https://github.com/knative/eventing-sources/releases/download/v0.4.0/release.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.4.0/monitoring.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.4.0/third_party/config/build/clusterrole.yaml

If an error like the following occurs during the apply, simply run the same command again.

error: unable to recognize "https://github.com/knative/serving/releases/download/v0.4.0/serving.yaml": no matches for kind "Image" in version "caching.internal.knative.dev/v1alpha1"
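This error shows up because the Image CRD defined in the same manifest has not finished registering by the time the resources that use it are applied, so a second apply succeeds. If you want to confirm that the CRD is registered before retrying, you can check for it (an optional step):

kubectl get crd images.caching.internal.knative.dev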

Apply the serving manifest again.

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.4.0/serving.yaml

Check that the pods have started.

kubectl get pods --namespace knative-serving

The activator and autoscaler pods end up in a crash loop. We need to fix this.

NAME                          READY     STATUS             RESTARTS   AGE
activator-6f7d494f55-sdhcw    1/2       CrashLoopBackOff   3          1m
autoscaler-5cb4d56d69-xng46   1/2       CrashLoopBackOff   3          1m
controller-6d65444c78-wrnnc   1/1       Running            0          1m
webhook-55f88654fb-tndgw      1/1       Running            0          1m
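To see why the activator is crashing, you can inspect its container logs using the pod name from the output above (I am assuming the application container is named activator, running alongside the istio-proxy sidecar):

kubectl logs activator-6f7d494f55-sdhcw --namespace knative-serving --container activator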


The problem and its solution are described in the issues below. Although the details are not fully explained there, it appears to be a problem with Istio.

Cannot install Knative serving: https://github.com/knative/serving/issues/2878
Requests don’t make it through the activator on AKS: https://github.com/knative/serving/issues/3026
Internal Kubernetes API Calls Blocked by Istio: https://github.com/istio/istio/issues/8696

First get the cluster FQDN.

az aks show -n $CLUSTER_NAME -g $RESOURCE_GROUP -o table

Name             Location    ResourceGroup    KubernetesVersion    ProvisioningState    Fqdn
---------------  ----------  ---------------  -------------------  -------------------  -------------------------------------------------------------
knative-cluster  eastus      knative-group    1.11.8               Succeeded            knative-cl-knative-group-630e95-44db6d79.hcp.eastus.azmk8s.io
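If you would rather capture the FQDN in a shell variable than copy it by hand, a query like the following should work (the variable name is my own choice):

export K8S_FQDN=$(az aks show -n $CLUSTER_NAME -g $RESOURCE_GROUP --query fqdn -o tsv)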


Next, create the following manifest based on the FQDN, replacing the hostnames with the FQDN of your own cluster.

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: azmk8s-ext
spec:
  hosts:
  - "knative-cl-knative-group-630e95-44db6d79.hcp.eastus.azmk8s.io"
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tls-routing
spec:
  hosts:
  - knative-cl-knative-group-630e95-44db6d79.hcp.eastus.azmk8s.io
  tls:
  - match:
    - port: 443
      sniHosts:
      - knative-cl-knative-group-630e95-44db6d79.hcp.eastus.azmk8s.io
    route:
    - destination:
        host: knative-cl-knative-group-630e95-44db6d79.hcp.eastus.azmk8s.io

Once the Istio configuration is in place, check the pods again.

kubectl get pods --namespace knative-serving
NAME                          READY     STATUS    RESTARTS   AGE
activator-6f7d494f55-sdhcw    2/2       Running   8          16m
autoscaler-5cb4d56d69-xng46   2/2       Running   8          16m
controller-6d65444c78-wrnnc   1/1       Running   0          16m
webhook-55f88654fb-tndgw      1/1       Running   0          16m

Make sure all of them are Running.

Deploy build.

kubectl apply --filename https://github.com/knative/build/releases/download/v0.4.0/build.yaml

Check the pods.

kubectl get pods --namespace knative-build
NAME                                READY     STATUS    RESTARTS   AGE
build-controller-68dfb74954-vx4rb   1/1       Running   0          12s
build-webhook-866fd64885-dsmdn      1/1       Running   0          12s

Deploy eventing.

kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.4.0/release.yaml

Check the pods.

kubectl get pods --namespace knative-eventing
NAME                                            READY     STATUS    RESTARTS   AGE
eventing-controller-756d56fc7-t64lq             1/1       Running   0          35s
in-memory-channel-controller-79ccbb59c-87cnr    1/1       Running   0          22s
in-memory-channel-dispatcher-5c864b94f4-x5jgk   2/2       Running   1          20s
webhook-85f7f4fb6-tdk46                         1/1       Running   0          34s

Deploy eventing-sources.

kubectl apply --filename https://github.com/knative/eventing-sources/releases/download/v0.4.0/release.yaml

Check the pod

kubectl get pods --namespace knative-sources
NAME                   READY     STATUS    RESTARTS   AGE
controller-manager-0   1/1       Running   0          18m

Deploy monitoring.

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.4.0/monitoring.yaml

Check the pods.

kubectl get pods --namespace knative-monitoring
NAME                                  READY     STATUS    RESTARTS   AGE
elasticsearch-logging-0               1/1       Running   0          18m
elasticsearch-logging-1               1/1       Running   0          17m
grafana-754bc795bb-cm82c              1/1       Running   0          17m
kibana-logging-7f7b9698bc-pnbp9       1/1       Running   0          18m
kube-state-metrics-768dfff9c5-c4mf2   4/4       Running   0          17m
node-exporter-2snzs                   2/2       Running   0          17m
node-exporter-7tnjp                   2/2       Running   0          17m
node-exporter-95k29                   2/2       Running   0          17m
prometheus-system-0                   1/1       Running   0          17m
prometheus-system-1                   1/1       Running   0          17m

Deploy the clusterrole.

kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.4.0/third_party/config/build/clusterrole.yaml

That completes the installation.

Operation check


Check the operation following the documentation: https://www.knative.dev/docs/install/getting-started-knative-app/

Deploy the following service:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/knative-samples/helloworld-go 
            env:
              - name: TARGET
                value: "Go Sample v1"

Set the INGRESSGATEWAY variable. The gateway name depends on the installed Knative version, so the snippet below picks the right one.

INGRESSGATEWAY=knative-ingressgateway
if kubectl get configmap config-istio -n knative-serving &> /dev/null; then
    INGRESSGATEWAY=istio-ingressgateway
fi

Check the gateway service to find its external IP.

kubectl get svc $INGRESSGATEWAY --namespace istio-system
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                                                                                                                   AGE
istio-ingressgateway   LoadBalancer   10.0.101.139   104.41.153.79   80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:30458/TCP,8060:31092/TCP,853:30754/TCP,15030:30403/TCP,15031:30798/TCP   53m

Get the external IP address.

export IP_ADDRESS=$(kubectl get svc $INGRESSGATEWAY --namespace istio-system --output 'jsonpath={.status.loadBalancer.ingress[0].ip}')

Check the URL of the service.

kubectl get ksvc helloworld-go  --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME            DOMAIN
helloworld-go   helloworld-go.default.example.com

Confirm that “Hello Go Sample v1!” comes back.

curl -H "Host: helloworld-go.default.example.com" http://${IP_ADDRESS}
Hello Go Sample v1!

That completes the operation check.

Summary




Knative now works on Azure. Applications that use Knative will likely increase in the future. Also, if you use the Knative Lambda Runtime, you will be able to run Lambda-style workloads on Azure as well. I’m looking forward to it.

Original content (Japanese): http://level69.net/archives/26443