Exploring Azure's SSL Certificates: Our Journey with Let's Encrypt

by Social Discovery Group, April 21st, 2023

Too Long; Didn't Read

Struggling with SSL certificates? Pavel Shapurau, Lead DevOps engineer at Social Discovery Group, covers it all. In the article, he shares a customized solution configuration that utilizes Let's Encrypt certificates and the lessons learned along the way.

When deploying a web project, SSL certificate implementation is a common challenge that every engineer is likely to face – and I’m no exception.

Typically, startups opt for free certificates like those from Let's Encrypt. However, these come with limitations and inconveniences to consider, which are detailed on the certificate provider's website.

Let's take a closer look at the issues I've personally encountered with free certificates:

  • First and foremost, they require regular re-issuance and are only valid for a maximum of three months. 
  • In the case of Kubernetes, the certificates need to be stored and frequently regenerated within the platform. 
  • Wildcard certificates and their renewal present a number of difficulties.
  • Encryption protocols and algorithms also have their own peculiarities.

Having faced these issues on a regular basis, I've developed a customized solution configuration that utilizes Let's Encrypt certificates. In this article, I'll be sharing my findings and the lessons I've learned along the way.

Recently, I've been focusing on a specific technology stack and would like to discuss a Kubernetes cluster-based infrastructure solution within the context of an Azure cloud provider. While cert-manager is a popular solution in this realm, I prefer installing it through Helm for greater convenience.

So, without further ado, let's dive right in:

helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.crds.yaml
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.6.1
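Before creating any issuers, it's worth a quick sanity check that the deployment came up (a generic check against your cluster, nothing Azure-specific):

```shell
# All three cert-manager components should reach the Running state
kubectl get pods -n cert-manager

# The CRDs applied above should be registered
kubectl get crds | grep cert-manager.io
```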

Following that, we can create a ClusterIssuer using the following YAML file:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-cluster-issuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: p…[email protected]   #your e-mail
    privateKeySecretRef:
      name: letsencrypt-cluster-issuer
    solvers:
      - http01:
          ingress:
            class: nginx

Moving forward, there are two options for implementing the certificates:

  • Creating certificates by adding "kind: Certificate";
  • Managing certificates through your ingress.

Let’s explore both options.

In the first scenario, my YAML files looked like this:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myservice2
  namespace: test
spec:
  duration: 2160h
  renewBefore: 72h
  dnsNames:
    - myservice2.mydomain.org  # your resource
  secretName: myservice2-tls
  issuerRef:
    name: letsencrypt-cluster-issuer
    kind: ClusterIssuer

Note that for a given service, only the secretName (myservice2-tls) needs to be referenced in the TLS section of the ingress.
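For reference, the TLS section of an ingress referencing that secret looks roughly like this (the hostname is illustrative):

```yaml
spec:
  tls:
    - hosts:
        - myservice2.mydomain.org
      secretName: myservice2-tls  # matches the Certificate's secretName
```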

By the way, the YAML file contains some useful parameters, such as:

  • duration: 2160h – the validity period of the certificate, in hours;
  • renewBefore: 72h – how many hours before expiry cert-manager will start attempting to renew the certificate.

If you're more comfortable working with the console, let me provide you with a comprehensive view of the certificate as an example.

kubectl describe certificates <cert name> -n <namespace name>

So, what do we have in the end?

  • Not After – the expiration date of the certificate; 
  • Not Before – the creation date of the certificate (unless explicitly specified in the Certificate YAML Resource field);
  • Renewal Time – the timestamp, ahead of expiry, at which cert-manager will attempt to renew the certificate.

Personally, I've found managing Let's Encrypt certificates through Ingress to be more reliable and convenient, which is why I've been using it lately. With this approach, in addition to the secretName and hostname in the TLS section, you only need to specify annotations in the ingress YAML file.

annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-cluster-issuer"
    cert-manager.io/renew-before: 72h

And there you have it, the magic of it all! Certificates are now automatically re-issued, with a three-day buffer period before they expire in this example. It's worth noting that, in the case of Let's Encrypt, the default period is 90 days.
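Put together, a minimal ingress for this approach might look as follows (a sketch; the hostname and backend service name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservice2
  namespace: test
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-cluster-issuer"
    cert-manager.io/renew-before: 72h
spec:
  ingressClassName: nginx
  rules:
    - host: myservice2.mydomain.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myservice2  # placeholder backend service
                port:
                  number: 80
  tls:
    - hosts:
        - myservice2.mydomain.org
      secretName: myservice2-tls  # cert-manager creates and renews this secret
```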

However, due to the limitations of free certificates from Let's Encrypt, our team eventually contemplated the need for a comprehensive certificate that could safeguard not only our domain but also subdomains. As we continued developing our project on Azure, we found that the Azure Key Vault provided a convenient location to store such certificates. We've been using the akv2k8s utility within our Kubernetes cluster. If you're interested, I encourage you to learn more about it.

What is Azure Key Vault

Once you've obtained a certificate in Azure, the next step is to add it to the Azure Key Vault (AKV). While this process is relatively straightforward, verifying domain ownership can be a bit tricky. However, once all the confirmation steps are completed successfully, the certificate will appear in the key vault's "Secrets" section.

One of the major benefits of this approach is automatic certificate renewal. The certificate will be reissued and updated in AKV after one year, and it will automatically synchronize with the Secret in Kubernetes. 


In order for the Kubernetes cluster to utilize the acquired certificate, you'll need to grant it certain permissions and access rights. 

To do this, you'll first need to obtain the identityProfile.kubeletidentity.objectId of the cluster. You can do so by using the following command:

az aks show -g <RG> -n <AKS_name>

The resource group (RG) is the location where the cluster is stored, and AKS_name is the name of your cluster.

After obtaining the identityProfile.kubeletidentity.objectId, you need to copy it. Next, add the value to the command for granting secrets access permissions:

az keyvault set-policy --name <AKV_name> --object-id <objectId from the previous step> --secret-permissions get
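The two steps can be combined into a single snippet, letting the Azure CLI extract just the objectId (resource names are placeholders; assumes you are logged in with az login):

```shell
# Grab the kubelet identity's objectId and grant it read access to AKV secrets
OBJECT_ID=$(az aks show -g <RG> -n <AKS_name> \
  --query identityProfile.kubeletidentity.objectId -o tsv)

az keyvault set-policy --name <AKV_name> \
  --object-id "$OBJECT_ID" \
  --secret-permissions get
```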

Next, you can proceed with the installation of akv2k8s, which can be done via Helm or other preferred methods, as described in the installation guide.

Following the official documentation, you can then synchronize your Azure Key Vault certificate with a Secret in a particular Kubernetes namespace. Here’s my YAML file:

apiVersion: spv.no/v1
kind: AzureKeyVaultSecret
metadata:
  name: wildcard-cert # any name
  namespace: default
spec:
  vault:
    name: SandboxKeyVault # the name of your Key Vault in Azure
    object:
      name: name_object_id # the object name of this certificate in AKV
      type: secret
  output:
    secret:
      name: wildcard-cert # any name for the secret in your namespace
      type: kubernetes.io/tls
      chainOrder: ensureserverfirst # very important value!

Let me emphasize the significance of the last line, as it played a crucial role in resolving an issue I encountered. Initially, I was able to upload the certificate into Kubernetes successfully, but it did not function as intended. It took some time to diagnose the problem.

As it turned out, when exporting a PFX certificate from the Key Vault, the server certificate is sometimes positioned at the end of the chain instead of at the beginning where it should be. This can cause issues when used with parameters such as ingress-nginx, as the certificate fails to load and defaults back to its original value. However, by setting the chainOrder to ensureserverfirst, the server certificate is placed first in the chain.

Upon closer inspection of the certificate, I discovered that the chain was arranged in the following sequence:

  1. Intermediate
  2. Root
  3. Server
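You can check the order yourself by dumping the subject of every certificate in the synced secret (a sketch assuming kubectl, base64, and openssl are available; the secret name comes from the example above):

```shell
# Print subject/issuer of each certificate in tls.crt, in chain order
kubectl get secret wildcard-cert -n default -o jsonpath='{.data.tls\.crt}' \
  | base64 -d \
  | openssl crl2pkcs7 -nocrl -certfile /dev/stdin \
  | openssl pkcs7 -print_certs -noout
```

With chainOrder set to ensureserverfirst, the server certificate should be the first one printed.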

Having discussed the technical aspects and configurations, let's delve back into the peculiarities of Azure certificates.

Azure offers two options for ordering a certificate, both of which are provided by GoDaddy:

  • a certificate for a specific domain or subdomain;
  • a Wildcard certificate.

We opted for the latter, hoping it would protect all of our applications and services. However, there were a few nuances.

The Azure Wildcard certificate only protects first-level subdomains. For instance, if we have a domain named mydomain.com, the certificate will only cover first-level subdomains of the form *.mydomain.com.

Therefore, the certificate will work for resources like service1.mydomain.com, service2.mydomain.com, and service3.mydomain.com, but it will not cover service1.test.mydomain.com or mail.service1.mydomain.com.

What options do we have then?

  • Purchasing separate Wildcard certificates for all necessary subdomains;
  • Adding SAN (Subject Alternative Name) records to the certificate. 

The first option is unlikely to be practical, as the number of subdomains, particularly those at the second level, can be enormous. Thus, paying for a wildcard certificate for each subdomain (*.service1.mydomain.com, *.dev.mydomain.com, …) is not the most reasonable solution.

As for the second option, I had a long conversation with the Azure support team regarding this matter, going through all the stages of denial, frustration, and anger, only to realize in the end that SAN capability for certificates has not been implemented yet.

Right up until the end, I had hoped that such an issue would never occur on Azure. In contrast, their competitor, Amazon's AWS, offers certificates through AWS Certificate Manager (ACM) that support up to 10 subject alternative names per certificate, including wildcards, and you can request a quota increase from AWS to raise that limit further.

To wrap up, I'll share how you can utilize certificates with the Front Door service on Azure.

Understanding Azure Front Door (AFD)

For me, Azure Front Door (AFD) is a global and scalable gateway that leverages Microsoft's worldwide edge network to direct incoming traffic to the appropriate endpoints, which can be web applications or services. Operating at the HTTP/HTTPS layer (layer 7), Front Door routes client requests to an available application server from the pool. The application's server-side can be any Internet-accessible service, whether it is hosted inside or outside of Azure.


[Diagram from the official documentation: https://docs.microsoft.com/]

The Azure Front Door is a convenient tool that allows you to balance and proxy incoming traffic to distributed applications and services worldwide. It offers a range of features, including the ability to bind various rules by configuring the rule handler, joining policies, and firewall settings. I won't delve too deep into the specifics of the AFD service in this article but will rather focus on its peculiarities regarding certificates.

As you might expect, incoming traffic to the Azure Front Door can be either HTTP or HTTPS. If you choose HTTPS, you have three options: generate a certificate on the Azure Front Door service itself, upload your own certificate, or use a certificate synced from Azure Key Vault. To allow the Front Door service to access the Key Vault, you'll need to configure the necessary permissions.

I recommend using the last option and selecting the latest version of the certificate to avoid having to manually renew or regenerate it. By connecting the certificate from AKV, everything will stay up to date automatically.

This setup will give you the following result:


Here’s another peculiarity when directing traffic from Azure Front Door to AKS.

Handling http traffic isn't an issue, but there is a subtle detail to keep in mind when setting up a resource pool and specifying the external IP address of the AKS cluster. Make sure to leave the "server component node header" field blank to ensure that it is automatically populated with the values that were entered in the "IP or node name" field.



Suppose you have a domain wildcard certificate attached through AKV that is used both by the Front Door service and synced into the AKS cluster through akv2k8s. The frontend hostname (and the CNAME record in DNS) for all of your applications and services accessible through Front Door will then be the following:

  • *.mydomain.com –  with the "server component node header" field left blank;
  • Your external AKS IP address will have a default http redirect rule to /.

This will allow all services in the *.mydomain.com format to function properly. Once you have completed this configuration, you're all set.



In certain scenarios, redirecting traffic from Azure Front Door to AKS over HTTPS can be more advantageous. For Azure Front Door to work correctly in this mode, it is crucial to specify a DNS name for your AKS cluster in the backend pool settings, since SNI and health probes rely on it. Otherwise, the setup will not work.

In my case, no DNS name had been assigned to my AKS clusters; I only had services that previously worked directly but now had to function through Azure Front Door. To address this, I had to create a separate DNS name for the AKS cluster, configure DNS, and set up a separate service with a certificate attached to the ingress. Only then could I redirect HTTPS traffic to the AKS clusters and ensure it worked correctly for all available services.

It's important to consider security measures while setting up the connection permission for AKS. To ensure a secure connection, you can limit the permission to connect to AKS only from Azure Front Door IP addresses in the Network Security Group for AKS (as shown in the picture below).



In addition, you can set up the AKS ingress to accept only requests that carry your specific Front Door instance's ID in the X-Azure-FDID header.
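With ingress-nginx, one way to sketch such a check is a configuration-snippet annotation that rejects requests whose X-Azure-FDID header doesn't match your Front Door ID (the ID below is a placeholder; you can find yours in the Front Door overview page):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($http_x_azure_fdid != "<your-front-door-id>") {
        return 403;
      }
```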

Final note and a piece of advice

1. Azure does not provide comprehensive information about the features and drawbacks of their certificates. However, it is worth mentioning that they promptly refunded us for the purchased certificate.

2. During our development process, we continue to use Let's Encrypt. Although it has its limitations, it is not the worst choice available.

3. If your project requires multiple subdomains at different levels, you may want to consider "Wildcard with SAN" (multi-domain) certificates from third-party vendors. These certificates can be imported into Azure and utilized to their full potential.

4. When configured correctly, Azure Front Door is an excellent service. I highly recommend it.

Written by Pavel Shapurau, Lead DevOps engineer, Social Discovery Group