When configuring my homelab cluster, I wanted to use the kubelet credential provider to retrieve images from an Azure Container Registry (ACR). My first idea was to use workload identity federation, but after some trial and error and stumbling on image pull issues, I discovered that this approach is not yet supported for the kubelet credential provider and that a different, less secure approach is available for now. In this article, I’ll explain what the kubelet credential provider and ACR credential provider are and walk through the configuration process to show how to authenticate to ACR using this method. This solution is particularly useful for self-managed Kubernetes clusters that need automated access to private Azure container registries.
TLDR
Workload identity federation is currently not supported for the acr credential provider, but is in development. It is unclear if this feature will be limited to AKS or extended to self-managed clusters. As an alternative, you can use a service principal to authenticate and pull images from ACR in self-managed clusters, though this is a less secure approach.
Image pull flow #
So what happens when a workload is deployed that requires a container image:
- A request is sent to the API server, containing a deployment object definition with the name, image, deployment values, etc.
- The API server validates the request and stores the object definition in the etcd datastore.
- The controller manager creates a replica set, which then creates a pod.
- The scheduler assigns the pod to a node.
- The kubelet running on the node becomes responsible for managing the pod.
- The kubelet checks if the required image is already present on the node:
- If the image is present locally, it uses that image.
- If the image is not present locally, it pulls the image from the registry.
For public registries, the kubelet simply pulls the image without any authentication requirements. However, for private registries, additional authentication configuration is needed before the kubelet can successfully pull images.
The traditional way #
Traditionally, to retrieve images from my ACR, I would use image pull secrets stored in the Kubernetes API. These secrets contain registry credentials that are used to authenticate and pull images from ACR. The Kubernetes secret uses a special secret type, kubernetes.io/dockerconfigjson, with the key .dockerconfigjson.
There are several ways to configure imagePullSecrets:
- Pod level: The imagePullSecrets field references a Kubernetes secret that is added directly to the Pod or Deployment spec (a minimal example follows this list).
- Namespace level: The default service account in the namespace references a Kubernetes secret as imagePullSecret. All pods using the default service account (via spec.serviceAccountName) inherit the image pull secret.
- Service account level: A dedicated service account references a Kubernetes secret as imagePullSecret. All pods using that service account (via spec.serviceAccountName) inherit the image pull secret. This approach is preferred over modifying the default service account.
- Node level: Node-wide credentials can be configured as runtime configs on the nodes (typically /var/lib/kubelet/config.json or /root/.docker/config.json depending on the container runtime). These contain registry credentials so the kubelet can pull images without requiring pod-level secrets. Node-level approaches are less granular and riskier (anyone with node access can read the files), but reduce per-pod secret management overhead.
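As a minimal sketch of the pod-level approach (the secret name acr-pull-secret, the registry host, the credentials and the image below are placeholders), the pull secret and the Pod that references it could look like this:
apiVersion: v1
kind: Secret
metadata:
  name: acr-pull-secret          # hypothetical secret name
  namespace: default
type: kubernetes.io/dockerconfigjson
stringData:
  # Docker-style config with the registry credentials; normally created with
  # kubectl create secret docker-registry
  .dockerconfigjson: |
    {"auths":{"myregistry.azurecr.io":{"username":"<sp-app-id>","password":"<sp-password>","auth":"<base64 of username:password>"}}}
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  namespace: default
spec:
  imagePullSecrets:
    - name: acr-pull-secret      # pod-level reference to the pull secret
  containers:
    - name: myapp
      image: myregistry.azurecr.io/myapp:latest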
The main downside of this approach is that these secrets are often long-lived because they are difficult to rotate, which can lead to unauthorized access when a pull secret is compromised.
Kubelet
When a Pod specifies imagePullSecrets, the kubelet follows this process:
- The pod is scheduled on a node
- The kubelet on that node fetches the referenced secret from the API server
- It converts and places the Docker-style credentials where the node’s container runtime can access them.
- The container runtime uses those credentials through the Container Runtime Interface (CRI) to authenticate and pull the image.
When multiple credential sources exist (node docker config, imagePullSecrets, service account imagePullSecrets), the kubelet/CRI considers all sources. Typically, explicit imagePullSecrets for the Pod or service account take precedence within that namespace. For image names with a registry host, the credential entry matching that host is used.
This traditional approach was not my preferred option due to several disadvantages. This would mean:
- Creating service principal credentials and storing the id and password.
- Creating secrets in each namespace that needs to pull images from ACR.
- Defining image pull secrets in each deployment.
The downsides for image pull secrets are:
- Service principal credentials expire and need rotation, requiring new Kubernetes secrets and updating any ServiceAccounts/Pods that reference them.
- Duplicated credentials across the cluster increase risk and management overhead.
- Service principal credentials are long-lived compared to short-lived tokens.
- For multiple clusters or many namespaces, manual secret distribution and per-deployment configuration becomes error-prone.
- imagePullSecrets are not tied to workload identity, meaning all pods using the secret share the same registry identity, preventing least privilege access controls.
I wanted to avoid storing passwords in the cluster and use a simpler, more secure alternative. A better approach would be using workload identity to authenticate with ACR, but as mentioned earlier, this isn’t supported yet. Let’s first start with using a service principal with the kubelet credential provider for authentication.
Kubelet Credential Provider #
The kubelet credential provider is a Kubernetes feature that dynamically retrieves credentials for container image registries using exec plugins rather than relying on static credentials in image pull secrets. This feature has been GA (Generally Available) since Kubernetes v1.26.
The benefits of using the kubelet credential provider are:
- No need to store passwords in Kubernetes secrets or on disk.
- The credential provider handles authentication dynamically, so there is no need to manually rotate expired credentials.
- Instead of long-lived service principal passwords, the provider uses temporary tokens that expire quickly, reducing the impact if they’re compromised.
- No more creating secrets in every namespace or adding imagePullSecrets to every deployment - the authentication happens automatically at the node level.
- The provider makes API calls directly to the cloud provider to get credentials, which feels much more natural than managing static secrets.
- Reduced attack surface by eliminating stored credentials and using short-lived tokens.
Configuration Components #
The kubelet credential provider system consists of two main configuration components:
- Credential Provider Plugin
This is an executable binary that acts as a plugin and must be pre-installed on each node before kubelet starts. The binary is executed by kubelet to obtain registry credentials:
- Input: Receives CredentialProviderRequest via stdin
- Output: Returns CredentialProviderResponse via stdout
The kubelet and exec plugin binary communicate through stdio (stdin, stdout, stderr) by exchanging JSON-serialized, API-versioned request and response objects.
- CredentialProviderConfig
This YAML configuration file defines which provider to use for specific registries (for example *.azurecr.io for Azure Container Registry). The file path is specified to kubelet via the --image-credential-provider-config flag and must be present on each node:
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers: #List of credential provider helper plugins that will be enabled by the kubelet
  - name: acr-credential-provider #Name of the credential provider. It must match the name of the provider executable
    matchImages: #List of strings used to match against images in order to determine if this provider should be invoked
      - "*.azurecr.io"
    defaultCacheDuration: "10m" #Default duration the plugin will cache credentials in-memory
    apiVersion: credentialprovider.kubelet.k8s.io/v1 #Required input version of the exec CredentialProviderRequest
    args: #Arguments to pass to the command when executing it
      - /etc/kubernetes/azure.json
Kubelet Configuration #
The kubelet credential provider is enabled by configuring two parameters:
- --image-credential-provider-config: Path to the file containing the CredentialProviderConfig API
- --image-credential-provider-bin-dir: Directory where kubelet searches for plugin binaries
Credential provider flow #
- The kubelet matches the image registry against the configured providers based on the matchImages patterns
- The kubelet calls the plugin with the get-credentials argument and sends a JSON-encoded CredentialProviderRequest via stdin
- The plugin receives a CredentialProviderRequest:
{
  "apiVersion": "credentialprovider.kubelet.k8s.io/v1",
  "kind": "CredentialProviderRequest",
  "image": "myregistry.azurecr.io/myapp:latest"
}
- The plugin returns a CredentialProviderResponse with short lived credentials via stdout:
{
  "apiVersion": "credentialprovider.kubelet.k8s.io/v1",
  "kind": "CredentialProviderResponse",
  "cacheKeyType": "Registry",
  "cacheDuration": "12h0m0s",
  "auth": {
    "myregistry.azurecr.io": {
      "username": "00000000-0000-0000-0000-000000000000",
      "password": "password"
    }
  }
}
- The kubelet uses the returned credentials to authenticate with the container registry.
- Credentials are cached according to the provider’s
cacheDuration
andcacheKeyType
configuration to avoid unnecessary API calls.
For more detailed information about the kubelet credential provider, see the Kubernetes documentation .
ACR Credential Provider #
In older Kubernetes versions (prior to v1.20), the capability to dynamically fetch credentials was only available for specific cloud registries like ACR, ECR (Elastic Container Registry) and GCR (Google Container Registry). These used cloud provider specific SDKs compiled directly into the kubelet code (in-tree). The new plugin mechanism can be used in any cluster and works with arbitrary container registries, whether public, managed services or self-hosted registries (out-of-tree).
For authenticating with ACR, an out-of-tree credential plugin, the acr-credential-provider, is available. The source code can be found here.
The ACR credential provider integrates with Microsoft Entra, using service principals, managed identities or workload identities to authenticate with ACR. This allows for seamless authentication without storing static credentials (except for using service principals). The ACR credential provider supports advanced features like registry mirror mapping, where you can configure it to fetch credentials from one ACR when pulling images from another registry (like MCR - Microsoft Container Registry).
I’ll start with configuring the acr credential provider using a service principal, which isn’t ideal because this means storing a plain text client secret on the nodes, something I really wanted to avoid. After some research I found no better way to use the acr credential provider for self managed clusters at this time.
Using a managed identity is not possible from a cluster not running in Azure because a managed identity relies on the Azure Instance Metadata Service (IMDS) (http://169.254.169.254/metadata/identity/oauth2/token) which is only accessible from within Azure’s network.
A good alternative would be to use Azure Workload Identity federation, but more on that later.
Credential provider flow #
- Kubelet detects an image pull, for example: teknologieur1acr.azurecr.io/testimage:latest
- Kubelet matches the image against the matchImages patterns in the CredentialProviderConfig
- Kubelet executes the binary /var/lib/kubelet/credential-provider/azure-acr-credential-provider
- Kubelet passes arguments, including /etc/kubernetes/azure.json, to the binary
- ACR credential provider reads the /etc/kubernetes/azure.json file from the node’s filesystem
- Provider uses the Azure config to authenticate and get temporary ACR credentials
- Provider returns credentials to kubelet for image pull
- Kubelet uses the credentials to authenticate with ACR and pull the image
Configuration steps #
To configure the acr credential provider, you need to:
- Create a service principal
- Configure the /etc/kubernetes/azure.json file containing Azure authentication information.
- Install the ACR credential provider binary in a local directory accessible by the kubelet on every node.
- Configure the CredentialProviderConfig
- Set the kubelet configuration arguments:
  - --image-credential-provider-config: path to the credential provider plugin config file
  - --image-credential-provider-bin-dir: path to the directory where credential provider plugin binaries are located
Create a service principal #
The ACR credential provider will authenticate using a traditional Entra application registration (service principal).
Create a Service Principal in Entra
az ad sp create-for-rbac \
--name teknologi-eur1-demo-k8s-kubelet \
--role acrpull \
--scopes /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx/resourceGroups/teknologi-eur1-demo-k8s-rg/providers/Microsoft.ContainerRegistry/registries/teknologieur1acr
Save the values of appId, password, and tenant, which we will need later.
{
  "appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
  "displayName": "teknologi-eur1-demo-k8s-kubelet",
  "password": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
}
Configure the azure.json file #
The azure.json file contains Azure authentication and configuration information which the credential provider plugin uses for authentication.
Create /etc/kubernetes/azure.json
The settings below are enough to make use of the acr credential provider.
sudo tee /etc/kubernetes/azure.json > /dev/null <<EOF
{
  "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
  "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
  "AADClientID": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
  "AADClientSecret": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
EOF
# Secure the file - readable only by root
sudo chmod 600 /etc/kubernetes/azure.json
sudo chown root:root /etc/kubernetes/azure.json
Install the ACR credential provider binary #
# Variables
ACR_CREDENTIAL_PROVIDER_VERSION=v1.33.2
# Create folder for credential provider
sudo mkdir -p /var/lib/kubelet/credential-provider
# Download ACR credential provider based on architecture
if [ "$(uname -m)" = "x86_64" ]; then
sudo curl -Lo /var/lib/kubelet/credential-provider/azure-acr-credential-provider \
"https://github.com/kubernetes-sigs/cloud-provider-azure/releases/download/${ACR_CREDENTIAL_PROVIDER_VERSION}/azure-acr-credential-provider-linux-amd64"
elif [ "$(uname -m)" = "aarch64" ]; then
sudo curl -Lo /var/lib/kubelet/credential-provider/azure-acr-credential-provider \
"https://github.com/kubernetes-sigs/cloud-provider-azure/releases/download/${ACR_CREDENTIAL_PROVIDER_VERSION}/azure-acr-credential-provider-linux-arm64"
else
echo "Unsupported architecture: $(uname -m)"
exit 1
fi
# Make the binary executable
sudo chmod +x /var/lib/kubelet/credential-provider/azure-acr-credential-provider
# Verify the binary exists and is executable
ls -la /var/lib/kubelet/credential-provider/azure-acr-credential-provider
# Test the binary can be executed
sudo /var/lib/kubelet/credential-provider/azure-acr-credential-provider --version 2>/dev/null || echo "Binary is executable"
Configure the CredentialProviderConfig #
# Create file for credential provider config
sudo vi /var/lib/kubelet/credential-provider-config.yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: azure-acr-credential-provider
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    defaultCacheDuration: "12h"
    matchImages:
      - "teknologieur1acr.azurecr.io"
      - "teknologieur1acr.azurecr.io/*"
      - "*.azurecr.io"
    args:
      - /etc/kubernetes/azure.json
Important: The name in the CredentialProviderConfig must match the name of the credential provider executable, which is azure-acr-credential-provider.
Set the kubelet configuration arguments #
There are several ways to configure kubelet arguments; I’ll show the most common approaches:
Option 1: Using the kubelet environment file
The KUBELET_EXTRA_ARGS variable in the kubelet environment file can be used for overrides. By running the command systemctl cat kubelet you can see the environment file is:
EnvironmentFile=-/etc/default/kubelet
Configure the flags in /etc/default/kubelet:
vi /etc/default/kubelet
KUBELET_EXTRA_ARGS="--image-credential-provider-config=/var/lib/kubelet/credential-provider-config.yaml --image-credential-provider-bin-dir=/var/lib/kubelet/credential-provider/"
Restart the kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
sudo systemctl status kubelet
Option 2: Using kubeadm
With kubeadm, you can set extra args in the kubeadm config YAML file under nodeRegistration.kubeletExtraArgs:
nodeRegistration:
  kubeletExtraArgs:
    image-credential-provider-config: "/var/lib/kubelet/credential-provider-config.yaml"
    image-credential-provider-bin-dir: "/var/lib/kubelet/credential-provider/"
This requires a kubeadm upgrade or kubeadm init/join to take effect, and ensures the config is applied consistently during upgrades and node joins.
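For context, here is a minimal sketch of how this fragment could sit in a full kubeadm configuration file, assuming the v1beta3 kubeadm API (the endpoint, token and CA handling below are placeholders for illustration only):
# Sketch only: apiServerEndpoint and token are placeholders
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "10.0.0.10:6443"
    token: "abcdef.0123456789abcdef"
    unsafeSkipCAVerification: true   # illustration only; use caCertHashes in practice
nodeRegistration:
  kubeletExtraArgs:
    image-credential-provider-config: "/var/lib/kubelet/credential-provider-config.yaml"
    image-credential-provider-bin-dir: "/var/lib/kubelet/credential-provider/"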
Verify and test the credential provider #
Check if credential provider config is loaded:
sudo journalctl -u kubelet | grep -i "acr-credential"
sudo journalctl -u kubelet -n 100 | grep -i "credential.*provider"
If the kubelet service is not running, you can troubleshoot by looking at the kubelet logs:
sudo journalctl -u kubelet --since "2 minutes ago" | head -30
sudo journalctl -u kubelet -f
sudo journalctl -u kubelet -n 50
Now that the acr credential provider has been configured, it can be tested by sending a CredentialProviderRequest directly to the plugin. A CredentialProviderRequest is basically a JSON request that is sent to the acr credential provider plugin with the azure.json file as a parameter:
{
  "kind": "CredentialProviderRequest",
  "apiVersion": "credentialprovider.kubelet.k8s.io/v1",
  "image": "teknologieur1acr.azurecr.io/testimage:latest"
}
The provider request can be tested with the following command:
echo "{\"kind\":\"CredentialProviderRequest\",\"apiVersion\":\"credentialprovider.kubelet.k8s.io/v1\",\"image\":\"teknologieur1acr.azurecr.io/testimage:latest\"}" | sudo /var/lib/kubelet/credential-provider/azure-acr-credential-provider /etc/kubernetes/azure.json -v 4
The response should look something like this:
{
  "kind": "CredentialProviderResponse",
  "apiVersion": "credentialprovider.kubelet.k8s.io/v1",
  "cacheKeyType": "Registry",
  "cacheDuration": "5m0s",
  "auth": {
    "*.azurecr.io": {
      "username": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      "password": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    },
    "*.azurecr.*": {
      "username": "",
      "password": ""
    }
  }
}
Test with a Pod
Create a test pod that uses an image from the private ACR
apiVersion: v1
kind: Pod
metadata:
  name: acr-test
spec:
  containers:
    - name: test
      image: teknologieur1acr.azurecr.io/testimage:latest
While the kubelet credential provider improves security compared to static secrets, there are limitations in this setup. Any pod running on the node can potentially access the same credentials and retrieve images from the ACR. The lack of workload isolation increases security risks and does not align with the principles of least privilege or ephemeral authentication.
Workload Identity Federation with ACR Credential Provider #
My initial plan was to use the new KubeletServiceAccountTokenForCredentialProviders feature, which allows the kubelet to send a service account token, bound to the pod for which the image pull is being performed, to the credential provider plugin. This enables the plugin to exchange the token for credentials to access the image registry.
Service account token support for credential providers is a Kubernetes feature introduced as alpha in v1.33 and promoted to beta in v1.34. This enhancement allows credential providers to use pod-specific service account tokens to obtain registry credentials, which the kubelet can then use for image pulls. Instead of relying on long-lived secrets, credential providers can use service account tokens to request short-lived credentials tied to a specific pod’s identity:
- Workload specific authentication: Image pull credentials are scoped to a particular workload instead of an entire node or cluster.
- Workload identity integration: Perfect for Azure workload identity federation.
- Ephemeral credentials: Tokens are automatically rotated, eliminating the risks of long lived secrets.
- Seamless integration: Works with existing Kubernetes authentication mechanisms, aligning with cloud-native security best practices.
This approach would allow me to use an Azure user-assigned managed identity (with AcrPull rights on the ACR) that has a federated credential configured with a Kubernetes service account. By using Azure Workload Identity, a short-lived token would be generated that the kubelet could use to pull images from ACR.
This method is significantly better than the service principal approach because each pod uses its own service account token, and there is no need to store long-lived static credentials (azure.json) on the nodes. Workloads will pull images based on their own runtime identity, which allows for more granular access control.
After some testing I discovered that this isn’t possible yet with the ACR credential provider. This functionality is still under development, as seen in this GitHub PR . The ACR credential provider doesn’t support workload identity federation yet, however the feature is actively being worked on and should be available in future releases.
I still wanted to prepare my homelab for when workload identity support becomes available in the ACR credential provider.
To configure workload identity with the acr credential provider, the following steps are necessary:
- Configure Azure Workload Identity for the cluster. This allows Kubernetes service account tokens to be exchanged for short-lived Azure access tokens through OpenID Connect federation.
- Enable RBAC rights for the kubelet to access the service accounts.
- Install an ACR credential provider binary that supports service account tokens (which is still in development).
- Configure the CredentialProviderConfig for service account tokens.
- Enable the KubeletServiceAccountTokenForCredentialProviders feature gate on the kubelet.
- Set the kubelet configuration arguments.
- Create an Azure user-assigned managed identity and create a federation between the identity and the workload service account.
- Create a Kubernetes service account.
Enable RBAC rights for the kubelet #
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet-serviceaccount-access
rules:
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-serviceaccount-access-binding
subjects:
  - kind: User
    name: system:node:teknologik8s # Replace with your node name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kubelet-serviceaccount-access
  apiGroup: rbac.authorization.k8s.io
Configure the CredentialProviderConfig #
The tokenAttributes field contains information about the service account token that will be passed to the plugin, including the intended audience for the token and whether the plugin requires the pod to have a service account.
sudo vi /var/lib/kubelet/credential-provider-config.yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: azure-acr-credential-provider
    matchImages:
      - "*.azurecr.io"
    defaultCacheDuration: "10m"
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    tokenAttributes:
      serviceAccountTokenAudience: api://AzureADTokenExchange #The intended audience for the projected service account token
      requireServiceAccount: true #The plugin will only be invoked if the pod has a service account
      requiredServiceAccountAnnotationKeys: #The list of annotation keys that are required to be present in the service account
        - azure.workload.identity/acr-client-id
    args:
      - /etc/kubernetes/azure.json
Enable the feature gate #
Option 1: Direct kubelet config modification
First, locate the kubelet config file:
#Check kubelet systemd service to see which config file it uses
sudo systemctl cat kubelet | grep -E "(config|ExecStart)"
For Ubuntu, the kubelet config file is located at /var/lib/kubelet/config.yaml.
Add the feature gate to the kubelet config file:
#Backup the original config
sudo cp /var/lib/kubelet/config.yaml /var/lib/kubelet/config.yaml.backup
sudo vi /var/lib/kubelet/config.yaml
featureGates:
  KubeletServiceAccountTokenForCredentialProviders: true
Option 2: Using kubeadm
With kubeadm, you can set extra args in the kubeadm config YAML file under nodeRegistration.kubeletExtraArgs:
nodeRegistration:
  kubeletExtraArgs:
    feature-gates: "KubeletServiceAccountTokenForCredentialProviders=true"
    image-credential-provider-config: "/var/lib/kubelet/credential-provider-config.yaml"
    image-credential-provider-bin-dir: "/var/lib/kubelet/credential-provider/"
Restart kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
sudo systemctl status kubelet
Create an Azure user assigned managed identity #
az identity create \
--name teknologi-eur1-prd-k8s-kubelet-mi --resource-group teknologi-eur1-prd-k8s-rg
az identity federated-credential create \
--name "kubelet-federated-credential" \
--identity-name "teknologi-eur1-prd-k8s-kubelet-mi" \
--resource-group "teknologi-eur1-prd-k8s-rg" \
--issuer "https://teknologiprdk8soidcsa.blob.core.windows.net/`$web/" \
--subject "system:serviceaccount:teknologi-ns:workloadsa"
Create a Kubernetes service account #
apiVersion: v1
kind: ServiceAccount
metadata:
  name: workloadsa
  namespace: teknologi-ns # must match the namespace in the federated credential subject
  annotations:
    azure.workload.identity/acr-client-id: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    azure.workload.identity/acr-tenant-id: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  labels:
    azure.workload.identity/use: "true"
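Once workload identity support lands in the ACR credential provider, a workload would simply reference this service account and pull from ACR with its own identity. A minimal sketch of such a pod (the pod name is illustrative; the image matches the earlier test image):
apiVersion: v1
kind: Pod
metadata:
  name: acr-wi-test                  # hypothetical pod name
  namespace: teknologi-ns            # same namespace as the workloadsa service account
  labels:
    azure.workload.identity/use: "true"
spec:
  serviceAccountName: workloadsa     # the kubelet would project a token bound to this service account
  containers:
    - name: test
      image: teknologieur1acr.azurecr.io/testimage:latest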
Summary #
While workload identity federation for kubelet credential providers is the preferred way of secure image pulling, the ACR credential provider doesn’t support this feature yet. For now I’m limited to using service principals with the ACR credential provider. Once Azure releases the updated ACR credential provider with workload identity support, hopefully I will be able to test and implement this and can finally get rid of those hardcoded passwords for good!