Auto-Instrumentation in Azure Application Insights With AKS
Demo of auto-instrumentation with App Insights on AKS: This article walks through enabling monitoring for applications running on AKS without requiring any code changes.
Monitoring containerized applications in Kubernetes environments is essential for ensuring reliability and performance. Azure Monitor Application Insights provides powerful application performance monitoring capabilities that can be integrated seamlessly with Azure Kubernetes Service (AKS).
This article focuses on auto-instrumentation, which allows you to collect telemetry from your applications running in AKS without modifying your code. We'll explore a practical implementation using the monitoring-demo-azure repository as our guide.
What Is Auto-Instrumentation?
Auto-instrumentation is a feature that enables Application Insights to automatically collect telemetry, such as metrics, requests, and dependencies, from your applications. As described in Microsoft documentation, "Auto-instrumentation automatically injects the Azure Monitor OpenTelemetry Distro into your application pods to generate application monitoring telemetry" [1].
The key benefits include:
- No code changes required
- Consistent telemetry collection across services
- Enhanced visibility with Kubernetes-specific context
- Simplified monitoring setup
AKS auto-instrumentation is currently in preview (as of April 2025) and supports:
- Java
- Node.js
How Auto-Instrumentation Works
The auto-instrumentation process in AKS involves:
- Creating a custom resource of type Instrumentation in your Kubernetes cluster
- Defining in that resource which language platforms to instrument and where to send telemetry
- Letting AKS automatically inject the necessary components into application pods
- Collecting telemetry and sending it to your Application Insights resource
Demo Implementation Using monitoring-demo-azure
The monitoring-demo-azure repository provides a straightforward example of setting up auto-instrumentation in AKS. The repository contains a k8s directory with the essential files needed to demonstrate this capability.
Setting Up Your Environment
Before applying the example files, ensure you have:
- An AKS cluster running in Azure
- A workspace-based Application Insights resource (a creation sketch follows this list if you need one)
- Azure CLI version 2.60.0 or greater
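If you don't already have a workspace-based Application Insights resource, a minimal sketch of creating one and retrieving its connection string with the Azure CLI might look like this. It assumes the application-insights CLI extension; the resource name, location, and Log Analytics workspace ID are placeholders.
# Install the Application Insights CLI extension
az extension add --name application-insights
# Create a workspace-based Application Insights resource (placeholder names)
az monitor app-insights component create --app demo-appinsights --location eastus --resource-group <resource_group> --workspace <log_analytics_workspace_resource_id>
# Retrieve the connection string used later in auto.yaml
az monitor app-insights component show --app demo-appinsights --resource-group <resource_group> --query connectionString --output tsv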
Run the following commands to prepare your environment:
# Install the aks-preview extension
az extension add --name aks-preview
# Register the auto instrumentation feature
az feature register --namespace "Microsoft.ContainerService" --name "AzureMonitorAppMonitoringPreview"
# Check registration status
az feature show --namespace "Microsoft.ContainerService" --name "AzureMonitorAppMonitoringPreview"
# Refresh the registration
az provider register --namespace Microsoft.ContainerService
# Enable Application Monitoring on your cluster
az aks update --resource-group <resource_group> --name <cluster_name> --enable-azure-monitor-app-monitoring
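To confirm that Application Monitoring is enabled on the cluster, you can inspect its Azure Monitor profile. This is an assumption about where the setting surfaces; the exact property names may vary across preview API versions.
# Inspect the cluster's Azure Monitor profile
az aks show --resource-group <resource_group> --name <cluster_name> --query azureMonitorProfile --output json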
Key Files in the Demo Repository
The demo repository contains three main Kubernetes manifest files in the k8s directory:
1. namespace.yaml
Creates a dedicated namespace for the demonstration:
apiVersion: v1
kind: Namespace
metadata:
  name: demo-namespace
2. auto.yaml
This is the core file that configures auto-instrumentation:
apiVersion: monitor.azure.com/v1
kind: Instrumentation
metadata:
  name: default
  namespace: demo-namespace
spec:
  settings:
    autoInstrumentationPlatforms:
      - Java
      - NodeJs
  destination:
    applicationInsightsConnectionString: "InstrumentationKey=your-key;IngestionEndpoint=https://your-location.in.applicationinsights.azure.com/"
The key components of this configuration are:
- autoInstrumentationPlatforms: Specifies which languages to instrument (Java and Node.js in this case)
- destination: Defines where to send the telemetry (your Application Insights resource)
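Once this file is applied (see the deployment steps below), you can confirm the custom resource exists in the namespace. This sketch assumes the CRD registers the usual lowercase plural name for the Instrumentation kind.
# List Instrumentation resources in the demo namespace
kubectl get instrumentations -n demo-namespace
# Inspect the configured platforms and destination
kubectl describe instrumentation default -n demo-namespace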
3. The Deployment Manifests
The three services can be deployed using their deployment YAML files in the k8s folder. In this case, I used Automated Deployments to build the container images and deploy them into the AKS cluster.
Notice that these deployment files don't contain any explicit instrumentation configuration; the auto-instrumentation is handled entirely by the Instrumentation custom resource.
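For illustration, a minimal deployment manifest for one of the services might look like the sketch below. The name, labels, image, and port are placeholders rather than the actual values from the repository; the point is that nothing instrumentation-related appears in it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-service
  namespace: demo-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-service
  template:
    metadata:
      labels:
        app: sample-service
    spec:
      containers:
        - name: sample-service
          # Placeholder image; the real images are built and pushed by Automated Deployments
          image: <your-registry>/sample-service:latest
          ports:
            - containerPort: 8080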
Deploying the Demo
Deploy the demo resources in the following order:
# Apply the namespace first
kubectl apply -f namespace.yaml
# Apply the instrumentation configuration
kubectl apply -f auto.yaml
# Deploy the application (apply the service deployment manifests from the k8s folder)
kubectl apply -f <deployment-manifest>.yaml
# Optional: Restart any existing deployments to apply instrumentation
kubectl rollout restart deployment/<deployment-name> -n demo-namespace
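You can also check whether the restarted pods picked up the instrumentation. This is a hedged sketch: the exact names of the injected init containers and environment variables are preview implementation details, so look for anything referencing Azure Monitor or the Application Insights connection string.
# List pods in the demo namespace
kubectl get pods -n demo-namespace
# Inspect a pod for injected init containers and environment variables
kubectl describe pod <pod-name> -n demo-namespace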
Verifying Auto-Instrumentation
After deployment, you can verify that auto-instrumentation is working by:
- Generating some traffic to your application
- Navigating to your Application Insights resource in the Azure portal
- Looking for telemetry with Kubernetes-specific metadata (a sample traffic-and-query sketch follows this list)
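One quick way to generate traffic is to port-forward a service and issue a few requests; the service name and ports below are placeholders. Once data arrives, a simple query in the Application Insights Logs blade (classic table names) can confirm that requests carry Kubernetes context, which typically surfaces in fields such as cloud_RoleName and cloud_RoleInstance.
# Port-forward one of the services and send test requests (placeholder names/ports)
kubectl port-forward svc/<service-name> 8080:80 -n demo-namespace
curl http://localhost:8080/
Then, in the Logs blade, a query like this groups recent requests by role and instance:
requests
| where timestamp > ago(15m)
| summarize count() by cloud_RoleName, cloud_RoleInstance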
Key Visualizations in Application Insights
Once your application is sending telemetry, Application Insights provides several powerful visualizations:
Application Map
The Application Map shows the relationships between your services and their dependencies. For Kubernetes applications, this visualization displays how your microservices interact within the cluster and with external dependencies.
The map shows:
- Service relationships with connection lines
- Health status for each component
- Performance metrics like latency and call volumes
- Kubernetes-specific context (like pod names and namespaces)
Performance View
The Performance view breaks down response times and identifies bottlenecks in your application. For containerized applications, this helps pinpoint which services might be causing performance issues.
You can:
- See operation durations across services
- Identify slow dependencies
- Analyze performance by Kubernetes workload
- Correlate performance with deployment events
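As a hedged example, a Logs query along these lines (classic Application Insights table names) surfaces the slowest operations per service:
requests
| where timestamp > ago(1h)
| summarize avg(duration), percentile(duration, 95) by cloud_RoleName, name
| order by percentile_duration_95 desc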
Failures View
The Failures view aggregates exceptions and failed requests across your application. For Kubernetes deployments, this helps diagnose issues that might be related to the container environment.
The view shows:
- Failed operations grouped by type
- Exception patterns and trends
- Dependency failures
- Container-related issues (like resource constraints)
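Similarly, a sketch query using the classic table names can group failed requests and recurring exception types:
requests
| where success == false
| summarize failedCount = count() by cloud_RoleName, name, resultCode
| order by failedCount desc

exceptions
| where timestamp > ago(1h)
| summarize count() by type, cloud_RoleName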
Live Metrics Stream
Live Metrics Stream provides real-time monitoring with near-zero latency. This is particularly useful for:
- Monitoring deployments as they happen
- Troubleshooting production issues in real time
- Observing the impact of scaling operations
- Validating configuration changes
Conclusion
Auto-instrumentation in AKS with Application Insights provides a streamlined way to monitor containerized applications without modifying your code. The monitoring-demo-azure repository offers a minimal, practical example that demonstrates:
- How to configure auto-instrumentation in AKS
- The pattern for separating instrumentation configuration from application deployment
- The simplicity of adding monitoring to existing applications
By leveraging this approach, you can quickly add comprehensive monitoring to your Kubernetes applications and gain deeper insights into their performance and behavior.
References
[1] Azure Monitor Application Insights Documentation
[2] Auto-Instrumentation Overview
[3] GitHub: monitoring-demo-azure