Kubernetes Deployment Not Updating: Causes, Fixes & Insights

December 3, 2025
groundcover Team

Updating a Kubernetes deployment is usually a routine task. You change an image tag, apply the update, and expect the new version to roll out. However, it's common to find the Deployment unchanged even though everything appears correct, and when you update a Deployment's configuration and nothing happens, the frustration is immediate.

It's a fairly regular occurrence, and it forces even experienced engineers to waste time figuring out why the desired state isn't taking effect. Rollout failure is rarely a simple issue; you're usually dealing with several compounding problems, such as incorrect configuration, exhausted resource limits, or a complex logic error.

This article breaks down the most common reasons your Deployment is stuck and provides clear, actionable steps to diagnose and resolve the issue.

How Kubernetes Applies Deployment Updates

A Kubernetes Deployment acts as the high-level conductor for application management. Its purpose is to handle updates declaratively, abstracting away the lower-level work handled by resources like ReplicaSets. It's important to understand that a Deployment doesn't create Pods directly.

Instead, the Deployment spins up and manages one or more ReplicaSets under the hood. These ReplicaSets are responsible for the actual replication and lifecycle management of the application's Pods, ensuring the desired state is met during a rollout. The relationship is shown in Figure 1.

Figure 1: Relationship between Deployments, ReplicaSets, and Pods in Kubernetes

When you modify a Deployment's manifest and apply it, you are simply declaring a new desired state. The Deployment Controller constantly monitors the cluster, detects discrepancies between the old and new specifications, and initiates the process of reconciling the state.

Rollout Strategies

Kubernetes offers two fundamental strategies for applying an update, defined by the .spec.strategy.type field. While these two are the basic strategies managed natively by the Deployment resource, more advanced deployment strategies are common in production. A/B testing, Blue/Green, and Canary deployments often require specialized tooling, such as service meshes or separate deployment management tools to manage traffic flow between versions.

RollingUpdate (The Default)

The RollingUpdate strategy is the default and preferred method, designed for zero-downtime deployments. When a new Pod template is detected:

  • The Deployment Controller creates a new ReplicaSet reflecting the updated configuration.
  • The Deployment scales up the new ReplicaSet (creating new Pods) while simultaneously scaling down the old ReplicaSet (terminating old Pods).

This gradual, synchronized exchange continues until the new ReplicaSet reaches the desired replica count and the old one is scaled down to zero. Two parameters control the pacing (a manifest sketch follows this list):

  • maxSurge: The maximum number of Pods allowed above the desired replica count during the update.
  • maxUnavailable: The maximum number of Pods that can be unavailable during the update.
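For reference, here is a minimal, hypothetical Deployment fragment (the name, image, and values are illustrative, not prescriptive) that sets both parameters explicitly:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                 # hypothetical application name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1               # at most 1 Pod above the desired 4 during the update
      maxUnavailable: 1         # at most 1 of the desired Pods may be unavailable at any time
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: registry.example.com/web-app:v1.1   # illustrative image tag
          ports:
            - containerPort: 8080
```

With these settings, the rollout never runs more than five Pods in total and never drops below three ready Pods.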

Recreate

The Recreate strategy adopts an all-or-nothing approach. This method is necessary for applications that cannot tolerate having both old and new versions running simultaneously (e.g., due to database migration conflicts).

  1. It first scales the entire old ReplicaSet down to zero, tearing down all existing Pods and making the application entirely unavailable.
  2. Only once all old Pods are terminated does it scale up the new ReplicaSet to the desired number of replicas.

This strategy results in application downtime but guarantees version separation.
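In the manifest, the switch is a single field under the strategy block; a minimal illustrative fragment:

```yaml
spec:
  strategy:
    type: Recreate   # all old Pods are terminated before any new Pods are created
```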

Once the process is complete and all new Pods are running and healthy (a state checked using their Readiness Probes), the update is considered successful. The old ReplicaSet remains in the cluster, scaled to zero, preserving the revision history and enabling instant rollbacks.
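Because the previous ReplicaSet is retained, rolling back is a single command. A typical sequence (the deployment name web-app is a placeholder):

```bash
# List the recorded revisions of the Deployment
kubectl rollout history deployment web-app

# Roll back to the immediately previous revision
kubectl rollout undo deployment web-app

# Or roll back to a specific revision from the history
kubectl rollout undo deployment web-app --to-revision=2
```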

What "Deployment Not Updating" Means in Kubernetes

When a Deployment fails to update, it signals a mismatch between the actual state of the cluster and the desired state defined in the Deployment manifest. This failure generally manifests in one of two critical ways: either the rollout never starts, or it starts but never completes. If the rollout never starts, it typically means the Deployment Controller failed to recognize the change as significant: no new ReplicaSet is created, the revision history remains static, and all existing Pods continue to run the old application version.

Conversely, a much more insidious scenario occurs when the rollout starts but hangs: a new ReplicaSet is created and begins scaling up, but it gets stuck because the newly created Pods fail their Readiness Probes or crash loop, preventing the Deployment from safely terminating the older Pods due to availability constraints.

Common Reasons a Kubernetes Deployment Is Not Updating

Deployment failures generally fall into two categories: the update never starts, or it starts but hangs indefinitely. The following table breaks down the most frequent causes, categorized by the failure mode, and provides immediate diagnostic steps:

| Failure Category | Root Cause | Description | Quick Fix / Diagnostic Command |
| --- | --- | --- | --- |
| Rollout Never Starts | No Change to Pod Template | The change was applied to the Deployment but did not affect the Pod template (.spec.template), so the revision hash remained unchanged. | Check kubectl rollout history deployment <name>. If the revision number is the same, verify that the manifest was updated correctly. |
| Rollout Never Starts | imagePullPolicy Misuse | If the image tag is unchanged (e.g., app:latest), Kubernetes might not pull the new image unless imagePullPolicy is set to Always (or the image tag is changed). | Change the image tag (e.g., from v1.0 to v1.1) or explicitly set imagePullPolicy: Always. |
| Rollout Never Starts | Imperative Change Conflict | A previous imperative command (e.g., kubectl edit) might have left a field that prevents a new kubectl apply from taking effect. | Run kubectl describe deployment <name> and look for unexpected inline field overrides. |
| Rollout Hangs Mid-Update | Failing Readiness/Liveness Probes | The new Pods are unhealthy, causing them to fail their [Readiness Probes](https://www.groundcover.com/blog/kubernetes-readiness-probe). The maxUnavailable setting prevents the Deployment from terminating old, healthy Pods. | Check Pod events: kubectl describe pod <new-pod-name>. Check container logs: kubectl logs <new-pod-name>. |
| Rollout Hangs Mid-Update | Image Pull Failure | Kubernetes cannot pull the new container image (e.g., wrong name, missing private registry secret, or network issue). | Check Pod events for ErrImagePull or [ImagePullBackOff](https://www.groundcover.com/kubernetes-troubleshooting/imagepullbackoff). Verify the image name and imagePullSecrets. |
| Rollout Hangs Mid-Update | Insufficient Cluster Resources | The new Pods cannot be scheduled because the Node or namespace is resource-constrained (e.g., insufficient CPU, memory, or persistent volume claims). | Check Deployment conditions (kubectl describe deployment <name>) and Node status to identify resource pressure. |
| Rollout Hangs Mid-Update | Missing ConfigMap or Secret | The new Pod version depends on a ConfigMap or Secret that has not been created, so the Pod fails to start or crash-loops. | Check kubectl describe pod <new-pod-name> for mounting or volume errors. |

Why Traditional Monitoring Fails to Catch Deployment Issues

Most monitoring tools focus on the runtime health of a Kubernetes cluster: CPU, memory, container crashes, and network errors. While these metrics are helpful, they don't explain why a deployment isn't updating. In most cases, the cluster appears completely healthy even when the rollout has stalled.

Traditional monitoring systems are often unaware of the internal workings of Kubernetes and fail to catch deployment issues because of the following reasons:

  • Focus Is on Infrastructure, Not Orchestration State: Traditional tools monitor host-centric metrics (CPU spikes, disk I/O). A stalled deployment rollout is a logical state: it can be paused, waiting, or partially complete, and it won't generate abnormal CPU usage or crash events.
  • Blind to Internal Kubernetes Events: They are often unaware of the internal workings of the Deployment Controller. To detect a stalled rollout, visibility into the Deployment condition status, ReplicaSet changes, and ongoing rollout events is required, not just basic system health metrics.
  • Ignoring Configuration and Policy Failures: Deployment failures are frequently caused by state and configuration issues, such as an unchanged image tag, blocked rollouts due to PodDisruptionBudgets, or cluster policies that silently reject updates. These are not "errors" in the traditional sense, so they do not trigger standard performance alerts.
  • Too Slow for Ephemeral Events: The rapid and ephemeral nature of containers means that a Pod might crash and be replaced before a traditional monitoring system's polling interval (e.g., every 5 minutes) can even catch the error or log the event.

To effectively detect these situations, specialized visibility into the Kubernetes API events and the Deployment's rollout history is necessary.

Troubleshooting Deployment Update Failures

When a Deployment update fails to complete, a methodical approach is required to isolate the cause, moving from application-specific issues to configuration errors and finally to cluster-wide constraints. The goal is always to identify the specific event or condition that blocks the new ReplicaSet from achieving its desired state.

1. Container Image Issues

Problems with the container image are the most common reason a rollout hangs, often resulting in new Pods repeatedly crashing or failing to start.

  • Image Pull Failure (ErrImagePull): If Kubernetes cannot retrieve the image, the new Pods never start running; they sit in a waiting state (ErrImagePull / ImagePullBackOff) and the rollout hangs.
    • Check/Fix: Verify the image name and tag are spelled correctly. If using a private registry, ensure the necessary imagePullSecrets are correctly defined. Run kubectl describe pod <new-pod-name> to confirm the error event (see the command sequence after this list).
  • Application Crash: The image pulls successfully, but the application crashes immediately after starting.
    • Check/Fix: Use kubectl logs <new-pod-name> to inspect the application logs and diagnose the immediate failure within the container itself.
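A typical first-pass diagnostic sequence for a suspect Pod (the Pod name is a placeholder) might look like this:

```bash
# Surface events such as ErrImagePull, ImagePullBackOff, or probe failures
kubectl describe pod web-app-7d9f8c6b5-abcde

# Read the application output of the current container
kubectl logs web-app-7d9f8c6b5-abcde

# If the container has already restarted, read the previous attempt's logs
kubectl logs web-app-7d9f8c6b5-abcde --previous
```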

2. Rollout and Strategy Problems

These issues stem from conflicts between the Deployment's desired state, its settings, and the actual runtime behavior of the new Pods.

  • Failing Readiness Probes: The most common cause of a hanging rollout is a failing Readiness Probe. The Deployment Controller will not consider the new Pod available, and the rollout cannot progress without violating the maxUnavailable constraint.
    • Check/Fix: Review the probe configuration in the manifest (a sample probe follows this list). Use kubectl describe pod <new-pod-name> and look for explicit Readiness probe failed messages.
  • Unchanged Pod Template Hash: If you change a configuration object (like a ConfigMap) but nothing inside the Pod template changes, the Deployment Controller won't create a new ReplicaSet because the template hash is identical.
    • Check/Fix: The Pod template must change. Ensure you have modified a field within the .spec.template of the Deployment, for example an annotation or environment variable that carries a hash of the referenced ConfigMap.
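As an illustration, a conservative HTTP Readiness Probe might look like the following; the path, port, and timings are assumptions you would tune for your application:

```yaml
spec:
  template:
    spec:
      containers:
        - name: web
          image: registry.example.com/web-app:v1.1   # illustrative image
          readinessProbe:
            httpGet:
              path: /healthz         # hypothetical health endpoint
              port: 8080
            initialDelaySeconds: 10  # give the app time to start before probing
            periodSeconds: 5
            timeoutSeconds: 2
            failureThreshold: 3      # mark the Pod NotReady after 3 consecutive failures
```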

3. Configuration or YAML Errors

These errors prevent the rollout from beginning, because the new ReplicaSet cannot be created at all.

  • Typographical Errors in the Manifest: Simple typos in field names or values can cause the kubectl apply command to fail or result in the Deployment object being created with an invalid specification.
    • Fix: Always use an IDE or linter to validate YAML structure before applying. Proactively use the --dry-run flag (e.g., kubectl apply -f file.yaml --dry-run=client -o yaml) to preview the API server's interpretation of your manifest and catch structural errors before committing the change.
  • Immutable Field Changes: Attempting to change immutable fields (like the selector labels) will result in an explicit error message from the API server upon application.
    • Fix: Check the output of the kubectl apply command for the error and revert the immutable change; a server-side dry-run (shown below) surfaces these errors before anything is persisted.
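Both client-side and server-side dry-runs are cheap safety nets; the server-side variant validates the manifest against the live API server (including immutability and admission checks) without persisting anything:

```bash
# Client-side: render and schema-check the manifest locally
kubectl apply -f deployment.yaml --dry-run=client -o yaml

# Server-side: full API server validation, nothing is persisted
kubectl apply -f deployment.yaml --dry-run=server
```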

4. Environment, Cluster, or Node Issues

If the application is correct and the manifest is valid, the issue lies in resource availability or cluster constraints.

  • Resource Quotas Exceeded: The namespace may have a ResourceQuota limiting the total CPU, Memory, or Pod count, preventing the new ReplicaSet from scaling up.
    • Check/Fix: Run kubectl describe resourcequota in the target namespace.
  • Oversized Resource Requests: If the new Pod requests more CPU or memory than any single Node can provide, the Pod will remain Pending, unable to be scheduled.
    • Check/Fix: Use kubectl describe pod <new-pod-name> and look for Events referencing a scheduling failure or "Insufficient <resource>" messages (a sample resource spec follows this list).
  • Taints and Tolerations or Node Selectors: The new Pods may carry node selectors, or lack tolerations for Node taints, so that no available, healthy Node matches their scheduling constraints.
    • Check/Fix: Verify Node attributes (kubectl get nodes --show-labels) and compare them to the Pod's scheduling constraints.
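A minimal, illustrative container spec that declares requests/limits and tolerates a hypothetical Node taint (all values are placeholders, not recommendations):

```yaml
spec:
  template:
    spec:
      tolerations:
        - key: "dedicated"      # hypothetical taint applied to the target Nodes
          operator: "Equal"
          value: "web"
          effect: "NoSchedule"
      containers:
        - name: web
          image: registry.example.com/web-app:v1.1
          resources:
            requests:
              cpu: "250m"       # the scheduler uses requests to find a Node with capacity
              memory: "256Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
```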

Best Practices to Prevent Deployment Not Updating in Kubernetes

Preventing rollout failures is far more efficient than troubleshooting them. By adopting a few key practices in your development and deployment pipeline, you can dramatically improve the reliability of your Kubernetes releases.

| Area of Practice | Best Practice | Why It Prevents Failure | Command/Tool |
| --- | --- | --- | --- |
| Image Management | Use Unique, Immutable Tags | Avoids the ambiguity of the :latest tag and guarantees Kubernetes pulls the new image during an update, preventing image cache conflicts. | Use build tool output (e.g., Git SHA) for tagging. |
| Rollout Triggering | Hash ConfigMaps/Secrets into Template | Changes to external configuration objects (ConfigMaps/Secrets) don't trigger rollouts. Hashing the config data into a Pod annotation forces a new Pod template hash and triggers a new ReplicaSet. | Use tools like Kustomize or Helm for automated hashing. |
| Application Health | Implement Robust Readiness Probes | The Readiness Probe is the sole mechanism that dictates whether a new Pod is safe to accept traffic and whether the old Pod can be terminated. Accurate probes prevent hanging rollouts. | Tune initialDelaySeconds and timeoutSeconds carefully. |
| Configuration Safety | Mandate kubectl --dry-run | Catches fundamental YAML syntax errors, typos, and validation errors before the manifest is submitted to the API server, preventing rollout failures at the start. | kubectl apply -f file.yaml --dry-run=client -o yaml |
| Scheduling Reliability | Define Resource Requests/Limits | Requests guarantee that the new Pods are scheduled on Nodes with sufficient capacity, preventing Pods from getting stuck in a Pending state due to resource starvation. | Specify resources.requests and resources.limits for all containers. |
| Cluster Limits | Monitor Quotas | Ensures that scaling up the new ReplicaSet doesn't hit a namespace ResourceQuota or LimitRange, which would immediately halt the rollout. | Run kubectl describe resourcequota <name> |
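One common way to implement the "hash ConfigMaps/Secrets into the template" practice is a checksum annotation on the Pod template. With Helm, for example, this is often done roughly as follows (the template path is an assumption about your chart layout); any change to the ConfigMap re-renders the annotation, changes the Pod template hash, and triggers a new ReplicaSet:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Re-rendered whenever the ConfigMap template changes
        checksum/config: '{{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}'
```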

Implementing GitOps to Avoid Deployment Not Updating

GitOps is the definitive proactive strategy for achieving reliable, automated, and auditable Kubernetes deployments. It shifts the entire desired state of your cluster from internal cluster objects to a Git repository, which acts as the single source of truth.

Implementing a GitOps workflow (using tools like ArgoCD or Flux) fundamentally prevents common rollout failures by:

  • Eliminating Configuration Drift: The GitOps controller constantly monitors Git and reconciles the cluster state, preventing manual kubectl edit changes from causing conflicts.
  • Guaranteed Change Trigger: Deployment updates are initiated by a Git commit merge. This bypasses manual command errors and ensures the Pod template hash reliably changes, triggering a new ReplicaSet.
  • Built-in Rollback: Rolling back a failed release is as simple as reverting the Git commit, providing an instant and reliable recovery mechanism.
  • Auditability: Every cluster state is a versioned commit, providing a complete, verifiable history for easy debugging in the event of a deployment hang.

By treating configuration changes with the same rigor as application code, GitOps ensures the deployment process is reliable and repeatable.
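As a sketch of what this looks like in practice, an Argo CD Application that continuously syncs a Deployment's manifests from Git might resemble the following; the repository URL, path, and namespaces are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/k8s-manifests.git  # placeholder repository
    targetRevision: main
    path: apps/web-app
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift back to the state in Git
```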

How groundcover Delivers Instant Insights into Deployment Not Updating

Traditional monitoring often fails to provide a comprehensive view of the cluster events, as it cannot see inside the kernel or connect cluster events to application performance. Modern, Kubernetes-native observability platforms, such as groundcover, address this by using technologies like eBPF (extended Berkeley Packet Filter) to provide deep, instant visibility into the entire deployment lifecycle.

groundcover's approach fundamentally prevents common troubleshooting blind spots by:

  • Connecting Control Plane and Application Health: The platform actively links the Deployment's status conditions directly to the application's runtime data.
  • Real-time Rollout Condition Monitoring: It actively monitors the status of application deployments in Kubernetes, immediately surfacing high-priority alerts based on the deployment status.
  • Zero-Overhead Logs and Traces: Leveraging eBPF, the tool collects application logs, network metrics, and execution traces from the kernel without requiring sidecars, providing the instant data needed to diagnose application crashes or slow startup times that cause the Readiness Probe to fail.
  • Scheduling Failure Diagnosis: It correlates a Pod stuck in a Pending state with the underlying reason captured in Kubernetes scheduler events, quickly identifying issues such as insufficient resources (e.g., CPU/RAM) or Taints and Tolerations conflicts, which are often missed by basic manual checks.

In essence, groundcover provides the necessary single pane of glass that turns a vague "Deployment Not Updating" issue into immediate, actionable insight.

Conclusion

A failing Kubernetes Deployment update is often a layered problem, demanding a systematic approach that moves from checking image tags and Readiness Probes to cluster resources and resource quotas. While careful manual troubleshooting is essential for diagnosing immediate issues, the long-term solution lies in proactive pipeline changes.

By adopting best practices like unique image tagging, integrating a robust GitOps workflow, and leveraging Kubernetes-native observability tools such as groundcover, DevOps teams can efficiently resolve scenarios where a Deployment is not updating and ensure high-velocity application delivery.

FAQs

Why does my Kubernetes deployment not update even after applying changes?

The Deployment Controller only initiates a rollout when a change is made to the Pod template (.spec.template). If your Deployment is not updating, the most common reasons are:

  • No Change to the Pod Template Hash: The modification you applied (e.g., changing a field outside of .spec.template) did not result in a new revision.
  • Image Tag Reuse: You pushed the new image under a tag that already exists (e.g., :latest). Because the manifest itself doesn't change, no rollout is triggered, and Nodes whose imagePullPolicy resolves to IfNotPresent keep using the cached image even when Pods restart.
  • Failing Readiness Probe: The Deployment recognized the change and created a new ReplicaSet, but the new Pods are failing their Readiness Probes and cannot become "Available." This forces the rollout to halt indefinitely due to maxUnavailable constraints.
  • Resource Constraints: The new Pods are stuck in a Pending state because the cluster has insufficient resources (CPU, memory, or Pod count) to schedule them, or they are blocked by a ResourceQuota.

How do I confirm whether my new image version is actually being deployed?

You can use a sequence of kubectl commands to verify that the change was recognized and is progressing; a consolidated example follows the list:

  • Check Rollout Status: Use kubectl rollout status deployment <deployment-name>. This command provides real-time progress or indicates if the rollout is stuck.
  • Check Revision History: Run kubectl rollout history deployment <deployment-name>. If the revision number has not incremented, the Deployment Controller did not recognize the change, indicating an issue with your manifest or application method.
  • Inspect Pods: Use kubectl get pods -l app=<label> (using your deployment's selector label) and check the age of the Pods. New Pods should have a recent age. Then, confirm the image: kubectl describe pod <new-pod-name> and inspect the Container ID and Image field under the Pod details.
  • View Events: Always check the Deployment's event history for specific error messages: kubectl describe deployment <deployment-name>. Look for status conditions indicating scheduling failures or ReplicaSet scale issues.
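Put together, a quick verification pass might look like this (the deployment name, label, and Pod name are placeholders):

```bash
# Watch the rollout progress, or see where it is stuck
kubectl rollout status deployment web-app

# Confirm that a new revision was actually recorded
kubectl rollout history deployment web-app

# List the Pods behind the Deployment and check their age and status
kubectl get pods -l app=web-app

# Verify which image the new Pod is actually running
kubectl describe pod web-app-7d9f8c6b5-abcde | grep -i image
```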

How can groundcover help detect and fix deployments that are not updating?

Modern observability platforms like groundcover provide the necessary deep, kernel-level visibility (via eBPF) that traditional monitoring may miss, enabling faster resolution:

  • Real-time Readiness Diagnosis: It connects the Kubernetes control plane events (the Deployment's status condition) directly to the application performance data, instantly showing why the new Pod's Readiness Probe is failing (e.g., application startup error, network connection pool exhaustion).
  • Zero-Overhead Logs and Traces: By leveraging eBPF, the tool collects all necessary application logs and traces without requiring sidecars, giving immediate diagnostic information for application crashes or slow initializations that halt the rollout.
  • Automated Scheduling and Resource Insight: It bypasses manual event scraping by automatically correlating Pods stuck in a Pending state with the precise reason for the scheduling failure, such as Insufficient CPU/Memory or Taints and Tolerations conflicts.
  • Elimination of Monitoring Blind Spots: It provides continuous, low-level health monitoring that catches ephemeral events (Pod crashes) missed by slow polling intervals, providing data crucial for identifying instability in the new Pod version.
