
# Debugging a Distributed Rate Limiter on Kubernetes: A Survival Guide


Building a rate limiter sounds simple: track IPs in Redis and block them if they hit a limit. But when you move that logic into a Terraform-managed Kubernetes cluster using Kind and Nginx Ingress, you run into a gauntlet of “silent failures.”

Here is how we cleared those hurdles, along with the debugging guide we used to track down each bug.
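
At its core the limiter is just a counter with a TTL. Here is the same pattern run by hand in redis-cli (a sketch only: the key format mirrors the one used later in this guide, and the IP is a placeholder):

```bash
redis-cli INCR rate_limit:203.0.113.7        # -> 1 (first hit opens the window)
redis-cli EXPIRE rate_limit:203.0.113.7 60   # the window lasts 60 seconds
redis-cli INCR rate_limit:203.0.113.7        # -> 2; block once this exceeds the limit
redis-cli TTL rate_limit:203.0.113.7         # the key (and the count) expires with the window
```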


## Accessing Logs and Port-Forwarding with Kind

Once your Kind cluster is running and your resources are deployed via Terraform, you can debug and interact with your apps directly from your local machine.

### 1. Get Pod Logs

```bash
# List pods in the namespace
kubectl get pods -n dev
# Fetch logs from a specific pod
kubectl logs -n dev <pod-name>
# Stream logs continuously
kubectl logs -n dev -f <pod-name>
```

### 2. Port-Forward to Pods and Services

```bash
# Forward pod port directly
kubectl port-forward -n dev <pod-name> 8080:80
# Forward service port
kubectl port-forward -n dev svc/hello-app 8080:80
# Test locally
curl http://localhost:8080
```

### 3. Exec into a Running Pod

```bash
# Open an interactive shell into a pod
kubectl exec -it -n dev <pod-name> -- /bin/sh
# Test DNS resolution inside the cluster
nslookup hello-app.dev.svc.cluster.local
```

## Using Kind with Terraform for Local Kubernetes Debugging

Running Kubernetes locally is a lifesaver for development—but combining it with Terraform makes it even more powerful. You can spin up a fully configured cluster, deploy resources, and experiment safely on your machine before touching cloud environments.


Why this combination works:

- **Repeatable setups:** Terraform defines the cluster, networking, and app resources in code.
- **Safe experimentation:** Test infrastructure and deployments locally without affecting production.
- **CI/CD friendly:** The same Terraform manifests can be reused in cloud environments later.

Prerequisites:

- Docker installed and running
- Kind installed
- Terraform installed
- kubectl installed

### Step 1: Create a Kind Cluster

```bash
kind create cluster --name local-debug
kubectl cluster-info --context kind-local-debug
kubectl get nodes
```
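
Since this project fronts traffic with Nginx Ingress, you may want to create the cluster with the standard Kind ingress configuration instead. Here is a sketch, following the layout Kind's own ingress docs suggest, that maps ports 80/443 to localhost and applies the `ingress-ready=true` node label referenced later in this guide:

```bash
cat > kind-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF
kind create cluster --name local-debug --config kind-config.yaml
```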

### Step 2: Configure Terraform for Kubernetes
1. Initialize Terraform in your project directory.
2. Configure the Kubernetes provider to point to your Kind cluster:

```hcl
provider "kubernetes" {
  config_path = "${path.module}/kubeconfig.yaml"
}
```

Generate the kubeconfig for Kind:

```bash
kind get kubeconfig --name local-debug > kubeconfig.yaml
```

### Step 3: Deploy Resources with Terraform

Here’s an example of deploying a namespace and a simple deployment:

resource "kubernetes_namespace" "dev" {
metadata {
name = "dev"
}
}
resource "kubernetes_deployment" "app" {
metadata {
name = "hello-app"
namespace = kubernetes_namespace.dev.metadata[0].name
}
spec {
replicas = 1
selector {
match_labels = {
app = "hello"
}
}
template {
metadata {
labels = {
app = "hello"
}
}
spec {
container {
name = "hello"
image = "nginx:latest"
port {
container_port = 80
}
}
}
}
}
}
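
Note that later examples port-forward to `svc/hello-app`, which the snippet above does not create. A minimal matching Service might look like this (a sketch; the name and ports are assumed from the rest of the guide):

```hcl
resource "kubernetes_service" "app" {
  metadata {
    name      = "hello-app"
    namespace = kubernetes_namespace.dev.metadata[0].name
  }

  spec {
    # Route to pods carrying the deployment's app=hello label
    selector = {
      app = "hello"
    }

    port {
      port        = 80
      target_port = 80
    }
  }
}
```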

Apply the configuration:

```bash
terraform init
terraform apply
```

Verify deployment:

```bash
kubectl get all -n dev
```

### Step 4: Debug and Iterate

- Inspect pod logs:

  ```bash
  kubectl logs -n dev <pod-name>
  ```

- Port-forward to test services:

  ```bash
  kubectl port-forward svc/hello-app 8080:80 -n dev
  curl http://localhost:8080
  ```

- Make changes in Terraform and re-apply safely, previewing each change first (see the sketch below).
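
The safe edit loop is to preview every diff before applying it:

```bash
terraform plan    # show what would change, without touching the cluster
terraform apply   # apply after reviewing the plan
```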

### Step 5: Tear Down

```bash
terraform destroy
kind delete cluster --name local-debug
```

Using Kind with Terraform gives you a repeatable, safe, and fully configurable Kubernetes environment locally. This combination is perfect for testing manifests, debugging deployments, and experimenting before applying changes in production.

## 1. The Terraform “Bool vs String” Trap

**The Error:** `json: cannot unmarshal bool into Go struct field ... of type string`

**The Cause:** Terraform’s Helm provider coerces values like "true" into actual booleans, but Kubernetes labels and nodeSelector values must be strings.

**The Fix:** Force the type in your main.tf:

```hcl
set {
  name  = "controller.nodeSelector.ingress-ready"
  value = "true"
  type  = "string"
}
```
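
For context, here is a sketch of where that block lives. The chart and repository names are the standard ingress-nginx ones; adjust them to match your actual release:

```hcl
resource "helm_release" "ingress_nginx" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true

  # type = "string" stops the provider from coercing "true" into a boolean
  set {
    name  = "controller.nodeSelector.ingress-ready"
    value = "true"
    type  = "string"
  }
}
```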

## 2. The “Invisible” Rate Limiter (IP vs. Port)

**The Problem:** The app was running, but requests were never being limited.

**The Discovery:** `redis-cli MONITOR` showed keys like `rate_limit:10.244.0.1:45932`. Because every request uses a unique source port, Redis saw every request as a new user.

**The Fix:** Use `net.SplitHostPort` to strip the port:

```go
host, _, err := net.SplitHostPort(r.RemoteAddr)
if err != nil {
    host = r.RemoteAddr
}
key := "rate_limit:" + host
```

Below is the full application, which also ensures the `RateLimiter` middleware is actually wrapped around the `ServeMux` passed to `http.ListenAndServe`:

```go
package main

import (
    "log"
    "net"
    "net/http"
    "os"

    "github.com/redis/go-redis/v9"
)

// limitScript implements a fixed window: INCR the per-client counter,
// start the window's TTL on first use, and report whether the count
// exceeds the limit (ARGV[1]) within the window (ARGV[2] seconds).
var limitScript = redis.NewScript(`
local current = redis.call("INCR", KEYS[1])
if current == 1 then redis.call("EXPIRE", KEYS[1], ARGV[2]) end
if current > tonumber(ARGV[1]) then return 0 end
return 1
`)

// RateLimiter keys requests by client IP only, stripping the ephemeral
// source port so each client maps to a single counter.
func RateLimiter(rdb *redis.Client) func(http.Handler) http.Handler {
    return func(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            host, _, err := net.SplitHostPort(r.RemoteAddr)
            if err != nil {
                host = r.RemoteAddr // no port present; use the address as-is
            }
            key := "rate_limit:" + host
            allowed, err := limitScript.Run(r.Context(), rdb, []string{key}, 5, 60).Int()
            if err != nil {
                // Fail open: a Redis outage should not take the API down.
                log.Printf("rate limiter error: %v", err)
                next.ServeHTTP(w, r)
                return
            }
            if allowed == 0 {
                http.Error(w, "Too Many Requests", http.StatusTooManyRequests)
                return
            }
            next.ServeHTTP(w, r)
        })
    }
}

func main() {
    rdb := redis.NewClient(&redis.Options{Addr: os.Getenv("REDIS_ADDR")})
    mux := http.NewServeMux()
    mux.HandleFunc("/api/hello", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("Hello, world!"))
    })
    // Wrap the entire mux so every route passes through the limiter.
    log.Fatal(http.ListenAndServe(":8080", RateLimiter(rdb)(mux)))
}
```
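
Once the service is reachable (for example via the port-forward in checklist Step 3 below), a quick loop confirms the limiter fires, assuming the limit of 5 requests per 60 seconds baked into the script above:

```bash
# Requests 6 and 7 should come back as 429 Too Many Requests
for i in $(seq 1 7); do
  curl -s -o /dev/null -w "request $i -> %{http_code}\n" http://localhost:8080/api/hello
done
```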

## The Debugging Checklist

When the status is “Green” but the browser says “404” or “Connection Refused,” follow this checklist.

### Step 1: Redis Monitor (The Source of Truth)

See if your app is even attempting to talk to the database.

```bash
# Find your Redis pod and stream commands
REDIS_POD=$(kubectl get pods -l app=redis -o name)
kubectl exec -it $REDIS_POD -- redis-cli MONITOR
```
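
MONITOR echoes every command the server receives, so on a busy instance it helps to filter for the limiter's keys:

```bash
kubectl exec -it $REDIS_POD -- redis-cli MONITOR | grep rate_limit
```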

### Step 2: Nginx Ingress Logs (The Traffic Police)

Determine if the request is hitting a “black hole” before it reaches your app.

```bash
NGINX_POD=$(kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o name)
kubectl logs -n ingress-nginx $NGINX_POD --tail=20 -f
```

### Step 3: Direct Pod Port-Forward (Bypass the Mesh)

Is it a code bug or a network bug? Hit the pod directly.

```bash
APP_POD=$(kubectl get pods -l app=rate-limiter -o name | head -n 1)
kubectl port-forward $APP_POD 8080:8080
# In a new terminal:
curl -i http://localhost:8080/api/hello
```

### Step 4: In-Cluster DNS Resolution

Ensure your app can actually resolve the Redis service name.

```bash
kubectl exec -it $APP_POD -- nslookup redis-service
```
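
If the app image is too minimal to ship nslookup, `getent` (present in most glibc-based images) is a common fallback:

```bash
kubectl exec -it $APP_POD -- getent hosts redis-service
```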

## Symptom Quick Reference

| Symptom | Probable Cause |
| --- | --- |
| 404 Not Found | Path mismatch between Ingress rules and Go `http.HandleFunc`. |
| No Redis activity | Middleware is defined but not wrapped around the handler passed to `http.ListenAndServe`. |
| Limit never triggers | Redis keys include the unique source port (use `net.SplitHostPort`). |
| 503 Service Unavailable | Pods are failing health checks; check `kubectl describe pod`. |