Kubernetes Networking and Service Exposure

Kubernetes networking design is mostly about reducing blast radius while keeping service-to-service traffic reliable. A solid approach defines:

  1. How workloads discover each other.
  2. Which traffic is exposed externally.
  3. Which internal paths are explicitly denied.

Kubernetes assumes:

  1. Every Pod gets its own IP.
  2. Pods can talk to other Pods by default unless restricted.
  3. Services provide stable virtual IPs and DNS names over changing Pod sets.

This model simplifies routing but requires deliberate policy controls in production.

The built-in Service types, and when to use each:

  1. ClusterIP:
    • Default for internal traffic only.
    • Use for service-to-service communication.
  2. NodePort:
    • Exposes service on every node IP and static port.
    • Mostly for simple or legacy setups; avoid as primary internet edge.
  3. LoadBalancer:
    • Provisions a cloud load balancer (L4 by default; L7 features depend on the provider).
    • Preferred for external endpoints where cloud integration exists.
  4. ExternalName:
    • DNS alias to external service.
    • Useful for abstracting third-party endpoints.
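
A minimal ExternalName sketch, assuming a hypothetical vendor endpoint db.vendor.example.com that internal clients should reach through a stable in-cluster name:

apiVersion: v1
kind: Service
metadata:
  name: vendor-db                       # hypothetical alias
  namespace: app
spec:
  type: ExternalName
  externalName: db.vendor.example.com   # hypothetical external host

Note that ExternalName only returns a DNS CNAME; no proxying or load balancing happens inside the cluster.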

Rule of thumb:

  1. Internal microservices: ClusterIP.
  2. Internet-facing applications: Ingress + LoadBalancer.

Ingress gives HTTP/S routing by host/path and centralizes TLS handling.

Typical pattern:

  1. Public DNS points to Ingress controller LoadBalancer.
  2. Ingress routes to internal ClusterIP Services.
  3. Security controls (WAF, rate limits, auth) live at edge or gateway.

For advanced requirements (multi-tenant auth, quotas, traffic policy), add an API gateway or service mesh at the ingress boundary.

Without NetworkPolicy, every pod can reach every other pod by default, which is too permissive for most production clusters.

Baseline approach:

  1. Default deny ingress and egress at namespace level.
  2. Allow only required app, DNS, and observability traffic (see the sketch after this list).
  3. Separate sensitive workloads into dedicated namespaces.
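
A minimal sketch of that baseline for a namespace named app; the kube-dns labels are the common defaults, so verify them in your cluster:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: app
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
---
# Re-allow DNS so pods under default deny can still resolve names
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: app
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns    # common default label; verify in your cluster
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53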

Sample policy posture:

  1. Frontend can call API.
  2. API can call database.
  3. Everything else denied.
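
That posture sketched as a NetworkPolicy, assuming hypothetical pod labels app: frontend and app: api and an API port of 8080:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: api               # policy attaches to the API pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080         # hypothetical API port

The database gets an equivalent policy that admits only app: api; with default deny in place, everything else stays blocked.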

Cluster DNS name format:

  1. <service>.<namespace>.svc.cluster.local (for example, catalog-api.app.svc.cluster.local)

Operational notes:

  1. Use short service names inside same namespace.
  2. Track DNS latency and error rate; DNS issues can mimic app outages.
  3. Set reasonable connection/read timeouts to prevent retry storms.

Kubernetes Services load-balance across ready endpoints.

Important considerations:

  1. Readiness probes control endpoint inclusion (see the probe sketch after this list).
  2. Sticky sessions should be used only when required.
  3. For gRPC/HTTP2, confirm ingress supports expected connection behavior.
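
A readiness probe sketch for the catalog-api container used in the manifests below; the /healthz path is an assumed health endpoint:

# Pod template excerpt (not a complete manifest)
containers:
  - name: catalog-api
    image: registry.example.com/catalog-api:1.0.0   # hypothetical image
    ports:
      - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz            # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3         # pod leaves the endpoint list after 3 failures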

If clients need zero-downtime updates:

  1. Use rolling updates with readiness gates.
  2. Drain connections gracefully with preStop hooks.
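
A drain sketch for item 2; the 10-second sleep is an assumption, sized to cover endpoint and load-balancer propagation delay in your environment:

# Pod spec excerpt: keep serving briefly after the pod is marked terminating,
# so endpoint lists and load balancers update before the process exits
terminationGracePeriodSeconds: 30
containers:
  - name: catalog-api
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 10"]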

TLS strategy should be explicit:

  1. Terminate TLS at ingress for external traffic.
  2. Use mTLS for east-west traffic when regulatory or trust boundaries require it.
  3. Automate certificate issuance and rotation (for example, cert-manager).
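
A rotation sketch using cert-manager, assuming a ClusterIssuer named letsencrypt-prod already exists in the cluster; it maintains the catalog-tls secret consumed by the Ingress below:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: catalog-tls
  namespace: app
spec:
  secretName: catalog-tls       # referenced by the Ingress tls block
  dnsNames:
    - catalog.example.com
  issuerRef:
    name: letsencrypt-prod      # hypothetical ClusterIssuer
    kind: ClusterIssuer

cert-manager renews the certificate before expiry and updates the secret in place; pair it with alerting on renewal failures.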

Do not:

  1. Hardcode certificates in images.
  2. Share one wildcard cert for all unrelated domains.

North-south traffic:

  1. Client to cluster edge.
  2. Governed by ingress, WAF, and external auth controls.

East-west traffic:

  1. Service-to-service inside cluster or between clusters.
  2. Governed by NetworkPolicy, identity, and optionally service mesh.

Design objective:

  1. Keep external exposure minimal.
  2. Keep internal trust explicit, not assumed.

Common failure modes:

  1. Service has no endpoints due to a label mismatch (see the example after this list).
  2. Readiness probe failure removes all pods from load balancer.
  3. NetworkPolicy blocks DNS egress unintentionally.
  4. Ingress path/host mismatch returns 404/503.
  5. MTU or CNI issues cause intermittent packet loss.
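
Failure mode 1 in practice: a Service only gets endpoints when its selector matches the pod template labels exactly. A Deployment sketch that pairs with the catalog-api Service below (the image name is hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-api
  namespace: app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: catalog-api
  template:
    metadata:
      labels:
        app: catalog-api        # must match the Service selector exactly
    spec:
      containers:
        - name: catalog-api
          image: registry.example.com/catalog-api:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080                          # the Service targetPort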

Track:

  1. Ingress 4xx/5xx rates by route.
  2. Service request latency and error ratio.
  3. DNS lookup failures and latency.
  4. Dropped packets and connection resets.
  5. NetworkPolicy deny events if your CNI supports them.

Fast triage sequence:

  1. Check Pod readiness.
  2. Check Service selectors and endpoints.
  3. Check Ingress rules and controller logs.
  4. Check DNS resolution from source pod.
  5. Check NetworkPolicy allow rules.

Service:

apiVersion: v1
kind: Service
metadata:
  name: catalog-api
  namespace: app
spec:
  selector:
    app: catalog-api
  ports:
    - name: http
      port: 80
      targetPort: 8080
  type: ClusterIP

Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: catalog-ingress
  namespace: app
spec:
  ingressClassName: nginx
  tls:
    - hosts: ["catalog.example.com"]
      secretName: catalog-tls
  rules:
    - host: catalog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: catalog-api
                port:
                  number: 80

Default deny ingress policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app
spec:
  podSelector: {}
  policyTypes:
    - Ingress

Checklist:

  1. Use ClusterIP for internal services by default.
  2. Expose public traffic only through Ingress/Gateway.
  3. Enforce default-deny NetworkPolicy and explicit allows.
  4. Validate readiness probes before production rollout.
  5. Monitor DNS, ingress errors, and endpoint health continuously.
  6. Automate TLS certificate rotation and renewal alerts.

Reliable Kubernetes networking comes from explicit exposure boundaries and strict internal traffic policy. If you combine clean service design, controlled ingress, and policy-driven east-west rules, clusters stay both accessible and secure.