Microservices Mastery

Load Balancing: Nginx vs Ingress Controllers

Updated 5/4/2026

Load Balancing Mastery

Load balancing is the art of distributing incoming network traffic across multiple servers, so that no single server is overwhelmed and the system stays highly available. If one server dies, the load balancer simply shifts its traffic to the survivors.
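The distribution logic itself can be as simple as round-robin. A minimal Python sketch (server names are hypothetical; a real balancer would also health-check the pool and drop dead instances):

```python
from itertools import cycle

# Hypothetical pool of backend servers; a real load balancer would
# remove unhealthy instances from this rotation.
servers = ["app-1", "app-2", "app-3"]
rotation = cycle(servers)

def route_request() -> str:
    """Return the next server in round-robin order."""
    return next(rotation)

# Six requests are spread evenly: each server receives exactly two.
assignments = [route_request() for _ in range(6)]
print(assignments)  # ['app-1', 'app-2', 'app-3', 'app-1', 'app-2', 'app-3']
```

Round-robin is the default in most balancers; alternatives such as least-connections or IP-hash change only this selection step.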

1. Layer 4 vs Layer 7

  • Layer 4 (Transport): Balances based on IP address and port. It is extremely fast but doesn't understand the content (e.g., it cannot tell "this is a request for /images").
  • Layer 7 (Application): Understands the HTTP request. It can route traffic based on the URL path, cookies, or headers. This is what we use for microservices.
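The difference between the two layers can be sketched in a hypothetical Nginx config: the `stream` block forwards raw TCP (Layer 4, content-blind), while the `http` block inspects the request and routes by path (Layer 7). Upstream names, IPs, and ports below are illustrative.

```nginx
# Layer 4: forward raw TCP by IP/port -- fast, but content-blind.
stream {
    upstream tcp_backends {
        server 10.0.0.11:5432;
        server 10.0.0.12:5432;
    }
    server {
        listen 5432;
        proxy_pass tcp_backends;
    }
}

# Layer 7: inspect the HTTP request and route by URL path.
http {
    upstream images { server 10.0.0.21:8080; server 10.0.0.22:8080; }
    upstream api    { server 10.0.0.31:8080; }

    server {
        listen 80;
        location /images/ { proxy_pass http://images; }
        location /api/    { proxy_pass http://api; }
    }
}
```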

2. Nginx Ingress Controller

In Kubernetes, the **Ingress Controller** (often powered by Nginx) is the entry point for all web traffic. It handles SSL termination (HTTPS) and applies routing rules, e.g. sending toolliyo.com/api/users to the Identity service and toolliyo.com/api/catalog to the Catalog service.
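That routing rule set could look like the sketch below: an `Ingress` resource that terminates TLS and maps the two paths to two backend Services. The service names, secret name, and ports are assumptions for illustration, and it presumes the Nginx Ingress Controller is installed in the cluster.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: toolliyo-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts: [toolliyo.com]
      secretName: toolliyo-tls      # SSL termination happens at the controller
  rules:
    - host: toolliyo.com
      http:
        paths:
          - path: /api/users
            pathType: Prefix
            backend:
              service:
                name: identity-service   # hypothetical service name
                port: { number: 80 }
          - path: /api/catalog
            pathType: Prefix
            backend:
              service:
                name: catalog-service    # hypothetical service name
                port: { number: 80 }
```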

3. Interview Mastery

Q: "What is 'Sticky Sessions' (Session Affinity) and why should we avoid it in Microservices?"

Architect Answer: "Sticky Sessions ensure that a specific user always hits the same server instance. This is a workaround used to support legacy stateful applications. In microservices we avoid it because it breaks **scalability**: if one server gets stuck with 1,000 'heavy' users, we can't easily move them to a quieter server. We prefer **stateless services**, where ANY instance can handle ANY user, because the state is stored in a shared place like Redis."
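The stateless alternative described in the answer can be sketched in a few lines of Python. A plain dict stands in for Redis here so the sketch is self-contained; function and key names are illustrative (with redis-py you would issue `HSET`/`HGETALL` commands against a real server instead).

```python
# A plain dict stands in for a shared Redis instance.
shared_store: dict[str, dict] = {}

def handle_request(instance: str, user_id: str) -> dict:
    """Any instance can serve any user: state comes from the shared store."""
    session = shared_store.setdefault(f"session:{user_id}", {"cart": []})
    session["last_served_by"] = instance
    return session

# The same user hits two different instances; the cart survives because
# it never lived in any one instance's memory.
handle_request("instance-a", "u42")["cart"].append("book")
session = handle_request("instance-b", "u42")
print(session)  # {'cart': ['book'], 'last_served_by': 'instance-b'}
```

Because no instance holds private state, the balancer is free to use plain round-robin and autoscaling can add or remove instances at will.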

Microservices Mastery

1. Distributed Systems Fundamentals

  • Monolith vs Microservices: When to migrate?
  • The 12-Factor App Methodology for Cloud-Native Apps
  • Database Per Service: Handling distributed data consistency

2. Containerization & Orchestration

  • Docker Essentials: Building efficient .NET images
  • Docker Compose: Orchestrating a multi-service environment
  • Kubernetes Architecture: Pods, Services, and Deployments
  • K8s ConfigMaps & Secrets: Managing environment variables
  • Helm Charts: Packaging your microservices for K8s

3. Service Communication

  • Synchronous vs Asynchronous Communication: Pros and Cons
  • REST APIs in a Microservices World: Best Practices
  • Mastering gRPC: High-performance binary communication
  • API Gateways: Implementing Ocelot for single-entry access
  • BFF Pattern: Backend-for-Frontend (Mobile vs Web)

4. Event-Driven Architecture

  • Message Brokers: Introduction to RabbitMQ & Azure Service Bus
  • Pub/Sub Pattern: Implementing MassTransit for .NET
  • The Outbox Pattern: Ensuring 100% data consistency
  • Dead Letter Queues: Handling message failure gracefully
  • Distributed Transactions: The Saga Pattern (State Machines)

5. Resilience & Scalability

  • Distributed Caching with Redis: Optimizing global state
  • Service Discovery: IdentityServer4 & Consul
  • Load Balancing: Nginx vs Ingress Controllers
  • The Sidecar Pattern: Offloading cross-cutting concerns

6. Observability & Security

  • Distributed Logging with Serilog & SEQ
  • Distributed Tracing: OpenTelemetry & Jaeger
  • Health Checks: Monitoring system vitals in real-time
  • OAuth2 & OpenID Connect: Centralized Identity (AuthN/AuthZ)
  • Rate Limiting & Throttling: Protecting your services

7. Advanced Cloud Topics

  • Infrastructure as Code (IaC): Introduction to Terraform
  • CI/CD Pipelines for Microservices (GitHub Actions/Azure DevOps)
  • C# Architect Interview: Microservices & System Design Focus