Envoy Gateway Quick Start: Get Your First Request Through in 10 Minutes


The goal of this post is simple: skip the philosophy, just get Envoy Gateway running. If you successfully hit the example service, you've already done more than everyone still stuck on the docs homepage.

Prerequisites

You need at least one working Kubernetes cluster, such as:

  • Local: kind, minikube, k3d
  • Cloud: EKS, GKE, AKS
  • Home lab: K3s + MetalLB works too

You'll also need:

  • kubectl
  • helm
  • A cluster that can provision a LoadBalancer address, or one where you're okay using port-forward

⚠️ If your cluster doesn't have LoadBalancer support, the official docs recommend pairing it with MetalLB. A missing external address isn't the end of the world; you'll just test via port-forward instead.
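For kind or bare-metal clusters, a MetalLB L2 setup is roughly two resources. This is a minimal sketch, assuming MetalLB itself is already installed; the address range and pool name are placeholders, so substitute a range that's actually free on your local network:

```yaml
# Sketch of a MetalLB L2 configuration (MetalLB must already be installed).
# The address range below is a placeholder -- pick a free range on your LAN.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: quickstart-pool        # hypothetical name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # placeholder range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: quickstart-l2          # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
    - quickstart-pool
```

With this in place, Services of type LoadBalancer get an address from the pool, and the Gateway will pick one up automatically.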

Step 1: Install Envoy Gateway

Install the control plane:

helm install eg oci://docker.io/envoyproxy/gateway-helm \
  --version v1.7.0 \
  -n envoy-gateway-system \
  --create-namespace

Wait for it to be ready:

kubectl wait --timeout=5m -n envoy-gateway-system \
  deployment/envoy-gateway \
  --for=condition=Available

Once this completes, the Envoy Gateway controller is running in your cluster. It can now read Gateway API resources and generate the corresponding Envoy configuration.

Step 2: Apply the Official Quickstart Resources

The official repo provides a bundled example that includes:

  • GatewayClass
  • Gateway
  • HTTPRoute
  • A sample backend app

Apply it directly:

kubectl apply -f \
  https://github.com/envoyproxy/gateway/releases/download/v1.7.0/quickstart.yaml \
  -n default

Think of this as the official team laying out a complete demo scene for you. Get it running first, then take it apart later.

Step 3: Verify Traffic Is Flowing

Option A: Cluster Has a LoadBalancer

Get the Gateway address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Send a request:

curl --verbose \
  --header "Host: www.example.com" \
  http://$GATEWAY_HOST/get

If you get an HTTP 200, it means:

  • The Gateway is serving as a working ingress
  • The HTTPRoute has successfully attached
  • The backend service is being routed to correctly

Option B: No LoadBalancer — Use Port-Forward

Find the Envoy service:

export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system \
  --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg \
  -o jsonpath='{.items[0].metadata.name}')

Forward local port 8888 to Gateway port 80:

kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8888:80

In another terminal:

curl --verbose \
  --header "Host: www.example.com" \
  http://localhost:8888/get

This is the local testing path. It feels a bit like sneaking in through the back door, but if it validates the routing, that's fine — sometimes engineering is "make it work first, then make it elegant."

Where to Look When Things Break

The most common failure isn't Envoy Gateway itself being broken; usually one segment of the chain didn't connect:

  • gateway/eg has no address → check whether the cluster has a LoadBalancer implementation
  • curl returns 404 or 503 → check the HTTPRoute, Service, and backend Pod status
  • kubectl wait is stuck → check whether the Pods in envoy-gateway-system are running

Useful commands:

kubectl get pods -n envoy-gateway-system
kubectl get gatewayclass
kubectl get gateway -A
kubectl get httproute -A
kubectl get svc,pod -n default

To dig deeper into Gateway status:

kubectl get gateway/eg -o yaml

Key things to check:

  • status.addresses
  • Listener conditions
  • Whether routes were accepted
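For reference, a healthy Gateway status looks roughly like this. It's a trimmed sketch (the IP is a placeholder and exact condition messages vary by version), but the condition types and attachedRoutes count are what you're looking for:

```yaml
status:
  addresses:
    - type: IPAddress
      value: 203.0.113.10        # placeholder external IP
  conditions:
    - type: Accepted             # the controller claimed this Gateway
      status: "True"
    - type: Programmed           # config was translated and pushed to Envoy
      status: "True"
  listeners:
    - name: http
      attachedRoutes: 1          # 0 here means the HTTPRoute never attached
      conditions:
        - type: Programmed
          status: "True"
```

If attachedRoutes is 0, inspect the HTTPRoute's own status; its parentRefs conditions usually say exactly why the attachment was rejected.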

What You Actually Just Did

You only ran a few commands, but you completed an entire end-to-end chain:

  1. Installed Envoy Gateway
  2. Created a GatewayClass to tell the cluster which gateway controller to use
  3. Created a Gateway declaring the ingress port and protocol
  4. Created an HTTPRoute routing traffic to the backend service
  5. Envoy Gateway translated these declarations into config that Envoy Proxy can execute

This is exactly why Gateway API is worth using. You write Kubernetes resources — you don't maintain a pile of proxy config files by hand.

Next Step

Now that you have your first request flowing, the next post covers what each of these resources actually does — building the mental model: 👉 Core Concepts