The other day, I helped troubleshoot a tough networking error with an Angular app running ng serve in Kubernetes. In this post, I’ll talk about the error and how I fixed it.
Connection Refused
I first found that trying to access the application’s Route (an OpenShift equivalent of Ingress) resulted in a 504 Gateway Timeout. (Different ingress controllers may result in different 5xx responses). Typically, this means your Route or Ingress is selecting the wrong service, your service is not selecting any pods, or your service is not targeting the right port. However, in this case, everything there was correctly configured.
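Each of those causes can be ruled out quickly with a few kubectl commands. As a sketch (using the angular-example resource names from the manifests shown later in this post):

```shell
# Check that the Service selects at least one pod; the ENDPOINTS column
# should list pod IP:port pairs, not <none>.
kubectl get endpoints angular-example

# Compare the Service's selector and target port against the pods.
kubectl describe service angular-example

# Confirm the pod labels actually match that selector.
kubectl get pods --show-labels
```

If the endpoints list is empty, the selector is wrong or no pods are running; if endpoints exist but the port is wrong, describe output will show the mismatch.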
So, I decided to take a more basic approach: “Let’s just make sure I can curl localhost from inside the Pod and get a valid response”. Sure enough, everything there looked fine:
$ curl localhost:4200
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>TestApp</title>
<base href="/">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="icon" type="image/x-icon" href="favicon.ico">
<link rel="stylesheet" href="styles.css"></head>
<body>
<app-root></app-root>
<script src="runtime.js" defer></script><script src="polyfills.js" defer></script><script src="vendor.js" defer></script><script src="main.js" defer></script></body>
</html>
Next, I tried to call the service from another pod to make sure traffic would resolve through the service.
$ curl angular-example:4200
curl: (7) Failed to connect to angular-example port 4200: Connection refused
Huh? Connection refused isn’t what I expected, especially because the service selector and ports were correctly configured. Here’s a simplified version of our deployment and service, where you can see the selector and ports match those of the application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-example
spec:
  selector:
    matchLabels:
      app: angular-example
  replicas: 1
  template:
    metadata:
      labels:
        app: angular-example
    spec:
      containers:
      - name: main
        image: quay.io/adewey/angular-example:1.0.0
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: angular-example
spec:
  selector:
    app: angular-example
  ports:
  - name: http
    port: 4200
    targetPort: 4200
    protocol: TCP
  type: ClusterIP
So, if the Kubernetes side was OK, something had to be wrong with the application itself. Sure enough, I found that the user needed to add a couple of flags to ng serve.
Adding Host-Related Flags to Ng Serve
By default, ng serve binds only to localhost, which explains the symptoms: curl from inside the pod succeeded, but traffic arriving on the pod’s IP via the service was refused. To resolve this, we added two flags to ng serve:
- --host 0.0.0.0, which makes the dev server listen on all interfaces instead of only the loopback address
- --disable-host-check, which stops the dev server from rejecting requests whose Host header doesn’t match (such as requests arriving through the Route)
These flags can be added by configuring the start script in package.json:
"scripts": {
"ng": "ng",
"start": "ng serve --host 0.0.0.0 --disable-host-check",
"build": "ng build",
"test": "ng test",
"lint": "ng lint",
"e2e": "ng e2e"
}
Now, was this the most secure option? Definitely not. This was done in a development environment, and I don’t recommend using ng serve in production. ng serve is powered by webpack-dev-server, whose GitHub repo emphasizes that it is for development only. For a more hardened approach, I recommend building with ng build --prod and hosting the resulting dist folder in an Apache or NGINX server.
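As a sketch of that hardened approach, a multi-stage Dockerfile can build the production bundle and serve the static dist output with NGINX. The image tags and the dist subdirectory name here are assumptions, not from the original app:

```dockerfile
# Stage 1: build the production bundle (node image tag is an assumption)
FROM node:14 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npx ng build --prod

# Stage 2: serve the static files with NGINX on port 80
FROM nginx:alpine
# dist/<project-name> -- the project name here is hypothetical
COPY --from=build /app/dist/angular-example /usr/share/nginx/html
EXPOSE 80
```

With this image, the Kubernetes Service would target port 80 instead of 4200, and the host-related dev-server flags become unnecessary.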
Thanks For Reading
Assuming the Kubernetes configuration is correct, fixing ng serve networking issues is simple. I hope this helps if you come across the same issue yourself.