[Solved] Kubernetes deployment fails for iglu server


#1

Hi there,

I have been trying to follow the instructions on https://github.com/snowplow/snowplow-docker/tree/develop/iglu-server/example/kubernetes in order to deploy an iglu-server on minikube, but so far it has not resulted in a usable deployment.

After investigating, I believe the postgres-srv service provided by postgres.yaml is broken. I followed steps 1-3: steps 1-2 are just cloning the repo and cd-ing into the relevant directory; step 3 is:

$ kubectl create -f postgres.yaml

I tried to check whether the service described in the Snowplow-provided postgres.yaml is healthy:

$ kubectl get services postgres-srv
NAME           TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
postgres-srv   NodePort   10.100.33.133   <none>        5432:32011/TCP   16h

The pod seems to be ok

$ kubectl get pods 
NAME                        READY   STATUS    RESTARTS   AGE                                         
postgres-5c664f9d9c-6jkqr   1/1     Running   0          16h

Then if I try to connect to the service with the psql client

$ psql -h "10.100.33.133" -p 5432  
psql: could not connect to server: Connection timed out                                                             
	Is the server running on host "10.100.33.133" and accepting
	TCP/IP connections on port 5432?

Obviously, the rest of the procedure does not work (I tried) because the iglu server cannot connect to the postgres service.

I did not dare to create an issue on the tracker, because I am a beginner with Kubernetes and the mistake could be mine (for the record, since I was not 100% sure, I also tried to connect to the db using the other port, 32011, although my understanding was that this port should be internal to the pod).

If I can’t connect to the db, I can’t insert the super API key and therefore I can’t play with the service. I am a bit stuck here…

Am I doing anything wrong?

Cheers,
Christophe-Marie


#2

Hi Christophe-Marie,
When you do "psql -h 10.100.33.133 -p 5432", the address 10.100.33.133 is the cluster IP of the postgres-srv service, which is only routable from inside the cluster.
However, services exposed through NodePort (and this is how both postgres.yaml and iglu-server.yaml expose them) are reachable at the IP of the Kubernetes nodes, on the assigned node port. So for the example, run kubectl describe node and take one of the node IPs as the Postgres host in the application.conf file.
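A couple of ways to look these values up (a sketch; the jsonpath expressions assume a reasonably recent kubectl and the service name from postgres.yaml):

```shell
# List nodes together with their internal/external IPs
kubectl get nodes -o wide

# Extract just the internal IP of the first node
kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'

# Read the node port that was assigned to the Postgres service
kubectl get svc postgres-srv -o jsonpath='{.spec.ports[0].nodePort}'
```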

I admit this is not described very well in the readme file - I will add this asap.

In a production-grade scenario you would use a load balancer as ingress, created as a Kubernetes resource on the cluster, or an external load balancer (such as NGINX).

However, please try the above and let me know if there are any questions left.

Best regards,
Dirk Rejahl


#3

Ok, now I understand better.

For minikube users: I obtained the relevant information using

$ minikube service list
|-------------|--------------|-----------------------------|
|  NAMESPACE  |     NAME     |             URL             |
|-------------|--------------|-----------------------------|
| default     | kubernetes   | No node port                |
| default     | postgres-srv | http://192.168.99.100:32011 |
| kube-system | kube-dns     | No node port                |
|-------------|--------------|-----------------------------|
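On minikube, the same information is also available per service (assuming the service name from postgres.yaml):

```shell
# Print only the URL (node IP plus node port) of the Postgres service
minikube service postgres-srv --url

# The node IP alone
minikube ip
```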

Now I can connect to the service:

$ psql -h "192.168.99.100" -p 32011 -d postgresdb -U test_user -c "SELECT * FROM pg_catalog.pg_tables LIMIT 3;"
Password for user test_user: 
 schemaname |  tablename   | tableowner | tablespace | hasindexes | hasrules | hastriggers | rowsecurity 
------------+--------------+------------+------------+------------+----------+-------------+-------------
 pg_catalog | pg_statistic | test_user  |            | t          | f        | f           | f
 pg_catalog | pg_type      | test_user  |            | t          | f        | f           | f
 pg_catalog | pg_authid    | test_user  | pg_global  | t          | f        | f           | f
(3 rows)

I will continue the rest of the procedure with “192.168.99.100” and 32011 as host and port then!


#4

Great news!
Ping me if you have any questions.
Cheers,
Dirk


#5

Well, I actually do :slight_smile:

After running all the steps described in the procedure, I am hitting the section “Insert the super API key”. Unfortunately there is apparently no database named igludb.

$ psql -h "192.168.99.100" -p 32011 -d igludb -U test_user -c "insert into apikeys(uid, vendor_prefix, permission, createdat) values ('1d9c7e70-012b-11e8-ba89-0ed5f89f718b', '*', 'super', current_timestamp);"
Password for user test_user: 
psql: FATAL:  database "igludb" does not exist

Is it normal?


#6

Hi Christophe-Marie,
Is the iglu service up and running?
The igludb (including the db tables) is created when the iglu server is started for the first time.

I.e. you need to deploy the iglu service (step 6) before seeding the super API key.
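The ordering can be sketched like this (the deployment name and the connection details are assumptions based on this thread):

```shell
# 1. Deploy the Iglu server; on first start it should create igludb and its tables
kubectl create -f iglu-server.yaml

# 2. Wait until the deployment is available (name assumed to be iglu-server)
kubectl rollout status deployment/iglu-server

# 3. Only then seed the super API key
psql -h "192.168.99.100" -p 32011 -d igludb -U test_user \
  -c "insert into apikeys(uid, vendor_prefix, permission, createdat) values ('1d9c7e70-012b-11e8-ba89-0ed5f89f718b', '*', 'super', current_timestamp);"
```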

Hope that helps.

Cheers,
Dirk


#7

Yes, I have been running all the steps up to step 6. At step 4, I used 10.100.33.133 and 5432 in lines 29 and 30 of application.conf. (By the way, there is a typo in step 6: the option is -f, not f.)

Unfortunately, the pod keeps restarting…

$ kubectl get pods    
NAME                          READY   STATUS             RESTARTS   AGE
iglu-server-8bd584869-7tswd   0/1     CrashLoopBackOff   32         142m
postgres-5c664f9d9c-6jkqr     1/1     Running            0          21h

#8

Looks like the pod is not even coming up.

Try to see if there is anything helpful in the logs:

kubectl logs iglu-server-8bd584869-7tswd
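If the container keeps crashing, these standard kubectl variants can also help:

```shell
# Logs of the previous (crashed) container instance
kubectl logs iglu-server-8bd584869-7tswd --previous

# Events, restart count and last state of the pod
kubectl describe pod iglu-server-8bd584869-7tswd
```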


#9
$ kubectl logs iglu-server-8bd584869-7tswd
[DEBUG] [03/14/2019 14:47:10.080] [main] [EventStream] StandardOutLogger started
[iglu-server-akka.actor.default-dispatcher-2] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
[DEBUG] [03/14/2019 14:47:10.171] [main] [EventStream(akka://iglu-server)] logger log1-Slf4jLogger started
[DEBUG] [03/14/2019 14:47:10.172] [main] [EventStream(akka://iglu-server)] Default Loggers started
There is a problem with database initialization: FATAL: database "igludb" does not exist Check your credentials.

Are you sure the database is automatically created?


#10

Hi Christophe-Marie,

Actually, there was a mistake in postgres.yaml: the name of the db is specified as "postgresdb", but obviously it should be "igludb".

I am very sorry for this :frowning:

However, the easiest way for you to fix this is to create the db manually with psql (if you simply rerun the deployment, the existing Postgres data is reused and the database will not be re-initialized, so the manual fix is the quickest way out of the mess):

create database igludb owner test_user;
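Non-interactively, that statement can be run in one shot against the existing default database (host and port taken from the minikube example earlier in the thread; adjust them to your setup):

```shell
# Connect to the existing postgresdb and create the missing igludb
psql -h "192.168.99.100" -p 32011 -d postgresdb -U test_user \
  -c "create database igludb owner test_user;"
```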

Then delete and re-create the iglu server:

kubectl delete -f iglu-server.yaml
kubectl create -f iglu-server.yaml

Afterwards both services should be up and running:
kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
iglu-srv       NodePort    10.152.183.45   <none>        8080:31360/TCP   5m58s
kubernetes     ClusterIP   10.152.183.1    <none>        443/TCP          11m
postgres-srv   NodePort    10.152.183.23   <none>        5432:31040/TCP   11m

Then insert the super API key: (make sure you are referring to the Postgres port)
psql -h "10.0.2.15" -p 31040 -d igludb -U test_user -c "insert into apikeys(uid, vendor_prefix, permission, createdat) values ('1d9c7e70-012b-11e8-ba89-0ed5f89f718b', '*', 'super', current_timestamp);"

Then create your first vendor key pair: (make sure you are referring to the Iglu port)
curl -X POST "http://10.0.2.15:31360/api/auth/keygen?vendor_prefix=dg" -H 'apikey: 1d9c7e70-012b-11e8-ba89-0ed5f89f718b'

Response like:
{
  "read" : "b486345c-2c47-4452-8129-83540a7b2e74",
  "write" : "5ab750ef-e1a3-4fab-886e-13d561274f3b"
}

That’s it…

I apologise again for the hassle - I will create a pull request to update the repo asap!

Cheers,
Dirk


#11

Hi drejahl,

I got it working! Thank you very much for your support.