Include database status on the /health endpoint or a new endpoint like /db-health #6588
Comments
It sounds reasonable to me. Are you willing to tackle this one?
I think it's a good PR, but this is definitely a breaking change and should be noted clearly in the release notes. Currently, calling …
New info I learned: for k8s, there is a liveness probe and a readiness probe. The liveness probe should stay the current /parse/health; it only indicates whether the container is responsive, and if it fails, k8s will restart the container. A readiness probe, however, should include whether there is a proper database connection, because only then will k8s send traffic to the pod. So we should have a separate URL like /parse/db-health
Good point. Is there a scenario in which the DB is not ready immediately and becomes ready after some time? Does Parse Server support a DB connection that is not ready on start-up? What do you think about adding a general …
Thank you @mtrezza. I'll try to create a ready endpoint starting with just the DB, and later we can add more. I plan to just run a query on the _User collection with count=0. I think it will work, but let me know if there is a better way.
It depends on how you define "ready" for the DB, which itself has status endpoints. For example:

- The driver's connection status: https://mongodb.github.io/node-mongodb-native/3.5/api/MongoClient.html#isConnected
- The server status, which may be a good middle ground for your case: https://mongodb.github.io/node-mongodb-native/3.5/api/Admin.html#serverStatus
- The replica set status, where you determine the individual node status: https://mongodb.github.io/node-mongodb-native/3.5/api/Admin.html#replSetGetStatus

Just because you can execute a query doesn't mean the replica set is fully up and running and ready for what you consider your modus operandi. Maybe a separate PR in the future, if someone wants to get into these details.
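The levels above differ in what they inspect. As an illustration of the replica-set level, here is a sketch of a readiness policy applied to a document shaped like the output of MongoDB's replSetGetStatus command (the field names `members`, `stateStr`, and `health` match that command's documented output; the policy itself and the function name are made up for this example):

```javascript
// Hypothetical readiness policy over a replSetGetStatus-style document.
// Example policy: a reachable PRIMARY plus a majority of healthy members.
function replicaSetReady(status) {
  if (!status || !Array.isArray(status.members)) return false;
  // `health` is 1 for reachable members, 0 otherwise.
  const healthy = status.members.filter((m) => m.health === 1);
  const hasPrimary = healthy.some((m) => m.stateStr === 'PRIMARY');
  return hasPrimary && healthy.length > status.members.length / 2;
}
```

A readiness endpoint could run replSetGetStatus through the driver and apply a policy like this, but as noted above, what counts as "ready" is a deployment-specific decision.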
I'd do a query in the schema table. |
I did some research and came across two concepts for these endpoints. Maybe they can give you some inspiration.

Option 1: https://quarkus.io/guides/microprofile-health
Option 2: `GET /-/health`, `GET /-/readiness`, `GET /-/liveness`, `GET /-/metrics` (https://www.npmjs.com/package/@ozawa/express-health-check-middleware)
I believe option 1 is good enough for now, but option 2 would be really great. I have no idea how we could ensure all those states in option 2, though.
I think we should decide on a long-term option now, because if we want to change this in a few months, we may have a breaking change and things get more complex.
What is more common in 3rd party systems, to ensure compatibility and practicability for this endpoint?
Looking around my k8s cluster, this is what Prometheus uses:

and this is what Ambassador uses:

So IMHO focusing on ready vs. live, where the live probe avoids accidental restarts and the ready probe serves as a signal to start routing traffic, sounds better than focusing on overall health vs. DB health. This is a great resource for where I think we should aim: https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes
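With a dedicated DB endpoint, the two probes could then be configured separately in the pod spec. A sketch, where path names, timings, and the port are illustrative (1337 is Parse Server's default):

```yaml
# Hypothetical probe configuration; adjust paths, port, and timings.
livenessProbe:
  httpGet:
    path: /parse/health   # process up => don't restart
    port: 1337
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /parse/db-health  # DB reachable => start routing traffic
    port: 1337
  periodSeconds: 5
```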
Yes, option 2, but also potentially changing the names of the endpoints to better indicate their purpose.
Is your feature request related to a problem? Please describe.
Deployed Parse Server using k8s and used /parse/health for both the readinessProbe and the livenessProbe. But when new code was deployed that broke the database connection, /parse/health still returned 200 OK, so the old container was killed and the new, broken one was brought online.
Describe the solution you'd like
Only return 200 OK after the DB is connected, either on /parse/health or on a new /parse/db-health.
Describe alternatives you've considered
Tried to create a cloud function that checks the DB connection, but that requires a POST request with the application ID in the header, which k8s probes cannot send.
Additional context
Related to #4575