Optimising gremlin-python for FastAPI

PS: Please give me some rope here as I am new to gremlin-python! I have a FastAPI service that serves some REST API calls based on data fetched from a Neptune instance, and I am using the gremlin-python library to query the graph. Some points to note:

1. I'm using Uvicorn workers with FastAPI. You can assume a single worker for any stats I share.
2. There may be multiple graph calls in a single API request.
3. I'm currently getting around 8-10 RPS.

Having worked on a lot of FastAPI services, I know that this RPS is way too low. I have done lots of optimisation in the past by moving towards asyncio-native libraries like aiohttp, aioredis, etc., achieving a respectable 90-100 RPS per worker in some cases. I have scoured the gremlin-python source code and am confused about a couple of things:

1. Even though the library creates a separate event loop, during any I/O the coroutines are used in a blocking manner by calling loop.run_until_complete. Why is this the case? Is the separate event loop the very reason?
2. I have tried exploring other libraries like aiogremlin and aiogoblin, but they all seem to have lost community support and are no longer updated, so I am somewhat hesitant to use them.
3. Is the main event loop being blocked while I am running the queries? For example:
vertices_list = (
    self.__g.V(hopped_vertex_ids)
    .has_label(within(hop_via_vertices))
    .both_e(*hop_via_edges)
    .both_v()
    .has(T.id, without(hopped_vertex_ids))
    .has_not('supernode_identified_on')
    .dedup()
    .limit(vertex_count)
    .value_map(True)
    .to_list()
)
If the main FastAPI event loop is indeed being blocked, what can I do to unblock it when making the Gremlin queries? Any suggestions to increase the per-worker throughput are highly appreciated!
Limosin18 (OP) · 5mo ago
Got a lot of help from this Discord thread: https://discordapp.com/channels/838910279550238720/994030447203995658/994367575767142521 Tried the changes and am now able to get 35 RPS, a 3x jump!
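For anyone landing here later, the usual workaround in this situation is to push the blocking gremlin-python call onto a thread pool so the FastAPI event loop stays free. A minimal sketch (the endpoint URL, the `get_neighbors` helper, and the route are illustrative, not taken from the linked thread):

```python
# Hypothetical sketch, not the code from the linked thread: reuse one remote
# connection per worker and run the blocking traversal on a thread pool so
# the main FastAPI event loop stays free. Endpoint URL and names are placeholders.
from fastapi import FastAPI
from fastapi.concurrency import run_in_threadpool
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal

app = FastAPI()

# Create the connection once and reuse it; opening a websocket per request
# is expensive and caps throughput.
conn = DriverRemoteConnection("wss://your-neptune-endpoint:8182/gremlin", "g")
g = traversal().with_remote(conn)


def get_neighbors(vertex_id: str) -> list:
    # Blocking call: to_list() drives gremlin-python's internal event loop
    # to completion before returning.
    return g.V(vertex_id).both().value_map().to_list()


@app.get("/neighbors/{vertex_id}")
async def neighbors(vertex_id: str):
    # run_in_threadpool executes the blocking call on a worker thread and
    # awaits the result, so other requests keep being served meanwhile.
    return await run_in_threadpool(get_neighbors, vertex_id)
```

asyncio's loop.run_in_executor or anyio.to_thread.run_sync achieve the same effect; the key point is that the synchronous to_list() never runs directly inside an async def endpoint.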
Kennh · 5mo ago
Yea, the thread you mentioned has some good information relevant to your question. The gremlin-python module currently isn't async, which is why it blocks. There is an open JIRA about adding full async support, https://issues.apache.org/jira/browse/TINKERPOP-2774 , and you can see that others using FastAPI have run into similar situations and asked similar questions. Feel free to leave a comment on that JIRA if you are interested in seeing full async support. In any case, following the advice from the thread you linked will probably give you the highest RPS you can get until TINKERPOP-2774 is implemented. Would you consider your question solved for now?
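A related knob worth checking alongside the thread-pool offload above is the driver's connection pool size, since multiple graph calls per API request will otherwise queue on a small number of connections. A sketch, assuming the pool_size and max_workers keyword arguments exposed by recent gremlin-python drivers (endpoint and numbers are placeholders, not recommendations):

```python
# Hypothetical sketch: widen the driver's connection pool so that concurrent
# traversals from one Uvicorn worker do not queue on a single websocket.
# The endpoint URL and the numbers below are placeholders.
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal

conn = DriverRemoteConnection(
    "wss://your-neptune-endpoint:8182/gremlin",
    "g",
    pool_size=8,    # pooled websocket connections to Neptune
    max_workers=8,  # threads in the driver's internal executor
)
g = traversal().with_remote(conn)
```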
Limosin18 (OP) · 5mo ago
tagged answered