criminosis
Apache TinkerPop
•Created by criminosis on 7/27/2024 in #questions
op_traversal P98 Spikes
For anyone else who finds this thread, the issues I ended up identifying were:
- Cassandra's disk throughput I/O (EBS gp3 is 125 MB/s by default). At least for my use case I was periodically maxing that out, so long sustained writes overwhelmed the 125 MB/s; increasing to 250 MB/s resolved that apparent bottleneck.
- Optimizing traversals to use mergeE()/mergeV() where they had previously been either older Groovy-script evaluations I was submitting or older fold().coalesce(unfold(), ...) style "get or create" vertex mutations (sketch below).
The disk throughput issue was identified using EBS volume stats and confirmed at a lower level using iostat; the traversal issue was found by gathering and monitoring the metrics emitted by Gremlin Server (within JanusGraph).
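To make that second point concrete, here's a hedged Gremlin-Groovy sketch of the two upsert styles (not my actual traversals; the 'thing' label, the 'id'/'name' keys, and someId/someName are made up, and the Gremlin Console's default imports are assumed):

```groovy
// Older "get or create" pattern: look the vertex up, create it only if absent.
g.V().has('thing', 'id', someId).
  fold().
  coalesce(
    unfold(),
    addV('thing').property('id', someId)).
  property('name', someName).
  iterate()

// mergeV() equivalent (TinkerPop 3.6+): match or create on the search map,
// then apply the remaining property mutations to the upserted vertex.
g.mergeV([(T.label): 'thing', id: someId]).
  property('name', someName).
  iterate()
```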
Apache TinkerPop
•Created by criminosis on 7/27/2024 in #questions
op_traversal P98 Spikes
The criteria in lookup that gets applied to mergeV() should return a single upserted vertex, since I directly dictate the vertex id as a parameter. So all of the side-effect steps after that should apply to a single vertex once the injected list is unfolded, and the iter() at the end should mean no response I/O.
single_props will usually have 7-10 properties in it, two of which may be very large documents (tens of MBs) for leveraging full-text search, with my JanusGraph deployment being backed by Elasticsearch. It's worth mentioning, though, that the latency screenshot has the Elasticsearch indices disabled, since I was trying to narrow down what may have been slowing things down, and I've confirmed through system telemetry that no I/O is going to Elasticsearch.
set_props will usually have 6-8 entries in it, with each one having only 2-3 properties for its corresponding key.
The edges list varies greatly: at least 1 entry, but it may have dozens.
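To make the shape concrete, here's a hedged sketch of one vertex's entry in the injected list (the key names lookup / single_props / set_props / edges come from above; the labels, ids, and values are entirely made up):

```groovy
def oneVertex = [
    lookup      : [(T.label): 'document', (T.id): 12345L],             // criteria fed to mergeV()
    single_props: [title: 'some title', body: 'tens of MBs of text'],  // usually 7-10 entries
    set_props   : [tags: ['red', 'blue'], authors: ['alice', 'bob']],  // usually 6-8 keys, a few values each
    edges       : [[label: 'references', to: 67890L]]                  // at least 1, sometimes dozens
]
```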
Apache TinkerPop
•Created by criminosis on 7/27/2024 in #questions
op_traversal P98 Spikes
I'm inferring they're building up, though, because the timer starts before the traversal is submitted into the executor service, here: https://github.com/apache/tinkerpop/blob/418d04993913abe60579cbd66b9f4b73a937062c/gremlin-server/src/main/java/org/apache/tinkerpop/gremlin/server/op/traversal/TraversalOpProcessor.java#L212C44-L212C60, while the actual submission for execution happens on line 304. If they're not building up, then I'm very perplexed as to what a few traversals could be doing to distort the P98 by so much, if the time isn't being spent sitting in the queue.
The traversal in question is a batch of upserts to the graph. Each vertex's worth of info is loaded into various hashmaps that get combined into a collective map per vertex, and those per-vertex maps are then combined into a list that's injected at the start of the traversal. It looks like this:
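(Hedged sketch with placeholder names rather than the exact traversal: batch is the injected list of per-vertex maps sketched above, and the 'updated'/someTimestamp property stands in for the real single_props / set_props steps.)

```groovy
g.inject(batch).
  unfold().as('m').                       // one traverser per vertex map; 'm' drives the later steps
  mergeV(select('lookup')).               // upsert keyed on that vertex's lookup criteria
  property('updated', someTimestamp).     // stand-in for the single_props / set_props steps
  // ... mergeE()-style edge upserts driven by the 'edges' entries would follow here ...
  iterate()                               // terminal iteration, so no response I/O
```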
Apache TinkerPop
•Created by criminosis on 7/27/2024 in #questions
op_traversal P98 Spikes
The spikes tend to correlate with long sustained periods of writes. Normally my application writes a series of a few dozen vertices through the traversal, but sometimes it ends up being tens of thousands. Obviously I don't try to do those all at once: the stream of things to write is chunked into batches of 50 vertices at a time, and those batches are submitted as concurrent traversals. My connection pool to JanusGraph is currently capped at 5, so no more than 5 should be getting submitted concurrently, which is why I'm somewhat confused that the tens of thousands seem to cause this outsized impact if, from Gremlin Server's perspective, it never sees more than 5 at once, unless sustained traffic itself is the issue. However, I haven't found a metric to confirm this from the JanusGraph/Gremlin Server side.
On the JanusGraph side I've made dashboards visualizing the CQLStorageManager, which from what I can tell never shows a build-up of writes to Cassandra. That leads me to suspect the TraversalOpProcessor is the bottleneck, but from what I can tell of the JMX-emitted metrics there doesn't appear to be one for the work queue of the underlying ExecutorService created here: https://github.com/apache/tinkerpop/blob/418d04993913abe60579cbd66b9f4b73a937062c/gremlin-server/src/main/java/org/apache/tinkerpop/gremlin/server/util/ServerGremlinExecutor.java#L100-L105.
So as a proxy I've been using the JMX metric metrics_org_apache_tinkerpop_gremlin_server_GremlinServer_op_traversal_Count, but that only counts traversals as they come through, from the point where the metric is recorded (https://github.com/apache/tinkerpop/blob/418d04993913abe60579cbd66b9f4b73a937062c/gremlin-server/src/main/java/org/apache/tinkerpop/gremlin/server/op/traversal/TraversalOpProcessor.java#L80), and doesn't reflect any build-up in the executor service's task queue.
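The metric I was hoping for would basically be a gauge over that executor's work queue. Here's a minimal Dropwizard Metrics sketch of the idea (purely illustrative: Gremlin Server doesn't expose its gremlinPool executor, so the ThreadPoolExecutor and metric name below are stand-ins):

```groovy
import java.util.concurrent.LinkedBlockingQueue
import java.util.concurrent.ThreadPoolExecutor
import java.util.concurrent.TimeUnit

import com.codahale.metrics.Gauge
import com.codahale.metrics.MetricRegistry

// Stand-in for the executor created in ServerGremlinExecutor.
def gremlinPool = new ThreadPoolExecutor(8, 8, 0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>())

def registry = new MetricRegistry()
// Gauge over the pending-task count, i.e. the queue build-up I'd like to observe.
registry.register('gremlin.server.op-traversal.queue-size',
        { -> gremlinPool.queue.size() } as Gauge<Integer>)
```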
Apache TinkerPop
•Created by criminosis on 2/5/2024 in #questions
Iterating over responses
Got it. Thanks @spmallette
Apache TinkerPop
•Created by criminosis on 2/5/2024 in #questions
Iterating over responses
Out of curiosity, what is the batch size intended to be used for then, if not for the size of result batches sent to the client? For the graph provider fetching from its backing data store?
Apache TinkerPop
•Created by criminosis on 1/26/2024 in #questions
Documentation states there should be a mid-traversal .E() step?
D'oh. 🤦♂️ Thank you! I didn't realize how recently this was added. There is a "Since" field on the backing JavaDoc that I now realize I overlooked.
For anyone else that runs into this:
Changelog for 3.7.0 mentions it:
https://github.com/apache/tinkerpop/blob/master/CHANGELOG.asciidoc#tinkerpop-370-release-date-july-31-2023
And the underlying jira:
https://issues.apache.org/jira/plugins/servlet/mobile#issue/TINKERPOP-2798
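For completeness, a quick example of the mid-traversal E() step (edge id 7 is made up, and this assumes a 3.7.x Gremlin Console):

```groovy
g.inject('start').E(7).values('weight')   // jump to a specific edge mid-traversal
g.inject('start').E().count()             // or spawn all edges mid-traversal
```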
Apache TinkerPop
•Created by criminosis on 8/16/2023 in #questions
VertexProgram filter graph before termination
That does look plausible. I was hoping I could hook into something without having to muddle with SparkGraphComputer's internals or duplicate its implementation, if I understand what you're suggesting correctly.
But persisting nothing graph-wise and re-retrieving the vertices based on the IDs collected into the returned Memory may be the lightest lift to get to what I was after.
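In case it helps anyone later, a rough Groovy sketch of that approach (the filtering VertexProgram and the 'collectedIds' Memory key are hypothetical; graph and g are the usual OLAP-capable graph and an OLTP traversal source):

```groovy
import org.apache.tinkerpop.gremlin.process.computer.GraphComputer

def result = graph.compute().                     // plain compute() for brevity; I'd pass SparkGraphComputer here
        persist(GraphComputer.Persist.NOTHING).   // don't materialize a result graph
        result(GraphComputer.ResultGraph.ORIGINAL).
        program(filterProgram).                   // hypothetical filtering VertexProgram
        submit().
        get()

def ids = result.memory().get('collectedIds')     // ids the program gathered into the returned Memory

// Re-retrieve the surviving vertices with a normal OLTP traversal.
g.V(ids.toArray()).toList()
```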