Apache TinkerPop
Created by criminosis on 7/27/2024 in #questions
op_traversal P98 Spikes
For anyone else that finds this thread, the things I ended up identifying as issues:
- Cassandra's disk I/O throughput. EBS gp3 defaults to 125MB/s, and at least for my use case I was periodically maxing that out; increasing it to 250MB/s resolved that apparent bottleneck. So during long sustained writing the 125MB/s was not sufficient.
- Optimizing traversals to use mergeE/mergeV, replacing what were either older Groovy-script based evaluations I was submitting or older fold().coalesce(unfold(),...) style "get or create" vertex mutations.
The former was identified using EBS volume stats and confirmed at a lower level using iostat; the latter was found by gathering and monitoring the metrics emitted by Gremlin Server (in JanusGraph).
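For context, a hedged sketch of the two upsert shapes contrasted above, in the same Rust traversal style as the snippets later in this thread; the "person" label, "name" key, and lookup_map are illustrative assumptions, not the actual schema:

// Older fold().coalesce(unfold(), ...) style "get or create" (illustrative):
graph.get_traversal()
    .v(())
    .has(("person", "name", "alice"))
    .fold()
    .coalesce((
        __.unfold(),
        __.add_v("person").property("name", "alice"),
    ))
    .iter()
    .await

// The merge_v replacement: a single upsert step, driven by a
// match-criteria map (here a hypothetical `lookup_map`):
graph.get_traversal()
    .merge_v(lookup_map)
    .iter()
    .await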
9 replies
Apache TinkerPop
Created by criminosis on 7/27/2024 in #questions
op_traversal P98 Spikes
The criteria in lookup that gets applied to mergeV() should return a single upserted vertex, since I directly dictate the vertex id as a parameter, so all the side_effect steps after that should apply to a single vertex after the injected list is unfolded, and the iter() at the end should mean no response I/O. single_props will usually have 7-10 properties in it. Two of those may be very large documents (10s of MBs) used for full-text searching, since my JanusGraph deployment is backed by Elasticsearch. It's worth mentioning, though, that the latency screenshot was taken with the Elasticsearch indices disabled, as I was trying to narrow down what might be slowing things down, and I've confirmed through system telemetry that no I/O is going to Elasticsearch. set_props will usually have 6-8 keys in it, with each key having only 2-3 values. The edges list varies greatly: at least 1 entry, but possibly dozens.
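To make that concrete, a hedged sketch of what a single entry of the injected list could look like (serde_json used purely for notation; every key and value inside lookup/single_props/set_props/edges below is an illustrative assumption):

use serde_json::json;

// One vertex's payload, mirroring the selects in the traversal in the next
// message: "lookup" feeds merge_v, "single_props"/"set_props" feed the
// property side effects, and "edges" feeds merge_e.
fn example_payload() -> serde_json::Value {
    json!({
        // merge_v match criteria; the vertex id is dictated directly
        "lookup": { "vertex_id": "v-123" },
        // usually 7-10 single-cardinality properties, a couple potentially huge
        "single_props": { "title": "...", "document_text": "10s of MBs of text" },
        // usually 6-8 keys, each with only 2-3 values
        "set_props": { "tags": ["a", "b"], "aliases": ["x", "y"] },
        // at least one edge map, sometimes dozens, consumed by merge_e
        "edges": [ { "from_id": "v-0", "label": "references" } ]
    })
}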
9 replies
Apache TinkerPop
Created by criminosis on 7/27/2024 in #questions
op_traversal P98 Spikes
graph.get_traversal()
    .inject(list_of_each_vertex_payload)
    // Unfold each vertex's payload into the object stream (individually)
    .unfold()
    .as_("payload")
    // Upsert the vertex based on the given lookup map
    .merge_v(__.select("lookup"))
    .as_("v")
    // Drop all properties that were previously there;
    // assign only what is given upon the vertex
    .side_effect(__.properties(()).drop())
    // First up are the simple single-cardinality properties
    .side_effect(
        __.select("payload")
            .select("single_props")
            // Unfold each single-cardinality property into the object stream as a key/value pair
            .unfold()
            .as_("kv")
            .select("v")
            .property(
                // Pull apart the key/value pair in the object stream as parameters to property()
                __.select("kv").by(Column::Keys),
                __.select("kv").by(Column::Values),
            ),
    )
    // Next are the set properties. Similar to the single-cardinality properties,
    // but we have to unfold the nested collection
    .side_effect(
        __.select("payload")
            .select("set_props")
            // This unfolds "key":["value",...] entries into the object stream
            .unfold()
            .as_("kvals")
            // Separate out the individual values into the object stream (without the key)
            .select("kvals")
            .by(Column::Values)
            .unfold()
            .as_("value")
            // Now select the vertex again back into the object stream and apply these as separate entries
            .select("v")
            .property_with_cardinality(
                Cardinality::Set,
                // Pull the key in from further up the object stream
                __.select("kvals").by(Column::Keys),
                // Pull in the value that was placed in the object stream before the vertex
                __.select("value"),
            ),
    )
    // Now create edges. The "edges" entry within the vertex's payload is a vec
    // of maps that enumerate the "from" vertex ids and other upsert properties for mergeE
    .side_effect(
        __.select("payload")
            .select("edges")
            .unfold()
            .as_("edge_map")
            .merge_e(__.select("edge_map"))
            .option((Merge::InV, __.select("v")))
            .property("last_modified_timestamp", Utc::now()),
    )
    .iter()
    .await
9 replies
Apache TinkerPop
Created by criminosis on 7/27/2024 in #questions
op_traversal P98 Spikes
I'm inferring they're building up, though, because the timer starts before the traversal is submitted into the executor service here: https://github.com/apache/tinkerpop/blob/418d04993913abe60579cbd66b9f4b73a937062c/gremlin-server/src/main/java/org/apache/tinkerpop/gremlin/server/op/traversal/TraversalOpProcessor.java#L212C44-L212C60, and the traversal is only actually submitted for execution on line 304. If they're not building up, then I'm very perplexed what a few traversals could be doing to distort the P98 by so much, if they aren't sitting in the queue for most of that time. The traversal in question is a batch of upserts to the graph. Each vertex's worth of info is loaded into various hashmaps that get combined into a collective map, and each vertex's map is then combined into a list that's injected at the start of the traversal. It looks like this:
9 replies
Apache TinkerPop
Created by criminosis on 7/27/2024 in #questions
op_traversal P98 Spikes
The spikes tend to correlate with long sustained periods of writes. Normally my application will write a series of a few dozen vertices through the traversal, but sometimes it ends up being 10s of thousands. Obviously I don't try to do those all at once: the stream of things to write is chunked into batches of 50 vertices at a time, which are submitted as concurrent traversals. My connection pool to JanusGraph is currently capped at 5, so no more than 5 should be getting submitted concurrently, which is why I'm somewhat confused that the 10s of thousands seem to cause this outsized impact if, from Gremlin Server's perspective, it never sees more than 5 at once, unless the nature of sustained traffic is itself an issue. However, I haven't found a metric to confirm this from the JanusGraph/Gremlin Server side.

On the JanusGraph side I've made dashboards visualizing the CQLStorageManager, which never seems to show a build-up of writes to Cassandra from what I can tell, which leads me to suspect the TraversalOpProcessor is the bottleneck. But from what I can tell of the JMX-emitted metrics, there doesn't appear to be a metric for the work queue of the underlying ExecutorService created here: https://github.com/apache/tinkerpop/blob/418d04993913abe60579cbd66b9f4b73a937062c/gremlin-server/src/main/java/org/apache/tinkerpop/gremlin/server/util/ServerGremlinExecutor.java#L100-L105. So as a proxy I've been using the JMX metric metrics_org_apache_tinkerpop_gremlin_server_GremlinServer_op_traversal_Count, but that only counts traversals as they pass the point where the metric is recorded (https://github.com/apache/tinkerpop/blob/418d04993913abe60579cbd66b9f4b73a937062c/gremlin-server/src/main/java/org/apache/tinkerpop/gremlin/server/op/traversal/TraversalOpProcessor.java#L80), not their possible build-up in the executor service's task queue.
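For reference, a minimal sketch of the chunked, concurrency-capped submission described above, assuming a hypothetical submit_batch helper that runs the upsert traversal for one 50-vertex chunk; only the futures crate is assumed:

use futures::stream::{self, StreamExt};

type Payload = std::collections::HashMap<String, String>; // stand-in for the real payload map

// Hypothetical helper: submits one chunk as a single inject(...)...iter() traversal.
async fn submit_batch(_chunk: Vec<Payload>) { /* inject(chunk) ... .iter().await */ }

async fn write_all(payloads: Vec<Payload>) {
    stream::iter(payloads.chunks(50).map(|c| c.to_vec()))
        .map(submit_batch)       // one traversal per 50-vertex chunk
        .buffer_unordered(5)     // mirrors the 5-connection pool cap
        .collect::<Vec<_>>()     // drive all submissions to completion
        .await;
}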
9 replies
JanusGraph
Created by criminosis on 7/10/2024 in #questions
MergeV "get or create" performance asymmetry
Reran my trial with both sides set to chunk sizes of 10 over the 10k batch (both still had 10 parallel connections allowed). So this reduced the MergeV chunk (what it injects into the traversal) from 200 down to 10, but I figured that'd make it more comparable on the lookup side. MergeV got way worse 🤔
Trial 0 w/ 10000 vertices
Reference: 7834 ms MergeV: 7464 ms
MergeV redo: 11842 ms
Reference: 2377 ms MergeV: 11526 ms (All read, dataset swap)

Trial 1 w/ 10000 vertices
Reference: 7747 ms MergeV: 8399 ms
MergeV redo: 11411 ms
Reference: 2247 ms MergeV: 12502 ms (All read, dataset swap)

Trial 2 w/ 10000 vertices
Reference: 7477 ms MergeV: 7834 ms
MergeV redo: 11205 ms
Reference: 2211 ms MergeV: 13655 ms (All read, dataset swap)

Trial 3 w/ 10000 vertices
Reference: 8258 ms MergeV: 8440 ms
MergeV redo: 11932 ms
Reference: 2154 ms MergeV: 12316 ms (All read, dataset swap)

Trial 4 w/ 10000 vertices
Reference: 9069 ms MergeV: 8718 ms
MergeV redo: 18618 ms
Reference: 3375 ms MergeV: 25823 ms (All read, dataset swap)

Trial 5 w/ 10000 vertices
Reference: 14659 ms MergeV: 13939 ms
MergeV redo: 21259 ms
Reference: 4586 ms MergeV: 23869 ms (All read, dataset swap)

Trial 6 w/ 10000 vertices
Reference: 17766 ms MergeV: 19030 ms
MergeV redo: 22648 ms
Reference: 3790 ms MergeV: 20051 ms (All read, dataset swap)

Trial 7 w/ 10000 vertices
Reference: 13271 ms MergeV: 14976 ms
MergeV redo: 21850 ms
Reference: 3126 ms MergeV: 23877 ms (All read, dataset swap)

Trial 8 w/ 10000 vertices
Reference: 14844 ms MergeV: 16161 ms
MergeV redo: 26621 ms
Reference: 3400 ms MergeV: 23748 ms (All read, dataset swap)

Trial 9 w/ 10000 vertices
Reference: 18169 ms MergeV: 17160 ms
MergeV redo: 29828 ms
Reference: 4315 ms MergeV: 26917 ms (All read, dataset swap)
6 replies
JanusGraph
Created by criminosis on 7/10/2024 in #questions
MergeV "get or create" performance asymmetry
I guess technically mergeV is having to look up 200 vertices per network call, whereas Reference's chunk size is only 10, but I figured I'd post the question in case this seemed weird to any JG core devs.
6 replies
JanusGraph
Created by criminosis on 7/10/2024 in #questions
MergeV "get or create" performance asymmetry
But then I figured I should try the "get" side of the "get or create", and I was rather surprised that mergeV seemed to be significantly slower than the "traditional" way of doing it. "MergeV redo" is writing the same vertices again from the initial MergeV trial. The "(All read, dataset swap)" line is running the Reference and MergeV logic again, but with the other's dataset:
Trial 0 w/ 10000 vertices
Reference: 11752 ms MergeV: 3469 ms
MergeV redo: 8512 ms
Reference: 2822 ms MergeV: 7752 ms (All read, dataset swap)

Trial 1 w/ 10000 vertices
Reference: 12001 ms MergeV: 3729 ms
MergeV redo: 7313 ms
Reference: 3190 ms MergeV: 7854 ms (All read, dataset swap)

Trial 2 w/ 10000 vertices
Reference: 13198 ms MergeV: 3683 ms
MergeV redo: 9451 ms
Reference: 2212 ms MergeV: 7445 ms (All read, dataset swap)

Trial 3 w/ 10000 vertices
Reference: 13053 ms MergeV: 3110 ms
MergeV redo: 8635 ms
Reference: 2295 ms MergeV: 6918 ms (All read, dataset swap)

Trial 4 w/ 10000 vertices
Reference: 14799 ms MergeV: 3492 ms
MergeV redo: 7777 ms
Reference: 2746 ms MergeV: 7858 ms (All read, dataset swap)

Trial 5 w/ 10000 vertices
Reference: 13755 ms MergeV: 3248 ms
MergeV redo: 8447 ms
Reference: 3448 ms MergeV: 8789 ms (All read, dataset swap)

Trial 6 w/ 10000 vertices
Reference: 16477 ms MergeV: 3572 ms
MergeV redo: 7834 ms
Reference: 3196 ms MergeV: 8690 ms (All read, dataset swap)

Trial 7 w/ 10000 vertices
Reference: 19804 ms MergeV: 3869 ms
MergeV redo: 9646 ms
Reference: 3727 ms MergeV: 9107 ms (All read, dataset swap)

Trial 8 w/ 10000 vertices
Reference: 16757 ms MergeV: 3389 ms
MergeV redo: 7432 ms
Reference: 2422 ms MergeV: 7552 ms (All read, dataset swap)

Trial 9 w/ 10000 vertices
Reference: 19879 ms MergeV: 4459 ms
MergeV redo: 8559 ms
Reference: 2877 ms MergeV: 8536 ms (All read, dataset swap)
6 replies
JanusGraph
Created by criminosis on 7/10/2024 in #questions
MergeV "get or create" performance asymmetry
Doing my trials (Cassandra & ES running locally via docker compose, with JG also running locally in that docker compose environment), I was seeing a 2-4x improvement in writes to the graph when the vertices were all novel ids (the Reference & MergeV trials would generate distinct datasets to write for each trial); a sketch of the trial loop follows the numbers below:
Trial 0 w/ 10000 vertices
Reference: 13585 ms MergeV: 3357 ms

Trial 1 w/ 10000 vertices
Reference: 11897 ms MergeV: 3765 ms

Trial 2 w/ 10000 vertices
Reference: 9835 ms MergeV: 3703 ms

Trial 3 w/ 10000 vertices
Reference: 10971 ms MergeV: 3651 ms

Trial 4 w/ 10000 vertices
Reference: 9503 ms MergeV: 3519 ms

Trial 5 w/ 10000 vertices
Reference: 9728 ms MergeV: 3477 ms

Trial 6 w/ 10000 vertices
Reference: 8867 ms MergeV: 3779 ms

Trial 7 w/ 10000 vertices
Reference: 8575 ms MergeV: 3666 ms

Trial 8 w/ 10000 vertices
Reference: 8773 ms MergeV: 3551 ms

Trial 9 w/ 10000 vertices
Reference: 9755 ms MergeV: 3524 ms
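As mentioned above, a hedged sketch of the trial loop that would produce output in this shape; generate_vertices, write_reference, and write_merge_v are hypothetical stand-ins for the dataset generator and the two write strategies:

use std::time::Instant;

// Hypothetical stand-ins for the dataset generator and the two write paths:
fn generate_vertices(n: usize) -> Vec<u64> { (0..n as u64).collect() }
async fn write_reference(_data: &[u64]) { /* fold/coalesce-style upserts */ }
async fn write_merge_v(_data: &[u64]) { /* merge_v-style upserts */ }

async fn run_trials() {
    for trial in 0..10 {
        // Distinct datasets per strategy and per trial, so every id is novel
        let reference_data = generate_vertices(10_000);
        let merge_v_data = generate_vertices(10_000);

        let start = Instant::now();
        write_reference(&reference_data).await;
        let reference_ms = start.elapsed().as_millis();

        let start = Instant::now();
        write_merge_v(&merge_v_data).await;
        let merge_v_ms = start.elapsed().as_millis();

        println!("Trial {trial} w/ 10000 vertices");
        println!("Reference: {reference_ms} ms MergeV: {merge_v_ms} ms");
    }
}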
6 replies
JanusGraph
Created by criminosis on 5/24/2024 in #questions
Comma Separated Config Options Values Via Environment Variable?
Looking at my commit message for the change, it looks like I described it as:
Kill JG if the graph fails to open instead of idly be hosting no graphs
So it seems I'm remembering the symptom correctly, but having defined reproduction steps would be beneficial for the feature request.
33 replies
JanusGraph
Created by criminosis on 5/24/2024 in #questions
Comma Separated Config Options Values Via Environment Variable?
I'll have to reproduce the scenario before I create the feature request. TBH this was a change I made over a year ago, so the details have fallen into the memory aether 😅 We've also sharpened other k8s infra with liveness probes since then, so we may have already addressed this issue, at least for ourselves, via other means.
33 replies
JanusGraph
Created by criminosis on 5/24/2024 in #questions
Comma Separated Config Options Values Via Environment Variable?
Versus what I wanted, which was for the container to die in that case. That's what the checked graph manager would do.
33 replies
JanusGraph
Created by criminosis on 5/24/2024 in #questions
Comma Separated Config Options Values Via Environment Variable?
The storage timeout script does watch for Cassandra to be up, but IIRC it would just time out rather than kill the container? So the graph would attempt to open, fail, and then JG would just hang out without any graphs opened.
33 replies
JanusGraph
Created by criminosis on 5/24/2024 in #questions
Comma Separated Config Options Values Via Environment Variable?
Yeah, I ended up doing something similar with our old HBase system. It feels fairly commonplace to rebuild these tools 😅
33 replies
JanusGraph
Created by criminosis on 5/24/2024 in #questions
Comma Separated Config Options Values Via Environment Variable?
The reason I did it wasn't for schema
33 replies
JanusGraph
Created by criminosis on 5/24/2024 in #questions
Comma Separated Config Options Values Via Environment Variable?
But now that I'm retracing through the steps (I automated this a long time ago, so it's gotten a little dusty in my memory), I may have conflated an issue here.
33 replies
JanusGraph
Created by criminosis on 5/24/2024 in #questions
Comma Separated Config Options Values Via Environment Variable?
Well, to be clear, I'm not doing it in the containers that are running JG for hosting purposes; it's done in a secondary loader container that runs just once at the start of the deployment.
33 replies
JanusGraph
Created by criminosis on 5/24/2024 in #questions
Comma Separated Config Options Values Via Environment Variable?
Loosely inspired by Liquibase.
33 replies
JanusGraph
Created by criminosis on 5/24/2024 in #questions
Comma Separated Config Options Values Via Environment Variable?
Each script is versioned and writes a "completed version X" indicator into the graph
33 replies
JanusGraph
Created by criminosis on 5/24/2024 in #questions
Comma Separated Config Options Values Via Environment Variable?
The scripts check if the schema was already loaded and skip doing it again.
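A hedged sketch of that check, in the same traversal style as the snippets earlier in this export; the "schema_version" label, "version" key, and run_schema_script are illustrative assumptions:

// Skip the script if its completion marker vertex already exists
let already_applied = !graph.get_traversal()
    .v(())
    .has_label("schema_version")
    .has(("version", script_version))
    .to_list()
    .await?
    .is_empty();

if !already_applied {
    run_schema_script().await?; // hypothetical: the versioned script body
    // ...then write the "completed version X" marker into the graph
    graph.get_traversal()
        .add_v("schema_version")
        .property("version", script_version)
        .iter()
        .await?;
}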
33 replies