JanusGraph•5mo ago
criminosis

MergeV "get or create" performance asymmetry

So I'm working on adding the mergeV step, among others, to the Rust Gremlin driver. As part of that I paused to do a performance comparison against the "traditional" way of doing a get-or-create. The Rust driver submits bytecode that's effectively doing the following. "Traditional/Reference":
g.V("expected_id").fold().coalesce(unfold(), addV("my_label").properties(T.id, "expected_id")).properties("other_property", "foo").V("some_other_expected_id")...and so forth
g.V("expected_id").fold().coalesce(unfold(), addV("my_label").properties(T.id, "expected_id")).properties("other_property", "foo").V("some_other_expected_id")...and so forth
Given a batch of 10k vertices to write, the driver would stack this pattern for a chunk of 10 vertices into a single mutation traversal, running 10 connections in parallel to split up the batch until all 10k were written. It's well known that very long traversals don't perform well, and my own trials found that going above 50 vertices in a single traversal caused timeouts for my use case, so I've generally been doing 10 and calling it good. But that puts a ceiling on how much work a single network call can do (10 vertices' worth), which is why I started trying out mergeV() to pack more into a single call without making the traversal prohibitively long. And then the "mergeV()" way:
g.inject([
    ["lookup": [(T.id): "expected_id"], "properties": ["other_property": "foo"]],
    ["lookup": [(T.id): "some_other_expected_id"], "properties": [other_vertex_properties_here]],
    ...and so forth]).
  unfold().as("payload").
  mergeV(select("lookup")).
  property(
    "other_property",
    select("payload").select("properties").select("other_property"))
I would run the mergeV call with chunks of 200 vertices in each call.
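Concretely, the batching looks something like this sketch (Gremlin-Groovy; `payloads` is a hypothetical list holding the 10k lookup/properties maps, and the real harness spreads the chunks across 10 parallel connections rather than a serial each):

payloads.collate(200).each { chunk ->
    g.inject(chunk).
      unfold().as("payload").
      mergeV(select("lookup")).
      property("other_property",
               select("payload").select("properties").select("other_property")).
      iterate() // one network call upserts 200 vertices
}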
criminosisOP•5mo ago
Doing my trials (Cassandra & ES running locally via Docker Compose, with JanusGraph also running locally in the same environment), I was seeing a 2-4x improvement in write times when the vertices were all novel IDs (the Reference and MergeV trials each generated distinct datasets to write for each trial):
Trial 0 w/ 10000 vertices
Reference: 13585 ms MergeV: 3357 ms

Trial 1 w/ 10000 vertices
Reference: 11897 ms MergeV: 3765 ms

Trial 2 w/ 10000 vertices
Reference: 9835 ms MergeV: 3703 ms

Trial 3 w/ 10000 vertices
Reference: 10971 ms MergeV: 3651 ms

Trial 4 w/ 10000 vertices
Reference: 9503 ms MergeV: 3519 ms

Trial 5 w/ 10000 vertices
Reference: 9728 ms MergeV: 3477 ms

Trial 6 w/ 10000 vertices
Reference: 8867 ms MergeV: 3779 ms

Trial 7 w/ 10000 vertices
Reference: 8575 ms MergeV: 3666 ms

Trial 8 w/ 10000 vertices
Reference: 8773 ms MergeV: 3551 ms

Trial 9 w/ 10000 vertices
Reference: 9755 ms MergeV: 3524 ms
But then I figured I should try the "get" side of the "get or create", and I was rather surprised: mergeV seemed significantly slower than the "traditional" way. "MergeV redo" is writing the same vertices again from the initial MergeV trial. The "(All read, dataset swap)" line runs the Reference and MergeV logic again, but each against the other's dataset (so every lookup hits an existing vertex).
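To make the trial labels concrete, each trial roughly does the following (sketch; `generateVertices`, `writeReference`, and `writeMergeV` are hypothetical stand-ins for the Rust harness):

def time = { String label, Closure work ->
    def start = System.currentTimeMillis()
    work()
    println "$label: ${System.currentTimeMillis() - start} ms"
}

def refData   = generateVertices(10_000)   // novel IDs
def mergeData = generateVertices(10_000)   // novel IDs, distinct from refData
time("Reference",        { writeReference(refData) })    // all creates
time("MergeV",           { writeMergeV(mergeData) })     // all creates
time("MergeV redo",      { writeMergeV(mergeData) })     // all reads: vertices now exist
time("Reference (swap)", { writeReference(mergeData) })  // all reads
time("MergeV (swap)",    { writeMergeV(refData) })       // all reads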
Trial 0 w/ 10000 vertices
Reference: 11752 ms MergeV: 3469 ms
MergeV redo: 8512 ms
Reference: 2822 ms MergeV: 7752 ms (All read, dataset swap)

Trial 1 w/ 10000 vertices
Reference: 12001 ms MergeV: 3729 ms
MergeV redo: 7313 ms
Reference: 3190 ms MergeV: 7854 ms (All read, dataset swap)

Trial 2 w/ 10000 vertices
Reference: 13198 ms MergeV: 3683 ms
MergeV redo: 9451 ms
Reference: 2212 ms MergeV: 7445 ms (All read, dataset swap)

Trial 3 w/ 10000 vertices
Reference: 13053 ms MergeV: 3110 ms
MergeV redo: 8635 ms
Reference: 2295 ms MergeV: 6918 ms (All read, dataset swap)

Trial 4 w/ 10000 vertices
Reference: 14799 ms MergeV: 3492 ms
MergeV redo: 7777 ms
Reference: 2746 ms MergeV: 7858 ms (All read, dataset swap)

Trial 5 w/ 10000 vertices
Reference: 13755 ms MergeV: 3248 ms
MergeV redo: 8447 ms
Reference: 3448 ms MergeV: 8789 ms (All read, dataset swap)

Trial 6 w/ 10000 vertices
Reference: 16477 ms MergeV: 3572 ms
MergeV redo: 7834 ms
Reference: 3196 ms MergeV: 8690 ms (All read, dataset swap)

Trial 7 w/ 10000 vertices
Reference: 19804 ms MergeV: 3869 ms
MergeV redo: 9646 ms
Reference: 3727 ms MergeV: 9107 ms (All read, dataset swap)

Trial 8 w/ 10000 vertices
Reference: 16757 ms MergeV: 3389 ms
MergeV redo: 7432 ms
Reference: 2422 ms MergeV: 7552 ms (All read, dataset swap)

Trial 9 w/ 10000 vertices
Reference: 19879 ms MergeV: 4459 ms
MergeV redo: 8559 ms
Reference: 2877 ms MergeV: 8536 ms (All read, dataset swap)
I guess technically mergeV is having to look up 200 vertices per network call whereas Reference's chunk size is only 10 (50 round trips for the 10k batch versus 1,000), but I figured I'd post the question in case this seemed weird to any JG core devs.

I reran my trial with both set to chunk sizes of 10 out of the 10k batch (both still allowed 10 parallel connections). That reduced the MergeV chunk (what gets injected into the traversal) from 200 down to 10, which I figured would make the two more comparable on the lookup side. MergeV got way worse 🤔
Trial 0 w/ 10000 vertices
Reference: 7834 ms MergeV: 7464 ms
MergeV redo: 11842 ms
Reference: 2377 ms MergeV: 11526 ms (All read, dataset swap)

Trial 1 w/ 10000 vertices
Reference: 7747 ms MergeV: 8399 ms
MergeV redo: 11411 ms
Reference: 2247 ms MergeV: 12502 ms (All read, dataset swap)

Trial 2 w/ 10000 vertices
Reference: 7477 ms MergeV: 7834 ms
MergeV redo: 11205 ms
Reference: 2211 ms MergeV: 13655 ms (All read, dataset swap)

Trial 3 w/ 10000 vertices
Reference: 8258 ms MergeV: 8440 ms
MergeV redo: 11932 ms
Reference: 2154 ms MergeV: 12316 ms (All read, dataset swap)

Trial 4 w/ 10000 vertices
Reference: 9069 ms MergeV: 8718 ms
MergeV redo: 18618 ms
Reference: 3375 ms MergeV: 25823 ms (All read, dataset swap)

Trial 5 w/ 10000 vertices
Reference: 14659 ms MergeV: 13939 ms
MergeV redo: 21259 ms
Reference: 4586 ms MergeV: 23869 ms (All read, dataset swap)

Trial 6 w/ 10000 vertices
Reference: 17766 ms MergeV: 19030 ms
MergeV redo: 22648 ms
Reference: 3790 ms MergeV: 20051 ms (All read, dataset swap)

Trial 7 w/ 10000 vertices
Reference: 13271 ms MergeV: 14976 ms
MergeV redo: 21850 ms
Reference: 3126 ms MergeV: 23877 ms (All read, dataset swap)

Trial 8 w/ 10000 vertices
Reference: 14844 ms MergeV: 16161 ms
MergeV redo: 26621 ms
Reference: 3400 ms MergeV: 23748 ms (All read, dataset swap)

Trial 9 w/ 10000 vertices
Reference: 18169 ms MergeV: 17160 ms
MergeV redo: 29828 ms
Reference: 4315 ms MergeV: 26917 ms (All read, dataset swap)
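If anyone wants to poke at where the read-side time goes, a profile() run on a small injected payload should show the per-step cost (sketch, mirroring the payload shape above; profile() is a standard Gremlin step):

g.inject([["lookup": [(T.id): "expected_id"]]]).
  unfold().as("payload").
  mergeV(select("lookup")).
  profile()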