mergeV with onMerge when extra properties are unknown

I'm in the following situation:
jobId = "spark:bdx_job_1"

// Initial vertices
vertices = [
[
(T.id): "spark:bdx_job_1",
(T.label): "job",
"url": single("spark://some/url"),
"status": single("running"),
],
]

// First insertion
g.inject(vertices) \
.unfold() \
.mergeV() \
.iterate()

// Updated vertices
updatedVertices = [
[
(T.id): jobId,
(T.label): "job",
"url": "spark://another/url",
"status": "completed",
],
]
I want to inject updatedVertices in such a way that the vertex is created if it doesn't exist, and its properties (excluding id and label) are updated when a match is found on <id, label>. I tried this approach, but the extra properties are not known upfront, so the range/tail approach probably isn't feasible. I've tried to tinker with sideEffect, but every attempt resulted in a serialization error through the JDK proxy. I've also tried some of the solutions suggested here, but I haven't had much luck either.
(linked: Stack Overflow, "Efficiently bulk upserting vertices and edges containing properties...")
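One pattern that can handle an arbitrary property set is to pre-split each map into its <id, label> search keys and everything else, then feed the two parts to mergeV() and its options. The following is a sketch only, not a confirmed solution: it assumes TinkerPop 3.7+, where the Merge.onCreate map is combined with the search criteria, and it relies on the rule that T.id and T.label must not appear in the Merge.onMerge map.

```groovy
// Sketch, assuming TinkerPop 3.7+: split each row into the <id, label>
// search keys ('ids') and the remaining, unknown-upfront properties ('props').
rows = updatedVertices.collect { m ->
    [ids:   [(T.id): m[T.id], (T.label): m[T.label]],
     props: m.findAll { k, v -> !(k in [T.id, T.label]) }]
}

g.inject(rows).
  unfold().
  mergeV(select('ids')).                        // match (or create) on <id, label>
    option(Merge.onCreate, select('props')).    // combined with 'ids' on create
    option(Merge.onMerge, select('props')).     // update only the extra properties
  iterate()
```

The select() steps here read keys out of each injected map, so the property keys themselves never have to be spelled out in the traversal.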
spmallette:
Hi, if the extra properties are not known upfront, are you saying that they aren't included in the inject() as in that example? Where would they come from then?