JanusGraph
Created by Opan on 10/25/2024 in #questions
JanusGraph Bigtable rows exceed the 256 MiB limit when exported via Dataflow in Parquet format
Hi team, we are currently using JanusGraph with Bigtable as the storage backend, and we want to export the data out of Bigtable to Cloud Storage in Parquet format using Dataflow. The export failed because some rows are too large and exceed the limit, with the following error messages: see attachment.

We asked GCP support whether there is a workaround, and they suggested changing the GC policy of the columns in the table. But since the row and column structure is created and managed directly by JanusGraph, we are concerned that modifying the GC policy might corrupt the data.

Our questions are: is there a way to configure the row size in JanusGraph? Or is it possible to configure the GC policy directly from JanusGraph? Do let me know if I posted this in the wrong section.

Column families that have a large row size:
risk-serving-bt-feature-engine

family {
name: "l"
locality_group: "user_flash"
administrator: "chubby!mdb/cloud-bigtable-internal-bigtable-administrators-prod"
administrator: "chubby!user/cloud-bigtable"
writer: "chubby!mdb/cloud-bigtable-internal-bigtable-writers-prod"
writer: "chubby!user/cloud-bigtable"
reader: "chubby!mdb/cloud-bigtable-internal-bigtable-readers-prod"
reader: "chubby!user/cloud-bigtable"
gcexpr: "(age() > 604800000000 || version() > 1)"
}
JanusGraph version: 0.6.4
Storage backend: Bigtable
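For reference, the gcexpr in the family definition above is Bigtable's union (OR) of a max-age rule and a max-versions rule, with age() expressed in microseconds. A small sketch that decodes the threshold and prints a hypothetical equivalent `cbt setgcpolicy` invocation (the table and family names are taken from the post; the exact cbt syntax should be double-checked against the Bigtable docs before running anything):

```python
# Decode the GC expression "(age() > 604800000000 || version() > 1)".
# age() is in microseconds; the expression is a union (OR) of a
# max-age rule and a max-versions rule.

AGE_MICROS = 604_800_000_000  # from the gcexpr above
MAX_VERSIONS = 1

def age_micros_to_days(micros: int) -> int:
    # microseconds -> seconds -> days
    return micros // 1_000_000 // 86_400

def cbt_setgcpolicy(table: str, family: str) -> str:
    # Hypothetical cbt invocation equivalent to the gcexpr; verify the
    # syntax against the Bigtable CLI docs before applying it.
    days = age_micros_to_days(AGE_MICROS)
    return (f"cbt setgcpolicy {table} {family} "
            f"maxage={days}d or maxversions={MAX_VERSIONS}")

print(cbt_setgcpolicy("risk-serving-bt-feature-engine", "l"))
# -> cbt setgcpolicy risk-serving-bt-feature-engine l maxage=7d or maxversions=1
```

So the current policy already deletes cells older than 7 days or beyond the newest version; tightening it further is a Bigtable-side change, and whether JanusGraph tolerates it is exactly the open question in this post.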
2 replies
JanusGraph
Created by Opan on 10/4/2024 in #questions
Best way to migrate JanusGraph data into another JanusGraph instance
Hi team, we have a use case where we have to migrate data between two JanusGraph instances, and we are also planning to upgrade them. We are currently on 0.6.4 and want to move the data into a new JanusGraph instance running 1.0.0. The data size is around 4-7 TB. What is the best way to do this? Thank you in advance.
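One common approach (a sketch, not an official migration path) is to export the old graph to GraphSON with TinkerPop's io() step and read it into the new instance. The server URLs and file path below are placeholders, and the function assumes gremlinpython plus reachable Gremlin Servers; note that io() reads and writes on the server's filesystem, and that for 4-7 TB a single GraphSON file may be impractical, so a chunked or bulk-loading job could be needed instead:

```python
# Hypothetical GraphSON-based migration sketch using TinkerPop's io() step.
# Requires gremlinpython and two reachable Gremlin Servers; all names and
# URLs below are placeholders, not values from the original post.

def migrate_via_graphson(src_url: str, dst_url: str, path: str) -> None:
    # Import locally so the sketch can be read without the dependency installed.
    from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
    from gremlin_python.process.anonymous_traversal import traversal

    src = traversal().with_remote(DriverRemoteConnection(src_url, "g"))
    dst = traversal().with_remote(DriverRemoteConnection(dst_url, "g"))
    # io() operates on the *server's* filesystem, so `path` must be
    # reachable by both servers (or the file copied between them).
    src.io(path).write().iterate()   # export from the 0.6.4 instance
    dst.io(path).read().iterate()    # import into the 1.0.0 instance

# Example invocation (placeholders):
# migrate_via_graphson("ws://old-jg:8182/gremlin",
#                      "ws://new-jg:8182/gremlin",
#                      "/shared/export.json")
```

GraphSON keeps the data independent of the storage backend, which also sidesteps any on-disk format differences between 0.6.4 and 1.0.0.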
1 reply
JanusGraph
Created by Opan on 8/21/2024 in #questions
Implement new storage backend using Tablestore
Hello, thank you for allowing me to join. This is my first post, so apologies if I posted my question in the wrong channel. I am planning to set up a new storage backend using Tablestore from Alibaba Cloud (https://www.alibabacloud.com/help/en/tablestore/tablestore-hbase-client). Is there a doc on how to get started integrating a new storage backend into JanusGraph? Thanks in advance.
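Since the linked page describes Tablestore's HBase-compatible client, one low-effort experiment before writing a brand-new adapter would be to point JanusGraph's existing hbase backend at it. This is an untested sketch, not a supported configuration, and the endpoint placeholder is hypothetical:

```properties
# Hypothetical janusgraph-tablestore.properties - untested
storage.backend=hbase
storage.hostname=<tablestore-hbase-endpoint>
storage.hbase.table=janusgraph
```

If the compatibility layer is not complete enough for this to work, the fallback is implementing JanusGraph's KeyColumnValueStore SPI as a new backend module.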
2 replies