JanusGraph with Bigtable with OLAP+OLTP
Hi folks,
We recently set up JanusGraph with Bigtable. It is currently used only for OLTP use cases, but given its nature we can't run full-graph queries such as PageRank. We would like to set up an OLAP JanusGraph as well, with the same schema as the OLTP database but running against a longer time window, i.e. for OLAP use cases. Can someone guide us on achieving this? I was thinking of running both systems together and routing queries depending on the query window, and wondering whether we could leverage the OLTP ingestion pipeline for building the graph in OLAP.
6 Replies
I have never tried it, but in theory HBase is API-compatible with Bigtable. In that case https://docs.janusgraph.org/advanced-topics/hadoop/ should work.
Might be related: https://github.com/JanusGraph/janusgraph/issues/2201 ("OLAP traversals do not work with JanusGraph 0.5.2 and BigTable": JanusGraph 0.5.2 with the Google Cloud Bigtable backend, running OLAP traversals via Spark in YARN mode using a read-hbase-style configuration).
Thanks, let me check these out. BTW, I understand that JanusGraph has OLAP capabilities, but I was curious whether both solutions could work hand in hand, e.g. transaction logs from OLTP being passed down to the OLAP solution, so that we don't have to process the raw data twice for ingestion into the graph.
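For the transaction-log idea: JanusGraph does ship a transaction log framework that can replay OLTP changes to a downstream consumer. A rough sketch is below; the log identifier `olap-feed` and the config file name are placeholders, and exact signatures should be checked against the Transaction Log chapter of the JanusGraph docs.

```groovy
import org.janusgraph.core.JanusGraphFactory
import org.janusgraph.core.log.ChangeProcessor

graph = JanusGraphFactory.open('janusgraph-bigtable.properties')  // placeholder config

// OLTP writers opt in to the change log by naming it on the transaction:
tx = graph.buildTransaction().logIdentifier('olap-feed').start()
// ... mutations ...
tx.commit()

// A separate process consumes the log and can forward changes
// to the OLAP ingestion pipeline instead of reprocessing raw data:
logProcessor = JanusGraphFactory.openTransactionLog(graph)
logProcessor.addLogProcessor('olap-feed')
    .setStartTimeNow()
    .addProcessor({ changeTx, txId, changeState ->
        // inspect changeState.getVertices(...) / getEdges(...) and
        // push the added/removed elements downstream
    } as ChangeProcessor)
    .build()
```

Whether this is simpler than reusing the existing raw-data ingestion pipeline depends on how much transformation happens before the OLTP writes.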
JanusGraph is just a "framework". You can query the graph stored in your Bigtable in OLTP or in OLAP mode.
The data are the same, in the same backend.
In OLTP mode, you run Gremlin queries over JanusGraph Server / Gremlin Server, or in embedded mode.
In OLAP mode, you can set up a Spark cluster, which lets you translate your Gremlin traversals into Spark jobs.
You don't have to double-process your data on ingestion.
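A rough Gremlin Console sketch of the two modes over the same data (the file names and the `user`/`follows` schema are placeholders, not from this thread):

```groovy
// OLTP: direct traversal against the Bigtable-backed graph
graph = JanusGraphFactory.open('janusgraph-bigtable.properties')
g = graph.traversal()
g.V().has('user', 'name', 'alice').out('follows').values('name')

// OLAP: the same data, opened as a HadoopGraph and executed on Spark,
// which makes full-graph algorithms like PageRank feasible
olap = GraphFactory.open('read-hbase.properties')
og = olap.traversal().withComputer(SparkGraphComputer)
og.V().pageRank().by('rank').values('rank')
```

The point is that only the traversal source changes; the stored graph and schema stay the same.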
> In OLAP, you can set up a Spark cluster, which lets you translate your Gremlin traversals into Spark jobs.

Do you have any examples where a Spark cluster is used with Bigtable as the backend for OLAP queries?
Unfortunately, no.
But you can look at the configuration file "read-hbase.properties" in the hadoop-graph folder (under the conf folder). I'm sure you can manage to do something from there.
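For reference, the stock read-hbase.properties looks roughly like the sketch below (taken from the JanusGraph Hadoop docs, trimmed). The Bigtable-specific part is the open question: you would presumably need to route the HBase client at Bigtable via `janusgraphmr.ioformat.conf.storage.hbase.ext.*` settings, which is exactly the area the linked GitHub issue reports problems with, so treat this as a starting point to verify, not a working recipe.

```properties
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphReader=org.janusgraph.hadoop.formats.hbase.HBaseInputFormat
gremlin.hadoop.graphWriter=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat
gremlin.hadoop.jarsInDistributedCache=true
gremlin.hadoop.inputLocation=none
gremlin.hadoop.outputLocation=output

# Where the OLAP reader finds the JanusGraph data
janusgraphmr.ioformat.conf.storage.backend=hbase
janusgraphmr.ioformat.conf.storage.hostname=127.0.0.1
janusgraphmr.ioformat.conf.storage.hbase.table=janusgraph

# Spark execution (local[*] for testing; point at a real cluster for production)
spark.master=local[*]
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.kryo.registrator=org.janusgraph.hadoop.serialize.JanusGraphKryoRegistrator
```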