Incremental schema changes - Property Key constraint does not exist
1. day 1: we add the firstName property.
2. day 2: we try adding the lastName property to the person node. This still works and one can create persons with lastName.
3. day 3: we try adding the fullName property to the person node. This does not work any more, as gp_traversal.addV('person').property('fullName', 'test full name') throws the error: Property Key constraint does not exist for given Vertex Label [person] and property key [fullName]. ...
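For reference, attaching a newly created property key to an existing vertex label is a separate management step; this error typically appears when schema.constraints=true and that step was skipped. A minimal Java sketch, where the properties file is a placeholder:
```java
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.PropertyKey;
import org.janusgraph.core.VertexLabel;
import org.janusgraph.core.schema.JanusGraphManagement;

public class AttachPropertyToLabel {
    public static void main(String[] args) {
        // Placeholder configuration; use your own backend properties.
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cql-es.properties");

        JanusGraphManagement mgmt = graph.openManagement();
        try {
            // Create the new key if it does not exist yet.
            PropertyKey fullName = mgmt.containsPropertyKey("fullName")
                    ? mgmt.getPropertyKey("fullName")
                    : mgmt.makePropertyKey("fullName").dataType(String.class).make();

            // With schema.constraints=true the key must also be connected to the
            // vertex label, otherwise addV('person').property('fullName', ...) fails
            // with "Property Key constraint does not exist".
            VertexLabel person = mgmt.getVertexLabel("person");
            mgmt.addProperties(person, fullName);

            mgmt.commit();
        } catch (RuntimeException e) {
            mgmt.rollback();
            throw e;
        } finally {
            graph.close();
        }
    }
}
```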
Text predicate not serializable (containsPhrase, notContainsX, etc.)
How do we generate transaction logs?
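For what it's worth, a minimal sketch of the user transaction log API from the JanusGraph documentation; the log name "addedPeople" and the properties file are placeholders:
```java
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.JanusGraphTransaction;
import org.janusgraph.core.log.Change;
import org.janusgraph.core.log.LogProcessorFramework;

public class TransactionLogExample {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cql.properties");

        // Mutations made in a transaction with a log identifier are written
        // to the user transaction log with that name.
        JanusGraphTransaction tx = graph.buildTransaction().logIdentifier("addedPeople").start();
        tx.addVertex("person").property("firstName", "test");
        tx.commit();

        // A separate process can then consume that log.
        LogProcessorFramework logProcessor = JanusGraphFactory.openTransactionLog(graph);
        logProcessor.addLogProcessor("addedPeople")
                .setStartTimeNow()
                .addProcessor((txn, txId, changeState) ->
                        changeState.getVertices(Change.ADDED)
                                .forEach(v -> System.out.println("added vertex: " + v)))
                .build();
        // Keep the process running for as long as the log should be consumed.
    }
}
```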
RpcRetryingCallerImpl while trying to connect JanusGraph with HBase 2.2.7 under Cloudera Distribution
Potentially useless allocations when checking a field cardinality
JanusGraphVertexFeatures#getCardinality (https://github.com/JanusGraph/janusgraph/blob/2c71b378339a3ab49b961eef29b5a042d018f513/janusgraph-core/src/main/java/org/janusgraph/graphdb/tinkerpop/JanusGraphFeatures.java#L161-L169). From a profile (attached), it looks like ...
Unable to load GraphSON file
I am trying to load a .json graph into JanusGraph from the Gremlin Console, but I get this error:
gremlin> g.io("/opt/janusgraph/graphson-test1.json").read().iterate();
Label can not be null
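"Label can not be null" during io().read() can indicate a mismatch between how the file was written and how it is being read (GraphSON version or format). Pinning the reader explicitly is one thing to try; a small Java sketch, assuming the same file path and a locally opened graph (the properties file is a placeholder):
```java
import org.apache.tinkerpop.gremlin.process.traversal.IO;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class GraphsonImport {
    public static void main(String[] args) {
        // Placeholder configuration; any JanusGraph instance works the same way.
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-inmemory.properties");
        GraphTraversalSource g = graph.traversal();

        // Read with an explicitly chosen GraphSON reader; when exporting, the file
        // should have been written with the matching writer (IO.writer, IO.graphson).
        g.io("/opt/janusgraph/graphson-test1.json")
         .with(IO.reader, IO.graphson)
         .read().iterate();

        graph.close();
    }
}
```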
Gremlin statement exceeds the maximum compilation size
'status': {'message': 'The Gremlin statement that was submitted exceeds the maximum compilation size allowed by the JVM, please split it into multiple smaller statements ...}
I set maxContentLength: 524288 in the server .yaml file to allow my client to submit larger scripts, because I found a significant performance improvement when submitting a larger set of queries at once. But now I'm hitting the JVM limit for compilation size, which I'd like to increase, if possible. ... Method too large error. At least from a quick search, it looks like this is a hard limit which cannot be configured. That's at least what I got from this Stack Overflow question: https://stackoverflow.com/questions/3192896/how-to-circumvent-the-method-too-large-error-in-java-compilation ...
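One common way around this ceiling is to keep the submitted script small and constant and move the variable data into bindings, submitted in batches; a sketch with the TinkerPop Java driver, where the host, label, and property names are placeholders and the server is assumed to bind the traversal source as g:
```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.ResultSet;

public class BatchedParameterizedSubmit {
    public static void main(String[] args) {
        Cluster cluster = Cluster.build("localhost").port(8182).create();
        Client client = cluster.connect();

        // One short, constant script; only the bindings change per batch, so the
        // server never has to compile a huge generated script.
        String script = "ids.each { g.addV('person').property('name', it).iterate() }";

        List<List<String>> batches = List.of(
                List.of("alice", "bob"),
                List.of("carol", "dave"));

        for (List<String> batch : batches) {
            Map<String, Object> params = new HashMap<>();
            params.put("ids", batch);
            ResultSet results = client.submit(script, params);
            results.all().join();   // wait for the batch to complete
        }

        cluster.close();
    }
}
```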
Comma Separated Config Options Values Via Environment Variable?
janusgraph.index.some_index_name_here.elasticsearch.retry-error-codes=408,429
Much like was done in the unit test I wrote (https://github.com/JanusGraph/janusgraph/blob/487e10ca276678862fd8fb369d6d17188703ba67/janusgraph-es/src/test/java/org/janusgraph/diskstorage/es/rest/RestClientSetupTest.java#L240). But to my surprise, it doesn't seem to be received well during startup: ...
Creating a custom serializer IoRegistry in Java
Server can't be started due to `lost+found` folder
A volume is mounted at /var/lib/janusgraph. Experiencing this issue after restarting the container:
```sh
chown: cannot read directory '/var/lib/janusgraph/lost+found': Permission denied
```
Phantom Unique Data / Data Too Large?
Could not start BerkeleyJE transaction
JanusGraphManagement from Java client
~20% write performance hit when using custom str IDs?
I'm upserting (via mergeV()) nodes to a JanusGraph 1.0.0 instance with a Cassandra+ES backend. Keeping exactly the same client code and test data, I noticed a 20% write slowdown when writing nodes with custom string IDs rather than custom int IDs. In both cases the IDs are exactly the same, with the only difference being that in one case I convert int to string before submitting the query to the JG server (I'm using parametrized scripts submitted via gremlin-python).
Is this a known issue? (I could not find info about this in the documentation) ...
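For context, a rough Java sketch of the custom-string-ID write path being compared; the config keys shown are the JanusGraph 1.0 options for user-supplied vertex IDs, and the backend, ID, and property names are placeholders:
```java
import java.util.HashMap;
import java.util.Map;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.T;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class CustomStringIdUpsert {
    public static void main(String[] args) {
        // graph.set-vertex-id lets callers supply vertex IDs themselves;
        // graph.allow-custom-vid-types additionally allows string IDs (JanusGraph 1.0).
        JanusGraph graph = JanusGraphFactory.build()
                .set("storage.backend", "inmemory")
                .set("graph.set-vertex-id", true)
                .set("graph.allow-custom-vid-types", true)
                .open();
        GraphTraversalSource g = graph.traversal();

        // Upsert a vertex keyed by a custom string ID via mergeV().
        Map<Object, Object> searchCreate = new HashMap<>();
        searchCreate.put(T.id, "person-123");
        searchCreate.put(T.label, "person");
        g.mergeV(searchCreate).property("firstName", "test").iterate();

        graph.close();
    }
}
```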
Mixed Index (ElasticSearch) Backpressure Ignored?
If the primary persistence into the storage backend succeeds but secondary persistence into the indexing backends or the logging system fail, the transaction is still considered to be successful because the storage backend is the authoritative source of the graph. ... In addition, a separate process must be setup that reads the log to identify partially failed transactions and repair any inconsistencies caused. ...
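The passage above refers to the write-ahead transaction log; a minimal Java sketch of running the documented recovery process against it, where the properties file and lookback window are placeholders and tx.log-tx=true must already be set in the graph configuration:
```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.log.TransactionRecovery;

public class IndexRecoveryRunner {
    public static void main(String[] args) {
        // Requires tx.log-tx=true so that transactions are written to the write-ahead log.
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cql-es.properties");

        // Replay the log from a recent point in time; failed secondary (index)
        // persistence is repaired while this process runs.
        Instant start = Instant.now().minus(2, ChronoUnit.HOURS);
        TransactionRecovery recovery = JanusGraphFactory.startTransactionRecovery(graph, start);

        // ... keep the process alive; call recovery.shutdown() and graph.close() on exit.
    }
}
```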
Could not instantiate implementation: org.janusgraph.diskstorage.es.ElasticSearchIndex
JanusGraph authentication - restricted privileges
Issues faced for consistent indexing (both Composite & Mixed) [ElasticSearch]
Index wasn't created in ElasticSearch, giving a 404, when a vertexTotals() direct index query is performed.
--> As a workaround, for initial data, 1000 documents of sample data were ingested, and as we expected, the indexes were not present.
--> The data was re-indexed. Indexes were created in ElasticSearch, and some composite indexes needed re-indexing as well. After reindexing, the performance was as expected. ...