Help me understand how you would approach this problem.
Consider a huge tree-structured dataset, which will be shown as a roadmap to users.
When a user clicks on a node, it expands one more layer to show all related paths and nodes.
Each user can save part of the tree to their account, and they can edit the saved data, add custom nodes to their roadmap, etc.
They can even remove a relationship between two nodes.
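The requirements above boil down to a node store plus an adjacency list that supports expanding one layer at a time and deleting edges. Here is a minimal illustrative sketch in Python (the actual system is C#/.NET; all class and field names here are made up for illustration):

```python
# Hypothetical in-memory model of the roadmap: nodes keyed by ID,
# relationships stored as an adjacency list of child IDs.
class Roadmap:
    def __init__(self):
        self.nodes = {}      # node_id -> {"name": ..., "type": ...}
        self.children = {}   # node_id -> set of child node IDs

    def add_node(self, node_id, name, node_type):
        self.nodes[node_id] = {"name": name, "type": node_type}
        self.children.setdefault(node_id, set())

    def add_edge(self, parent, child):
        self.children.setdefault(parent, set()).add(child)

    def expand(self, node_id):
        # "Click" behaviour: return only the next layer, not the whole subtree.
        return sorted(self.children.get(node_id, set()))

    def remove_edge(self, parent, child):
        # A user removing a relationship between two nodes.
        self.children.get(parent, set()).discard(child)

rm = Roadmap()
rm.add_node("root", "Backend", "topic")
rm.add_node("db", "Databases", "topic")
rm.add_node("api", "APIs", "topic")
rm.add_edge("root", "db")
rm.add_edge("root", "api")
print(rm.expand("root"))   # ['api', 'db']
rm.remove_edge("root", "api")
print(rm.expand("root"))   # ['db']
```

The key point is that `expand` only ever returns one layer, so the UI never needs the whole 10-lakh-node tree in memory at once.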
Metrics
There are 10 lakh (1 million) nodes, each representing a point in the roadmap.
There are 150+ node types; each type has a unique set of parameters along with common ones like ID, name, and description.
There are 200+ mapping tables, each representing the mapping between two unique node types.
There are two DBs with the same set of tables: one acts as the master DB, holding all 10 lakh records shown in the UI roadmap.
DB 2 is used to save the same master data but client-specific, along with history tables recording all modifications made so far.
In our current implementation:
From the UI, all 10 lakh nodes can be saved for a single client. We have to refer to Redis, which contains all the master table data, and migrate all 150+ node tables and 200+ mapping table records into a new set of node and mapping tables that carry an extra column (clientID).
We have written a backend API in .NET Core (C#). Using the input payload, we search Redis for the related data, build a list of objects, and finally insert them into the PostgreSQL DB. This process is way too slow: it usually takes 4 to 5 minutes to migrate the entire dataset into the other DB under a specific client.
What would your approach be to a requirement like this? If you could nudge me in the right direction, that would be helpful.
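One common cause of the slowness described above is building the full list of objects in memory and then inserting row by row. A usual fix is to stream the source data lazily and send it to the database in fixed-size batches (multi-row INSERTs or COPY), so memory stays bounded and round trips drop. A minimal Python sketch of the batching idea (the real service is C#/PostgreSQL; the record shape is hypothetical):

```python
from itertools import islice

def batched(iterable, size):
    # Yield lists of at most `size` items without materialising the whole input.
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

# Simulate 10 lakh node records coming from Redis as a lazy generator,
# so the full list never exists in memory at once.
records = ({"id": i, "name": f"node-{i}"} for i in range(1_000_000))

total = 0
for batch in batched(records, 10_000):
    # In the real service, each batch would go to PostgreSQL in one round trip
    # (a multi-row INSERT, or COPY via Npgsql's binary import) instead of
    # one INSERT per row.
    total += len(batch)

print(total)  # 1000000
```

With 10,000-row batches, a million rows become ~100 database round trips instead of a million, which is typically where most of the 4-5 minutes goes.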
11 Replies
Hi, how are you doing? I can certainly help you.
How would you solve this problem?
Hello,
I think this is a question of how to optimize searching in a huge data source. Do you have any ideas?
Yeah, I'm sure. First, I would like to see your current backend API.
Is it a free job?
I'm just interested in this topic 🤣
Are you a developer too?
Yep
If you're working with a graph/tree of nodes, would it be better to use something like Neo4j, which easily supports unlimited levels of depth? https://neo4j.com/
I will definitely check this out.
It's a problem I came across in one of my work projects, and it keeps bugging me whether what I was doing was right or not. Clearly there are systems that handle large volumes of data; I'm just curious how it's done.
There is no problem with showing the graph data on the UI screen. The problem is when we start migrating almost 90% of the master DB content into a different DB under a specific client ID. This migration must happen within a transaction, and the volume of data being migrated is huge, which causes out-of-memory exceptions and makes the API way too slow.
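The out-of-memory exception suggests the data is being routed through the application process. An alternative worth exploring is a set-based copy done entirely inside the database: a single `INSERT ... SELECT` per table that stamps the client ID, run inside one transaction, so no rows are ever materialised as application objects. A runnable sketch using SQLite (table and column names are made up; the same SQL pattern works in PostgreSQL):

```python
import sqlite3

# Illustrative set-based copy: instead of pulling every master row into the
# API process, copy rows inside the database with one INSERT ... SELECT.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE master_nodes (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE client_nodes (client_id TEXT, id INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO master_nodes VALUES (?, ?)",
    [(i, f"node-{i}") for i in range(1000)],
)

client_id = "client-42"
with conn:  # single transaction; no per-row objects in the application
    conn.execute(
        "INSERT INTO client_nodes (client_id, id, name) "
        "SELECT ?, id, name FROM master_nodes",
        (client_id,),
    )

copied = conn.execute(
    "SELECT COUNT(*) FROM client_nodes WHERE client_id = ?", (client_id,)
).fetchone()[0]
print(copied)  # 1000
```

In the described setup, where master and client data live in separate PostgreSQL databases, the `SELECT` side could come from a foreign-data wrapper (`postgres_fdw`) or from a `COPY` stream between the two databases, so the copy still never passes through the .NET process's heap.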