7 Replies
Turns out that using the above works, but what would be the best way for dynamic queries?
When you pass a value in an interpolation, it is converted into a parameter of a prepared query. Table names are an invalid location for parameters in prepared queries.
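Concretely (a sketch, not the exact driver involved here — this assumes a PostgreSQL-style driver where interpolated values become `$n` placeholders):

```typescript
// A value position: fine — the driver sends 42 as a parameter alongside
// the prepared statement text.
const byId = {
  text: "SELECT * FROM users WHERE id = $1", // $1 is a value placeholder
  values: [42],
};

// An identifier position: invalid — placeholders cannot appear where a
// table name is required, so a query like this fails to prepare.
const byTable = {
  text: "SELECT * FROM $1",
  values: ["users"],
};
```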
To provide more guidance on best practices, it would help to know more about your use case for dynamic table names in the query.
My use case is an endpoint where you pass a tableName and get all the rows in that table.
You can also pass an array of fields to get just those fields.
As far as best practices go, coupling your API that closely to the schema is generally not desirable.
If you really want to do that, you need to validate that the table names are actual tables in your DB before inserting them into the query; otherwise you are opening yourself up to SQL injection (or, more generally, allowing access to tables that you may intend to gate behind access control, etc.).
I'd suggest you create a map like this:
And then create the query like this:
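The original snippets aren't shown above, but a sketch of what such a map and query builder might look like (table and column names here are hypothetical):

```typescript
// Allow-list: maps the tableName the client sends to a known-good
// identifier. Anything not in the map is rejected, so user input never
// reaches the SQL text directly.
const tables: Record<string, string> = {
  users: "users",
  orders: "orders",
};

// Per-table allow-list of selectable columns.
const columns: Record<string, string[]> = {
  users: ["id", "email"],
  orders: ["id", "total"],
};

// Builds the SELECT text; identifiers come only from the maps above,
// never from the request.
function buildSelect(tableName: string, fields?: string[]): string {
  const table = tables[tableName];
  if (!table) throw new Error(`unknown table: ${tableName}`);
  const allowed = columns[tableName];
  const cols = (fields ?? allowed).filter((f) => allowed.includes(f));
  if (cols.length === 0) throw new Error("no valid fields requested");
  return `SELECT ${cols.join(", ")} FROM ${table}`;
}
```

For example, `buildSelect("users", ["email"])` produces `SELECT email FROM users`, while an unknown table name or an unlisted field is rejected before any SQL is built.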
That’s a great approach
Hey, I was facing a similar issue. I'm working on a multiplayer game that has rooms, so only the players in a particular room can interact with each other.
Each room stores coordinates and other data, which could grow to a few hundred thousand rows.
Initially I was putting all the data in one huge table and separating by room name while querying.
To avoid that filtering step on such a huge table every time, I was thinking I could create a table for each room on the fly.
sort of like
CREATE TABLE ${roomName}
What would be the appropriate approach for this? If I go with the single huge table approach, I fear the table would grow to many millions of rows within a month and the queries would start to slow.
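For what it's worth, a common alternative to per-room tables is one shared table with an index on the room column, which keeps room lookups fast even at millions of rows. A minimal sketch, assuming a PostgreSQL-style driver and a hypothetical `room_data` table:

```typescript
// Assumed schema (not from the thread):
//   CREATE TABLE room_data (
//     room_name text NOT NULL,
//     x double precision,
//     y double precision,
//     payload jsonb
//   );
//   CREATE INDEX room_data_room_idx ON room_data (room_name);
//
// With that index, "WHERE room_name = ..." does not scan the whole table.

// Builds a parameterized query: roomName travels as a parameter ($1),
// so it never gets interpolated into the SQL text.
function roomQuery(roomName: string): { text: string; values: string[] } {
  return {
    text: "SELECT x, y, payload FROM room_data WHERE room_name = $1",
    values: [roomName],
  };
}
```

This also sidesteps the dynamic-identifier problem discussed above, since the room name is a plain value parameter rather than a table name.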