What's your use case? Bit more context?
@gruntlord6 jump in here, so we aren't spamming
right, but now I'm thinking of a bigger workflow where I spin up sub-objects to do a task
and DOs seem easier to do that with than regular Workers
which I guess is why Workflows is a product
yeah
Workflows is built on DOs
so is everything apparently
Does your app have like users
Or something else that is an object defined within your backend
this one doesn't, but being single-threaded makes it a bit slower
like how are you categorising your data
For my use case it's streams of sports games
so each DO instance is a single match
with things like schedule, scores, players etc
using an ID to call each match's DO
from the matches namespace
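(rough sketch of that pattern in a Worker; the MATCHES binding name and /match/:id route are just illustrative, not from this thread)
```ts
// Rough sketch: route a request to the match's DO by deriving its id from the match id.
export default {
  async fetch(request: Request, env: { MATCHES: DurableObjectNamespace }) {
    const matchId = new URL(request.url).pathname.split("/").pop()!; // e.g. /match/<id>
    const id = env.MATCHES.idFromName(matchId); // stable DO id per match
    const stub = env.MATCHES.get(id);           // stub for that single DO instance
    return stub.fetch(request);                 // forward the request to the match DO
  },
};
```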
there's a list it has of named data sets; it gets each individual data set, pools it together and sends some formatted data to R2, then packages everything from the local SQL storage to D1
If each dataset follows a similar format then maybe a DO per dataset?
How big is each dataset
it works fine now but one of the lists is 189 sets so takes a while in the current setup, was considering having a parent process just spin up a DO for each set, then having the parent consolidate
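(very rough sketch of that fan-out; the SETS namespace binding, the /process endpoint and setNames are all made-up names for illustration)
```ts
// Rough sketch: parent kicks off one DO per dataset concurrently, then consolidates.
async function processAll(setNames: string[], env: { SETS: DurableObjectNamespace }) {
  const results = await Promise.all(
    setNames.map(async (name) => {
      const stub = env.SETS.get(env.SETS.idFromName(name));
      const res = await stub.fetch("https://do/process", { method: "POST" });
      return res.json();
    })
  );
  // Parent consolidates here, e.g. format and write to R2 / D1.
  return results;
}
```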
yeah
I do similar
they vary a lot
I have another namespace for each league
which contains a list of IDs which are references to the match DO
I did it for a different data category and it was way smaller, only 90 lists
or could have KV that stores a parent reference to the dataset IDs
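(tiny sketch of the KV idea; the DATASETS_KV binding and key format are made-up names)
```ts
// Rough sketch: KV holds the parent -> dataset-ID list the parent fans out over.
async function saveDatasetIndex(env: { DATASETS_KV: KVNamespace }, parentId: string, ids: string[]) {
  await env.DATASETS_KV.put(`parent:${parentId}`, JSON.stringify(ids));
}

async function loadDatasetIndex(env: { DATASETS_KV: KVNamespace }, parentId: string): Promise<string[]> {
  return JSON.parse((await env.DATASETS_KV.get(`parent:${parentId}`)) ?? "[]");
}
```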
there's a separate product I have put off working on where I want a master data set and each user gets a dedicated database for their data
I was even thinking crazy, that each object could be its own DO since they tend to contain a huge amount of product data and metadata, which would be part of the customer DO, which belongs to the single parent DO that orchestrates the whole rabbit hole
but I figure this would be an easier starting point than that mess
this one I already made was actually large enough that I ran out of memory dumping to R2 and had to reprogram that part
This is the perfect way to split them up
Like essentially as expected
just seems a bit over the top, but I also read the WebSockets thing and figured that each DO could be updated in real time if users were changing the data
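(rough sketch of what that could look like inside a DO, just to make the WebSocket idea concrete; class and field names are illustrative)
```ts
// Rough sketch: a match DO accepting WebSockets and broadcasting data changes to clients.
export class MatchDO {
  private sockets = new Set<WebSocket>();

  async fetch(request: Request): Promise<Response> {
    if (request.headers.get("Upgrade") === "websocket") {
      const pair = new WebSocketPair();
      const [client, server] = Object.values(pair);
      server.accept();
      this.sockets.add(server);
      server.addEventListener("close", () => this.sockets.delete(server));
      return new Response(null, { status: 101, webSocket: client });
    }
    // Treat any other request as an update and push it to every connected client.
    const update = await request.text();
    for (const ws of this.sockets) ws.send(update);
    return new Response("ok");
  }
}
```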
so I can make this pretty complex vs what my original plan was, just 1 db per user
Yeah you're thinking in DO terms now lol
which is still pretty solid and probably fine for what most people would do, but I figure may be worth the extra work
This is all standard procedure
idk seems like a lot haha
So much flexibility though
these cloudflare guys just try to tempt me with all these fancy use cases
man I don't have enough time to write all the things I want in DOs
well I guess I'll try this crazy concurrency thing tomorrow and see how 189 instances of storing this data works out lol
I do similar with sets of 80-120 instances at a time. Have like 25 sets like that
if that goes well I'll consider trying to make my complicated database plan
My main update code through these is super inefficient cos I do a bunch of heavy work in DO instead of in the calling worker
well the worst part is I probably have like 10 or so more lists of data sets to architect and they run daily
these were just the most important ones
You could do reads of the written data in DO, then compare values to what you are updating with
Only update what's necessary maybe
obvs that might not work for your use case
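(rough sketch of that read-compare-write idea against the DO's SQLite storage; the table and column names are made up, assuming `scores(match_id PRIMARY KEY, value)`)
```ts
// Rough sketch: only write a row when the value actually changed.
function upsertIfChanged(sql: SqlStorage, matchId: string, value: string): boolean {
  const existing = sql.exec("SELECT value FROM scores WHERE match_id = ?", matchId).toArray()[0];
  if (existing?.value === value) return false; // nothing changed, skip the write
  sql.exec(
    "INSERT INTO scores (match_id, value) VALUES (?, ?) ON CONFLICT(match_id) DO UPDATE SET value = excluded.value",
    matchId,
    value
  );
  return true;
}
```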
I'm technically reading duplicate data atm but it's not enough extra duplication for me to really care
I should be just logging the changes on the base set
no it does, I just have bigger fish to fry
only so many things I can make and test in a day
now you see why peeps love working with em hey
If your use case is monetised correctly the runtime usage costs are essentially a rounding error
And you dont do stupid high numbers of storage writes
the very fact I spent an entire day taking something that worked and making it a DO was already dumb but I wanted the increased resource efficiency vs a simple cron
even without the DO I think I still fall under the $5 minimum based on my napkin math
just the thought exercise of working on a DO and getting it is very useful
for this anyway
It takes a bit to wrap the head around the different architecture paradigm
yea it was a lot of extra work, especially given the SQL differences and the logic gate thing
input gate
I think its a lot more fleshed out though
it just makes me want to make more things when I already have things to make lol
Yeah but more efficient stuff
hahaha
not to write XD
just the ease of scalability with this stuff is great
and cos Workers are super extensible you can do a bunch of cool stuff
this actually replaced the backend I was running on a vps so it was a nice win
I have a graphics automation pipeline that can turn a Photoshop file into JSON, with editable values being easily automated, then render into a PNG, all through Workers
using DOs for easy/efficient handling of data in/out, deferring to Workers when doing larger tasks
I was already using images to convert things to R2 in a front end and a Worker with D1 for a Pages application to handle download counts, this was initially just a thought exercise to see if I could take a complicated long-running task and run it on a Worker