Logging Service Best Practice (Service Workers)
Hello everyone, I want to build something with Workers and I'm not sure if it will work out, which is why I'm asking.
Basically I have some workers and I want to add some observability to them. I would like to use Service Bindings for that. Here's what I have in mind.
A request is made to Worker One (available online). It does its normal job, plus some additional "pre-parsing" for the logger (saving some request headers / response headers).
The worker that received the original request then makes a subrequest (through a Service Binding) to the logger worker. Due to limitations of our log provider (Axiom), I would not be able to ingest every request as it happens.
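A minimal sketch of what that hand-off could look like, assuming a service binding named `LOGGER` and an internal `/ingest` route (both names are assumptions, not Cloudflare defaults). The idea is that Worker One builds a small log entry from the headers it cares about and ships it over the binding without blocking the response (in a real Worker, `shipLog` would be wrapped in `ctx.waitUntil`):

```typescript
// Hypothetical types standing in for Cloudflare's runtime bindings.
interface Fetcher {
  fetch(url: string, init?: { method?: string; body?: string }): Promise<unknown>;
}
interface Env {
  LOGGER: Fetcher; // service binding to the logger worker (name is an assumption)
}

// "Pre-parsing": keep only the header fields worth shipping to the logger.
function buildLogEntry(headers: Map<string, string>, status: number, durationMs: number) {
  return {
    userAgent: headers.get("user-agent") ?? null,
    contentType: headers.get("content-type") ?? null,
    status,
    durationMs,
    loggedAt: new Date().toISOString(),
  };
}

// Send the entry over the service binding; in a Worker this call would go
// inside ctx.waitUntil so it doesn't delay the client response.
async function shipLog(env: Env, entry: object): Promise<void> {
  await env.LOGGER.fetch("https://logger.internal/ingest", {
    method: "POST",
    body: JSON.stringify(entry),
  });
}
```

Service-binding subrequests stay inside Cloudflare's network, so the extra hop is cheap compared to a public fetch.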
How would I have to build the logger service so that it throttles the requests it makes? Or is there a better solution than what I'm thinking? I'm open to all sorts of help.
What would the best practices be for this sort of case? KV? Durable Objects? A cron that iterates the KV keys, etc.?
Thanks
What are you planning to log to?
I have a similar need. I need to track each request to one of my workers, which I'm currently storing in a Timescale database. I have my main worker write to a queue (note: queues are in beta) and then a queue worker pulls large batches off that queue to write in large chunks to Timescale.
I'm not sure it's the best approach, but it's working for now. I am a little unhappy that I'm paying more for the logging (queue + queue worker) than for the actual worker itself, but it is what it is.
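The shape of that queue-based setup can be sketched like this, with hypothetical message types standing in for the Queues runtime bindings (the field names and the batch shape are assumptions). The point is that the consumer receives a whole batch of messages at once, which maps naturally onto one multi-row insert into the database:

```typescript
// Hypothetical shapes mirroring a Queues-style producer/consumer pair.
interface Message<T> { body: T; }
interface MessageBatch<T> { messages: Message<T>[]; }

// Per-request log record the main worker enqueues (fields are assumptions).
interface RequestLog { path: string; status: number; ms: number; }

// Consumer side: turn a batch of messages into rows for a single
// multi-row INSERT, so the database sees one round trip per batch.
function toInsertRows(batch: MessageBatch<RequestLog>): Array<[string, number, number]> {
  return batch.messages.map((m) => [m.body.path, m.body.status, m.body.ms]);
}
```

The cost trade-off mentioned above is real: you pay for the queue writes and the consumer invocations, but you amortize the expensive part (the database write) across the whole batch.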
I am planning to log "sort of everything": some request headers, some response headers (plus request duration, for alerts when responses take longer than average). But I really want a general log pipeline that I can just integrate new services into,
ingesting from everywhere into a unified, internal endpoint.
These are the current fields I send to my log service: