Cloudflare Developers

Hello! I'm trying out Analytics Engine in my Worker, but whatever I do I always run into the `Cannot read properties of undefined (reading 'writeDataPoint')` error. It's a TypeScript Worker, and I'm simply calling `env.VIEW_COUNTER.writeDataPoint()` with `[[analytics_engine_datasets]] binding = "VIEW_COUNTER"`...
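For reference, a minimal working setup usually looks like the sketch below. Only the binding name is taken from the message; the `dataset` name and handler are assumptions.

```toml
[[analytics_engine_datasets]]
binding = "VIEW_COUNTER"
dataset = "view_counter"  # assumed dataset name, not from the message
```

```ts
// Minimal sketch; AnalyticsEngineDataset comes from @cloudflare/workers-types.
interface Env {
  VIEW_COUNTER: AnalyticsEngineDataset;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // If env.VIEW_COUNTER is undefined here, the usual suspects are a
    // binding-name mismatch with wrangler.toml or a dev session started
    // before the binding was added.
    env.VIEW_COUNTER.writeDataPoint({
      blobs: [new URL(request.url).pathname], // string dimensions
      doubles: [1],                           // numeric values
      indexes: ["views"],                     // sampling key (max one)
    });
    return new Response("ok");
  },
};
```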

1 per queue message, so 40 per consumer worker invocation

Thanks! Is it a good idea to create a complex index like `"platform_id+project_id+user_id+session_id+event_subtype"` and then use your approach to, let's say, `sum(_sample_interval)` all events for the specific project_id?...
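For what it's worth, a sketch of how that sum could look against the SQL API. It assumes the dataset is called `events` and that project_id is also written as its own blob (`blob2`) so you don't have to prefix-match the compound index; account ID, token, and time window are placeholders.

```ts
// Hedged sketch, not an official pattern: estimate total events for one
// project by weighting each stored row by _sample_interval.
const ACCOUNT_ID = "<account_id>"; // placeholder
const API_TOKEN = "<api_token>";   // placeholder

const sql = `
  SELECT SUM(_sample_interval) AS estimated_events
  FROM events
  WHERE blob2 = 'proj_123'                  -- project_id, assumed written as blob2
    AND timestamp > NOW() - INTERVAL '1' DAY
`;

const resp = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/analytics_engine/sql`,
  { method: "POST", headers: { Authorization: `Bearer ${API_TOKEN}` }, body: sql },
);
console.log(await resp.text());
```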

What are the data point write limits for cron-scheduled invocations in a Worker?

I need some clarification on the AE limits. I currently have a cron trigger running on a Worker every 15 minutes that checks for old content inside a D1 database and fires expiration events to Analytics Engine. The documentation says there is a limit of 25 data point writes per HTTP invocation, but do scheduled invocations have the same limit? Can you do more than 25 data point writes in a cron invocation? What is the limit?
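For context, the shape in question is roughly the sketch below, with made-up table and binding names. Whether the per-invocation write limit applies to `scheduled()` is exactly what's being asked, so nothing here asserts it.

```ts
interface Env {
  DB: D1Database;                      // assumed D1 binding name
  EXPIRATIONS: AnalyticsEngineDataset; // assumed AE binding name
}

export default {
  async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext) {
    // Find expired content in D1 (schema is illustrative).
    const { results } = await env.DB
      .prepare("SELECT id FROM content WHERE expires_at < ?")
      .bind(Date.now())
      .all<{ id: string }>();

    // One data point per expired item; this count is what may or may not
    // hit the per-invocation write limit.
    for (const row of results ?? []) {
      env.EXPIRATIONS.writeDataPoint({
        blobs: ["expired"],
        doubles: [1],
        indexes: [row.id],
      });
    }
  },
};
```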

Any chance to support `offset` or `nextToken` to fetch analytics data with pagination? Currently only `LIMIT` is supported in SQL. Works...
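Until then, one workaround is keyset pagination on `timestamp` instead of an offset. A sketch, assuming rows have distinct-enough timestamps; dataset and column names are made up:

```ts
// Hedged sketch: pass the last timestamp of the previous page as a cursor.
const lastSeen = "2024-01-01 00:00:00"; // from the previous page's final row

const sql = `
  SELECT timestamp, blob1, double1
  FROM events
  WHERE timestamp < toDateTime('${lastSeen}')
  ORDER BY timestamp DESC
  LIMIT 100
`;
```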

Does it need 24 hours to show? I transferred my domain to Cloudflare almost 12 hours ago.

From reading the docs, this service seems most useful for looking at trends and big-picture data points rather than being optimized for accuracy. That makes sense for lots of data points and general analytics, but what's the suggestion for things that require higher levels of accuracy and may not tolerate sampling? For example, what's the suggested way to store view counts? I'm thinking a Durable Object may be the best bet, but that doesn't give an easy way to do time-series data. Alternatively I could use D1 to just store it with SQL, but if there's a best practice or a way to use the platform that integrates with Analytics Engine, I'd prefer to go that route. Thanks for your thoughts!...
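On the Durable Object idea, a minimal sketch (all names assumed) of an exact per-page counter; for time series, the same object could periodically flush its exact total into Analytics Engine as a data point.

```ts
// Hedged sketch: one Durable Object instance per page gives an exact,
// strongly consistent count, unaffected by AE sampling.
export class ViewCounter {
  constructor(private state: DurableObjectState) {}

  async fetch(request: Request): Promise<Response> {
    // Read, increment, and persist the counter.
    const count = ((await this.state.storage.get<number>("count")) ?? 0) + 1;
    await this.state.storage.put("count", count);
    return new Response(String(count));
  }
}
```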

I can't seem to access Analytics Engine inside Hono. Does anyone know how to do that? I tried `c.env.DOMAIN_ANALYTICS.writeDataPoint(dataPoint);` and I have this in wrangler...
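A shape that typically works here is to give Hono the Worker's bindings as a type parameter. A sketch; only the binding name is taken from the message:

```ts
import { Hono } from "hono";

// AnalyticsEngineDataset comes from @cloudflare/workers-types.
type Bindings = {
  DOMAIN_ANALYTICS: AnalyticsEngineDataset;
};

const app = new Hono<{ Bindings: Bindings }>();

app.get("*", (c) => {
  // c.env carries the Worker bindings once the type parameter is set.
  c.env.DOMAIN_ANALYTICS.writeDataPoint({
    blobs: [c.req.path],
    doubles: [1],
    indexes: ["views"],
  });
  return c.text("ok");
});

export default app;
```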

@jim.hawk what sort of numbers are high volumes? Let's say I wanted to create a per-request billing system: I write into analytics for each request, per customer. The problem is that if sampling kicks in, it becomes tricky to bill on actual usage, which would be unfair to customers...

I've only scanned all the above, so apologies if this has already been asked and answered. Having built a similar ABR solution for time-series data years ago, I have some questions about querying. When you say query billing would be by "rows per query", how is this affected by ABR sampling? If I query a month's worth of 1 Hz data, how many rows will this hit? I guess you'd hit the downsampled rows rather than all 2.6M rows, but how many rows would that be? The main downside I see to the current query API is that there's no control over read sampling, and if you did add that as a feature, it would be super helpful to have a way to find out roughly how many rows a query would hit...

@JPL | Data PM I don't mind you using this document for editing, but I wanted to let you know it's a copy inside my Google Apps account. Someone from Cloudflare requested access and I gave it.

Hmm, I'd say better granularity could have great value, for example per index if you're expecting one index per customer. If free users are in one dataset with, say, 30-day retention, and then someone upgrades, do I lose their old data because of the dataset switch? Do I copy everything over (probably getting sampled again, so not viable imho)? Or do I write cross-dataset queries (no) just to retain their data for longer?

Weird. Would this happen on a global scale? I shouldn't have more than 2 requests per colo per minute, so it shouldn't be coming from one place...

The wording around how sampling is done is unclear: which row from a sampled region gets stored? What if I'm using this to count interactions with a service, and a user spams low-tier requests to "hide" high-tier requests that get dropped when sampling? There are example queries on the docs page to "account for" sampling, and other queries aren't really affected, but if it were more openly communicated, with examples, I'm sure people would understand and accept it more.
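For reference, the "account for sampling" pattern the docs describe boils down to weighting by `_sample_interval`; a sketch with made-up dataset and column names:

```ts
// Each stored row stands in for _sample_interval original events,
// so summing that column estimates the true total.
const countSql = `
  SELECT SUM(_sample_interval) AS estimated_requests
  FROM events
`;

// Averages need the same weighting, or sampled-away rows skew them.
const avgSql = `
  SELECT SUM(double1 * _sample_interval) / SUM(_sample_interval) AS weighted_avg
  FROM events
`;
```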