Cloudflare Developers

Welcome to the official Cloudflare Developers server. Here you can ask for help and stay updated with the latest news

It seems the best way is:

It seems the best way is:

1. Use the "List images V2" endpoint to get all images (https://developers.cloudflare.com/api/operations/cloudflare-images-list-images-v2)
2. Use the "Base image" endpoint to fetch the images (make sure you throttle somewhat) and upload to R2 (https://developers.cloudflare.com/api/operations/cloudflare-images-base-image)...
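Those two steps can be sketched as below; the endpoint paths, the response shapes (`result.images`, `result.continuation_token`), the bucket binding, and the throttle delay are assumptions taken from the linked docs, not a tested migration script:

```javascript
const API = "https://api.cloudflare.com/client/v4";

// Pure helper: the "Base image" (original blob) endpoint for one image ID.
function baseImageUrl(accountId, imageId) {
  return `${API}/accounts/${accountId}/images/v1/${imageId}/blob`;
}

async function migrateToR2(accountId, token, bucket) {
  const headers = { Authorization: `Bearer ${token}` };
  let continuationToken;
  do {
    // Step 1: page through "List images V2".
    const url = new URL(`${API}/accounts/${accountId}/images/v2`);
    if (continuationToken) url.searchParams.set("continuation_token", continuationToken);
    const page = await (await fetch(url, { headers })).json();

    // Step 2: fetch each original image and upload it to R2, throttled.
    for (const image of page.result.images) {
      const blob = await (await fetch(baseImageUrl(accountId, image.id), { headers })).arrayBuffer();
      await bucket.put(image.id, blob);
      await new Promise((resolve) => setTimeout(resolve, 250)); // crude throttle
    }
    continuationToken = page.result.continuation_token;
  } while (continuationToken);
}
```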

Sippy

Hi, can someone help here? I am trying to enable Sippy for one of my buckets, but the curl execution fails with error {"success":false,"errors":[{"code":10063,"message":"Invalid upstream credentials"}],"messages":[],"result":null}. I have checked that the token is active and correct. Please help, as I am completely stuck...

Question about serving JS and CSS assets

Question about serving JS and CSS assets (and probably images) from R2. What is the current perspective, and the most optimal way, to serve public, immutable assets from R2 through a custom domain? I see old threads about tiered caching not working for a custom domain + public bucket. Would that mean I should implement a Worker on my custom domain that reads from R2 and responds with Cache-Control headers to leverage tiered caching?
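A minimal sketch of that Worker approach, assuming a `BUCKET` R2 binding and treating every object as immutable (the binding name and header values are illustrative):

```javascript
// Pure helper: response headers for a public, immutable asset.
function immutableHeaders(contentType) {
  return {
    "content-type": contentType || "application/octet-stream",
    "cache-control": "public, max-age=31536000, immutable",
  };
}

const worker = {
  async fetch(request, env, ctx) {
    const cache = caches.default;
    const cached = await cache.match(request);
    if (cached) return cached;

    const key = new URL(request.url).pathname.slice(1);
    const object = await env.BUCKET.get(key); // BUCKET binding is an assumption
    if (!object) return new Response("Not found", { status: 404 });

    const response = new Response(object.body, {
      headers: immutableHeaders(object.httpMetadata?.contentType),
    });
    ctx.waitUntil(cache.put(request, response.clone()));
    return response;
  },
};
// In a real Worker module you would `export default worker`.
```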

Posting here is preferred.

thanks for the reply. seeing slower

Thanks for the reply. Seeing slower downloads from India since yesterday. Just tested it now; the transfers are still slow.

Quick feedback on the S3 support of AWS

Quick feedback on the AWS S3 compatibility of R2. The AWS S3 multipart spec says each part must be at least 5 MB (except the last part), and parts can be of different sizes. But R2 insists on every part being exactly the same size (which is not so easy to implement, TBH).

Is it just me or downloads are flakey?

Hi there, I believe there's a bug in the

Hi there, I believe there's a bug in the AWS S3 multipart upload compatibility. Your documentation says "The last part has no minimum size", but from my testing this is not true: if the last part is less than 5MB, R2 will fail to complete the multipart upload with EntityTooSmall.
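Given the behaviour described in these reports (equal-size parts, and a final part under 5 MB rejected with EntityTooSmall), one workaround is to plan part boundaries so a too-small tail is folded into the final part. A hypothetical sketch; `planParts` and the sizes reflect the reported behaviour, not documented R2 limits:

```javascript
// Plan [start, end) byte ranges for a multipart upload: every part except
// the last is exactly partSize bytes, and any small tail is merged into
// the final part so it is never under MIN_PART (unless the whole object is).
const MIN_PART = 5 * 1024 * 1024;

function planParts(totalSize, partSize = 10 * 1024 * 1024) {
  const parts = [];
  let offset = 0;
  while (totalSize - offset > partSize + MIN_PART) {
    parts.push({ start: offset, end: offset + partSize });
    offset += partSize;
  }
  parts.push({ start: offset, end: totalSize });
  return parts;
}
```

For a 25 MiB object with 10 MiB parts this yields two parts of 10 MiB and 15 MiB, avoiding a sub-5 MiB tail.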

Dangling domain

How do I delete the R2 DNS record? The bucket it targets has been deleted...

Vercel OG

Methods allowed: GET. Allowed domains: our domains & localhost ports. We added the Vercel OG playground domain as well.

HonoRequest - Hono

Hey everyone! I've got an annoying problem that I cannot solve with uploading multiple files over a form-data request to R2 with Hono.
```
const { assets, collection_name }: { assets: File[] | File; collection_name: string } =
  await context.req.parseBody({ all: true,...
```
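One gotcha with `parseBody({ all: true })` is that a repeated field can arrive as either a single File or a File[]; normalizing it first may help. A sketch, where the `BUCKET` binding and field names follow the snippet above and are assumptions:

```javascript
// Normalize Hono's File-or-File[] shape before uploading to R2.
function toArray(value) {
  return Array.isArray(value) ? value : [value];
}

const handler = async (context) => {
  const body = await context.req.parseBody({ all: true });
  const assets = toArray(body["assets"]);
  for (const file of assets) {
    // R2 put() accepts ArrayBuffer/stream bodies.
    await context.env.BUCKET.put(file.name, await file.arrayBuffer());
  }
  return context.json({ uploaded: assets.length, collection: body["collection_name"] });
};
```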

Kinesis

Ah yes, that sounds promising. There's even an example that's pretty close: https://developers.cloudflare.com/queues/examples/send-errors-to-r2/
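Along the lines of that example, a Queues consumer Worker can write each message into R2; the `ERROR_BUCKET` binding and key layout here are assumptions, not the linked example verbatim:

```javascript
// Pure helper: one object per message, grouped by day.
function errorKey(timestamp, id) {
  const day = new Date(timestamp).toISOString().slice(0, 10);
  return `errors/${day}/${id}.json`;
}

const worker = {
  async queue(batch, env) {
    for (const msg of batch.messages) {
      await env.ERROR_BUCKET.put(errorKey(msg.timestamp, msg.id), JSON.stringify(msg.body));
      msg.ack(); // acknowledge so the message is not redelivered
    }
  },
};
// In a real Worker module you would `export default worker`.
```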

Tokens

edit: finally figured it out (I swear I looked for a long time before posting) — if you create a normal Cloudflare API token and pick Workers R2 as the scope, it won't show an Access Key ID, even if it's effectively an R2-specific token with per-bucket access controls. You have to start at the R2 token UI in order to create a token that will show an Access Key ID for use with the S3-compatible API.

Like, there is a bucket someone runs to

Like, there is a bucket someone runs to serve map tiles (Protomaps). While the service is already on R2, I don't want to hit them with the bill when I use a map, so I mirror the map files manually. Because they are super large (100 GB+), I can't easily just do it in a Worker, so it would be cool to have some way of pointing R2 at a file and downloading it automatically (without needing rclone), or even better some way to script it.

Is there any possibility that you could

Is there any possibility that you could use a worker or other sort of transform rule to either remove the spaces before storing or rewrite the url while in transit?
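A sketch of the Worker variant: normalize the requested key (stripping spaces) before looking it up in R2. The `BUCKET` binding and the normalization rule are assumptions:

```javascript
// Pure helper: decode the path and drop spaces from the object key.
function normalizeKey(pathname) {
  return decodeURIComponent(pathname).slice(1).replace(/ /g, "");
}

const worker = {
  async fetch(request, env) {
    const key = normalizeKey(new URL(request.url).pathname);
    const object = await env.BUCKET.get(key); // BUCKET binding is an assumption
    return object ? new Response(object.body) : new Response("Not found", { status: 404 });
  },
};
// In a real Worker module you would `export default worker`.
```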

is there any documents or tutorial to do

Is there any documentation or tutorial to do it correctly? I am not good at DNS settings.

Red banners

So I created an R2 bucket and selected the "Specify jurisdiction" field, and it is now throwing this error every time I go into the bucket on the dash.

aws4fetch

It would be easiest to write it in JavaScript. As for how to access a file on GCS, you should be able to use a library like aws4fetch to pull the files down
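For the GCS side, something like the following could work against GCS's S3-interoperable XML endpoint, which accepts SigV4 signatures with HMAC keys; the bucket and key names are placeholders and this is an untested sketch:

```javascript
// Pure helper: path-style URL on GCS's interoperability endpoint
// (assumes a flat key; slashes in the key would be percent-encoded here).
function gcsObjectUrl(bucket, key) {
  return `https://storage.googleapis.com/${bucket}/${encodeURIComponent(key)}`;
}

// Hypothetical download using aws4fetch with GCS HMAC credentials.
async function download(bucket, key, hmacId, hmacSecret) {
  const { AwsClient } = await import("aws4fetch"); // assumed available
  const client = new AwsClient({
    accessKeyId: hmacId,
    secretAccessKey: hmacSecret,
    service: "s3",
    region: "auto",
  });
  const res = await client.fetch(gcsObjectUrl(bucket, key));
  return res.arrayBuffer();
}
```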

is R2 able to add metadata for "put

Is R2 able to add metadata for "put object" commands?
```
import { PutObjectCommand,...
```
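It can: in the S3 API, custom metadata goes in the `Metadata` map on `PutObjectCommand` and is stored as `x-amz-meta-*` headers, and R2's S3 compatibility layer supports this. A sketch, where the client setup and names are placeholders:

```javascript
// Build the PutObject input: the Metadata map becomes x-amz-meta-* headers
// on the stored object (pure helper, testable without the SDK).
function putWithMetadataInput(bucket, key, body, metadata) {
  return { Bucket: bucket, Key: key, Body: body, Metadata: metadata };
}

// Hypothetical upload using @aws-sdk/client-s3 against R2's S3 endpoint.
async function uploadWithMetadata(client, bucket, key, body, metadata) {
  const { PutObjectCommand } = await import("@aws-sdk/client-s3"); // assumed installed
  return client.send(new PutObjectCommand(putWithMetadataInput(bucket, key, body, metadata)));
}
```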