Hi Team, I need some help with building a custom HLS streaming service using R2. I'm aware of the streaming service provided by Cloudflare, but I want to build a custom one on R2. I need help implementing streaming of multiple HLS files through R2. Do I need to create a presigned URL for every file, or is there a better way, considering my bucket is not public? Please help!
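For context on why this touches many objects: an HLS stream is a plain-text .m3u8 playlist that references every media segment by URL, so each segment request needs to be authorized, not just the playlist. A minimal illustrative playlist (the segment names are made up):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
seg0.ts
#EXTINF:6.0,
seg1.ts
#EXT-X-ENDLIST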
Hi @Sid | R2, can you please help with this?
Unknown User•14mo ago
Message Not Public
So sorry about that, I'm quite new to Cloudflare. Let me draw an analogy using AWS, where we can do the same thing by combining S3 and CloudFront. CloudFront can directly access multiple files from a private S3 bucket using signed cookies instead of creating a presigned URL for every .ts file. Can we achieve something similar using R2, or maybe a combination of Workers and R2, or even EC2 and R2? I don't want to go with the S3 option on AWS.
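The signed-cookie idea does port over to Workers. A minimal sketch, assuming a hypothetical hls_auth cookie of the form "<expiryMillis>.<hex HMAC>" and a SIGNING_SECRET secret on the Worker (none of these names are from the thread):

export default {
  async fetch(request: Request, env: { SIGNING_SECRET: string }): Promise<Response> {
    // Pull the hypothetical hls_auth cookie: "<expiryMillis>.<hex hmac>".
    const cookie = request.headers.get("cookie") ?? "";
    const match = cookie.match(/hls_auth=([^;]+)/);
    if (!match) return new Response("Unauthorized", { status: 401 });

    const [expiry, signature] = match[1].split(".");
    const exp = Number(expiry);
    if (!signature || !Number.isFinite(exp) || Date.now() > exp) {
      return new Response("Unauthorized", { status: 401 });
    }

    // Verify an HMAC-SHA256 over the expiry using Web Crypto.
    const key = await crypto.subtle.importKey(
      "raw",
      new TextEncoder().encode(env.SIGNING_SECRET),
      { name: "HMAC", hash: "SHA-256" },
      false,
      ["verify"],
    );
    const sigPairs = signature.match(/../g);
    if (!sigPairs) return new Response("Unauthorized", { status: 401 });
    const valid = await crypto.subtle.verify(
      "HMAC",
      key,
      new Uint8Array(sigPairs.map((b) => parseInt(b, 16))),
      new TextEncoder().encode(expiry),
    );
    if (!valid) return new Response("Unauthorized", { status: 401 });

    // One valid cookie now authorizes the playlist and every .ts segment:
    // forward the request on to the R2 custom domain behind this route.
    return fetch(request);
  },
};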
I hope that makes it clearer, @Sid | R2.
Unknown User•14mo ago
Message Not Public
No, I don't want to expose my bucket to the internet.
Let me put it this way: I have a private bucket with multiple objects stored in it. Now I need a way to access 3 objects from the bucket using a single signed URL. Is it possible?
You can achieve this by:
1- Attaching a domain to your R2 bucket to make it publicly reachable (it will become protected in the next steps)
2- Creating a Worker that intercepts any request to your bucket and returns the correct file if authentication is valid, otherwise returning another response before the request ever reaches R2
I have done similar things myself, to allow public access only from certain origins and to do some pre-authentication on my files.
You should not use an R2 binding; instead, just forward the request to R2 so you keep the cache benefit (see the sketch below).
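To make the binding-versus-fetch distinction concrete, a small sketch (the BUCKET binding name is an assumption, and the Worker is assumed to sit on a route in front of the bucket's custom domain):

export default {
  async fetch(request: Request, env: { BUCKET: R2Bucket }): Promise<Response> {
    // Option A: read through the R2 binding. This works, but the response
    // is not cached at the edge unless you also wire up the Cache API.
    // const object = await env.BUCKET.get(new URL(request.url).pathname.slice(1));
    // if (!object) return new Response("Not found", { status: 404 });
    // return new Response(object.body);

    // Option B: forward the request to the R2 custom domain behind this
    // route, so hot objects (e.g. HLS segments) are served from
    // Cloudflare's cache instead of hitting R2 on every request.
    return fetch(request);
  },
};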
Makes sense, @omar. Thank you so much, I will check it out.
I have some examples if you'd like.
Sure, it would be great if you could share those as well.
app.get("/public/:application_id/:tenantId/:userId/:filename", async (c) => {
try {
const { tenantId, application_id } = c.req.param();
if (!tenantId) return c.notFound();
const tenantConfig = await getTenantCache(c.env.DB_EU, c.env.KV, tenantId, application_id);
if (!tenantConfig) return c.notFound();
if (tenantConfig.allowed_origins?.length) {
const origin = c.req.header("origin");
if (!origin || !tenantConfig.allowed_origins.includes(origin)) return forbidden(c);
}
if (tenantConfig.remaining_reads <= 0) {
return tooManyRequests(c);
}
c.executionCtx.waitUntil(consumeDownload(c.env.DB_EU, tenantId, application_id));
const result = await fetch(c.req.url);
return c.body(result.body, {
headers: result.headers,
status: result.status
});
} catch (e) {
return handleError(e, c, "Failed to download file");
}
});
My bucket files are stored in this hierarchy: public/:application_id/:tenantId/:userId/:filename
I am using Hono, but you can achieve similar things easily with this snippet:
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);
    // Placeholder check: swap in your real authentication here.
    if (!url.searchParams.has('allow')) {
      return new Response('Unauthorized: allow required', { status: 401 });
    }
    const origin = request.headers.get('origin');
    try {
      // Forward the request past the interceptor into R2.
      const result = await fetch(request);
      // Note: spreading a Headers object into an object literal yields
      // nothing, so copy the headers via the Headers API instead.
      const headers = new Headers(result.headers);
      if (origin) {
        headers.set('access-control-allow-origin', origin);
      }
      return new Response(result.body, {
        status: result.status,
        headers
      });
    } catch (e) {
      return new Response('Failed', { status: 500 });
    }
  },
};
This would apply to all your files regardless of their path.
Now you need to set up the interceptor in Cloudflare by adding a route to your Worker: click on your Worker, go to the "Triggers" tab, go to Routes, and add the route r2.mydomain.com/*, where r2.mydomain.com is the domain assigned to your R2 bucket. If you only want to control a subset of your R2 folders, you can use r2.mydomain.com/myprefix/*.
All the magic is done by "const result = await fetch(request);", which forwards the request past the interceptor into R2.
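If you deploy with Wrangler rather than clicking through the dashboard, the same route can live in wrangler.toml; a sketch, assuming mydomain.com is your zone and the worker name is a placeholder:

name = "r2-interceptor"
main = "src/index.ts"
compatibility_date = "2024-01-01"

# Same effect as adding r2.mydomain.com/* under Triggers -> Routes.
routes = [
  { pattern = "r2.mydomain.com/*", zone_name = "mydomain.com" }
]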
Thanks again, @omar. Makes sense, but since I'm new to Cloudflare I'll probably need some time to understand all of it. Let me check this out.
Sure, let me know if you need more explanation
Sure, and thanks again for all the details. This will save a lot of my time.