Using Workers to compress images and upload them back to an R2 bucket.
I'm using a Worker script to compress images uploaded to an R2 bucket. I have set up an event notification with the Wrangler CLI that pushes an event onto a queue, and I consume the queue messages in the Worker script. The script creates a presigned URL, fetches the image from the bucket, compresses it with the Jimp package, and then uploads it back to the bucket.
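In outline, the consumer side looks like this. This is a simplified sketch, not my actual code: the binding name `BUCKET`, the message shape `{ key }`, and the `compress` stand-in are illustrative, and my real code goes through a presigned URL rather than the R2 binding.

```javascript
// Simplified sketch of the queue-consumer Worker described above.
// Assumptions: the R2 bucket is bound as `env.BUCKET`, and each queue
// message body is `{ key: "<object key>" }`. `compress` is a stand-in
// for the real Jimp step.
const worker = {
  async queue(batch, env) {
    for (const message of batch.messages) {
      const { key } = message.body;

      // Read the original object via the R2 binding.
      const object = await env.BUCKET.get(key);
      if (object === null) continue; // object was deleted in the meantime

      const original = await object.arrayBuffer();

      // Placeholder for the actual image compression (Jimp in my code).
      const compressed = await compress(original);

      // Write the result back to the bucket under the same key.
      await env.BUCKET.put(key, compressed);
    }
  },
};

// Stand-in "compressor": real code would decode and re-encode the image here.
async function compress(buffer) {
  return buffer;
}
```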
Everything works well as long as the image is under 3 MB. As soon as it crosses that size, my Worker starts failing. Can you please help me with this? I need to move my code to a production environment ASAP.
Since there is no direct support for the AWS S3 SDK, I had to use the aws4fetch package to generate the presigned URL. There is also no direct support for the Node.js sharp package, which is why I had to use the Jimp package.
Hey @Sandeep, sorry to hear you're running into this issue. Because your application has several failure points, can you do some debugging to pinpoint where the failure happens (Jimp, generating the presigned URL, uploading to the bucket, ...)?
https://developers.cloudflare.com/workers/observability/logging/real-time-logs/
Hello @jack, I just tried the logs; here is the output for the issue I'm facing. I uploaded a 20 MB JPEG file to the R2 bucket and it uploaded fine, but when I make a fetch request for the object I get the error below. The Worker code I'm using is attached. Please help me with this!
I'm getting the logs below from the Worker log stream:
{
  "outcome": "exceededMemory",
  "scriptVersion": {
    "id": "5c72e87a-332b-475b-8a03-2861ce1beb0e"
  },
  "scriptName": "hello-world-abc",
  "diagnosticsChannelEvents": [],
  "exceptions": [
    {
      "name": "Error",
      "message": "Promise will never complete.",
      "timestamp": 1716279401666
    }
  ],
  "logs": [
    {
      "message": [
        "KEY=======>",
        "Sample-jpg-image-20mb.jpg"
      ],
      "level": "log",
      "timestamp": 1716279400178
    },
    {
      "message": [
        ">>>>>>>>>>>>",
        "{\"signedUrl\":\"https://<MY_BUCKET.ACC_ID.r2.cloudflarestorage.com/Sample-jpg-image-20mb.jpg\",\"headers\":{\"authorization\":\"AWS4-HMAC-SHA256 Credential=5080868f24377e6b2058dc10c87e6619/20240521/auto/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=32dd98b7a38bb60e115159e747e54129817914768477c688181e20f30a94406f\",\"x-amz-content-sha256\":\"UNSIGNED-PAYLOAD\",\"x-amz-date\":\"20240521T081640Z\"}}"
      ],
      "level": "log",
      "timestamp": 1716279400178
    }
  ],
  "eventTimestamp": 1716279400178,
  "event": {
    "batchSize": 1,
    "queue": "test-bucket-queue"
  },
  "id": 3
}
@Sandeep You are exceeding memory capacity (note the `"outcome": "exceededMemory"` in your log). From a cursory look, there appear to be reported issues with Jimp's memory usage exploding.
See https://developers.cloudflare.com/workers/platform/limits/#memory
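Keep in mind that a decoded bitmap takes roughly width × height × 4 bytes, which is far larger than the compressed JPEG on disk, so a 20 MB JPEG can easily blow past the documented per-Worker memory limit (128 MB at the time of writing) once Jimp decodes it. One illustrative mitigation, not from the thread, is to check the object's size before decoding and skip or route oversized files elsewhere; the `BUCKET` binding name and the 20 MB threshold below are assumptions for the sketch.

```javascript
// Illustrative guard: decide whether an object is safe to decode in-Worker.
// Assumes the R2 bucket is bound as `env.BUCKET`; the threshold is arbitrary
// and should be tuned to the Worker's memory limit and image dimensions.
const MAX_BYTES = 20 * 1024 * 1024;

async function shouldProcess(env, key) {
  // head() returns object metadata (including size) without reading the body.
  const head = await env.BUCKET.head(key);
  if (head === null) return false; // object no longer exists
  // Decode cost grows with pixel count, so keep a comfortable margin.
  return head.size <= MAX_BYTES;
}
```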