@kian
Now you don't clutter up the main R2 channel 🙂
Aaaaaaaaaaaaand it was because I didn't include a body in my replayed requests.
So where does NoSuchBucket come from in Ben's case?
Is it the aws s3 tool that you're using, or rclone?
The NoSuchBucket that I was getting was just a bad request (no body in the PUT), thought I'd copied everything over but evidently not.
A few other people have reported NoSuchBucket in the past with multipart uploads though, including rclone 1.59, so I'm still curious as to how it happens.
Sure, go for it
My assumption is that it has a hiccup somewhere, possibly something gets lost in transit, and a bad request causes NoSuchBucket - but I honestly can't understand how a client would do it.
I can cause that easily, but that's by sending a bad request.
How big are the files?
I expect it to be a pretty niche bug
I think multipart concurrency in R2 can be a bit temperamental at a certain level which could cause it to intermittently say no - but in my experience that returns a signature error
Does that "upload failed" exit the command entirely?
Make sure you're using a concurrency of <= 3 for multipart
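With aws-sdk-js v3 that's the queueSize option on lib-storage's Upload - a rough sketch, with placeholder endpoint/bucket/key (credentials come from the env):
```js
// A sketch of capping multipart concurrency with @aws-sdk/lib-storage
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import { createReadStream } from "node:fs";

const client = new S3Client({
  region: "auto",
  endpoint: "https://<accountid>.r2.cloudflarestorage.com", // placeholder R2 endpoint
});

const upload = new Upload({
  client,
  params: { Bucket: "my-bucket", Key: "big-file.bin", Body: createReadStream("big-file.bin") },
  queueSize: 3,              // number of parts uploaded in parallel (default is 4)
  partSize: 8 * 1024 * 1024, // one fixed part size - see the same-size-parts note later
});

await upload.done();
```
The aws s3 CLI equivalent is aws configure set default.s3.max_concurrent_requests 3 - it defaults to 10, which comes up again below.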
How does S3 'know' what operation I ran, other than the HTTP method and whatever it can determine from the query params?
I don't understand the question. The route is fully determined by the query params & method.
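For illustration, a toy router in that spirit (made-up names, and the real dispatch is obviously more involved, but these query params match the actual S3 multipart API):
```js
// Pick the operation from nothing but the method and the query params
function route(method, query) {
  if (method === "POST" && query.has("uploads")) return "CreateMultipartUpload";
  if (method === "PUT" && query.has("partNumber") && query.has("uploadId")) return "UploadPart";
  if (method === "POST" && query.has("uploadId")) return "CompleteMultipartUpload";
  if (method === "DELETE" && query.has("uploadId")) return "AbortMultipartUpload";
  if (method === "PUT") return "PutObject";
  return "Unknown";
}

route("PUT", new URLSearchParams("partNumber=2&uploadId=abc")); // "UploadPart"
```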
Fair enough - that was pretty much what I was wondering, if that's how or not.
Lemme try the same scenario against S3 and see what they return
Let me know how you replicate the NoSuchBucket
It might be representative of a deeper error
So resending an existing part is fine (200) and then an aborted one returns NoSuchUpload.
Seems to be Content-Length related. Since I sent with no body, it was sending Content-Length: 0.
Ehhhh, maybe not so simple - I mean I can cause that by setting Content-Length to 1 when I'm not actually sending 1 byte.
The NoSuchBucket scenario doesn't actually give me anything but a 200 OK from AWS.
To clarify:
S3 returns NoSuchUpload when the multipart upload is cancelled, regardless of body.
R2 returns NoSuchBucket if there is no body and NoSuchUpload if there is a body.
As for resending an UploadPart with a different body to the original one:
S3 accepts it with a 200 OK and returns the ETag in a header.
R2 returns InternalError if there is a body and NoSuchBucket if there is no body.
So I can just resend any random UploadPart to S3 with whatever body I want, or no body at all, and it'll accept it with a 200 OK status.
Doing the same in R2 returns InternalError if I provide a body, or NoSuchBucket if I don't provide a body.
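Outside Postman, the aborted-upload case is easy to replay with aws-sdk-js v3 - a sketch with placeholder endpoint/bucket/key:
```js
// Replay an UploadPart against an aborted multipart upload
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  AbortMultipartUploadCommand,
} from "@aws-sdk/client-s3";

const client = new S3Client({ region: "auto", endpoint: "https://<accountid>.r2.cloudflarestorage.com" });
const target = { Bucket: "my-bucket", Key: "test.bin" };

const { UploadId } = await client.send(new CreateMultipartUploadCommand(target));
await client.send(new AbortMultipartUploadCommand({ ...target, UploadId }));

try {
  // Re-send a part (with a body) against the now-aborted upload
  await client.send(new UploadPartCommand({ ...target, UploadId, PartNumber: 1, Body: "x".repeat(1024) }));
} catch (err) {
  console.log(err.name); // S3: NoSuchUpload; R2 at the time: NoSuchUpload (body) / NoSuchBucket (no body)
}
```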
I don't know if this is an actual bug, since it seems like a pretty niche case that is entirely made up with my Postman requests, or if it's just a design decision that's up to the provider.
Pretty sure the NoSuchBucket is an error on our end.
Very strange. The code internally thinks it's returning NoSuchUpload but somehow the client is seeing NoSuchBucket rendered

(tracing through with console messages)
It'd explain why you couldn't line up the error messages (but could line up the timestamps) earlier
Well, our logs only capture the HTTP status, annoyingly. NoSuchUpload and NoSuchBucket are both 404.
So tracing through the code it was clearly going to be NoSuchUpload
Funnily enough, if I do actually send an invalid bucket name, I get NoSuchUpload rather than NoSuchBucket.
That's because of the decryption piece again 🙂
Can't trick me

(the bucket name is part of the key material used to encrypt the upload id)
anyway the mystery deepens
The InternalError I get when I'm re-sending an UploadPart in Postman, I also can't replicate in aws-sdk-js.
I think Postman is just cursed, somewhere.
What else is a part of the uploadId? User-Agent or anything like that?
No
Nothing you would mess up with Postman
For the InternalError I know what that is. It should be returning BadUpload
Is what's a part of the uploadId up to the implementation, or is it defined in the S3 spec? i.e. AWS just says "Upload ID identifying the multipart upload whose part is being uploaded."
Although it's on my list of things to fix in the next month or so when multipart gets revamped (I won't be doing the work but I'll be overseeing it)
UploadId is an opaque string
It's tied to the account + bucket + object name
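Purely for illustration - R2's actual scheme isn't public - one way to get an opaque id bound to that triple is authenticated encryption with account/bucket/object as the additional authenticated data:
```js
// Illustrative only: mint/open an opaque uploadId bound to its context
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const MASTER_KEY = randomBytes(32); // stand-in for a managed secret

function mintUploadId(account, bucket, key, internalId) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", MASTER_KEY, iv);
  cipher.setAAD(Buffer.from(`${account}/${bucket}/${key}`)); // bind id to its context
  const ct = Buffer.concat([cipher.update(internalId, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString("base64url");
}

function openUploadId(account, bucket, key, uploadId) {
  const raw = Buffer.from(uploadId, "base64url");
  const decipher = createDecipheriv("aes-256-gcm", MASTER_KEY, raw.subarray(0, 12));
  decipher.setAuthTag(raw.subarray(12, 28));
  decipher.setAAD(Buffer.from(`${account}/${bucket}/${key}`));
  // Throws if any part of the context differs - which would explain a wrong
  // bucket name surfacing as NoSuchUpload instead of NoSuchBucket
  return Buffer.concat([decipher.update(raw.subarray(28)), decipher.final()]).toString("utf8");
}
```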
Gotcha
The internal error you're getting is because the upload size is changing but that's not really allowed
All parts except the last must be the same size
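i.e. pick one part size up front and slice with it - a tiny sketch:
```js
// Every yielded part is partSize bytes, except possibly the last one
function* partsOf(buf, partSize) {
  for (let offset = 0; offset < buf.length; offset += partSize) {
    yield buf.subarray(offset, offset + partSize);
  }
}
```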
Oh yeah, I think I recall that discussion when ncw was getting the R2 provider into rclone
or at least I remember seeing it then - could just be losing my mind
Which is technically different from S3 but meh
The interpretation I wrote for that logic though is more complex than it needs to be I think. That's part of the cleanup work scheduled in the next month or so
By GA for sure
anyway - back to figuring out why 'Upload Not Found' == 'Bucket Not Found' returns true
R2 needs to have a deep-dive blog post, or something like the 'How Workers Work' pages in the documentation - it all sounds pretty interesting. Don't know why I'm googling about the two now since I'm fairly certain I won't be finding R2 on Google.

I have 3 deep dive blogs written but they haven't been scheduled
OMG. I found the bug. FML
The question is how many keystrokes it'd take to fix the bug haha
Just a bad comparison/match?
Kind of
Turns out we don't have strict-boolean-expressions on for our linter, and I think there was a bad merge conflict or something. But the error conditions look like this:
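(The original was a screenshot; reconstructed roughly from the snippets quoted below, the shape was something like:)
```js
// Reconstructed shape, not the real source - names come from this thread
const BucketErrorCategory = { BucketNotFound: 0, UploadNotFound: 1, TooMuchConcurrency: 2 };
const errContext = { op: "uploadPart" };
const uploadEmptyPartResp = { val: { category: BucketErrorCategory.UploadNotFound } };

if (uploadEmptyPartResp.val.category == BucketErrorCategory.BucketNotFound, errContext) {
  console.log("NoSuchBucket");
} else if (uploadEmptyPartResp.val.category == BucketErrorCategory.UploadNotFound, errContext) {
  console.log("NoSuchUpload");
}
```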
Let me know if you spot the problem
uploadEmptyPartResp.val.category is indeed BucketErrorCategory.UploadNotFound, but we're also taking that first conditional path
The uploadEmptyPartResp.val.category == BucketErrorCategory.BucketNotFound, errContext part admittedly confuses me
So most likely @b3nw I'm guessing you were getting TooMuchConcurrency sent back
So aws s3 defaults to 10 concurrency; uploadEmptyPartResp.val.category was BucketErrorCategory.TooMuchConcurrency, but it ended up going down the first path & returning error.NoSuchBucket?
Yeah
K, I gotta ask. What's the a == b, c syntax do?
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Comma_Operator
So in my pseudo-JS to replicate the same stuff (with console.log instead of return), somehow it matches every if statement because of errContext
errContext being 'truthy' is matching the if blocks, regardless of uploadEmptyPartResp.val.category
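In isolation (nothing R2-specific here):
```js
// The comma operator evaluates left to right and yields the LAST operand,
// so the comparison's result is computed and then thrown away:
console.log((1 === 2, "errContext")); // "errContext"

if (1 === 2, { any: "truthy object" }) {
  console.log("branch taken"); // prints - the condition is the object, not the comparison
}
```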

Huh


bless eslint
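For reference, a minimal .eslintrc.js sketch with the rules that would have flagged it - strict-boolean-expressions needs type information, hence the project option, and core no-sequences catches the stray comma operator directly:
```js
// Minimal sketch of an eslint config catching both symptoms of this bug
module.exports = {
  parser: "@typescript-eslint/parser",
  parserOptions: { project: "./tsconfig.json" },
  plugins: ["@typescript-eslint"],
  rules: {
    "no-sequences": "error",
    "@typescript-eslint/strict-boolean-expressions": "error",
  },
};
```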
Welp, interesting bug haha
I'd never heard of the comma operator before
2 or 3
I believe 3 works but could be a little flaky?
I'd use 2

Found this when searching for concurrency :^)
I guess my InternalError for a changed part size will be BadUpload eventually and the NoSuchBucket will return the proper NoSuchUpload
It was a regression introduced May 30
(not that message)
The word regression just reminds me of Sentry emails - Sentry marked FOO-123 as a regression - constantly.
I couldn’t live without Sentry though
Our apps are PHP/Laravel and the stack traces & context it brings with exceptions is invaluable
Will archive this