RichEditor S3 private visibility
If you are uploading attachments with RichEditor using private visibility, the whole URL, including the expiry signature, is stored in the content. This causes the link to expire after a while. I guess this is a bug, but I haven't debugged the Filament internals.
Form input example:
RichEditor::make('description')
    ->label('Beskrivelse')
    ->columnSpanFull()
    ->fileAttachmentsDisk('s3')
    ->fileAttachmentsDirectory(Filament::getTenant()->id . '/' . 'tasks')
    ->fileAttachmentsVisibility('private'),
URL stored in the text field:
https://yourS3path.amazonaws.com/path/sub-path/filename.png?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=randomnumber%2Feu-north-1%2Fs3%2Faws4_request&X-Amz-Date=20240705T135351Z&X-Amz-SignedHeaders=host&X-Amz-Expires=300&X-Amz-Signature=randomsignature
Is there any workaround to get this working, or should it be reported as a bug?
@ekrbodo have you figured this out? I have spent several hours trying to work out whether something is wrong with my S3 bucket configuration, but it really does seem like a bug in Filament. If I remove fileAttachmentsVisibility or change it to public, nothing gets uploaded to my S3 bucket.
I ended up creating an accessor that uses preg_replace_callback to refresh the links. Kinda hacky, but it works.
use Illuminate\Support\Facades\Storage;

class UpdateSignedUrl
{
    public static function update(?string $content): array|string|null
    {
        // Match <img> src attributes, <a> hrefs pointing at the bucket,
        // and bare bucket URLs in the text.
        $pattern_img = '/<img src="([^"]+)"/';
        $pattern_a = '/<a href="(https:\/\/myawsbucket[^"]+)"/';
        $pattern_text = '/(https:\/\/myawsbucket[^"]+)/';

        // Re-sign image attachment URLs.
        $content = preg_replace_callback($pattern_img, function ($matches) {
            $url = $matches[1];
            $path = parse_url($url)['path'];
            $signedUrl = static::generateSignedUrl($path);

            return '<img src="' . $signedUrl . '"';
        }, $content);

        // Re-sign bare URLs appearing in the text.
        $content = preg_replace_callback($pattern_text, function ($matches) {
            $url = $matches[1];
            $path = parse_url($url)['path'];

            return static::generateSignedUrl($path);
        }, $content);

        // Re-sign link hrefs.
        return preg_replace_callback($pattern_a, function ($matches) {
            $url = $matches[1];
            $path = parse_url($url)['path'];
            $signedUrl = static::generateSignedUrl($path);

            return '<a href="' . $signedUrl . '"';
        }, $content);
    }

    protected static function generateSignedUrl(string $path): string
    {
        // Generate a fresh 5-minute signed URL for the stored object path.
        return Storage::disk('s3')->temporaryUrl($path, now()->addMinutes(5));
    }
}
// In the model:
use Illuminate\Database\Eloquent\Casts\Attribute;

protected function body(): Attribute
{
    return Attribute::make(
        get: fn (?string $value) => UpdateSignedUrl::update($value),
    );
}
If the signed links are stored within a repeater / builder, I'm using formatStateUsing to update the signed URLs:
->formatStateUsing(function ($state) { return UpdateSignedUrl::update($state); })
If you have a private bucket, the uploaded documents are only reachable by using the AWS keys to generate short-lived signed URLs. So if you are going to store public attachments, you need to create a public AWS bucket.
@ekrbodo My S3 bucket is public. As a temporary workaround I just commented out the visibility check in handleUploadedAttachmentUrlRetrieval, which generates a temporary URL that expires after 5 minutes. The images I want to display are blog images, but neither the default visibility of public nor explicitly setting ->fileAttachmentsVisibility('public') works for me; nothing gets stored in my public S3 bucket.
What is the use case for a temporary URL that expires after 5 minutes?
If you are building a public-facing solution, like a CMS or a website, a public bucket is the right choice. If you are building a SaaS application where you won't allow files to be publicly available, or where you need strict security, you need a private bucket. The temporary signed URL is a security measure.
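For context, the public/private choice maps to the disk configuration in config/filesystems.php. A minimal sketch (the env names are Laravel's defaults; the 'visibility' value is what you would change):

```php
// config/filesystems.php (sketch; env names are Laravel's defaults)
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    // 'public' lets objects be served directly by their URL;
    // 'private' forces access through signed (temporary) URLs.
    'visibility' => 'private',
],
```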
Try
->getUploadedAttachmentUrlUsing()
The default is to return a signed URL if the visibility is private. But you have to keep it set to private, since S3 still requires authorization to upload to a public bucket.
I also have the same problem
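A hedged sketch of how that override might look, assuming the callback receives the stored file path (check the Filament source for the exact callback signature, which I haven't verified):

```php
use Filament\Forms\Components\RichEditor;
use Illuminate\Support\Facades\Storage;

// Sketch: override attachment URL generation with a longer-lived signed URL.
// The callback parameter here is an assumption, not a confirmed signature.
RichEditor::make('description')
    ->fileAttachmentsDisk('s3')
    ->fileAttachmentsVisibility('private')
    ->getUploadedAttachmentUrlUsing(
        fn (string $file): string => Storage::disk('s3')
            ->temporaryUrl($file, now()->addHours(24))
    ),
```

Note this only delays expiry; if the signed URL is persisted in the content, it will still go stale eventually, which is why the accessor approach above re-signs on read.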