Filament exports under fly.io
Hello everyone,
I don't know if any of you host your Filament app on fly.io. I'm running into the problem that I can't get exports set up properly. Filament normally uses queued jobs for this.
When I use the `sync` connection, the files are created, but my app reports HTTP 419 "This page has expired".
I tried using the `database` connection instead, but that requires a worker to be running to process the jobs. Now, fly.io is designed to use a separate process for such tasks. However, the volumes available on fly.io can only ever be attached to one of the processes (or machines, behind the scenes). This means that when exporting via the queue, the export files are created correctly on the worker's storage, and I even receive a notification in my app. But of course I can't download them in the app, because they live on the worker's volume.
Where is my error in thinking here? How should it be done correctly? Or in which direction should I think or search?
Thank you very much in advance for your help!
Johannes Nazarov
You should have a temporary URL, which provides temporary, time-limited access to the file. Looking at the docs, it is as per:
https://www.tigrisdata.com/docs/objects/presigned/
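For reference, Laravel's S3 driver exposes this directly via `temporaryUrl()`. A minimal sketch, assuming an `s3` disk configured against the Tigris endpoint (the object key here is made up):

```php
use Illuminate\Support\Facades\Storage;

// Presigned URL, valid for 5 minutes, for a private object.
// 'exports/latest.xlsx' is a hypothetical key; the 's3' disk is
// assumed to point at the Tigris endpoint in config/filesystems.php.
$url = Storage::disk('s3')->temporaryUrl(
    'exports/latest.xlsx',
    now()->addMinutes(5),
);
```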
Thank you very much, that was a crucial hint! I have now read up on it and experimented with it a little. That's exactly what I need! I now have a clean way of accessing persistent data from different machines.
For mini-exports I still don't want to run a separate worker; I'd rather use the `sync` connection. Can anyone tell me why I get this 419 error when running it on fly.io? Do I have to configure something related to CORS, and if so, where?
Many thanks and best regards
Johannes Nazarov
@johny7 Not directly related to your question, but I have a Filament app deployed on Fly.io with many export actions, file uploads, etc. I use `sync` for local dev, but AWS (SQS, S3) for queue and shared-storage needs. I also have small cpu-1x machines for cron and worker(s).
Now I'm still having difficulties: if I simply create a file via `Storage::put()`, everything works perfectly. But with a Filament form and `FileUpload` it doesn't work; the upload gets stuck.
I have now started to test different scenarios with `FILESYSTEM_DISK=local` and `=s3`. Even with `local` it does not work; I always have to pass `->disk()` to the form component. If I try this with `s3`, it hangs completely while uploading the file, before the form is even submitted, and no entries appear in the log. I suspected difficulties writing the upload stream directly to Tigris, so I set `->disk('local')`, but if `FILESYSTEM_DISK=s3` is set, Filament does not even try to upload locally (`storage/app/livewire-tmp` remains empty).
What am I doing wrong? Does anyone have an idea? @toeknee, would you be so kind?
Thanks in advance!
JN
How are you saving the files? Is this in a resource or a custom Filament form?
Hello @toeknee.
Please check your DM.
@johny7 I'm on vanilla S3 rather than Tigris. I created the CORS configuration for my bucket via the AWS Console. You may also need to set `FILAMENT_FILESYSTEM_DISK` in your fly.toml.
https://filamentphp.com/docs/3.x/forms/fields/file-upload#configuring-the-storage-disk-and-directory
Aside from that, I have a private S3 disk named 'assets', and my FileUpload config looks like this:
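(The original snippet didn't survive in this copy of the thread; a plausible sketch based on the description, with a private disk named 'assets' and guessed field/directory names:)

```php
use Filament\Forms\Components\FileUpload;

// Plausible sketch only — not the author's actual code. A FileUpload
// bound to a private S3 disk named 'assets'; the field name and
// directory are hypothetical.
FileUpload::make('document')
    ->disk('assets')
    ->directory('uploads')
    ->visibility('private');
```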
@toeknee: I use the Filament form in a resource.
I had previously set `FILESYSTEM_DISK` in both the .env and fly.toml. But even locally, in my dev environment, the error occurs when I configure my form component as follows:
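(The snippet was lost here; judging from the surrounding text, it was a component without an explicit disk, roughly like this hypothetical reconstruction:)

```php
use Filament\Forms\Components\FileUpload;

// Hypothetical reconstruction: no ->disk() call, so the component
// falls back to the default disk from FILESYSTEM_DISK. Field and
// directory names are guesses.
$form->schema([
    FileUpload::make('image')
        ->directory('proben'),
]);
```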
I always have to set the following for it to work:
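(This snippet was also lost; presumably the same component with the disk pinned explicitly, along these lines:)

```php
use Filament\Forms\Components\FileUpload;

// Hypothetical reconstruction of the variant that works: the disk
// is set explicitly instead of relying on the default.
$form->schema([
    FileUpload::make('image')
        ->disk('local')
        ->directory('proben'),
]);
```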
And then it only works as long as `FILESYSTEM_DISK=local` is set. As soon as I set it to `s3`, it doesn't matter what I write in `->disk()`. But I haven't actually worked with `FILAMENT_FILESYSTEM_DISK` yet; I'll test it out.
Hello everyone.
Many thanks, @nanopanda, for the hint. With `FILAMENT_FILESYSTEM_DISK` set, the obligatory `->disk()` call can be omitted.
In addition, the CORS configuration was still missing from my Tigris bucket; I have now added it. Upload and deletion now work, but the preview in edit mode does not display. Due to authorization issues I serve the files through a controller, but in edit mode I get a CORS error for the retrieved image.
My browser sends the following request:
```http
GET /proben/01JJ2W97NFGPK757N74YHPQFJX.jpg HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br, zstd
Accept-Language: de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7,ru;q=0.6
Connection: keep-alive
Host: vivo-app.fly.storage.tigris.dev
Origin: http://127.0.0.1:9000
Referer: http://127.0.0.1:9000/
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: cross-site
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36
sec-ch-ua: "Not A(Brand";v="8", "Chromium";v="132", "Google Chrome";v="132"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Windows"
```
The response is:
```http
HTTP/1.1 403 Forbidden
Content-Length: 294
Content-Type: application/xml
Server: Tigris OS
Server-Timing: total;dur=0
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
X-Amz-Request-Id: 1737412527718410340
Date: Mon, 20 Jan 2025 22:35:27 GMT
```
For test purposes, I have already configured my bucket so that `*` is set for all origins, methods, and headers, and the max age is set to `86400`.
Anyone have any idea what I'm doing wrong? Do I need to configure anything else in my app?
Thanks in advance!
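For reference, one way to sidestep browser-to-bucket CORS for authorized previews is to route the request through the app and redirect to a short-lived presigned URL. A sketch with hypothetical route and path names, not the thread author's actual controller:

```php
use Illuminate\Support\Facades\Route;
use Illuminate\Support\Facades\Storage;

// routes/web.php (sketch): the browser only ever talks to the app;
// the app authorizes the request and redirects to a short-lived
// presigned Tigris URL.
Route::get('/proben/{filename}', function (string $filename) {
    return redirect()->away(
        Storage::disk('s3')->temporaryUrl("proben/{$filename}", now()->addMinutes(5)),
    );
})->middleware('auth'); // swap in your real authorization logic
```

Note that a redirect like this helps for plain `<img>` loads, which don't require CORS headers; previews fetched via `fetch()`/XHR (as Filament's upload preview does) still need the bucket's CORS configuration to actually match the requesting origin.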