Cache-Busting with solid.js

I have a CSR SPA, where the contents sit in static web storage and the solid.js site is rendered on the client side. E.g.:

```typescript
// app.config.ts
import { defineConfig } from "@solidjs/start/config";

export default defineConfig({
  ssr: false,
  server: {
    static: true,
    preset: "static",
  },
});
```
My CI builds the site when changes are made to the main branch and replaces the contents of the S3 folder. When that happens I also invalidate the CloudFront cache, so all future requests go at least once to the newly updated files and aren't served stale by CloudFront.

BUT: there also seems to be some browser caching going on. Sometimes, after an update of my site, if I switch to a tab I had open with the solid-js SPA, I get an unknown client error and an import problem on _build/assets/filename-CRYkDoFJ.js, where it tries to load one of the old ***-<id>.js build artifacts. Do any of you have experience preventing this? (I don't want my users to have to refresh their browsers if they have our site open in a tab and happen to come back to it after a few days away.)
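For reference, the deploy step described above (replace the S3 contents, then invalidate the CloudFront cache) might look roughly like this in CI; the bucket name, distribution ID, and output directory below are placeholders, not the poster's actual values:

```shell
#!/usr/bin/env sh
# Hypothetical CI deploy step: sync the build output into the bucket, then
# invalidate every path on the CloudFront distribution so edge locations
# refetch the newly updated files on the next request.
BUCKET="my-site-bucket"            # placeholder
DISTRIBUTION_ID="E123EXAMPLE"     # placeholder

aws s3 sync ./dist "s3://$BUCKET" --delete

aws cloudfront create-invalidation \
  --distribution-id "$DISTRIBUTION_ID" \
  --paths "/*"
```

Note this only clears the edge cache; as the thread goes on to discuss, it does nothing for copies already sitting in a user's browser cache.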
12 Replies
Bersaelor
BersaelorOP5mo ago
I mean, as far as I understand, solidjs already adds those hashes to the js files, so they are different on updates, but sometimes it feels like the old tabs still want to read the old files. To be fair, the error described above seems to happen regularly but not very often, and I can't trigger it by simply opening a bunch of tabs, updating the site via CI and clicking around in my tabs.
intelligent-worker-probe
I have the same problem. Everything you describe is a Vite issue and not specific to solidjs, so it's easier to find info about it by searching for Vite in combination with react/vue.

My current attempt: saving the build/commit number as a version, fetching it in the client from the server, comparing the strings, and reloading the browser if they mismatch -> implemented just today. I guess I'll have to test and see if it works / is enough.

My big solution (which I'm hesitant to implement) is uploading all build files to a cdn/s3 and just keeping all old assets around (<1-3 mb -> a negligible amount of storage), especially with a storage that supports deduplication, like Backblaze B2.

Another thing: you can cache all files in the assets dir indefinitely. But you have to revalidate the index.html, because it points to and imports all the other stuff.
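The version-check-and-reload approach described above could be sketched like this; the `/version.json` endpoint and the build-time version constant are hypothetical names, assuming CI writes the commit hash to a small JSON file next to the build:

```typescript
// Assumed setup (hypothetical names): CI writes the deployed commit hash
// to /version.json on every deploy, and the same hash is baked into the
// bundle at build time (e.g. via a bundler define). Browser globals are
// declared here so the sketch also type-checks outside the DOM lib.
declare const location: { reload(): void };

// Pure comparison, kept separate so the decision is easy to test.
export function shouldReload(builtVersion: string, servedVersion: string): boolean {
  return builtVersion !== servedVersion;
}

// Client-side check: fetch the currently served version (bypassing the
// HTTP cache) and hard-reload the tab when the strings mismatch.
export async function checkVersion(builtVersion: string): Promise<void> {
  const res = await fetch("/version.json", { cache: "no-store" });
  const { version } = (await res.json()) as { version: string };
  if (shouldReload(builtVersion, version)) {
    location.reload();
  }
}
```

You would call `checkVersion` on an interval or on tab focus (e.g. a `visibilitychange` listener), which matches the "come back to a tab days later" scenario in the question.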
Bersaelor
BersaelorOP5mo ago
mhmm, let's see if I can manually set a cache-control max-age of 1h on just the index.html. The issue can't be CloudFront, because on the backend CF side I invalidate the whole site's cache after every update, so it has to be the browser that cached on the client side.
intelligent-worker-probe
For index.html files -> caching has to be effectively disabled (no-cache still stores the file but forces revalidation on every request, which is what you want here). My setting: r.headers.set("Cache-Control", "public, no-cache")
Bersaelor
BersaelorOP5mo ago
mhhm, but setting no-cache would always make it go to the source S3 bucket, so the index.html wouldn't be cached on the edge anymore, wouldn't it? If I give it a 1h max-age and only deploy to prod during the night, maybe that's a better situation for everyone?
```shell
# Cache-Control is a system header, so it is set via --cache-control together
# with --metadata-directive REPLACE (with COPY, new headers are ignored).
aws s3api copy-object --bucket <bucket> --copy-source <bucket>/index.html \
  --key index.html --metadata-directive REPLACE \
  --cache-control "public, no-cache" --content-type "text/html"
```
and, again, the edge-cached files get invalidated when I run the CI; only the local client needs to not cache the index.html. mhmm
intelligent-worker-probe
If you use a CDN, you can set separate CDN cache headers and let files live indefinitely (on the CDN), since you clear and update them manually via script on each deployment. The real question is which cache headers you set for the client browser, since you can't clear those (on a user's machine), and they are the cause of the "file imported but not found" issue.
Bersaelor
BersaelorOP5mo ago
yup, researching it atm. You can tell CloudFront about cache lengths using the same headers as the ones you communicate to the browser.
intelligent-worker-probe
"file import problems" only happen on client machines, because CDN cache issues are easily debugged and reproducible. Client browser caching is not.
Bersaelor
BersaelorOP5mo ago
very true
intelligent-worker-probe
Default behavior of all CDNs: they all use the client cache headers unless special CDN headers are specified.
Bersaelor
BersaelorOP5mo ago
i guess we should also do the same to index.html.br and index.html.gz
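A sketch of applying the same no-cache policy to the precompressed variants mentioned above; bucket name and helper function are placeholders. Note that `--metadata-directive REPLACE` resets all object metadata, so Content-Type and Content-Encoding have to be passed back in explicitly for the .br/.gz objects:

```shell
#!/usr/bin/env sh
# Hypothetical: re-tag index.html and its precompressed variants in place.
# copy-object with REPLACE discards existing metadata, so Content-Type and
# Content-Encoding are re-specified for each variant.
BUCKET="my-site-bucket"   # placeholder

set_no_cache() { # args: key content-type [content-encoding]
  aws s3api copy-object \
    --bucket "$BUCKET" --copy-source "$BUCKET/$1" --key "$1" \
    --metadata-directive REPLACE \
    --cache-control "public, no-cache" \
    --content-type "$2" \
    ${3:+--content-encoding "$3"}
}

set_no_cache index.html    text/html
set_no_cache index.html.br text/html br
set_no_cache index.html.gz text/html gzip
```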
intelligent-worker-probe
@Bersaelor Found a good solution?