Cache-Busting with solid.js
I have a CSR SPA, where the contents sit in static web storage (S3) and the solid.js site is rendered on the client side. My CI builds the site when changes are made to the main branch and replaces the contents of the S3 folder.
When that happens I also invalidate the CloudFront cache, so all future requests go at least once to the newly updated files instead of being served from the CloudFront cache.
BUT: there also seems to be some browser caching going on. Sometimes, after an update of my site, if I switch back to a tab I had open with the solid-js SPA, I get an unknown client error and an import problem for _build/assets/filename-CRYkDoFJ.js, where it tries to load one of the old ***-<id>.js build artifacts.
Do any of you have experience preventing this?
(I don't want my users to have to refresh their browsers if they have our site open in a tab and happen to come back to it after a few days away)
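A rough sketch of the CloudFront invalidation step described above, assuming the AWS SDK v3; the distribution ID, region, and paths here are placeholders, not values from this setup:

```ts
// Sketch of a CI invalidation step with the AWS SDK v3.
// DistributionId, region, and paths are placeholders.
import {
  CloudFrontClient,
  CreateInvalidationCommand,
} from "@aws-sdk/client-cloudfront";

const cloudfront = new CloudFrontClient({ region: "us-east-1" });

await cloudfront.send(
  new CreateInvalidationCommand({
    DistributionId: "E1234567890ABC", // placeholder distribution ID
    InvalidationBatch: {
      // A unique reference so every deploy creates its own invalidation.
      CallerReference: `deploy-${Date.now()}`,
      Paths: { Quantity: 1, Items: ["/*"] },
    },
  })
);
```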
I mean, as far as I understand, solidjs already adds those hashes to the js files, so they are different on updates, but sometimes it feels like the old tabs still want to read the old files?
to be fair, the error described above seems to happen regularly but not very often, and I can't trigger it by simply opening a bunch of tabs, updating the site via CI, and clicking around in my tabs.
I have the same problem.
Everything you describe is a Vite thing and not specific to solidjs. It's easier to find more info about this by searching for Vite in combination with React/Vue.
My current attempt is saving the build/commit number as a version, fetching it in the client from the server, comparing the strings, and reloading/refreshing the browser if they mismatch -> implemented just today. I guess I'll have to test and see if it works / is enough.
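A minimal sketch of that version-check idea, assuming the CI writes the commit hash into a version.json next to index.html and injects the same value into the bundle as a __BUILD_VERSION__ constant (both names are hypothetical, e.g. set via Vite's define option):

```ts
// Sketch of the "compare build version and reload" approach.
// Assumes /version.json and __BUILD_VERSION__ exist (hypothetical names).
declare const __BUILD_VERSION__: string;

async function checkForNewBuild(): Promise<void> {
  try {
    // Bypass all caches so we always see the freshly deployed version file.
    const res = await fetch("/version.json", { cache: "no-store" });
    if (!res.ok) return;
    const { version } = (await res.json()) as { version: string };
    if (version !== __BUILD_VERSION__) {
      // A newer build is live; a full reload picks up the new hashed assets.
      window.location.reload();
    }
  } catch {
    // Ignore network errors; we'll try again on the next check.
  }
}

// Re-check whenever a long-lived tab comes back into focus.
document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "visible") void checkForNewBuild();
});
```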
My big solution (which I'm hesitant to implement) is uploading all build files to a CDN/S3 and just keeping all old assets around (<1-3 MB -> a negligible amount of storage), especially with storage that supports deduplication, like Backblaze B2.
Another thing: you can cache all files in the assets dir indefinitely. But you have to revalidate index.html, because it points to and imports all the other stuff.
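A sketch of an upload step that applies that split policy, assuming the AWS SDK v3; bucket, region, and file paths are placeholders:

```ts
// Sketch of a deploy upload that caches hashed assets forever but makes
// browsers revalidate index.html. Bucket, region, and paths are placeholders.
import { readFileSync } from "node:fs";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "eu-central-1" });

async function upload(key: string, file: string, contentType: string, cacheControl: string) {
  await s3.send(
    new PutObjectCommand({
      Bucket: "my-spa-bucket",
      Key: key,
      Body: readFileSync(file),
      ContentType: contentType,
      CacheControl: cacheControl,
    })
  );
}

// Hashed build artifacts never change under the same name: cache them forever.
await upload(
  "_build/assets/filename-CRYkDoFJ.js",
  "dist/_build/assets/filename-CRYkDoFJ.js",
  "text/javascript",
  "public, max-age=31536000, immutable"
);

// index.html points at the hashed files: the browser must always revalidate it.
await upload("index.html", "dist/index.html", "text/html", "public, no-cache");
```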
mhmm, let's see if I can manually set a cache-control max-age of 1h on just the index.html
the issue can't be cloudfront, because on the backend CF side I invalidate the whole site's cache after every update. so it has to be the browser that cached on the client side
For index.html files -> caching has to be completely disabled.
My settings:
r.headers.set("Cache-Control", "public, no-cache")
mhhm, but setting no-cache would always make it go to the source S3 bucket, so the index.html wouldn't be cached on the edge anymore, would it?
If I give it a 1h max-age and only deploy to prod during the night, maybe that's a better situation for everyone?
and, again, the edge-cached files get invalidated when I run the CI; only the local client needs to not cache the index.html
mhmm
If you use a CDN, you can set separate CDN cache headers.
You can then let the CDN keep files indefinitely, since you will clear and update them manually via script on each deployment.
The question is what cache headers you set for the client browser, since you can't clear those (on a user's machine) and they are the cause of the "file imported but not found" issue.
yup, researching it atm. You can tell cloudfront about cache lengths using the same headers as the ones you communicate to the browser
"file import problems" only happen on client machines, because CDN cache issues are easily debugged and reproducible.
Client browser caching is not.
very true
Default behavior of all CDNs.
They all use the client cache headers unless special CDN headers are specified.
i guess we should also do the same to index.html.br and index.html.gz
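One way to express that whole policy in one place (a sketch; the s-maxage value and the path prefix check are assumptions): Cache-Control's s-maxage is honored by shared caches like CloudFront while browsers use max-age, so index.html and its precompressed variants can be short-lived for browsers but longer-lived on the edge.

```ts
// Returns a Cache-Control value per path (a sketch; the numbers are assumptions).
// s-maxage applies to shared caches like CloudFront, max-age to the browser.
export function cacheControlFor(path: string): string {
  if (path.startsWith("/_build/assets/")) {
    // Hashed build artifacts: safe to cache everywhere, forever.
    return "public, max-age=31536000, immutable";
  }
  // index.html and its .br/.gz variants: browsers must revalidate every time,
  // while the CDN may keep them longer since CI invalidates them on each deploy.
  return "public, max-age=0, must-revalidate, s-maxage=86400";
}

// Usage:
// cacheControlFor("/index.html.gz")                    -> "public, max-age=0, must-revalidate, s-maxage=86400"
// cacheControlFor("/_build/assets/filename-CRYkDoFJ.js") -> "public, max-age=31536000, immutable"
```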
@Bersaelor
Found a good solution?