Production Build is continuously failing

Over the past day, our deployments have been failing repeatedly because the build exceeds its memory limit, despite my attempts to adjust our build parameters. It's puzzling that even a minor CSS change can trigger this, especially when everything worked fine just a few days ago. Has something changed in the underlying framework or build system? If this can't be resolved, would you suggest using the wrangler tool to manually upload to a preview branch? I should also mention that my previous messages on this subject have gone unanswered; I hope this one doesn't suffer the same fate. A prompt explanation would be much appreciated. Warm regards,
JohnDotAwesome · 11mo ago
Hi @Logan - there have been some changes with how the container security layer handles memory, but they ought to be reverted now. Do you have a deployment ID I can look at?
Logan · 11mo ago
@JohnDotAwesome I believe it is 76401000-11b2-4c77-93bd-644552d03350, and that's one of many, of course. I have tried adjusting all of the configuration options, like max-old-space-size=3600 (the lowest we can go without our build's HEAP_ALLOC failing), as well as gc_interval=100 and --optimize_for_size. Another build failed again just a few moments ago: aa0d634a-7933-46f0-9af5-1a8999c14543. To be honest, we have seen this happen quite often on our preview deployments, but never on the production branch. Either way, both need to be resolved or we will be forced to move away from Cloudflare Pages.
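For reference, flags like these are typically passed along these lines; the build command and entry point below are illustrative placeholders rather than our exact setup:
```sh
# Illustrative only: common ways to hand memory flags to a Node-based build.
# --max-old-space-size caps V8's old-generation heap (in MB) and is one of
# the few V8 flags that NODE_OPTIONS accepts.
NODE_OPTIONS="--max-old-space-size=3600" npm run build

# V8-only flags such as --optimize_for_size and --gc_interval are rejected
# by NODE_OPTIONS, so they have to go on the node invocation itself
# (build.js is a hypothetical build entry point):
node --max-old-space-size=3600 --optimize_for_size --gc_interval=100 build.js
```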
JohnDotAwesome · 11mo ago
Appreciate the info. I'm going to be digging into it today and Monday. One last thing: has this always happened, or did it start recently? My hunch is that it started on the 13th. If so, we have some rolling back to do.
Logan · 11mo ago
Exactly 7 days ago, which is the 13th. 🙂
JohnDotAwesome · 11mo ago
Roger that. Standby.
Logan · 11mo ago
We did see some of this on preview builds before that, but a couple of retries fixed it. My assumption was that fewer resources were provided for preview builds than for production builds, so please note that as well.
JohnDotAwesome · 11mo ago
The resources available are the same, so that would have been a fluke
Logan · 11mo ago
I see, okay. Thanks for the information. I guess we just never ran into it on the master branch until now, but I can confirm from our build history that the failures got worse after the 13th, so yes. We also don't have auto-push on the master branch; we always "retry" to build manually.
JohnDotAwesome · 11mo ago
Long story short, we upgraded our build cluster. Most builds just got faster, but it seems a small percentage of builds started seeing memory and network issues.
Logan · 11mo ago
Got it, thanks for the information, that helps. Praying we don't need any emergency pushes today or over the weekend.
JohnDotAwesome · 11mo ago
I'm going to see about pulling you out of the new cluster to see if that helps
Logan · 11mo ago
Okay, great. Thank you.
JohnDotAwesome · 11mo ago
FWIW, you can still deploy locally or from GitHub Actions via wrangler. I know that's not ideal, but I just wanted to make sure you knew it was an option.
Logan · 11mo ago
Okay, great. Is there a way to test that out on a preview branch, or does wrangler only deploy directly to production? Any docs would be helpful, as we have never used wrangler on our end.
JohnDotAwesome · 11mo ago
Both preview and production work. One sec, I'll get docs.
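In the meantime, this is the rough shape of it; the project name, output directory, and branch names below are placeholders, and older Wrangler releases call the command `pages publish` instead of `pages deploy`:
```sh
# Upload a locally built site to Cloudflare Pages with Wrangler.
# "my-project" and "./dist" are placeholders for your Pages project
# name and build output directory.

# Deploying to the branch configured as the project's production
# branch (often "main") creates a production deployment:
npx wrangler pages deploy ./dist --project-name=my-project --branch=main

# Deploying to any other branch name creates a preview deployment:
npx wrangler pages deploy ./dist --project-name=my-project --branch=my-preview
```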