Nuxt Content with 10k+ pages: fatal heap error with Netlify/Nitro

Hi everyone, just wondering if anyone has had issues with Nuxt Content and a large number of markdown files. We've got a barebones clean install with a sample of 10,000+ markdown files and only the content module. It works fine locally, but we get a deployment failure on Netlify somewhere between the 10k and 15k mark. We've followed the instructions to increase memory using NODE_OPTIONS but haven't reached a reliable solution. Is it just not able to handle this volume of content, or is there a gotcha we're not aware of?
9:47:21 AM: [info] ✓ built in 4.26s
9:47:21 AM: [success] Server built in 4271ms
9:47:21 AM: [info] [nitro] Initializing prerenderer
9:47:24 AM: [info] [nitro] Prerendering 1 routes
9:48:38 AM: [log] [nitro] ├─ /api/_content/cache.1713829627467.json (74234ms)
9:48:38 AM: [info] [nitro] Prerendered 1 routes in 77.2 seconds
9:48:38 AM: [success] [nitro] Generated public dist
9:48:43 AM: [info] [nitro] Building Nuxt Nitro server (preset: `netlify`)
9:49:03 AM: <--- Last few GCs --->
9:49:03 AM: [5460:0x6f1b850] 116359 ms: Mark-sweep 1997.8 (2083.2) -> 1996.7 (2082.4) MB, 247.5 / 0.0 ms (average mu = 0.801, current mu = 0.432) allocation failure; scavenge might not succeed
9:49:03 AM: [5460:0x6f1b850] 116970 ms: Mark-sweep 2012.7 (2082.4) -> 2011.8 (2113.7) MB, 539.4 / 0.0 ms (average mu = 0.559, current mu = 0.117) allocation failure; scavenge might not succeed
9:49:03 AM: <--- JS stacktrace --->
9:49:03 AM: FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
...
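For reference, this is roughly how we're raising the heap limit — a minimal sketch of a netlify.toml build-environment override (the 4096 MB figure is just an example value, not something taken from our actual config):

```toml
# netlify.toml — raise the Node heap limit for the build step
# (example value; adjust to whatever your build plan's memory allows)
[build.environment]
  NODE_OPTIONS = "--max-old-space-size=4096"
```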
2 Replies
gazmancooldude (OP) · 9mo ago
Thanks @L422Y. It's not the Netlify enterprise plan. Yes, it works up to a point, then hits the same issue again. I haven't found another solution other than increasing the Node memory.
manniL · 9mo ago
@gazmancooldude did you try using the sharedPrerenderData option?
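For anyone landing here later: sharedPrerenderData is an experimental Nuxt flag that shares payload data between prerendered pages instead of duplicating it per route, which can reduce prerender memory use. A minimal sketch of enabling it, assuming Nuxt 3.10+ and a standard nuxt.config.ts:

```ts
// nuxt.config.ts — minimal sketch; assumes Nuxt 3.10+ where the flag exists
export default defineNuxtConfig({
  modules: ['@nuxt/content'],
  experimental: {
    // Share fetched payload data across prerendered pages
    // rather than re-fetching/duplicating it for every route.
    sharedPrerenderData: true,
  },
})
```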