Deployment failed, no other information
Hey folks, I'm trying to deploy my Pages app and getting a strange failure. When I deploy via command line or GitHub Action, it succeeds. But when I check the deployment in my dashboard, it says "Failed" and there's no other information that I can find about the failure. When I visit the site, there is a connection timeout reported.
Any ideas?
Hey,
That means your function failed. Is it over 1 MiB (if on free)? Does it run in pages dev?
Hi! Yes, it works with wrangler pages dev
oh, so there is a size limit on free tier?
my build output dir is 2.7mb
i had successfully deployed earlier and then made lots of changes, brought in lots of new code. makes sense i guess. i was not aware of the limit 😭
Are you using any node libraries?
aws-sdk?
yes. running with nodejs_compat flag. i have some libs that use node:crypto
ok, and you enable that in the dashboard?
or just in wrangler?
yes
let me double check
yes it's set
wrangler seems to only work in dev when using pages, i have had to apply all bindings, vars and compatibility flags in the dashboard
yes that's my understanding. wrangler.toml is only for local dev, dashboard is for preview and prod
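For reference, the Pages-side config being discussed looks something like this in a wrangler.toml (a sketch with illustrative names; per the thread, at the time these values only drove `wrangler pages dev`, and preview/production had to be mirrored in the dashboard):

```toml
# Illustrative wrangler.toml for a Pages project -- local dev only here;
# bindings and flags for preview/prod were set in the dashboard.
name = "my-pages-app"
compatibility_date = "2024-03-01"
compatibility_flags = ["nodejs_compat"]

[[r2_buckets]]
binding = "UPLOADS"
bucket_name = "my-uploads"
```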
I have run into this exact same issue, where I cannot get the logs for the failure and dev is working fine. In both cases this was due to me using a library that wasn't supported.
Most references I found suggest you need to contact support to get the logs from them. Not a scalable solution though
here is a tweet thread with CF employees from last week https://twitter.com/karljensen/status/1767656398794457513
If you were using the Git Integration it'd just tell you the error. Direct Upload doesn't. I can escalate and get the error if needed, I just run through common issues first to not waste their time
the entire output being that size is fine, just specifically concerned about the function itself
so far I have not been able to get a few things working with nodejs_compat, but they do work in Workers with node_compat. so what I have done is move some of the things like the pg driver and react email template rendering to a worker, then I bind to that worker from pages
it's a Remix app not a worker. the function is like less than 10 lines of code. 9 to be exact lol
this is good to know, thank you. when researching stuff I rarely commit changes to see them in prod
hm...ok, maybe if i provide my own polyfills it would work. i have had just one problem after another with cloudflare pages 😦 earlier was trying to use an svg sprite in my remix app and their bundler throws an error during the build, no way to configure a loader to make it work.
i'd be curious about these repro steps. I'm in the process of trying out CF services to cover the full stack
i want to run my own checks before deploying, that's why i'm using github actions instead of cloudflare's git integration/build.
that's sort of where i'm at too...prototyping a new app, evaluating the platform
i'll try out svg sprites
node_compat -> polyfills Node stuff like path; only supported in Workers (or in Pages if you custom build everything). Uses this: https://github.com/ionic-team/rollup-plugin-node-polyfills/
nodejs_compat -> compat flag which enables node:path and some popular Node APIs from the runtime, does not increase bundle size. Possible in Pages. Docs: https://developers.cloudflare.com/workers/runtime-apis/nodejs/
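As a quick illustration of what nodejs_compat gives you: a `node:` builtin import resolves from the runtime instead of being bundled as a polyfill. A minimal sketch (the function name is mine, not from the thread):

```typescript
import { createHash } from "node:crypto";

// With the nodejs_compat flag, this import is served by the runtime's
// built-in module rather than a bundled polyfill, so it adds no bundle size.
export function sha256Hex(input: string): string {
  return createHash("sha256").update(input).digest("hex");
}
```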
yes thank you, I was able to figure that part out which led me to try a specific worker and http proxy for my db
There's been talk of getting node_compat to work in Pages but outside of custom building the function yourself first, not possible at the moment sadly
the other thing that got me was wrangler only working for dev, but once I realized that things started to work properly with service bindings and hyperdrive
was especially confusing when I was able to configure my pg-proxy worker using wrangler, but not my production pages
yea pages dev uses wrangler.toml, but not for deployment
at least searching discord and cloudflare forums confirmed this stuff for me
it's def a pretty confusing mess right now if you try to do more advanced stuff.
I try to stick to Workers for everything, and just do really simple stuff in Pages Functions.
CF has stated before they plan on converging Pages and Workers into one product, but that's a while away
Yes i'm using pages, and nodejs_compat flag. Am I supposed to be building in some other way? I build my project with remix and then deploy with wrangler pages deploy or the official cloudflare wrangler deploy action
it's all marked beta, so I don't expect it to be smooth to be honest
haha, yes it seems very messy. see my thread here for the svg debacle https://discord.com/channels/595317990191398933/1218628784450834482
i actually enjoy this part of the exploration when there is no pressure to ship something
yeah same 🙂 this is a personal project.
well at least you're not suffering trying to get something out to prod lol
nah, for that I still use nextjs pages lol.
or remix on a vps
but soon I will be edgeworthy
doing it that way is fine, just not going to have the polyfills. Are you still struggling with the build? If you get the deployment-id I can ask the exact error it threw
?pages-deployment-id
The Pages deployment ID is a unique build identifier.
It's the UUID in the browser bar (for example, a URL would be
dash.cloudflare.com/ACCOUNT_ID/pages/view/PROJECT/DEPLOYMENT_ID
where the deployment ID looks something like a398d794-7322-4c97-96d9-40b5140a8d9b).
This ID can help troubleshoot some issues with Pages builds so if you have a failing build make sure you grab that ID for the Pages team to use.
it is good to know that if I run into this issue with the build failing I can commit through git integration and get back the error that way
4d716274-5d90-47bb-85d6-8b9874089a66
yet another frustrating wrinkle there. when you set up your project initially, you must choose between git provider integration, or manual deployment
if you're already in manual mode, you cannot change the project to git provider mode
you need to create a new project and point it at your git repo
good to know, I think I could have two pages projects though? one manual and one git?
I have been using Fly.io before this to deploy my remix apps, and it has been great. I jumped into cloudflare because i'm building an app that requires lots of file uploads and media streaming and I wanted to use R2 because it's cheaper than S3, and I figured edge functions would be faster too, and integrate better with R2
yeah, that's totally fine
have you also looked at images/stream from CF?
about to do that actually : )
not yet, haven't gotten that far haha 😄 my plan was: upload raw audio files to R2, run some async encoders and store the output in CF's CDN, stream/download from there
i got uploads working 😄 kind of neat, since there's no local FS in CF Pages, had to stream directly through. So form upload from client -> cf page function -> R2, all without writing a temp file 😄
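The flow described above (client -> Pages Function -> R2, streamed with no temp file) could be sketched like this. The UPLOADS binding name, the objectKey helper, and the interface stubs are my assumptions, not the poster's actual code (real types come from @cloudflare/workers-types):

```typescript
// Minimal type stubs for the sketch (assumptions).
interface R2Bucket {
  put(key: string, value: ReadableStream<Uint8Array> | null): Promise<unknown>;
}
interface Env {
  UPLOADS: R2Bucket;
}

// Sanitize a client-supplied filename into a bucket key.
export function objectKey(filename: string): string {
  return `uploads/${filename.replace(/[^\w.-]/g, "_")}`;
}

export async function onRequestPost(
  { request, env }: { request: Request; env: Env },
): Promise<Response> {
  const filename =
    new URL(request.url).searchParams.get("filename") ?? "unnamed";
  // request.body is a ReadableStream, so the upload is piped through
  // to R2 as it arrives -- no local filesystem involved.
  await env.UPLOADS.put(objectKey(filename), request.body);
  return Response.json({ ok: true, key: objectKey(filename) });
}
```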
oh sweet, yeah most likely not applicable then. R2 lets you set up buckets, unfortunately Images/Stream put everything in a root. I do have an app in production that streams videos and serves signed thumbnail images that uses Images/Stream
then i built account management and brought in tailwind and some icons and it all went to hell haha
that sounds fun. is there a reason to not have the user upload directly to R2?
i haven't explored the user posting a file to R2. I am not sure that's possible
one advantage is i want to put the upload metadata into my database, so i want to be sure it succeeded before i do that, having it all in a server function makes that easier
yeah, I haven't used R2, just a lot of S3/S3 compatible services and I default to client -> s3
4d716274-5d90-47bb-85d6-8b9874089a66 not sure if you saw this. if you have any info that would be great, ty much appreciated
I see, so you have the client tell you the upload to s3 succeeded?
how do u do that w/o exposing secrets? pre-signed urls?
yeah its presigned urls
a lot of the time I have the user upload, it gives them progress and feedback on success and then they fill in other fields and submit the form
i will likely be doing that for profile images shortly in my own research
I actually have a demo from 2 years ago that I wrote in remix to do multipart uploads direct to s3 with resume. https://github.com/jensen/massive
ooh cool, thanks i will take a look. i'm really digging remix so far
thanks for the link on pre-signed urls. they mention what i was doing too 🙂 https://developers.cloudflare.com/r2/api/s3/presigned-urls/#presigned-url-alternative-with-workers
yeah, its nice to have options
the massive uploader is great info, thanks so much for that. i had looked at uppy too...probably should have looked further into this, but i am only handling audio files, not video, probably not too big
no problem. for this project you probably don't need to change it. 100mb seems like a pretty good limit
was away for a sec, escalated now
thanks for the help today Chaika
Hey! Sorry that you aren't able to see the error for this yourself, it's a known issue that the team are working to resolve in the future.
In the meantime I can tell you the error for that deployment is
Error: Script startup exceeded CPU time limit.
which would be this limit: https://developers.cloudflare.com/workers/platform/limits/#worker-startup-time
Let me know if you need any more clarification
thank you!
(Script startup exceeded in my experience is usually a large dependency, or something else you're just doing in the global scope which is way too big, you get 400ms of startup cpu time)
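A common fix for that limit is keeping module (global) scope cheap and deferring heavy setup to first use. A minimal lazy-init sketch, where createHeavyClient is a hypothetical stand-in for e.g. building a DB client:

```typescript
// Lazy initialization: nothing expensive runs at module evaluation time,
// so script startup stays under the CPU budget; the cost is paid on first use.
let instance: { query: (sql: string) => string } | null = null;
let initCount = 0;

// Hypothetical stand-in for expensive setup work.
function createHeavyClient() {
  initCount++; // tracks how many times setup actually ran
  return { query: (sql: string) => `ran: ${sql}` };
}

export function getClient() {
  // First call pays the setup cost; later calls reuse the instance.
  if (!instance) instance = createHeavyClient();
  return instance;
}

export function getInitCount() {
  return initCount;
}
```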
Thank you. Surprising. I have a remix site built on pages, i am not sure why it's taking so long to load, it's still a very small prototype of what i'm hoping to build.
Yeah that is strange, I wouldn't know either. Are you executing any Functions code that might end up in the global scope?
you said your build dir is 2.7mb? mine is 370k and its a remix app. what code is being included that takes up that extra few megs?
yeah i think i know what it is. i'm creating my db connection and adding it to context for my remix app to use
using Prisma accelerate db proxy/distributed conn pool
good point i have no idea lol. probably all those stupid icons that i can't even serve lol
Ah yeah that would do it
let me know if that was it, but sounds reasonable
i thought that'd be most efficient, to ensure one db conn per request. normally with a remix app you use a global for your db connection, in a long-running server env like node.js
since build folder isn't the same as function size
@jensen it was sourcemaps 😄
i'm down to 592K w/o them
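For anyone hitting the same thing: assuming a Vite-based Remix build (an assumption; the thread doesn't say which compiler was in use), sourcemaps are controlled by the build config, e.g.:

```typescript
// vite.config.ts (sketch -- only relevant if something in your setup
// enabled sourcemaps; Vite's production default is already off)
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    sourcemap: false,
  },
});
```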
yeah, was curious why mine wasn't close to that.
It is... unless you hit the startup CPU limit 😦
For DB connections we have #hyperdrive-beta but not sure if it would work for your usecase, just worth mentioning: https://developers.cloudflare.com/hyperdrive/
still not supported in Pages 😦
:(
yeah, I found that out the hard way. ended up creating an http proxy and then directly service binding to a worker that can make the pg driver connection using hyperdrive
drizzle query -> http proxy -> worker service binding (pg-proxy.ts) -> hyperdrive -> pg database hosted on cloud
oof
i haven't tried with prisma, but I have also been looking for an opportunity to move away from prisma
that was my first reaction, but then I set it up so it would work in dev as well, and started to do some queries and was pretty impressed
hyperdrive from pages sounds like it would be a good solution, too bad
when I thought of doing it the first time, I was convinced that if it worked I would throw it away
but it has an almost 0 migration cost for when pages does allow for hyperdrive and pg driver works directly
since all my queries are in drizzle, and the only bridge is the http proxy for the sql queries
yeah would be the same for prisma i think. so your worker is the http proxy?
my worker is the connection to hyperdrive using the pg driver (not avail with nodejs_compat in pages). the http proxy is a drizzle feature
https://orm.drizzle.team/docs/get-started-postgresql#http-proxy
like this, but without the express part
and since I use a service binding to a CF worker, then "http proxy" is not a great name
since the proxy directly runs the function but keeps the http interface
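The setup described above could be sketched like this, following the callback shape in the linked drizzle pg-proxy docs but swapping the Express server for a service binding. The PG_PROXY binding name, the /query path, and the helper names are my assumptions:

```typescript
// Shape of a Worker service binding, per Cloudflare's Fetcher interface.
interface Fetcher {
  fetch(input: string, init?: RequestInit): Promise<Response>;
}

// Pure helper: the body the proxy worker expects for each query.
export function proxyBody(sql: string, params: unknown[], method: string): string {
  return JSON.stringify({ sql, params, method });
}

// Pass the returned callback to drizzle() from "drizzle-orm/pg-proxy":
//   const db = drizzle(makeProxyCallback(env.PG_PROXY));
export function makeProxyCallback(proxy: Fetcher) {
  return async (sql: string, params: unknown[], method: string) => {
    // The service binding keeps the HTTP interface but invokes the
    // worker directly, so "http proxy" is a bit of a misnomer here.
    const res = await proxy.fetch("https://pg-proxy/query", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: proxyBody(sql, params, method),
    });
    return (await res.json()) as { rows: unknown[] };
  };
}
```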
ahh ok. yeah prisma can take an https conn string w/o any extra setup
cool thx, that actually seems pretty simple. assuming it is fast (400ms). prisma accelerate was supposed to be fast 😦
hard to say for sure what you will see with speeds, but after doing this, I didn't feel like the proxy was the bottleneck
i only tried it a few days ago, so anything can change
and I also haven't tried it with prisma. it was that direct callback support that I noticed with drizzle
i've been using prisma for a few years now in production apps, but have always just used the client with connection string to a pooler
in a serverless env (rather than edge)
prisma accelerate is their "connection pool/cache on the edge". maybe connecting to my database directly would be faster lol. i'm using supabase
@jensen you have been super helpful, thank you so much 🙏
@LeftHandRyan it has been nice meeting and chatting with you. you are welcome. thank you for helping me get some of my assumptions confirmed and also sharing with me the approach to finer control for uploading the files streamed through workers
are you using a direct connection to your supabase db? i've used supabase a lot before this
so i was using prisma edge client -> prisma accelerate -> supabase
yeah, so from accelerate to supabase is that a 5432 port?
yes
Hey @Erisa | Support Engineer @Chaika thanks for your help earlier, I really appreciate it. I've timed my startup code when running locally using wrangler pages dev, and it takes single-digit milliseconds, 1-5ms. I'm connecting to the remote db, not a local one. So I don't think that's the cause. any other suggestions?
I have tried removing my connection to db from my pages function but still getting the error. Is there any way to investigate this further? I'm very disappointed w/ this situation. My app is not large or complicated, in fact it does nearly nothing and is very small.
Are you able to share a minimal reproducible example as a repository on GitHub? I wonder if it's worth reporting to Remix as well in case they're doing something in their Function
Yes, I can do that. I'm also happy to share the repo with you now https://www.github.com/ryangildea/mixdown-player
will try making a simpler demo too
It seems to be private, if it's not meant to be public you can share with @Erisa and I can have a look
oops, ok will do
oh whoops silly mistake i completely got that url wrong 😅 https://github.com/rgildea/mixdown-cloudflare
i'm going to roll that back a few commits to before I started stripping out stuff
No worries thanks
Will see if I can reproduce or find out anything with that, if you can get a more minimal example that may help more
all set
hey, I took a look
I think I know what the problem is
the tailwind dependency that react-email has is about 6MB
i pulled the repo, removed react-email and:
10:43:16.158 Success: Assets published!
10:43:19.565 Success: Your site was deployed!
Right now the @react-email/components package includes a massive tailwind file. If you switch imports like `import { Html, Container, Text } from '@react-email/components'` to something like `import { Html } from '@react-email/html'` (and so on for each component), and then remove the @react-email/components package because it would be redundant when you install @react-email/html, @react-email/container, @react-email/text, etc., I think it will deploy
It doesn't seem like you are using the <Tailwind> component at this time
You would also have to change your import to import { renderAsync } from '@react-email/render'
oh for real? You're amazing thank you
I will try that 🙂
it worked! thank you so much
@jensen curious how you discovered that. I checked my build output dir and the whole thing was less than 1MB. Maybe i'm missing something about the build process.
the build output dir is not the entire bundle
Seems like it must be installing the dependencies in the worker somehow
there are still a lot of imports from node_modules
I figured it out by going through your build/server/index.js and having some familiarity with the libraries
but the true size of what is loaded isn't represented by just the build dir
yes that makes sense, deploying to a node env you'd have to install the dependencies of course. i guess i didn't think about how pages handles that behind the scenes. and i'm not sure how this impacts initial load time. it must load the whole tree into memory...?
thank you again dude. i was "this" close to giving up on cloudflare and heading back to nodejs and fly.io
all good, I'm actually curious if it is possible to use the tailwind stuff for react-email with workers. I haven't bothered yet, but I'll add to my list. May need to prebuild the email templates rather than load the tailwind runtime
yeah that's probably a good idea. kind of a pain 😅 i wonder what the total build + deps size limit is for pages
i think they said 400ms
so depends on the machine running the code I guess
looks like it is 1MB on the free plan, and 10MB on the paid plan