No, it first fails on parsing the SQL because of the missing quotes around the dates. Once I fixed that, it parses the SQL fine and then breaks with the d1_reset_do error.
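To illustrate the kind of statement that was tripping the parser (made-up table and values, not the actual data):

```sql
-- Unquoted datetime literal: a statement shaped like this is rejected by the parser.
INSERT INTO events (id, created_at) VALUES (1, 2024-05-01 12:30:00);

-- Quoting the value as a string makes the same statement parse.
INSERT INTO events (id, created_at) VALUES (1, '2024-05-01 12:30:00');
```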
I wrote a script to chunk the inserts in the dump, so hopefully that works out, but the export 100% has some issue.
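Not the exact script, but for anyone else stuck here the idea is roughly this (chunk size, file names and the `DB` name are placeholders, and it assumes one INSERT per line, which is what `sqlite3 .dump` produces):

```sh
# Drop BEGIN TRANSACTION / COMMIT lines first if present, since D1 may reject them,
# then split the dump into batches of 1000 statements. Line-based splitting is safe
# only because .dump writes one statement per line.
grep -v -e '^BEGIN TRANSACTION;$' -e '^COMMIT;$' dump.sql | split -l 1000 - chunk_

# Import each chunk separately; "DB" stands in for the actual database name.
for f in chunk_*; do
  npx wrangler d1 execute DB --remote --file="$f"
done
```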
Btw, I just got it working by using some old local .sqlite file in my .wrangler folder. Basically the same dataset, just exported via `sqlite3`, just like the docs say.
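For reference, the whole flow is just this (the paths and the `DB` name are placeholders, not the actual ones from my setup):

```sh
# Dump the local SQLite file to plain SQL, as the D1 import docs describe.
sqlite3 .wrangler/path/to/local.sqlite .dump > dump.sql

# Load the whole dump into the remote D1 database in one command.
npx wrangler d1 execute DB --remote --file=./dump.sql
```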
Works perfectly in one go. Also, the row writes make no sense to me – the 46686 rows read are about the number of INSERT statements in the SQL dump.
Why does it need 20x as many writes as reads?
indexes?
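That would account for it: if index updates count as row writes, every index on the table adds one more write per inserted row, so a handful of indexes multiplies the count quickly. A quick way to see what the table carries (run against the local copy):

```sql
-- List all indexes and which tables they belong to; each of these plausibly
-- costs one extra row write per INSERT into its table.
SELECT name, tbl_name
FROM sqlite_master
WHERE type = 'index';
```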