Is the billing page broken? I clicked view next to monthly usage on the Vectorize dashboard and I get an oops page 😅
Remix + Cloudflare - running wrangler types generates the correct Vectorize types, but the proxy stub doesn't exist in the loader (cloudflareDevProxyVitePlugin). Is this because Vectorize is in beta?
Can the 1,000 namespaces per index limit be easily increased? We use Pinecone, but I was looking at using Vectorize for a new feature and this limit is really putting us off.
yes, Vectorize is currently not available locally
Thanks for the update!
We're currently working on supporting local binding for vectorize, hopefully completed this month :soontm:
That would be great, thank you. In the meantime I'm generating embeddings using Workers AI, then storing them in pgvector and querying in Postgres; seems to work well enough!
what is "local binding for vectorize" mean?
For a lot of Cloudflare products, you can use a similar version of the product locally, so when you run "wrangler dev" the binding still works and you can run/test locally. For Vectorize, that isn't available yet, so you have to run it remotely to test it (either deploy, or wrangler dev --remote).
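To make that concrete, here are the two modes as plain Wrangler commands (standard CLI flags, but verify against the current Wrangler docs for your version):

```sh
# Local simulation of bindings (Vectorize is not supported here yet)
npx wrangler dev

# Run the Worker on Cloudflare's network, so the real Vectorize binding works
npx wrangler dev --remote
```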
Hi, is there any additional information on the eventual consistency behaviour? Can I assume the vectors are available for search in order? Does the mutation identifier have any meaning? Can it be used as a high-water mark? I have a use case where I want to group vectors, and it's tricky to do without knowing when vectors will be available. Presumably, without a HWM I'd have to use a queue with some kind of backoff, but that won't help if the vectors are not indexed in order. Presumably the only solution then would be repair on each read, but I'm capped to 100 nearest neighbours.
Hi! The DB state is eventually consistent, and all mutations (upsert, insert, delete, create metadata index, ...) are processed in the strict order they were given to the API. This means the index state always reflects all the mutations given to the API up to the last applied mutation, processed in order.
The mutationId does act as a high-water mark: you can compare the mutationId returned by any mutation operation with the one you get by calling https://developers.cloudflare.com/vectorize/reference/client-api/#get-index-info ; this returns the vector count, the last applied mutationId (again, in the sequence as provided to the API) and the UTC datetime that last mutation corresponds to (useful if you don't keep track of the mutationIds).
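To sketch the high-water-mark pattern in TypeScript: poll the index-info endpoint until it reports your mutation as applied. This is a sketch only; the REST path and response field names (processedUpToMutation, processedUpToDatetime) are assumptions based on the docs page linked above, and waitForMutation is a hypothetical helper, so verify the shapes before relying on it:

```ts
// Assumed response shape of the "get index info" endpoint linked above.
interface IndexInfo {
  vectorCount: number;
  processedUpToMutation: string; // last applied mutationId
  processedUpToDatetime: string; // UTC time of that mutation
}

async function waitForMutation(
  accountId: string,
  indexName: string,
  apiToken: string,
  mutationId: string,
  delayMs = 2000,
  maxAttempts = 30,
): Promise<void> {
  // Assumed REST path; check the client API reference for the exact URL.
  const url = `https://api.cloudflare.com/client/v4/accounts/${accountId}/vectorize/v2/indexes/${indexName}/info`;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${apiToken}` },
    });
    const { result } = (await res.json()) as { result: IndexInfo };
    // Mutations apply in submission order, so once the index reports the id
    // of your *latest* write as processed, everything up to it is queryable.
    // Note: mutationIds are opaque, not ordered values, so this equality
    // check only works if you wait on your most recent mutation.
    if (result.processedUpToMutation === mutationId) return;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Mutation ${mutationId} not applied after ${maxAttempts} polls`);
}
```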
Does anyone know of any methods to create a backup of an index? I'm looking for a script/code that can pull all the data from one index and insert it into another.
I'm very eager to move off of pinecone, purely from a simplicity perspective.
I can work around some limitations, but there are a couple of big blockers:
1. Metadata filtering has to support more than = and !=, e.g. <, <=, >=, >, or even an in-list / not-in-list filter (see the sketch after this list for what's supported today)
2. Not a blocker, but having an actual view in the dashboard would be a big help
3. The comment I saw above about local Wrangler support removes the other blocker
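On point 1, here's a minimal sketch of what a filtered query looks like today via the Workers binding. The binding name (MY_INDEX) and metadata keys are made up for illustration, the returnMetadata option shape may differ by API version, metadata indexes are assumed to exist for the filtered keys, and only equality-style operators ($eq, $ne) are assumed supported:

```ts
interface Env {
  MY_INDEX: VectorizeIndex; // type generated by `wrangler types`
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Query vector must match the index's configured dimension.
    const queryVector = [0.1, 0.2, 0.3 /* ... */];

    const matches = await env.MY_INDEX.query(queryVector, {
      topK: 5,
      returnMetadata: true,
      filter: {
        category: { $eq: "docs" },   // only vectors tagged "docs"
        status: { $ne: "archived" }, // exclude archived entries
      },
    });

    return Response.json(matches);
  },
} satisfies ExportedHandler<Env>;
```

Range operators like $lt/$gte or $in/$nin are exactly what's being asked for above; for now you'd have to over-fetch and filter inside the Worker instead.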
I can't understand why vectorize has so many limits. I don't think I can develop a production-level project under current limits.
Compared to Pinecone, which has ZERO limits as long as I pay for what I need.
Really hope one day all limits go away.
The Vectorize limits aren't artificial but are a result of the internal architecture/technology backing Vectorize. I expect they'll be raised over time (and in fact they were recently) but not lifted altogether.
And nothing has zero limits: Pinecone docs say they have them too, it just scales by plan (but there's an upper bound): https://docs.pinecone.io/reference/quotas-and-limits
Oh I missed this
I know there must be some limits to protect the platform. But shouldn't at least the total record count of a single index be unlimited?
I know I can split across indexes manually, but I really hope the platform can auto-scale so I don't need to care how many records I will insert.