SSR and Web Crawlers: Help Me Understand

I just launched a new business site with more than 150 pages of docs. I custom-coded the entire site and docs section using SolidStart, with SSR turned on. Here's my issue: crawlers aren't finding my pages. Two examples:

1. I just added Algolia's crawler to my site. It only found 2 pages, not 150.
2. I tried a sitemap generator. Same problem: it only found 2 or 3 pages.

Can anyone help me understand why crawlers aren't finding my pages and how I can fix it? Either I screwed something up, or I fundamentally don't understand how this works. My site address is: https://www.quickstartjs.com Thanks! Chris
4 Replies
Madaxen86 · 3mo ago
I couldn’t find a robots.txt, which is essential for crawlers. It should contain the address of your sitemap. Example:
```
User-Agent: *
Disallow:
Sitemap: https://www.example.com/sitemap_index.xml
```
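And for reference, the sitemap that robots.txt points to is just an XML list of your URLs. A minimal one looks something like this (the URLs are placeholders):
```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
  </url>
  <url>
    <loc>https://www.example.com/docs/getting-started</loc>
  </url>
</urlset>
```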
ChrisThornham (OP) · 3mo ago
Haha. Chicken or egg. It's hard to build a sitemap without a crawler. I just coded a custom sitemap generator using import.meta.glob from Vite. Do you think it will work if I add that sitemap to my site along with a robots.txt file?
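For context, here's a simplified sketch of what I mean, written as a SolidStart API route. The route path, glob pattern, and route-to-URL mapping are all specific to my file layout, so treat the details as illustrative:
```ts
// src/routes/sitemap.xml.ts — a simplified sketch of the generator
// (route path, glob pattern, and path mapping depend on your layout).

const SITE = "https://www.quickstartjs.com";

// Vite's import.meta.glob lists every matching route module at build
// time; the lazy form is fine here since we only need the file paths.
const routeFiles = import.meta.glob("/src/routes/**/*.{tsx,mdx}");

// Map a route file path to a public URL, e.g.
// "/src/routes/docs/intro.mdx" -> "https://.../docs/intro".
function toUrl(path: string): string | null {
  const route =
    path
      .replace(/^\/src\/routes/, "")
      .replace(/\.(tsx|mdx)$/, "")
      .replace(/\/index$/, "") || "/";
  // Skip dynamic segments like [id] and private files like _layout.
  if (route.includes("[") || route.includes("/_")) return null;
  return SITE + route;
}

export function GET() {
  const entries = Object.keys(routeFiles)
    .map(toUrl)
    .filter((u): u is string => u !== null)
    .map((u) => `  <url><loc>${u}</loc></url>`)
    .join("\n");

  const xml =
    `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
    `${entries}\n</urlset>`;

  return new Response(xml, {
    headers: { "Content-Type": "application/xml" },
  });
}
```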
Madaxen86 · 3mo ago
That should do it. A crawler typically hits robots.txt first and follows the sitemap(s) from there. Also be aware that the time it takes to get indexed by Google can vary from a few hours to several weeks.
ChrisThornham (OP) · 3mo ago
Sweet! Thank you. I agree. The "get indexed" timeline is extremely variable. I've been an online entrepreneur for 15+ years now, and I still don't fully understand how Google's algorithm works. Haha. Have a good one.