Crawler: FAQs
- Can I merge records from a non-Crawler index into an index generated by the Crawler, or vice versa?
- Does the crawler support RSS?
- Can I crawl my site with an invalid certificate?
- UNABLE_TO_VERIFY_LEAF_SIGNATURE
- Crawler has SSL issues but my website works fine
- When are records deleted?
- When are pages skipped or ignored?
- How do I see the Crawler tab in the Algolia dashboard?
- The Crawler doesn’t see the same HTML as I do
- One of my pages was not crawled
- What can I crawl with the Crawler?
- What is the user agent of the Crawler?
- Which IP address of the Crawler can I add to my allowlist?
- Can I verify and crawl the same domain from multiple Algolia apps?
- Why can’t the Crawler extract complicated PDF files?
- Is it possible to crawl private pages that require a login to access?
- Why are IgnoreNoIndex and IgnoreNoFollow not ignoring pages?
- How can I customize how my content is indexed using the Crawler?
- How do I restart the Crawler?
- Why is the number of records in the .bak index not the same as in the primary index?
- What is a "SafeReindexingError" (Crawler)?
- Is there a reason the .bak index would show 0 records even when the main index has records?
- How do I unblock my Crawler?
- Why doesn't the automatic Crawler validation work?
- Where do I find my crawler-api-key?
- Where do I find my crawler-user-id?
- Is there a free crawler option?
- How can I index to multiple indices in one crawler?
- How can I split my crawled data?
- Is there a way to track broken external links using the Crawler?