Crawler: FAQs
- Why am I getting a "rendering JavaScript is not allowed with this plan" error?
- Can I crawl individual URLs (records), or am I limited to requesting a crawl for the entire site?
- Am I billed for every crawl in Algolia, even if the records haven't changed, or only for updated records?
- Are Algolia crawls automated or do I need to run them manually?
- Do failed or ignored crawls count against my usage?
- What is a crawl request?
- Why am I getting the error "You already have a crawler with that name. Please specify a different name and try again"?
- Why am I getting the error "Index not allowed with this API key"?
- Domain verification issue
- Why am I getting the Crawler error "Cannot validate api key"?
- Can I merge records from a non-Crawler index into an index generated by the Crawler, or vice versa?
- Does the crawler support RSS?
- Can I crawl my site with an invalid certificate?
- Why am I seeing the error "UNABLE_TO_VERIFY_LEAF_SIGNATURE"?
- The Crawler has SSL issues, but my website works fine
- When are records deleted?
- When are pages skipped or ignored?
- How do I see the Crawler tab in the Algolia dashboard?
- The Crawler doesn’t see the same HTML as me
- One of my pages was not crawled
- What can I crawl with the Crawler?
- What is the user agent of the Crawler?
- Which Crawler IP addresses can I add to my allowlist?
- Can I verify and crawl the same domain from multiple Algolia apps?
- Why can't the Crawler extract complicated PDF files?
- Is it possible to crawl private pages that require a login to access?
- Why are IgnoreNoIndex and IgnoreNoFollow not ignoring pages?
- How can I customize how my content is indexed using the Crawler?
- How do I restart the Crawler?
- Why is the number of records in the .bak index not the same as in the primary index?