Indeed, there are cases where you need to stop crawlers from triggering searches. The best way to minimize the impact of bots on a site while protecting your SEO is to do the following:
- use robots.txt to determine what should or shouldn't be crawled.
- use a sitemap to properly indicate to SEO-relevant bots what they can and should crawl.
- make sure meta tags properly indicate which paths can or shouldn't be followed by bots.
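As a sketch of the first two points, a robots.txt at the site root could block the search endpoint while keeping product pages crawlable and advertising the sitemap. The paths and domain here are hypothetical placeholders; adjust them to your site's actual URL structure:

```
# robots.txt (illustrative example; paths/domain are placeholders)
User-agent: *
# Keep all crawlers out of the internal search endpoint
Disallow: /search
# Product pages remain crawlable
Allow: /products/

# Point SEO bots at the sitemap so they don't need to "search" to discover pages
Sitemap: https://www.example.com/sitemap.xml
```

Note that robots.txt is advisory: well-behaved bots (Googlebot, Bingbot) honor it, but it is not an access control mechanism.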
If done properly, a "good bot" that affects SEO will have access to product pages without having to rely on "searching", which consumes API requests, to explore the site's content.
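For the meta tag point, a page you don't want in the index (for example, a search results page) can carry a robots meta tag in its `<head>`. This is a minimal illustrative snippet, not tied to any particular site:

```html
<!-- On pages bots should skip (e.g. search results): don't index the page
     and don't follow its links -->
<meta name="robots" content="noindex, nofollow">
```

Individual links can also be marked with `rel="nofollow"` if you only want to stop bots from following specific paths rather than excluding the whole page.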
You can find more information on the different strategies you can take in Google's own crawling and indexing documentation.