Search engines use robots (also called user agents or crawlers) to crawl your pages. The robots.txt file is a plain-text file that defines which parts of a domain a robot may crawl. In addition, the robots.txt file can include a link to the XML sitemap.
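For reference, a minimal robots.txt might look like the sketch below; the directory path and sitemap URL are placeholders, not values from any real site:

```
# Rules for all crawlers
User-agent: *
# Block crawling of a directory (example path)
Disallow: /private/
# Point crawlers to the XML sitemap (example URL)
Sitemap: https://www.example.com/sitemap.xml
```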
Caution: this option allows every bot to crawl every single page of your website.
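For context, the allow-all behavior this option produces corresponds to a robots.txt like the following, where an empty Disallow value permits crawling of everything:

```
# Allow every bot to crawl every page
User-agent: *
Disallow:
```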
Click Customize to set additional rules.
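As an illustration, an additional rule could restrict a single crawler; the bot name and path below are examples, not required values:

```
# Block only Bingbot from a specific directory (example path)
User-agent: Bingbot
Disallow: /admin/
```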