
Robots.txt Generator

Search engines use robots (also called user agents or crawlers) to crawl your pages. The robots.txt file is a plain text file that defines which parts of a domain a robot may crawl. In addition, the robots.txt file can include a link to the XML sitemap.
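For instance, a minimal robots.txt that lets every bot crawl everything and points crawlers to a sitemap could look like this (the sitemap URL is just an illustrative example, not a real file):

```
User-agent: *
Disallow:

Sitemap: https://en.ryte.com/sitemap.xml
```

An empty Disallow line means nothing is blocked; the Sitemap line is optional but helps crawlers discover your pages.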

Explain it to me step by step!

The following explanation tells you how to craft ONE rule. You can repeat the process as often as you want.
1. Enter your root. Either enter '/' and add your allowed/disallowed URLs relative to the root folder of your server, OR enter your root directory (e.g. en.ryte.com) here and use full URLs below (without http:// or https://):
- way a) root: /, urls: /wiki
- way b) root: en.ryte.com, urls: en.ryte.com/wiki
2. Choose the bot(s) which you want to allow or disallow to crawl your site.
3. Enter the paths you want to either grant or deny access to (Enter (dis)allowed URL).
4. Click 'Add' to save your rule.
5. Start over again or download your robots.txt file.
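Following the steps above with way a) — root '/', all bots selected, and '/wiki' entered as the disallowed URL — the generator would, under these assumptions, produce a file along these lines:

```
User-agent: *
Disallow: /wiki
```

With way b) you would enter en.ryte.com as the root and en.ryte.com/wiki as the disallowed URL, but the resulting file is the same, since robots.txt rules are always paths relative to the domain the file is served from.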


Select Bot

Enter Root Directory

There are two ways to do this: either enter '/' and add your allowed/disallowed URLs relative to the root folder of your server, or enter your root directory (e.g. en.ryte.com) here and use full URLs below.

Enter disallowed URL

Enter allowed URL

Enter Sitemap Name

I understand:

I am aware that I am using the robots.txt generator at my own risk. Ryte accepts no liability for errors or for pages of the website not being indexed.

Caution: this option allows every bot to crawl every single page of your website.
Click 'customize' to set additional rules.
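If you want to double-check a generated file before uploading it, Python's standard urllib.robotparser module can evaluate the rules for you. A minimal sketch, reusing the example /wiki rule and en.ryte.com URLs from the steps above (both are just illustrative values):

```python
from urllib import robotparser

# The generated robots.txt content to verify (example rule from above).
rules = """\
User-agent: *
Disallow: /wiki
""".splitlines()

# Feed the rules directly into the parser instead of fetching them over HTTP.
rp = robotparser.RobotFileParser()
rp.parse(rules)

# Check which URLs a bot ("*" = any user agent) would be allowed to fetch.
print(rp.can_fetch("*", "https://en.ryte.com/wiki/page"))  # False: blocked
print(rp.can_fetch("*", "https://en.ryte.com/blog/post"))  # True: allowed
```

Running a check like this locally catches rules that accidentally block pages you want indexed, before the file goes live.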


Get more traffic and customers by optimizing your website, content and search performance. What are you waiting for?