Robots.txt Disallow
"Robots.txt is a text file that tells search engines and other web crawlers which pages and directories they should not crawl. Example: User-agent: * Disallow: /private-data/"
Robots.txt Disallow Example
A robots.txt file is a plain text file located in your website's root directory, alongside your site's index file. It contains instructions for web crawlers, such as search engine bots, telling them how to crawl your website and which pages or files they should not access.
When a web crawler visits your website, it first requests the robots.txt file. If the file is present, the crawler reads it and follows the rules it contains. Rules are grouped under a "User-agent:" line naming the crawler they apply to, followed by lines starting with "Disallow:". The path after each "Disallow:" identifies a page or directory that the crawler should not access.
User-agent: *
Disallow: /admin/
Disallow: /login/
Disallow: /register/
Disallow: /wp-admin/
Disallow: /wp-login.php
The above example instructs all web crawlers (matched by the * in the User-agent line) not to crawl the "/admin/", "/login/", "/register/", "/wp-admin/" and "/wp-login.php" paths on the website. This is useful for keeping crawlers out of administrative or login pages. Note, however, that robots.txt is not a security measure: the file itself is publicly readable, so it should never be the only thing protecting sensitive data.
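To see how a crawler interprets these rules, you can use `urllib.robotparser` from Python's standard library. The sketch below feeds it the example rules from above (the domain example.com is a placeholder) and checks which paths a crawler may fetch.

```python
from urllib.robotparser import RobotFileParser

# The example rules from above; example.com is a placeholder domain.
rules = """\
User-agent: *
Disallow: /admin/
Disallow: /login/
Disallow: /register/
Disallow: /wp-admin/
Disallow: /wp-login.php
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Disallowed prefixes return False; everything else is allowed.
print(parser.can_fetch("*", "https://example.com/admin/settings"))  # False
print(parser.can_fetch("*", "https://example.com/blog/post-1"))     # True
```

Note that "Disallow: /admin/" matches everything under that directory, which is why "/admin/settings" is blocked even though it is not listed explicitly.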
It is important to note that the robots.txt file is advisory only: compliance is voluntary, not enforced. Reputable crawlers, such as Googlebot, honor the rules, but poorly behaved or malicious bots may ignore them entirely. A disallowed URL can also still appear in search results if other pages link to it, because Disallow prevents crawling, not indexing.
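A well-behaved crawler, then, is simply one that applies this check on its own before each request. A minimal sketch of that pattern, assuming the robots.txt lines have already been downloaded (the helper name `polite_filter`, the bot name "MyBot", and example.com are all placeholders):

```python
from urllib.robotparser import RobotFileParser

def polite_filter(robots_lines, user_agent, urls):
    """Return only the URLs this crawler is permitted to fetch."""
    parser = RobotFileParser()
    parser.parse(robots_lines)
    return [u for u in urls if parser.can_fetch(user_agent, u)]

# Using the rule from the definition at the top of this article.
rules = ["User-agent: *", "Disallow: /private-data/"]
urls = ["https://example.com/", "https://example.com/private-data/report"]
print(polite_filter(rules, "MyBot", urls))  # only the homepage survives
```

Nothing forces a crawler to call a function like this; that is exactly what "advisory only" means.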