Robots.txt: Blocking Search Engines from Crawling and Indexing

Learn how to create a robots.txt file to keep search engine bots from crawling parts of your website, with examples to get you started.

Disallowing Crawling with Robots.txt

A robots.txt file is a plain text file placed in the root directory of a website that tells search engine crawlers which parts of the site they may and may not crawl. By adding a robots.txt file, a webmaster can steer crawlers away from sections that are not meant to be fetched. Strictly speaking, robots.txt controls crawling rather than indexing: compliant bots will not request disallowed URLs, but a disallowed URL can still end up indexed if other pages link to it, so robots.txt should not be the only safeguard for sensitive content.
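As a point of reference, here is a minimal robots.txt sketch (the example.com domain and the /admin/ path are placeholders). The file must be served from the site root, for example at https://example.com/robots.txt:

# Rules for all crawlers
User-agent: *
Disallow: /admin/

# Optional: point crawlers at the sitemap
Sitemap: https://example.com/sitemap.xml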

For example, suppose a website has a section containing private customer information under /private/. To ask crawlers to stay out of that section, the webmaster can add the following rules to the robots.txt file:


User-agent: *
Disallow: /private/

Here, the asterisk (*) in the User-agent line means the rules apply to all crawlers. The Disallow directive instructs compliant crawlers not to request any URL whose path begins with /private/. Keep in mind that robots.txt is advisory and publicly readable: it keeps well-behaved bots from crawling the section, but it does not guarantee the URLs never appear in search results, and it should never be the only protection for confidential customer data.
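Rules can also be scoped to individual crawlers, and Allow can carve out exceptions within a disallowed path. As a sketch (the /private/help/ subdirectory is hypothetical), the following keeps all bots out of /private/ but lets Googlebot fetch one public subfolder; a crawler follows the most specific User-agent group that matches it:

# Default: block the whole section
User-agent: *
Disallow: /private/

# Googlebot gets a narrow exception
User-agent: Googlebot
Disallow: /private/
Allow: /private/help/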

Robots.txt can also be used to keep crawlers away from specific files. For instance, if a website serves a PDF at /confidential.pdf that should not show up in search results, the webmaster can add the following rule:


User-agent: *
Disallow: /confidential.pdf

This rule tells compliant crawlers not to fetch the confidential.pdf file. As before, this only discourages crawling: robots.txt is publicly readable and does not restrict access, so anyone who knows the URL can still download the file. Genuinely confidential documents belong behind authentication, not merely in a Disallow line.
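Patterns can generalize this to whole file types. The * and $ wildcards are part of the current Robots Exclusion Protocol (RFC 9309) and are honored by major crawlers such as Googlebot and Bingbot; as a sketch, the following blocks every URL ending in .pdf:

User-agent: *
# $ anchors the match to the end of the URL path
Disallow: /*.pdf$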

Robots.txt is an important tool for webmasters who want to control which parts of their websites search engine crawlers visit. With the appropriate directives, compliant bots can be kept out of sections that are not meant for search results. Remember, though, that robots.txt manages crawling, not indexing: to keep a page out of the index entirely, combine it with a noindex directive or access controls rather than relying on Disallow alone.
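For completeness, the standard way to forbid indexing outright is a noindex signal, sent either as an HTML meta tag or as an X-Robots-Tag HTTP response header. Note that the page must remain crawlable for the signal to be seen, so it should not also be disallowed in robots.txt:

<!-- In the page's <head>: keep this page out of search indexes -->
<meta name="robots" content="noindex">

An equivalent HTTP response header, useful for non-HTML files such as PDFs:

X-Robots-Tag: noindex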
