Googlebot


The Googlebot is Google’s crawler. It collects documents from the Internet, compiles them into Google’s index, and thereby makes them available to Google Search. It gathers documents through an automated process that operates much like a web browser: the bot sends a request and receives a response from a server.

If the relevant settings grant the Googlebot access, it downloads a single web page, which is accessible at a URL, and initially stores it in Google’s index. In this way, the Googlebot crawls the global Internet using distributed resources: its computing power is spread across a huge system of data centers, so it can crawl thousands of websites simultaneously.
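This request/response cycle can be illustrated with a minimal Python sketch. It is an illustration only, not Google’s implementation; the URL and user-agent string are placeholders:

    import urllib.request

    # A crawler fetches a page much like a browser: send an HTTP request,
    # receive the server's response, and keep the document for indexing.
    url = "https://www.example.com/"  # placeholder starting URL
    request = urllib.request.Request(
        url, headers={"User-Agent": "example-crawler/1.0"}
    )

    with urllib.request.urlopen(request) as response:
        status = response.status  # e.g. 200 if access is allowed
        html = response.read().decode("utf-8", errors="replace")

    print(status, len(html))  # the document would now go to the indexer

A real crawler repeats this cycle for every URL in its queue and hands each stored document on for indexing.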

General information

Google’s crawler technology is essentially an algorithm that works autonomously. It is based on the structure of the World Wide Web (WWW): the Internet can be envisioned as a very large network of websites whose pages form nodes connected by hyperlinks.

Mathematically, this structure can be described as a graph. Each node is reachable through a web address, its URL. The links on a website lead either to further subpages or to other resources under another URL or domain. The crawler therefore distinguishes between HREF links (connections to other documents) and SRC links (embedded resources such as images or scripts). How quickly and effectively a crawler can traverse the entire graph is a question of graph theory.
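The distinction between the two link types can be sketched with Python’s standard HTML parser. This is a simplified illustration; the sample HTML is made up:

    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        """Collects the outgoing edges of one node (page) in the web graph."""
        def __init__(self):
            super().__init__()
            self.href_links = []  # connections to other documents
            self.src_links = []   # embedded resources (images, scripts, ...)

        def handle_starttag(self, tag, attrs):
            for name, value in attrs:
                if name == "href":
                    self.href_links.append(value)
                elif name == "src":
                    self.src_links.append(value)

    collector = LinkCollector()
    collector.feed('<a href="/about">About</a><img src="/logo.png">')
    print(collector.href_links)  # ['/about']    -> new nodes to crawl
    print(collector.src_links)   # ['/logo.png'] -> resources to fetch

Each HREF link found this way adds an edge to the graph that the crawler can follow, while SRC links point to resources that are fetched but not traversed further.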

Google uses different techniques here. On the one hand, it uses multi-threading, i.e., the simultaneous processing of several crawling processes. On the other hand, it works with focused crawlers, which concentrate on thematically restricted subjects, for example searching the web for certain types of links, websites, or content. Google operates a bot to crawl images, one for search engine advertising, and one for mobile devices.
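A multi-threaded, focused crawl can be sketched with Python’s standard library. The URL frontier, the fetch function, and the topic filter below are hypothetical simplifications:

    from concurrent.futures import ThreadPoolExecutor

    frontier = ["https://example.com/a", "https://example.com/b",
                "https://example.com/c"]  # placeholder URL frontier

    def fetch(url):
        # A real crawler would download and parse the page here;
        # this stub only simulates the work.
        return url, "<html>...</html>"

    def is_on_topic(url, document):
        # A focused crawler keeps only pages matching its subject,
        # here a made-up keyword criterion.
        return "example" in url

    # Several crawl processes run simultaneously (multi-threading).
    with ThreadPoolExecutor(max_workers=3) as pool:
        for url, doc in pool.map(fetch, frontier):
            if is_on_topic(url, doc):
                print("keep:", url)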

Practical application

Webmasters and site operators have different options for providing information on their websites to the crawler, or for denying it access. Every crawler identifies itself with a “user agent” string. In a server’s log files, the Googlebot appears under the name “Googlebot” with the host address “googlebot.com.”[1]
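Because any client can claim to be the Googlebot in its user-agent string, Google recommends verifying the bot through a reverse DNS lookup of the requesting IP address.[1] A minimal sketch, assuming the IP address has been taken from a log file:

    import socket

    def looks_like_googlebot(ip_address):
        """Reverse-resolve the IP and check the host name, per Google's advice.[1]"""
        try:
            hostname, _, _ = socket.gethostbyaddr(ip_address)
        except OSError:
            return False
        # Genuine Googlebot hosts resolve to googlebot.com (or google.com);
        # a full check would also forward-resolve the name back to the IP.
        return hostname.endswith(".googlebot.com") or hostname.endswith(".google.com")

    print(looks_like_googlebot("66.249.66.1"))  # example IP from a log file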

For the search engine Bing, the name is “Bingbot” and the address is “bing.com/bingbot.htm.” The log files thus reveal who is sending requests to the server. Webmasters can deny certain bots access or grant it to them. This is done either through the robots.txt file, using the directive Disallow: /, or with certain meta tags in an HTML document. By adding such a meta tag to a web page, a webmaster can limit the Googlebot’s access to the information on the site as required. This meta tag could look like this:

<meta name="Googlebot" content="nofollow" />
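The effect of robots.txt rules can be tested with Python’s built-in robots.txt parser; the rules below are a minimal made-up example:

    from urllib.robotparser import RobotFileParser

    rules = [
        "User-agent: Googlebot",
        "Disallow: /private/",
    ]

    parser = RobotFileParser()
    parser.parse(rules)

    print(parser.can_fetch("Googlebot", "https://www.example.com/"))           # True
    print(parser.can_fetch("Googlebot", "https://www.example.com/private/x"))  # False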

The frequency at which the Googlebot is supposed to crawl a website can be defined, usually in the Google Search Console. This is particularly advisable when the crawler reduces server performance, or when the website is updated frequently and should therefore be crawled often. How many pages of a website get crawled is determined by the crawl budget.

Relevance to SEO

For the search engine optimization of websites, it is particularly important to know how the Googlebot works, not only in theory but above all in practice. It is advisable to provide a new URL to the crawler (seeding), i.e., to give the bot an address as a starting point. Since the bot finds content and additional links through links on other websites, an HREF link on an existing resource can ensure that the bot receives a new URL.

You simply send a ping out into the web; sooner or later, the Googlebot will come across the seeded address. In addition, it is recommended to provide sitemaps to the bot. These give it important information about the structure of a website and at the same time tell it which URLs it can follow next. This is particularly useful when an extensive website has been relaunched.
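A sitemap is an XML file that lists the URLs of a site. A minimal sketch of generating one with Python’s standard library (the URLs are placeholders):

    import xml.etree.ElementTree as ET

    # The sitemap protocol's namespace; each <url> entry names one page.
    urlset = ET.Element("urlset",
                        xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for page in ["https://www.example.com/", "https://www.example.com/about"]:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = page

    ET.ElementTree(urlset).write("sitemap.xml",
                                 encoding="utf-8", xml_declaration=True)

The resulting file is typically placed in the site’s root directory and can be submitted in the Google Search Console.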

Since the Googlebot can read different types of content, not just text or images, you should keep an eye on current developments. Google has been working for several years on reading Flash content, dynamic web pages, JavaScript, and Ajax code, and has already been partially successful in these areas.[2] The Googlebot can already identify certain request methods such as GET or POST, and parts of Flash content can be read as well.[3]

References

  1. How to verify Googlebot. googlewebmastercentral.blogspot.de. Accessed on 03/19/2014
  2. GET, POST, and safely surfacing more of the web. googlewebmastercentral.blogspot.de. Accessed on 03/19/2014
  3. Improved Flash Indexing. googlewebmastercentral.blogspot.de. Accessed on 03/19/2014