The Google Quality Rater Guidelines

"Since they're going to be leaked at some point anyway, we might as well publish them ourselves" – that is probably the reasoning that drove Google to officially publish its Quality Rater Guidelines.

Google uses these guidelines to train its quality raters. They also serve as a "content piece" for sending messages as part of Google's marketing mix.

What are the guidelines?

I will start by admitting that it wasn't a simple read, but it was a very interesting one. To save you the 160 pages, I will summarize my findings here. Once again: it is no longer a leak if Google publishes these guidelines itself. Instead, it is basically a message to us all. Despite my enthusiasm for this document, I still have to point out: we only get to read what is intended for us.

And yet: I believe that every online marketer should take the time to study how Google evaluates and ranks web content, as it is stated quite frankly in this document.

Figure 1: The Google Quality Rater Guidelines

Perhaps I should briefly explain what quality raters actually do: they receive a series of search requests from Google and enter them in the search box. Then their work begins. They are supposed to evaluate every search result – not just as "good" or "bad", but by classifying each result. More on this later.

What benefits can be derived from the document?

I found one particular statement on page 8 that simply motivates you to read on:

"We have very different standards for different types of pages. By understanding the purpose of the page, you'll better understand what criteria are important to consider when evaluating that particular page."

This means:

1. Different pages are evaluated differently. And since the connection between websites and Google always runs through search requests, this means that different search requests are evaluated based on very different ranking factors. We should therefore stop basing our SEO measures on generalized statements – of whatever kind.

2. This statement also shows that Google is unable to identify the different goals of websites algorithmically. Meaning: Google does not yet have everything stated in this document under algorithmic control – hence the need for quality raters.

By the way: 2016 will probably be the year of "machine learning". So why human quality raters at all? That is exactly the reason: if they correctly sort and evaluate a sufficiently large number of search requests and documents, the algorithm can use this data as a basis to identify patterns and make itself "smarter".
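
To make that idea tangible – and only the idea, this is emphatically not Google's actual system – here is a minimal sketch in Python using scikit-learn: human ratings of (query, page) pairs serve as training labels, and a simple classifier learns a pattern from them. All data, labels, and the "query || page text" encoding below are invented for illustration.

```python
# Purely illustrative sketch – NOT Google's actual ranking system.
# Idea from the paragraph above: human raters label (query, page) pairs,
# and a model then learns patterns from those judgments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical rater judgments: "query || page text" -> rating label
rated_examples = [
    ("how tall is barack obama || Barack Obama is 6 ft 1 in tall.", "Highly Meets"),
    ("michael jordan || Michael Jordan is a former basketball player ...", "Highly Meets"),
    ("britney spears || A short gossip item from 2006 ...", "Slightly Meets"),
    ("tooth loss five year old || Gallery of Pennsylvania fishes ...", "Fails to Meet"),
]
texts = [text for text, _ in rated_examples]
labels = [label for _, label in rated_examples]

# Turn the text into features and fit a simple classifier on the human labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# The trained model can now estimate a rating for an unseen query/page pair.
print(model.predict(["abe lincoln's birthday || List of Presidents ..."]))
```

Google's real systems are of course vastly more complex; the point is only that human judgments become training data from which an algorithm generalizes.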

Which search requests does Google classify?

Off we go. The first surprise is that in this document, Google goes beyond the three common search motivations:

  • “transactional / do”,

  • “informational / know”, and

  • “navigational / go”.

We now have six search categories:

- Know: It’s all about information

- Know simple: This category includes very simple, quick-to-answer knowledge questions, e.g. "How tall is Barack Obama?"

- Do: Something should be done or bought. Examples include product searches.

- Device action: The user would like his mobile device to do something for him. For Google, these are search requests that start with "Okay Google".

- Website: A user wishes to access a specific website – but uses Google to look for the URL. This could be a homepage such as "ibm", or a sub-page such as "New York Times Health Section".

- Visit in person: An example: you are visiting a new city, would like to eat Chinese food, and ask Google for a "Chinese restaurant".

The "Device action" and "Visit in person" categories show how important mobile web is for Google. I would love to know how common such requests actually are. I bet there are quite a few (especially for “Visit in person”). We should definitely take this into account when deriving our online marketing measures.

One more thing about the "Know simple" category: the fact that this is explicitly specified is a clear signal that we can all forget the "good" old rules such as "You need at least 500 words for your text to rank highly". A page only needs to be as long as necessary to answer the user's question. And for questions such as "How tall is Barack Obama?", it doesn't need to be long at all.

In addition to this category, there are also search requests where the "freshness" of the destination website is expected. Here, Google names four different types of keywords:

1. Breaking News, e.g. a tornado

2. Events such as sport events, TV shows, etc.

3. Current information queries, e.g. number of people living in a city.
These must always be up to date.

4. Products: This class includes concrete product names and categories. "iPhone" and "Windows operating system" are given as examples.

The same applies here: freshness does not need to be simulated on a fixed schedule. "Freshness" is expected when something new happens in the world of the search request, be it a hurricane or the ever-growing population of Paris. Companies that produce pencils, on the other hand, will rarely have product innovations and hence little to do with freshness…

Which websites does Google differentiate in the index?

Google certainly has a way of differentiating "good" from "bad" pages. Right at the beginning of the document, Google explains the Internet to its budding raters:

"Websites and pages should be created to help users. Websites and pages which are created with intent to harm users, deceive users, or make money with no attempt to help users, will receive a very low Page Quality rating."

And I tend to think that this is a distinction Google is (still) not able to make algorithmically: is this a website that wants to make a lot of money with little effort – or is there at least a desire to help users? That said, the websites Google calls "bad" are generally not the websites of our customers. They are pages created by con artists and have very little to do with content…

What I find much more interesting is how else Google classifies websites. The raters must learn to differentiate the following special websites from "normal" websites:

- Homepages. Raters should take a look at the homepage as well, even if a sub-page is displayed as the search result – just like most users probably do. So always treat your homepage with love…

- "About us" pages. Although these are not the sole indicators of a website’s reputation, I believe that the JPG image of imprint pages of most affiliates still say a lot about the website’s prominence.

- YMYL (Your Money or Your Life) is an interesting concept in the document: it refers to pages that deal with our money or our lives – pages that can have a major influence on users' happiness, health, and wealth. Such YMYL pages are therefore analyzed particularly thoroughly. They include pages with information on legal or medical topics, financial pages, and payment pages of online shops or banking transactions.

What is evaluated in a page?

Once a rater lands on a website following a search request, they must first do a little sorting in order to stay within Google's framework. The guidelines recommend the following sequence:

"Do not worry too much about identifying every little part of the page. Carefully think about which parts of the page are the MC (Main Content). Next, look for the Ads. Anything left over can be considered SC (Supplementary Content)."

This leads to the following three content elements of a website:

1. Main content. This is, of course, the actual subject matter: the news on a news website, the product information and purchase options in an online shop – and for a currency converter, the currency converter itself.

2. The ads. These are advertisements or anything else that is paid for. The presence or absence of ads does not in itself lead to a positive or negative rating. However, raters should still be able to identify the ads.

3. Supplementary content – this is everything else which, according to the document, can also be very helpful. It often includes reviews (which may sit on a different sub-page from the main content) and related articles. Surprisingly, Google expects supplementary content more from large websites than from small ones.

But that's not all: raters should evaluate not just the quality of the website and its content, but also its reputation. That, however, works a little differently. More about this in the next chapter.


Which criteria play a key role?

So what next? Now that we have a classification of search requests and a differentiation of the possible websites, the actual work can begin. Roughly speaking, the quality raters classify each search result based on whether or not it provides a credible answer to the search request.

And – surprise! – this begins with the search results themselves. The rater must first analyze the snippet (here called the "Web search result block"). This consists of the title, URL or breadcrumb, and description. "Special content result blocks", i.e. universal search, direct answers, etc., are also analyzed. If it can be assumed that no user will click on these results (e.g. for a query about the number of calories in a banana, which is answered directly in the SERPs), the rater can even refrain from evaluating the underlying landing page. In this case, we remain entirely within the Google world: it is all about evaluating the search results, not the underlying landing pages.

As for the "normal" search results, it is all about the respective landing pages. These can now be analyzed based on whether or not they suit the search requests. Note: it is not about the pages being "good" or "bad" – it is about whether they meet the requirements of the search request or not.

"Fully meets" is probably the exception category and has very high standards. But let’s take a look at some "Highly meets" results:

The Michael Jordan page on Wikipedia is given here for "Michael Jordan". According to the document, it is a mobile-friendly article that is very helpful to most users. Similarly, the fan page www.kristenwiig.org is given for the search "Kristen Wiig". The page is described as informative and contains over 50,000 photos and 300 videos.


Figure 2: A "Highly meets" fan page!

If we now take a look at one of the poorer ratings, "Slightly meets", we find the following examples: for a search for "Britney Spears", an article on tmz.com is given. The article dates back to 2006, and very few users would find it interesting. Another example, particularly interesting for all those who wish to create comprehensive holistic pages, is the search for "Abe Lincoln's birthday". The Wikipedia page "List of Presidents" is not exactly well suited here, even though the answer is in there – it is not prominently placed, and the user has to work quite hard to find it.


Figure 3: Sometimes Wikipedia is not the ideal search result for "Know Simple".

One more example for "Slightly meets":

Here, the article called "5 Tips to Fix a Sexless Marriage Or Relationship" is given for "Lack of Sex and Problems with my Marriage". According to the document, this is not a very good result because, and I quote:

"The quality of writing in this article, which was created by a person without expertise in marriage or relationship counseling, is poor. Even though the article is about the query, the page is low quality and untrustworthy."

SEOs who write web copy themselves and are responsible for such content will probably be unhappy with this assessment…

The examples in the "Fails to meet" category are certainly entertaining – but much less informative, since they are quite obvious. Nonetheless, one example can't hurt: the search request "tooth loss five years old" does not suit the page titled "Gallery of Pennsylvania Fishes" on fishandboat.com, even though it contains all the searched words. In other words, even though the keywords match, the meaning is totally different.

This last example illustrates two things: first, it makes little sense to simply scatter terms across a page so that Google can identify them as keywords. Second, it is a sign that something like this can still happen in the search results…

One actual knee-slapper: the movie's sub-page on the official DreamWorks website still ranks second if you search for "American Beauty". However, this page is rated "Fails to meet" in the quality rater guidelines because:

"This is the official website of the movie American Beauty. However, the landing page is extremely difficult to use (even seems broken on a mobile phone) and there is no satisfying or helpful content on the page—it Fails to Meet the user intent."

In other words: Google finds the official page of the movie to be quite lousy – but still places it second in the search results.


Figure 4: Fairly poor content ("Fails to meet") – but still 2nd in the search results: the official DreamWorks website for the movie "American Beauty"

As this example shows, evaluating the content is not everything. Even when the content is poor, the official website of a movie is still ranked highly. Similarly, for a query about the symptoms of some nasty disease, a possibly less verbose answer from a renowned medical site is still better than a lot of text from "some-shallow-page.com".

This reputation is evaluated based on "E-A-T":
"Expertise", "Authoritativeness", "Trustworthiness".

It comes down to the expertise (at least for topics where skill and experience really matter) and the trustworthiness of the author. Raters should therefore consider not just the actual landing page, but the entire website. This applies above all to the homepage and the "About us" pages. In addition, the reputation should also be checked externally, particularly for YMYL pages. Here, all available sources (review sites, Wikipedia, etc.) should be taken into account.

The author's expertise should also be ascertained. This does not always have to be formal. Below are some of the statements in this regard:

"High quality medical advice should come from people or organizations with appropriate medical expertise or accreditation. High quality medical advice or information should be written or produced in a professional style and should be edited, reviewed, and updated on a regular basis."


"High quality financial advice, legal advice, tax advice, etc., should come from expert sources and be maintained and updated."


"High quality advice pages on topics such as home remodeling (which can cost thousands of dollars) or advice on parenting issues (which can impact the future happiness of a family) should also come from ‘expert’ sources which users can trust."

"High quality pages on hobbies, such as photography or learning to play a guitar, also require expertise."

One little puzzle for all online marketers: how can I make it clear to a quality rater (who is probably not well versed in my topic) that the content on my sub-page was written by competent authors? If you can answer this question, you will certainly be successful…

SEO, uh, online marketing 2016: it’s all about trust, content, and focus

To ensure this page lands in the "Fully meets" category when users search for "Google Quality Rater Guidelines", I should be able to prove that I have significant expertise in this field. In a way, I would have to summarize these 160 pages in a few words without being banal, and still provide recommendations for the intended target group. I would thus write:

Google is still far from finding the ideal search results by algorithm alone. This document is, after all, part of the map Google is using to find its way there. The road leads past well-built, usable websites and takes into account the trustworthiness of authors and website operators who can prove their expertise. The goal lies far away from standard tricks scattered somewhere across websites: pages that take the user's search request seriously and answer it perfectly.

That would also be a good path for online marketers. And you now have the map you need to get there.

Related post: http://www.thesempost.com/googles-webmaster-guidelines-breakdown-of-all-changes-made/

Published on May 4, 2016 by Eric Kubitz