If you regularly surf the internet, you'll notice that not all websites always work smoothly. One possible reason is a 403 error. It is less common than the familiar 404 error message, but it still occurs: a client requests a resource for which it has no authorization, so the request cannot be fulfilled. In this article, we explain what causes a 403 error, what consequences it can have, and what you can do about it.
The 403 error, also called the 403 Forbidden error or HTTP 403 status code, is issued by a server when a client (browser) lacks the required access rights. Access is "forbidden", and the message "Error 403 - Forbidden" appears in the browser window.
Here is a more detailed explanation:
If a client such as a browser wants to retrieve a URL from a server via HTTP, the server first verifies the request. If the page exists and can be displayed, the server sends the status code 200 OK. The browser can then load the website and display it to the user. This "transaction" between client and server usually goes unnoticed by users, unless errors occur.
The most common errors you will encounter are 4xx errors, which belong to the class of client errors. Error 403 is one of them. When a browser connects to a server via HTTP, the server can deny access. In that case, the server returns the 403 Forbidden error and the browser cannot access the desired resource.
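To see which status code a server actually returns for a given URL, you can send a request yourself. The following is a minimal sketch in Python using the requests library; the URL is a placeholder used only for illustration.

```python
import requests

# Placeholder URL for illustration - replace it with the page you want to test.
url = "https://www.example.com/wp-admin/"

response = requests.get(url, timeout=10)

if response.status_code == 200:
    print("200 OK - the server delivers the resource")
elif response.status_code == 403:
    print("403 Forbidden - the server denies access to the resource")
else:
    print(f"The server answered with status code {response.status_code}")
```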
Figure 1: Notification from the server when attempting to access an admin page of a WordPress blog.
Even though error code 403 suggests a client error, it is ultimately the server settings or the settings of the respective CMS that determine whether a client has access to certain directories or URLs.
If URLs cannot be displayed in the browser, they have no added value for website visitors. A 403 error inevitably leads to a negative user experience and significantly limits your website's usability. As a result, your site may be visited less often if 403 errors occur frequently.
For Google, a 403 error is also a problem, because the Googlebot cannot crawl the contents of the affected URLs and render them like a browser would. There is therefore a risk that the pages will be removed from the Google index.
In 2014, Matt Cutts mentioned a grace period of 24 hours if the Googlebot found a 403 page. According to Cutts, that is the length of time the system allowed the URL to remain in the crawling system.
In a round of questions about SEO on Reddit, Google's John Mueller also commented on the topic of 4xx errors. There, the tips became more specific:
Figure 2: Statement by John Mueller on 4xx errors. (Source)
So one thing is clear: if a URL does not deliver content for a client request, including a Googlebot request, it will be removed from the index.
There are several reasons why a website might return a 403 error. In many cases, the access block is set deliberately, and that often makes sense, for example to protect the admin area of a CMS (see Figure 1).
In addition to these deliberate restrictions, users can also be locked out if directories are blocked unintentionally, for example through an incorrect configuration of the server or the CMS.
403 errors can also affect bots when they try to crawl your site. If, for example, the directives in your robots.txt prevent the Googlebot from accessing directories that are important for the functionality of your website, this kind of error can result. Forbidden errors are also possible if you use the robots.txt to exclude central content directories from crawling.
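If you want to check whether your robots.txt blocks the Googlebot from a particular URL, Python's built-in urllib.robotparser can evaluate the rules for you. This is a small sketch; the domain and path are placeholders, not real resources.

```python
from urllib import robotparser

# Placeholder domain - replace with your own site.
parser = robotparser.RobotFileParser("https://www.example.com/robots.txt")
parser.read()

# Placeholder URL of a resource that is important for rendering the page.
url = "https://www.example.com/wp-content/themes/mytheme/style.css"

if parser.can_fetch("Googlebot", url):
    print("robots.txt allows Googlebot to crawl this URL")
else:
    print("robots.txt blocks Googlebot from this URL")
```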
Ryte can help you identify 4xx errors. The quickest way to find out about these errors is to click the "Critical Errors" report found in the Ryte Dashboard.
In addition, you can use the Ryte Website Success tool to check the status codes of your website. Take note of when your project was last crawled.
Figure 3: Check status codes of a website with Ryte Website Success.
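Alongside such tools, you can also run a quick spot check yourself. The following sketch requests a small list of URLs and reports every 4xx status code it finds; the URLs are placeholders to be replaced with pages from your own project.

```python
import requests

# Placeholder URLs - replace with pages from your own project.
urls = [
    "https://www.example.com/",
    "https://www.example.com/blog/",
    "https://www.example.com/wp-admin/",
]

for url in urls:
    status = requests.get(url, timeout=10).status_code
    if 400 <= status < 500:
        print(f"{url} returns a client error: {status}")
    else:
        print(f"{url} returns status code {status}")
```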
The Google Search Console (GSC) will also show you if there are 403 errors. You can find the corresponding report in the "Crawl Errors" section:
Figure 4: Determining crawling errors with the GSC.
If clients are denied access to directories or URLs on your website that should actually be available, you should take action. A good first step is to check your robots.txt file, for example with Ryte.
Figure 5: Check robots.txt with Ryte.
The Google Search Console is also suitable for checking the robots.txt. You can find the report in the "Crawl" section of the old version of the GSC; the robots.txt tester has not yet been integrated into the new user interface (as of July 2019).
Figure 6: Test robots.txt with GSC
With the "Fetch as Google" tool, you can check whether restrictions in the robots.txt prevent the Googlebot from crawling important areas.
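"Fetch as Google" shows you how the Googlebot itself sees a URL. As a rough local approximation, you can request a URL with a Googlebot user agent string and compare the result with a normal request. This is only a hedged sketch with a placeholder URL; some servers additionally verify the Googlebot by its IP address, so the tool in the Search Console remains the authoritative check.

```python
import requests

# Placeholder URL for illustration only.
url = "https://www.example.com/wp-content/uploads/logo.png"

# User agent string as documented by Google for the desktop Googlebot.
googlebot_ua = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

normal = requests.get(url, timeout=10)
as_googlebot = requests.get(url, headers={"User-Agent": googlebot_ua}, timeout=10)

print(f"Normal request:    {normal.status_code}")
print(f"Googlebot request: {as_googlebot.status_code}")
```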
403 errors are first and foremost client errors, but they can also be caused by an incorrect configuration of the server or the robots.txt file. If your site returns 403 errors, you should act quickly; otherwise Google will not index the affected URLs, as they do not deliver content, and they will harm your users' experience.
Published on 07/26/2018 by Philipp Roos.
Philipp is an extended member of the Ryte family and supports Ryte with the latest SEO know-how and digital marketing news.