The user agent is the identifier or name with which a program introduces itself to a web server when requesting a document. The name is passed in an HTTP header. Software programs and search engine robots can be identified by this name.
The user agent makes it possible to see exactly when a search engine robot visited a website, which is helpful when evaluating log files. Because the name can be chosen freely, however, robots can disguise themselves as other user agents. A search engine bot can use this, for example, to detect spam websites: a bot with a browser identifier can check a website incognito, since the web server treats it as a normal browser rather than as a search engine bot.
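The log-file evaluation mentioned above can be sketched in a few lines. This is a minimal, hypothetical example: the two log entries are invented, and real access logs vary by server configuration, but in the common combined log format the user agent is the last quoted field of each entry.

```python
import re

# Hypothetical excerpt from an access log in combined log format;
# the user agent is the last quoted field of each entry.
log_lines = [
    '66.249.66.1 - - [10/Mar/2024:06:25:24 +0000] "GET / HTTP/1.1" 200 512 '
    '"-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '192.0.2.7 - - [10/Mar/2024:06:26:01 +0000] "GET /about HTTP/1.1" 200 1024 '
    '"-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Gecko/20100101 Firefox/124.0"',
]

def bot_visits(lines, bot_name="Googlebot"):
    """Return the log entries whose user agent field names the given bot."""
    hits = []
    for line in lines:
        match = re.search(r'"([^"]*)"$', line)  # last quoted field = user agent
        if match and bot_name in match.group(1):
            hits.append(line)
    return hits

print(len(bot_visits(log_lines)))  # 1 -- only the first entry is from Googlebot
```

Note that this only tells you what the client claimed to be; as described above, the name can be forged.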
How it works
When a browser wants to access a webpage, the program tells the server at the time of the request which agent is being used (agent name delivery). The user agent identifier is a string made up of several pieces of information: it contains the operating system and its version as well as the product name and its version.
If the program provides the following identification during this process, for example:
Mozilla/5.0 (Windows; U; Windows NT 5.1; de-DE; rv:1.7.6) Gecko/20050226 Firefox/1.0.1
it means that the user accessed the page with Mozilla Firefox version 1.0.1 on Windows XP (Windows NT 5.1) with a German language pack. It also shows that the browser uses the Gecko rendering engine version 1.7.6, which was released on 26 February 2005.
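The pieces of information above can be pulled out of the string programmatically. This is a sketch tailored to exactly this one Firefox string, not a general user-agent parser; real user agent strings vary widely, and production code would use a dedicated parsing library instead.

```python
import re

ua = ("Mozilla/5.0 (Windows; U; Windows NT 5.1; de-DE; rv:1.7.6) "
      "Gecko/20050226 Firefox/1.0.1")

# Capture the parenthesised platform details, the Gecko build date,
# and the Firefox version from the trailing tokens.
match = re.match(r"Mozilla/5\.0 \(([^)]*)\) Gecko/(\d+) Firefox/([\d.]+)", ua)
platform_details = [part.strip() for part in match.group(1).split(";")]

print(platform_details[2])  # "Windows NT 5.1" -> Windows XP
print(platform_details[3])  # "de-DE"          -> German language pack
print(match.group(2))       # "20050226"       -> Gecko build of 26 Feb 2005
print(match.group(3))       # "1.0.1"          -> Firefox version
```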
Types of user agents
- Web browsers: Web browsers are programs used to surf the Internet. They allow you to view websites and to display graphics or run applets. Examples of web browsers are Internet Explorer, Safari, Mozilla Firefox, and Opera.
- Web applications: These are programs used for content maintenance, communication, or playback of files, such as streaming services (for example Spotify), video clips, the Flash Player, or Adobe Acrobat Reader.
- Spiders (crawlers): These are search engine programs that periodically search the Internet for new information and for changes to documents.
In order for search engines to display up-to-date results to a user, the web has to be searched daily for content. On its own, however, a web server cannot tell whether a person or a search engine robot is visiting a webpage. The user agent is what lets the web server distinguish between human users and search engine robots.