Google Everflux


Google Everflux denotes the constant shifting of website positions in Google's search result pages. Everflux is a portmanteau of the English words “ever” and “flux” (fluctuation). It means “constant fluctuation” and describes the phenomenon of positions in the SERPs shifting by a few places within short periods of time. Sometimes new websites quickly reach the upper positions in the index, but soon thereafter are listed lower again. These fluctuations were caused by the way Google used to generate its search results; according to current information, they no longer occur in this form.

Short term positioning with Fresh Crawl

Google Everflux is based on the specific way Google updates its index. A distinction is made between the “fresh crawl” and the “deep crawl.” The fresh crawl is carried out continuously: Google scours the web for new content and integrates it as quickly as possible into a separate index. Websites can sometimes appear in the SERPs within a few minutes, often ranking near the top of the search results because of their freshness.

However, this positioning is not permanent; the deep crawl makes the final decision on the eventual ranking of a page. The fresh crawl focuses primarily on websites whose content changes very regularly, for example, news magazines and active blogs. The fact that current content gets into the index very quickly and occupies high positions is part of Google Everflux. The same applies to the depth of the crawl: the fresh crawl collects only relatively superficial information for the Google index, and deeper subpages are usually only recognized after a deep crawl.
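The difference between the two crawl types can be illustrated with a small sketch. The code below is a simplified, hypothetical model (the `Site` class, the crawl functions, and the example domains are all invented for illustration) and does not reflect Google's actual crawler implementation: the fresh crawl revisits only fast-changing top-level pages, while the deep crawl walks every site including its subpages.

```python
from dataclasses import dataclass, field

@dataclass
class Site:
    url: str
    changes_often: bool               # e.g. news magazines, active blogs
    subpages: list = field(default_factory=list)

def fresh_crawl(sites):
    """Shallow, frequent pass: only top-level pages of fast-changing sites."""
    return [s.url for s in sites if s.changes_often]

def deep_crawl(sites):
    """Thorough, periodic pass: every site, including all subpages."""
    urls = []
    for s in sites:
        urls.append(s.url)
        urls.extend(s.subpages)
    return urls

# Hypothetical example sites
news = Site("news.example", True, ["news.example/a", "news.example/b"])
shop = Site("shop.example", False, ["shop.example/p1"])

print(fresh_crawl([news, shop]))   # only the fast-changing homepage
print(deep_crawl([news, shop]))    # everything, including subpages
```

The sketch makes the trade-off visible: the fresh crawl surfaces new content quickly but sees only a fraction of each site, which is why its rankings are provisional until a deep crawl has evaluated the full dataset.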

Long-term positioning with Deep Crawl

A deep crawl is carried out on a monthly basis and includes an update of the entire dataset; it is also called a data update or data refresh. In this context, both the data of the main index and the newly acquired websites are evaluated. An index update follows, which in the SEO industry is often described as a Google dance.[1] A lot of new and updated content loses relevance after a few weeks (such as news of a celebrity's death, the results of soccer matches, or the latest movie releases) and does not survive the deep crawl. Such pages are either listed far down in the index or not at all, and therefore cause no permanent change in the previous rankings.

Here, Google Everflux showed its full effect: the corresponding subpages quickly made it into the index, but disappeared from the scene just as quickly. Only a relatively small fraction of newly indexed websites gains a permanently good ranking, namely when Google determines their relevance and rewards the website in question with a high position. Existing rankings may shift in this case because of Google Everflux.

Even high-quality websites often had trouble keeping their ranking after deep crawls. This is because the fresh crawl indexes websites only partially: they are included in the Google index, but a complete assessment cannot be performed due to the incomplete dataset. Once the Googlebot visits the website again later and indexes it completely, sudden fluctuations can be observed in the SERPs. Of course, other factors such as on-page and off-page optimization, backlinks, and visitor behavior are decisive for the position in the SERPs. Google Everflux simply refers to the short-term variations in the search results, not to the long-term ranking achieved through good content and sound optimization.

Current Positioning with Caffeine and Hadoop

Google changes its core algorithms fairly regularly and integrates the changes into the system. Once an algorithm update is implemented, it affects a portion of search queries, for example, 1 to 3% of them. Such an update changes how data is handled during crawling, indexing, and ranking. The Google Caffeine update was a fundamental change which, according to Google, made the index up to 50% more current. Caffeine addressed the infrastructure that Google uses to build its search results. Since its introduction, the index has been updated constantly and globally (continuous update). Part of Caffeine is incremental search: the system constantly searches the web with various crawlers looking for new content while the index is updated.[2] Fresh and deep crawls are thus already differentiated at the level of the infrastructure.

In this way, Google can use its crawl budget efficiently and, so to speak, keep pace with the ever-growing Internet.[3] The enormous amount of data can be processed thanks to big data technologies, in particular Hadoop and MapReduce. Using content delivery networks, the contents of the index are delivered to users as SERPs. Caffeine was therefore more than just a change in the algorithm: Google remodeled its infrastructure to cope with the future growth of websites and content on the world wide web. In this context, the RankBrain algorithm plays an important role as well, enabling Google to accurately process search queries that have never been entered before and to deliver suitable results.
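The MapReduce pattern mentioned above can be sketched in a few lines. This is a toy illustration of the general map/reduce idea applied to building an inverted index (the corpus, the URLs, and the function names are invented for this example; Google's actual pipeline is far more complex): the map phase emits (term, URL) pairs, and the reduce phase groups them into postings lists.

```python
from collections import defaultdict

# Toy corpus: URL -> page text (hypothetical example data)
pages = {
    "a.example": "google everflux index",
    "b.example": "fresh crawl index",
}

def map_phase(url, text):
    # Emit one (term, url) pair per word on the page
    return [(word, url) for word in text.split()]

def reduce_phase(pairs):
    # Group postings by term -> sorted list of URLs (the inverted index)
    index = defaultdict(set)
    for term, url in pairs:
        index[term].add(url)
    return {term: sorted(urls) for term, urls in index.items()}

pairs = [p for url, text in pages.items() for p in map_phase(url, text)]
index = reduce_phase(pairs)
print(index["index"])  # ['a.example', 'b.example']
```

Because both phases operate on independent key/value pairs, the work can be spread across many machines, which is what makes frameworks like Hadoop suitable for web-scale index construction.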

Relevance to search engine optimization

The initial benefit of Google Everflux for search engine optimization was that new websites were absorbed into the index rapidly, without long wait times. However, search engine optimizers often had to expect a subsequent drop in the SERPs. If the rankings of existing websites deteriorated in favor of new, current content, there was usually no need to worry. In any case, the way Google crawls and builds its index has only limited impact on the various aspects of SEO. These continue to play an important role, and they change because of algorithm updates rather than because of crawling or index updates.

SEOs should still monitor their rankings using appropriate tools and always ensure that their web content is high-quality and up to date. If a ranking remains poor and unchanged for weeks, the causes should be investigated urgently, especially with respect to search engine optimization and technical requirements. Possible causes include a Google penalty, poor website performance, or violations of the webmaster and quality guidelines. If a penalty has been levied, a reconsideration request can be submitted.
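When monitoring rankings, the practical question is how to tell ordinary Everflux-style jitter from a sustained problem. The following is a minimal sketch of that distinction (the function, the threshold, and the position data are hypothetical, not part of any real rank-tracking tool): short-lived dips are ignored, while a ranking that stays poor for several consecutive weeks is flagged for investigation.

```python
def needs_investigation(weekly_positions, threshold=20, weeks=4):
    """Flag a keyword whose ranking has stayed at or beyond `threshold`
    (i.e. position number >= threshold) for `weeks` consecutive weeks.
    Short-lived dips are ignored -- they may be normal fluctuation."""
    if len(weekly_positions) < weeks:
        return False
    return all(pos >= threshold for pos in weekly_positions[-weeks:])

# Ordinary fluctuation: positions bounce around, no action needed
print(needs_investigation([3, 7, 4, 9, 5]))        # False
# Sustained poor ranking: worth checking for penalties or technical issues
print(needs_investigation([4, 35, 41, 38, 44]))    # True
```

The design choice here mirrors the article's advice: react to persistent patterns, not to the week-to-week movement that index updates naturally produce.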

References

  1. Explaining algorithm updates and data refreshes mattcutts.com. Accessed on 01/25/2016
  2. Our new search index: Caffeine googleblog.blogspot.de. Accessed on 01/25/2016
  3. Google’s New Indexing Infrastructure “Caffeine” Now Live searchengineland.com. Accessed on 01/25/2016
