
A New Crawler Is Launched by Google to Improve Googlebot’s Performance

To streamline R&D tasks, reduce the workload on Googlebot, and optimize operations, Google has released GoogleOther, a new web crawler.

Google recently unveiled “GoogleOther,” a new web crawler intended to relieve some of the load on Googlebot, Google’s main search index crawler.

The addition of this new crawler will help Google streamline and optimize its crawling processes in the long run.

Web crawlers, often known as robots or spiders, automatically find and scan websites.

The index for Google Search is created by Googlebot.

GoogleOther is a generic web crawler that various Google product teams will use to fetch publicly accessible content from websites.

Separating Responsibilities Between Googlebot and GoogleOther

The primary objective of the new GoogleOther crawler is to take over Googlebot’s non-essential crawling tasks.

Googlebot is now free to concentrate only on creating the search index that Google Search uses.

In the meantime, GoogleOther will take care of research and development (R&D) crawls and other tasks that are not specifically related to search indexing.

GoogleOther Acquires Googlebot’s Infrastructure

GoogleOther shares the same infrastructure as Googlebot, so it also has the same limitations and features: host load restrictions, robots.txt support (although with a different user-agent token), HTTP protocol version, fetch size, and so on.
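Because GoogleOther obeys robots.txt under its own user-agent token, site owners can give it rules that differ from Googlebot’s. A hypothetical robots.txt sketch (the `/staging/` path is an illustrative example, not a recommendation):

```
# Hypothetical robots.txt: Googlebot may crawl everything,
# while GoogleOther is kept out of a staging area.
User-agent: Googlebot
Disallow:

User-agent: GoogleOther
Disallow: /staging/
```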

Essentially, GoogleOther is just Googlebot using a new name.

Implications for SEOs and Site Owners

Given that GoogleOther shares Googlebot’s infrastructure and constraints, its launch shouldn’t have a significant impact on websites.

However, it represents a significant step forward in Google’s ongoing efforts to enhance and synchronize its web crawling processes.

If you want to keep an eye on GoogleOther, you can monitor it in the following ways (or hire an SEO company to do it for you):

  • Analyze server logs: Examine server logs on a regular basis to spot requests coming from GoogleOther. You may learn from this how frequently it crawls your website and which pages it views.
  • Update robots.txt: If necessary, make sure your robots.txt file has been updated to include specific GoogleOther rules. This will make it easier for you to manage how it accesses and crawls your website.
  • Monitor crawl stats in Google Search Console: Watch for changes in crawl frequency, crawl budget, or the number of indexed pages since the launch of GoogleOther by keeping an eye on the crawl stats report inside Google Search Console.
  • Track website performance: Keep a close eye on the performance indicators of your website, like load speeds, bounce rates, and user interaction, to spot any potential connections to GoogleOther’s crawling efforts. This will enable you to determine whether the new crawler is resulting in any unexpected problems for your website.
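The log-analysis step above can be sketched in Python. This is a minimal example that assumes the common Apache/Nginx “combined” log format (adjust the pattern to your server’s actual format); the sample lines are illustrative, not real GoogleOther traffic:

```python
import re
from collections import Counter

# Regex for the Apache/Nginx "combined" log format -- an assumption;
# adapt it to whatever format your server actually writes.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def googleother_hits(log_lines):
    """Return a Counter of paths requested by the GoogleOther user agent."""
    hits = Counter()
    for line in log_lines:
        m = LOG_RE.match(line)
        if m and "GoogleOther" in m.group("agent"):
            hits[m.group("path")] += 1
    return hits

# Hypothetical sample log lines for illustration only:
sample = [
    '66.249.66.1 - - [01/May/2023:10:00:00 +0000] "GET /blog/post HTTP/1.1" '
    '200 5120 "-" "GoogleOther"',
    '66.249.66.2 - - [01/May/2023:10:00:05 +0000] "GET /about HTTP/1.1" '
    '200 2048 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; '
    '+http://www.google.com/bot.html)"',
]

print(googleother_hits(sample))  # only /blog/post is a GoogleOther request
```

Running this over a full access log tells you how often GoogleOther visits and which pages it fetches, which is exactly the signal the first bullet above asks for.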

Related Links:

https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers

https://www.rankpointer.com/how-to-rank-higher-on-google-using-seo-techniques/
