John Mueller of Google recently addressed whether removing pages from a large website can fix the problem of pages that Google has discovered but not yet crawled. In his answer, he offered practical guidance on resolving the issue, along with general recommendations that are particularly relevant given recent Google updates, including changes to Google Search Console.
Mueller emphasised that several factors can contribute to the problem and that resolving it depends on applying the right strategies. Because Google Search and Search Console continue to evolve, staying informed about the latest updates makes it easier to diagnose and act on issues like this one.
Search Console is a free service from Google that communicates with site owners about search-related matters, providing feedback and surfacing issues that need attention.
The indexing status within Search Console plays a crucial role, as it informs publishers about the extent to which their website’s pages are indexed and eligible for ranking in search results.
To access the indexing status of webpages, website owners can refer to the Page Indexing Report available in Search Console.
When a report indicates that a page has been discovered by Google but not indexed, it often signifies the presence of an underlying problem that requires attention and resolution.
Although Google’s official documentation mentions only one reason for Google discovering a page but declining to index it, there are multiple potential factors contributing to this outcome.
Please note that the information provided here should serve as a general overview, and it is recommended to refer to the official Google documentation for more comprehensive and up-to-date details on Search Console and its features.
There is a misconception that removing certain pages from a website will help Google crawl the rest of the site more effectively. This belief stems from the perception that Google has a limited crawl budget, meaning a fixed amount of time and resources that Google allocates to crawling each website.
However, Google has repeatedly stated that there is no such thing as a crawl budget in the way that SEOs (search engine optimisation professionals) perceive it. Instead, Google uses a variety of factors to determine how many pages to crawl from a website, including the website’s server capacity, the quality of the pages, and the relevance of the pages to search queries.
One of the reasons why Google is choosy about how much it crawls is that Google doesn’t have enough capacity to store every single webpage on the internet. This means that Google must prioritise which pages to crawl, and it will often focus on pages that are more likely to be relevant to user search queries.
If you are concerned about Google crawling your website effectively, you should focus on improving the quality of your pages and making sure that they are relevant to user search queries. You should also avoid creating duplicate pages, as this can confuse Google and make it less likely that your pages will be indexed.
If you are still having trouble with Google crawling your website, you can use Google Search Console to troubleshoot the issue. Google Search Console will show you a list of pages that have been crawled by Google, as well as any errors that have been encountered. You can use this information to identify and fix any problems that are preventing Google from crawling your website effectively.
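If you need to check the status of many URLs programmatically, the Search Console URL Inspection API returns the same coverage information shown in the Page Indexing Report. The snippet below is a minimal sketch using the google-api-python-client library; the service-account file, site URL, and page URL are placeholders, and the service account must first be added as a user on the Search Console property.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholder credentials file for a service account that has been
# added as a user on the Search Console property.
SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
service = build("searchconsole", "v1", credentials=creds)

body = {
    "inspectionUrl": "https://www.example.com/some-page/",  # page to inspect
    "siteUrl": "https://www.example.com/",                  # property in Search Console
}
response = service.urlInspection().index().inspect(body=body).execute()

index_status = response["inspectionResult"]["indexStatusResult"]
# coverageState holds strings such as "Discovered - currently not indexed"
# or "Crawled - currently not indexed".
print(index_status.get("coverageState"))
print(index_status.get("lastCrawlTime"))
print(index_status.get("robotsTxtState"))
```

Running this in a loop over the URLs from your sitemap gives a quick picture of which sections of the site are affected, though API quotas mean very large sites may need to sample rather than inspect every page.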
Google’s John Mueller has stated that there are two main reasons why Google might discover a page but decline to index it:
According to Mueller, Google’s ability to crawl and index webpages may be hindered by a website’s capacity to handle increased crawling. As a website grows larger, more bots are required to crawl it. This issue is further compounded by the presence of other legitimate bots from companies like Microsoft and Apple, as well as various other bots, some of which are associated with hacking and data scraping activities.
Consequently, during peak hours, particularly in the evening, a large website may experience a significant strain on its server resources due to the thousands of bots attempting to crawl it. Therefore, one of the initial inquiries made when addressing indexing problems with a publisher is the condition of their server.
In general, websites with millions or hundreds of thousands of pages necessitate a dedicated server or a cloud host. Cloud servers are preferable as they offer scalable resources such as bandwidth, CPU, and RAM. In some cases, additional memory allocation may be required for specific processes, such as increasing the PHP memory limit. This adjustment helps the server handle high traffic and prevents 500 error responses.
To troubleshoot server issues, it is necessary to analyse the server error log.
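A quick place to start is scanning the access log for 5xx responses, and especially 5xx responses served to Googlebot, which suggest the server cannot keep up with crawl demand. The sketch below assumes the common combined log format; the log path is a placeholder, and the dedicated error log (timeouts, stack traces) should still be reviewed separately.

```python
import re
from collections import Counter

# Placeholder path; adjust for your server (Apache or Nginx access log).
LOG_PATH = "/var/log/nginx/access.log"

# Rough pattern for the combined log format:
# IP - - [time] "METHOD /path HTTP/x.x" status bytes "referer" "user-agent"
LINE_RE = re.compile(
    r'"\w+ (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

status_counts = Counter()
googlebot_5xx = Counter()

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE_RE.search(line)
        if not match:
            continue
        status = match.group("status")
        status_counts[status] += 1
        # Server errors returned to Googlebot indicate the host is
        # struggling under crawl load at its current capacity.
        if status.startswith("5") and "Googlebot" in match.group("agent"):
            googlebot_5xx[match.group("path")] += 1

print("Responses by status code:", status_counts.most_common())
print("5xx responses served to Googlebot:", googlebot_5xx.most_common(10))
```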
Insufficient indexing of pages can also be traced to a website's overall quality, which Google evaluates and assigns a score or determination to. That determination directly influences how many of a site's pages Google is willing to index.
John Mueller has mentioned that the quality assessment of a website can be influenced by specific sections or segments within it. In other words, the evaluation of overall site quality takes into account the impact of individual sections on the website as a whole.
During one of his Office Hours videos, John Mueller from Google defined site quality, emphasising that it extends beyond the mere textual content of articles. According to him, site quality encompasses the overall website experience, including elements such as layout, design, presentation of content, image integration, and page speed. These factors collectively contribute to the assessment of a website’s quality.
According to Mueller, the process of determining site quality by Google is a time-consuming endeavour that can stretch over several months. Understanding the context and relevance of a website within the broader internet landscape requires a substantial amount of time for analysis. This evaluation period can typically last anywhere from a couple of months to even longer, exceeding six months.
When it comes to optimisation, taking a holistic approach by focusing on the entire site or specific sections is a broad perspective. However, the key lies in optimising individual pages on a scalable basis, which is particularly important for ecommerce sites with a vast range of products. Here are some factors to consider:
Main Menu Optimisation
Ensure that the main menu is optimised to direct users to the crucial sections of the site that are of interest to most users. The main menu can also include links to the most popular pages, enhancing user navigation and signalling to Google the importance of these pages for indexing.
Linking to Popular Sections and Pages
Prominently linking to popular sections and pages from the homepage allows users to easily access the most relevant content. Additionally, it indicates to Google that these pages hold significance and should be indexed accordingly.
Improving Thin Content Pages
Thin content refers to pages that lack substantial and valuable information or are mainly duplications of other pages (templated content). It is insufficient to merely fill these pages with words; the content must be meaningful and relevant to site visitors. For product pages, consider including measurements, weight, available colours, product recommendations, compatible brands, links to manuals and FAQs, ratings, and other pertinent information that users will find valuable.
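One way to surface thin-content candidates at scale is to fetch the URLs listed in the XML sitemap and flag pages with very little visible text. The sketch below uses the requests and BeautifulSoup libraries; the sitemap URL and the 300-word threshold are assumptions to tune for your site, and a raw word count is only a rough proxy, not how Google assesses quality.

```python
import xml.etree.ElementTree as ET

import requests
from bs4 import BeautifulSoup  # pip install requests beautifulsoup4

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder
WORD_THRESHOLD = 300  # rough cut-off; tune per content type

def sitemap_urls(sitemap_url):
    """Return the <loc> entries from a simple (non-index) XML sitemap."""
    xml = requests.get(sitemap_url, timeout=30).text
    root = ET.fromstring(xml)
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    return [loc.text.strip() for loc in root.findall(".//sm:loc", ns)]

def visible_word_count(url):
    """Very rough count of a page's visible words."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()  # drop boilerplate that isn't body content
    return len(soup.get_text(separator=" ").split())

for page_url in sitemap_urls(SITEMAP_URL):
    words = visible_word_count(page_url)
    if words < WORD_THRESHOLD:
        print(f"{words:>5} words  {page_url}")
```

Pages flagged this way are candidates for review rather than automatic removal; adding the kind of product detail described above is usually the better fix.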
To further assist in optimising your site and resolving indexing issues, refer to the Google Search Console guide and leverage useful tips provided by Google Search Console. Additionally, you can follow guidelines on fixing noindex errors, ensuring that your pages are appropriately indexed.
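Before anything else, it is worth confirming that an affected page is not excluded by an explicit noindex directive. The sketch below checks the two usual places, the X-Robots-Tag response header and the robots meta tag; the page URL is a placeholder to replace with a URL flagged in the Page Indexing Report.

```python
import requests
from bs4 import BeautifulSoup  # pip install requests beautifulsoup4

def noindex_signals(url):
    """Return any noindex directives found in a page's headers or HTML."""
    response = requests.get(url, timeout=30)
    findings = []

    # 1. The X-Robots-Tag HTTP header can carry a noindex directive.
    header = response.headers.get("X-Robots-Tag", "")
    if "noindex" in header.lower():
        findings.append(f"X-Robots-Tag header: {header}")

    # 2. A robots (or googlebot-specific) meta tag in the HTML can too.
    soup = BeautifulSoup(response.text, "html.parser")
    for meta in soup.find_all("meta", attrs={"name": ["robots", "googlebot"]}):
        content = (meta.get("content") or "").lower()
        if "noindex" in content:
            findings.append(f'meta name="{meta.get("name")}" content="{content}"')

    return findings

# Placeholder URL; replace with a page flagged in Search Console.
for finding in noindex_signals("https://www.example.com/some-page/") or ["No noindex directives found."]:
    print(finding)
```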
Learn More: 11 Issues Affecting Website Crawlability and Their Solutions
How to Fix Non-Indexed Errors
While it may appear sufficient to place products on shelves in a physical store, the truth is that knowledgeable salespeople often play a crucial role in driving sales. Similarly, a webpage can fulfil the role of a knowledgeable salesperson by effectively communicating to Google the reasons for indexing the page and assisting customers in making product choices.
In the digital realm, there are instances where webpages may be discovered by search engine crawlers but are not indexed. This situation of being crawled but not indexed can be improved by optimising the webpage's content and ensuring its relevance to users and search engines.