Low-value or low-demand pages

The Yandex search database contains data about a huge number of pages. A user's search query may return a large number of results that vary in how relevant they are.

The algorithm may decide not to include a page in search results if demand for this page is likely to be low. For example, this can happen if the page is a duplicate of pages already known to the robot, if there's no content on the page, or if its content doesn't completely match user queries. The algorithm automatically checks the pages on a regular basis, so its decision to include or not include a certain page in search may change over time.

To see the pages excluded from search results, open Yandex.Webmaster, go to Indexing → Pages in search (Excluded pages), and look for the pages with the "Low-value or low-demand page" status.

Tip. If you own an online store, you can use Turbo pages and connect to Yandex.Market using the YML feed. This will help Yandex get more detailed information about website pages and may improve their representation in search results.

Why are pages considered low-value or low-demand?

When selecting pages, the algorithm takes many factors into account. Based on them, the pages excluded from search results fall into two types: low-value and low-demand.

A page is considered low-value if it duplicates another page or contains no content visible to the robot. A page is considered low-demand if its content is unlikely to match user queries, so demand for it is expected to be low.

How to fix
  • Check the page contents and its availability to the robot:
    • Make sure that the page headings (title, h1, h2, etc.) are correct and describe its contents well.
    • Check whether important content is provided only as an image.
    • Make sure that JS scripts aren't used to display important content. Check the server response to see the HTML code of the page as the robot receives it (a check script is sketched after this list).
    • Make sure that an iframe isn't used to display content.
  • If a page has no value, it's better to hide it from indexing:

    • If the page duplicates the content of other site pages, use the rel="canonical" attribute to point to the original page, or list the insignificant GET parameters in the Clean-param directive in robots.txt. You can also keep the page out of the search by setting up an HTTP 301 redirect, disallowing the page in the robots.txt file, or using the noindex meta tag or HTTP header.
    • If the page is technical and has no useful content, prohibit its indexing using the Disallow directive in robots.txt, or the noindex meta tag or HTTP header.
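
If you want to see what the robot actually receives from a page, a script along the lines of the sketch below can help. This is only an illustration, not an official Yandex tool: the URL and the User-Agent string are placeholder assumptions. It prints the HTTP status code, any noindex signals (the robots meta tag and the X-Robots-Tag header), and the rel="canonical" target, so you can compare them with what you expect.

```python
# Minimal sketch: fetch a page roughly the way a crawler would and report the
# signals discussed above. The URL and User-Agent values are placeholders.
from html.parser import HTMLParser
from urllib.request import Request, urlopen

URL = "https://example.com/some-page/"                   # hypothetical page
USER_AGENT = "Mozilla/5.0 (compatible; YandexBot/3.0)"   # assumed robot UA string


class HeadSignalParser(HTMLParser):
    """Collects <meta name="robots"> values and the rel="canonical" href."""

    def __init__(self):
        super().__init__()
        self.robots_meta = []
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = {name: (value or "") for name, value in attrs}
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.robots_meta.append(attrs.get("content", ""))
        if tag == "link" and "canonical" in attrs.get("rel", "").lower():
            self.canonical = attrs.get("href")


def check_page(url: str) -> None:
    request = Request(url, headers={"User-Agent": USER_AGENT})
    with urlopen(request) as response:                   # raises HTTPError for 4xx/5xx
        status = response.status
        x_robots = response.headers.get("X-Robots-Tag")
        html = response.read().decode("utf-8", errors="replace")

    parser = HeadSignalParser()
    parser.feed(html)

    print(f"HTTP status:  {status}")
    print(f"X-Robots-Tag: {x_robots}")
    print(f"robots meta:  {parser.robots_meta}")
    print(f"canonical:    {parser.canonical}")


if __name__ == "__main__":
    check_page(URL)
```

If the output shows an empty body, an unexpected noindex, or a canonical link pointing to another page, that helps explain why the page may be treated as low-value.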

The algorithm doesn't impose any restrictions on the site in search results. If a previously excluded page is updated and becomes eligible to appear in search results, the algorithm checks it again.

Note. Yandex doesn't have quotas for the number of pages to be included in the index. All pages that the algorithm considers useful to users are indexed regardless of how many there are.

See our recommendations:

What answers does your site give?

Presenting information on a site

FAQ

Pages sometimes appear in and disappear from search results

Our algorithm checks all pages regularly, almost every day. Search results change constantly, so the relevance of a page compared to other results may change even if its content stays the same. Because of this, the algorithm may decide to exclude pages from the search or return them.

Pages are configured to return the 403 or 404 HTTP response code, or the noindex element is used, but the links are excluded as low-demand

The algorithm doesn't re-index pages; it checks the contents of the pages that are currently included in the database. If the pages were previously available, responded with the 200 OK status code, and were indexed, the algorithm can continue to take them into account until the robot revisits these pages and registers the change in the response code.

If you want such pages to be deleted faster, prohibit their indexing in the site's robots.txt file. After that, the links will automatically disappear from the robot's database within two weeks.
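
Before waiting for the links to drop out of the robot's database, you can double-check that your robots.txt rules really cover the URLs in question. The sketch below uses Python's standard urllib.robotparser module; the site address, paths, and user-agent token are placeholder assumptions, and the Yandex-specific Clean-param directive is not evaluated by this parser.

```python
# Minimal sketch: verify that the URLs you expect to disappear are actually
# disallowed by robots.txt. All addresses below are placeholders.
from urllib.robotparser import RobotFileParser

SITE = "https://example.com"        # hypothetical site
ROBOT_UA = "Yandex"                 # user-agent token used in the robots.txt rules
urls_to_remove = [
    f"{SITE}/old-landing/",
    f"{SITE}/internal/search?query=test",
]

parser = RobotFileParser(f"{SITE}/robots.txt")
parser.read()                       # downloads and parses the live robots.txt

for url in urls_to_remove:
    if parser.can_fetch(ROBOT_UA, url):
        print(f"{url}: still allowed, add a Disallow rule")
    else:
        print(f"{url}: disallowed")
```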

If you can't do this, see the recommendations in Indexing.

There are similar pages on the site, but one page is included in search results and the other one is not

When checking pages, the algorithm evaluates a very large number of ranking and indexing factors, so its decisions for different pages, even pages with very similar content, may differ. Similar pages may also answer the same user query; in that case, the algorithm includes only the page it considers the most relevant in the search results.

Our site contains pages that shouldn't be indexed, but they are excluded as low-demand pages

The number of pages excluded this way doesn't negatively affect the site's ranking. However, pages excluded by the algorithm as low-demand remain available to the robot and can still take part in the search, so the algorithm may include them in search results later. If you're sure that such pages aren't needed in the search, it's better to explicitly prohibit indexing them.

Why were duplicates excluded as low-demand pages?

Page content may differ slightly or change dynamically, so such links can't be formally treated as duplicates. However, because their content is similar, the pages may compete with each other in search, which makes one of them less in demand.

Pages with different content can also be considered duplicates if they responded to the robot with an error message (for example, because of a stub page on the site). Check how the pages respond now. If the pages return different content, send them for re-indexing so that they can get back into the search results faster.
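
As an illustration of such a check (the URLs below are placeholder assumptions), the sketch compares the current HTTP status code and a hash of the response body for pages that were treated as duplicates:

```python
# Minimal sketch: compare how several pages respond right now. Identical body
# hashes suggest the pages still serve the same content (for example, a stub);
# different hashes with a 200 status suggest they can be sent for re-indexing.
import hashlib
from urllib.request import Request, urlopen

urls = [
    "https://example.com/catalog/item-1/",   # hypothetical pages to compare
    "https://example.com/catalog/item-2/",
]

for url in urls:
    request = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urlopen(request) as response:
            status = response.status
            body_hash = hashlib.sha256(response.read()).hexdigest()[:16]
    except OSError as error:                 # HTTPError/URLError for error responses
        status, body_hash = f"error ({error})", "-"
    print(f"{url}: status={status}, body hash={body_hash}")
```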

To prevent pages from being excluded from the search if the site is temporarily unavailable, configure the 503 HTTP response code.
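
How exactly the 503 code is returned depends on your server or application. Purely as an illustration, here is a minimal Python WSGI sketch that answers every request with 503 and a Retry-After header while a hypothetical maintenance flag is set:

```python
# Minimal sketch: respond with 503 + Retry-After during planned downtime so the
# robot treats the outage as temporary. The maintenance flag is an assumption;
# in practice this is usually configured at the web server level.
from wsgiref.simple_server import make_server

MAINTENANCE = True   # hypothetical flag: switch off once the site is back


def app(environ, start_response):
    if MAINTENANCE:
        start_response("503 Service Unavailable", [
            ("Content-Type", "text/plain; charset=utf-8"),
            ("Retry-After", "3600"),         # hint: come back in about an hour
        ])
        return [b"The site is temporarily unavailable. Please try again later."]
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [b"Normal page content goes here."]


if __name__ == "__main__":
    with make_server("", 8000, app) as server:
        server.serve_forever()
```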