Noindex Tags: What You Need to Know for Ecommerce SEO
Ecommerce brands have the unique SEO challenge of managing thousands of product pages and category pages. When dealing with such a large volume of webpages, it becomes vitally important to use every tool at your disposal to help your most valuable pages rank and be seen by your customers.
Canonical links, robots.txt files and XML sitemaps are all tools in our ecommerce SEO 101 checklist that can help accomplish this task. Noindex tags are yet another powerful ecommerce SEO tool you can use to optimize your site and boost your rankings, organic traffic and sales.
In simplest terms, noindex meta tags tell search engines how to crawl and index your site. Specifically, they are used to communicate which pages should and should not be indexed.
This is important because every search engine assigns your ecommerce site a crawl budget, i.e. the number of URLs its bots will crawl within a given timeframe before moving on. By telling bots to ignore certain low-value pages, you free up that crawl budget for high-value, revenue-generating pages instead.
However, noindex tags can easily be misused. This guide breaks down everything you need to know about using noindex tags for ecommerce SEO, including best practices and common mistakes to avoid.
What Are Search Meta Tags?
Meta tags are pieces of data that provide important information about a webpage to search engines. This information, known as metadata, is not displayed on the page; rather, it is added to the page’s source code. Metadata is used by search engines to make decisions about how a page should rank, or what content should be displayed in the search results.
There are many different types of meta tags, each with their own function and purpose. The title tag, meta description and image alt tag are among the most well-known meta tags used by ecommerce brands and digital marketers. The robots meta tag is yet another type of meta tag, which we will explore in more detail in the following sections.
What Is the Robots Meta Tag?
The robots meta tag is a directive that tells search engine robots which webpages to crawl and index, and which pages to ignore. The primary directives associated with robots meta tags are index or noindex, and follow or nofollow.
By default, if you do not specify a robots meta tag for your webpage, search engine crawlers will read that as index, follow. This enables search engines to add the page to their index so it can be discovered by users, and to follow links on the page to discover other pages.
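In other words, omitting the tag is equivalent to explicitly declaring the defaults in your page's source code. The following tag is redundant and normally left out, but it shows what crawlers assume when no robots meta tag is present:

<meta name="robots" content="index, follow" />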
If you wish to instruct search engines to do otherwise, a web developer will need to place the relevant robots meta tags in the source code of your page. For instance, let’s say you want bots to follow links on the page, but not index it. The robots tag for this directive would read:
<meta name="robots" content="noindex" />
or
<meta name="robots" content="noindex, follow" />
Again, because index, follow is the default, the follow directive is implied whether or not you include it; search engine robots will read both of the above meta tags the same way. Noindex (or noindex, follow) is the most commonly used form of the tag.
If you would like bots to both deindex the page from search and ignore all of the links on the page, the following robots tag applies:
<meta name="robots" content="noindex, nofollow" />
In the next section, we will provide specific examples of how noindex tags factor into ecommerce SEO.
How Do I Use Noindex Meta Tags on My Ecommerce Site?
Noindex tags are one of the few surefire ways to guarantee that search engines will ignore live pages on your site and not display them on the SERPs.
Why might an ecommerce brand want to deindex their webpages? Below are some scenarios in which the noindex meta tag might be beneficial for your site’s SEO:
Your page has duplicate content (for example, a Google Ads landing page that features the same content as one of your category pages; filtered category pages that are helpful to the user but not worth your site’s crawl budget; or a product page that is nearly identical to another, such as the same product in a different color or size).
You are experimenting with new category pages, product pages or blog content and want to see how they perform with users, but not with search.
You have a limited-stock product you don’t want customers to find through search.
You are running promotional campaigns on your site that have little to no search value.
You want to clean up your site for search engines, either by telling them to ignore non-essential pages like “checkout” and “account,” or by deindexing duplicate pages.
In all of the above scenarios, noindex meta tags are used to instruct search engine bots to ignore certain low-value pages, freeing up your crawl budget for more important pages.
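To make this concrete, here is how the tag might appear in the source code of a checkout page, which typically has no search value. This is a simplified, hypothetical example; the title and other head elements are illustrative only:

<head>
  <title>Checkout</title>
  <meta name="robots" content="noindex, nofollow" />
</head>

In this case, nofollow is included as well, since the links on a checkout page (payment steps, shipping forms) offer nothing for crawlers to discover. For a page whose links still point to valuable content, noindex alone would be the safer choice.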
Noindex Tags: Warnings & Things To Know
Because noindex meta tags are a guaranteed way to keep a page out of search results, you do need to be careful when using them. Accidentally adding this directive to an important page you want crawled and indexed could be disastrous for your ecommerce SEO.
Pages with noindex tags should be removed from your XML sitemap if possible. Leaving too many deindexed pages in the sitemap can confuse the search engines as to which pages are actually important to crawl, and which should be ignored.
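As an illustration, a cleaned-up XML sitemap would list only the URLs you want indexed. The domain and paths below are hypothetical:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example-store.com/category/shoes</loc>
  </url>
  <url>
    <loc>https://www.example-store.com/products/blue-sneaker</loc>
  </url>
  <!-- noindex-tagged pages such as /checkout are omitted entirely -->
</urlset>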
The same goes for your top navigation; as an SEO best practice, noindex tagged pages should be removed to avoid causing confusion for search crawlers.
Final Thoughts
When it comes to ecommerce SEO, robots tags are one of the most important tools in your toolbox. Yet while a properly applied noindex tag can free up your crawl budget and help your site rank higher, an incorrectly applied meta tag can wreak havoc on your organic traffic.
As a best practice, we always recommend hiring a qualified technical SEO expert to evaluate your site before implementing noindex meta tags.
Need help evaluating if noindex tags are the right ecommerce SEO solution for your site? Contact Whitecap SEO for more information and a free quote.