A complete guide to using Google Search Console

What is Google Search Console, and why is it needed

Google Search Console (GSC) is a free service that provides information to optimize site content, improve the website, and correct technical errors.

Benefits of the Search Console service

Search Console gives an SEO specialist the ability to analyze the state of a website and implement improvements that help promote it. Using the console (formerly known as Google Webmaster Tools), you can find out:

 

  • what affects site traffic;
  • where the site ranks in Google’s search results;
  • which key queries should be used to promote the site;
  • whether there are any errors on the website.

How to set up Google Search Console

The Google Search Console setup process is not complicated. To get started, you need a Google account; if you do not have one, create it.

Adding a site to GSC

  • Sign in to your Google Account.
  • Go to the service website.
  • Choose the property type: a domain property or a URL-prefix property.
  • Click Continue.

 

In the first case, the property covers addresses with any subdomain and either protocol; in the second case, only addresses that begin with the specified prefix, including its protocol, are covered.

Confirmation of domain ownership

If you have chosen a domain property, you need to verify ownership of it through DNS. To do this, enter the domain name in the confirmation window, copy the text record, add it to the DNS configuration of your domain, and then click Confirm. If you have chosen a URL-prefix property, enter the site address and choose one of several verification methods offered by the system:

 

  • using an HTML file — you need to download it and upload it to your site;
  • using an HTML tag — you need to add a meta tag to the code of the site’s home page, inside the head section, before the first body section;
  • through a Google Analytics account — the home page of the site must contain a fragment of the analytics.js or gtag.js code, and the tracking code must be located in the head section of the page;
  • through a Google Tag Manager account — you will need a fragment of the Google Tag Manager container code and permission to publish it;
  • using a DNS record — you need to log in to your domain name provider’s account and add a TXT record to the DNS configuration for the domain.
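
If you choose the HTML tag method, you can quickly confirm that the verification meta tag is actually being served on your home page before clicking Confirm. Below is a minimal sketch in Python, assuming the requests and beautifulsoup4 packages are installed and using example.com as a placeholder for your own domain:

```python
# Check that the google-site-verification meta tag is present on a page.
import requests
from bs4 import BeautifulSoup

def has_verification_tag(url: str) -> bool:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # GSC looks for <meta name="google-site-verification" content="...">
    tag = soup.find("meta", attrs={"name": "google-site-verification"})
    return tag is not None and bool(tag.get("content"))

if __name__ == "__main__":
    print(has_verification_tag("https://example.com/"))  # placeholder URL
```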

Google Search Console Features

Google Search Console provides access to a number of reports and tools for analyzing a website and promoting it on the Internet.

Overview Report

It is a report on all key indicators. It consists of several sections:

 

  • performance;
  • coverage;
  • page quality;
  • improvements.

URL inspection

This tool lets you find out the current indexing status of a page. After checking an address, you will receive a detailed report on its indexing status, with additional information about crawling, how the page was discovered, the canonical page, and other parameters.
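
Inspection can also be automated through the URL Inspection endpoint of the Search Console API. A minimal sketch, assuming the google-api-python-client and google-auth packages, a service-account key file (placeholder path) that has been granted access to the property, and example.com as a stand-in for a verified property:

```python
# Inspect a URL's indexing status via the Search Console API.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder key file (assumption)
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

response = service.urlInspection().index().inspect(body={
    "inspectionUrl": "https://example.com/some-page/",  # page to check
    "siteUrl": "https://example.com/",                  # verified property
}).execute()

status = response["inspectionResult"]["indexStatusResult"]
print(status.get("coverageState"))    # indexing state, as reported by Google
print(status.get("googleCanonical"))  # canonical URL chosen by Google
```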

Performance report

In the Performance section, you will find a graph of clicks and key queries broken down by country, page, and device. It is used to keep track of keyword movements: some queries dropping out of the top while others appear, the rise and fall of particular terms, and so on.

What information can be found in the report

As a result, you can analyze the dynamics of search traffic from Google in terms of impressions and clicks, and find out which devices users view the site from most often, which countries they come from, and which queries and pages are most popular. You can check the site’s performance:

 

  • in search;
  • in Google News;
  • in recommendations (Discover).

 

The most important report is the site’s performance in search. The reports on recommendations and news do not offer data on the average site position or on specific phrases. A disadvantage of Google Search Console is that you can view only one parameter at a time within a particular report, so you may want to use other applications for more convenient data filtering.
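
One common workaround is to pull the raw rows through the Search Analytics endpoint of the API and filter them in your own tooling, since the API returns several dimensions at once. A minimal sketch under the same assumptions as above (placeholder key file and property URL):

```python
# Pull clicks, impressions, and positions by query, page, and device.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder key file (assumption)
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://example.com/",  # placeholder property
    body={
        "startDate": "2024-01-01",   # placeholder date range
        "endDate": "2024-01-28",
        "dimensions": ["query", "page", "device"],  # several at once
        "rowLimit": 1000,
    },
).execute()

for row in response.get("rows", []):
    query, page, device = row["keys"]
    print(query, page, device, row["clicks"], row["impressions"], row["position"])
```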

Coverage Report

This section shows four types of pages:

 

  • Pages with errors — pages the search engine could not index, for example because of a 4xx or 5xx server response or an incorrect redirect. There may be other reasons, such as the page being blocked in robots.txt.
  • Pages with warnings — pages that were indexed despite problems. Most often, indexing is limited by the robots exclusion standard.
  • Error-free pages — successfully indexed pages without errors.
  • Excluded — pages that have not been indexed. The reason for exclusion from the Google index is indicated for each page.

 

Using the information from the Coverage section, you can detect all errors that prevent page indexing, find incorrect redirects on the site, duplicate pages, and incorrect directives in the robots.txt file.

What errors should you pay attention to

  • Excluded by the noindex tag. Because of this directive in the code, the Google robot will not index the page. If you want it to be indexed, the directive must be removed. It is easy to check for it: open the page source in the browser and search for the tag (see the sketch after this list).
  • The page is not indexed due to a request to remove it. The verified site owner needs to find out who sent such a request through the URL removal tool. Such requests are valid for three months, after which Googlebot will return to the page and can index it even without a request.
  • Blocking in the robots.txt file. Such a directive does not allow Googlebot to crawl the page. At the same time, the page can still be indexed in other ways — if the search engine can find the necessary information without downloading it.
  • Error 401: Googlebot cannot access the page because it requires authorization. You must either give Googlebot access or remove the authorization requirement.
  • The page has been crawled but not indexed. In this case, you will have to wait, because indexing simply takes more time.
  • The page was discovered but not indexed. It means the page has not been added to the Google index. Perhaps the site was under heavy load and Google did not have time to analyze everything completely, so indexing was postponed.
  • Page with a canonical tag. The page duplicates another one that the search engine considers canonical, and the canonical address is set correctly. In this case, you don’t need to do anything.
  • A duplicate page for which the user did not select a canonical. You need to specify the canonical version; to understand which page to point to, inspect the non-canonical version with the URL inspection tool.
  • The canonical versions selected by the user and by the search engine do not match, so the page appears as a duplicate. This happens when Googlebot indexes the URL chosen by the system instead of the desired page. The solution is to mark the page as a non-canonical duplicate.
  • The URL was not chosen by the user as canonical, and the page is a duplicate. Google selects one canonical page from the various copies and indexes only it, so when the user requested indexing of this page, the system chose another one instead. It differs from the previous error only in the presence of an indexing request from the user. To prevent such errors, define the canonical clearly.
  • Error 404. This can happen when Google discovers a URL without a crawl request or a sitemap entry, for example through a link on another site. It is also possible that the page was previously available but has been removed, and Googlebot keeps requesting it. We recommend setting up a 301 redirect when deleting a page.
  • Redirect. If you set up the redirect yourself — everything is fine; if not, you need to look for the cause of the error. It may be an incorrect setting in the site’s content management system.
  • Soft 404. An error message is displayed to the user, but the page does not return a 404 response code.
  • Error 403 — the page is blocked because access is forbidden. To solve the problem, either block the page deliberately using noindex or robots.txt, or fix the access error.
  • URL blocked due to a client error (4xx). This happens when the server returns a client error other than those described above.
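
For the noindex case mentioned at the top of this list, the check is easy to script: a page can be excluded either by a robots meta tag or by an X-Robots-Tag HTTP header, so look at both. A minimal sketch (requests and beautifulsoup4 assumed, example.com as a placeholder):

```python
# Detect noindex directives in the meta robots tag and the X-Robots-Tag header.
import requests
from bs4 import BeautifulSoup

def find_noindex(url: str) -> list[str]:
    reasons = []
    resp = requests.get(url, timeout=10)
    # 1) HTTP header, e.g. "X-Robots-Tag: noindex"
    header = resp.headers.get("X-Robots-Tag", "")
    if "noindex" in header.lower():
        reasons.append(f"header: {header}")
    # 2) Meta tag, e.g. <meta name="robots" content="noindex, follow">
    soup = BeautifulSoup(resp.text, "html.parser")
    for tag in soup.find_all("meta", attrs={"name": ["robots", "googlebot"]}):
        if "noindex" in (tag.get("content") or "").lower():
            reasons.append(f"meta: {tag.get('content')}")
    return reasons  # empty list means no noindex directive was found

if __name__ == "__main__":
    print(find_noindex("https://example.com/"))  # placeholder URL
```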

Sitemap files

A sitemap helps the search engine navigate the site, significantly speeding up its indexing. Therefore, creating a sitemap is an important element of site optimization. The Sitemap files section provides information about each added sitemap: the type of file, the time it was submitted, the number of addresses, and the day the robot processed it. Here you can also find out about errors in the sitemap (a sketch of submitting a sitemap through the API follows the list). Here is a list of the errors:

 

  • The URLs are not available. For some reason, the pages are not available to the search engine.
  • Failed to navigate to the web address. This can happen because of multiple redirects during the transition or because the robot cannot crawl relative addresses in the sitemap.
  • URL cannot be used. It means some addresses are located at a higher level than the sitemap file or on a different domain.
  • Compression error. The compressed Sitemap file could not be decompressed.
  • An empty Sitemap. There are no addresses in the sitemap file.
  • The maximum size of the Sitemap.xml file has been exceeded. It should not exceed 50 MB without compression. To solve this problem, split the sitemap into multiple files, specify them in the Sitemap index, and then submit the index.
  • Invalid attribute value. An attribute of one of the XML tags has an empty or invalid value.
  • The date has been entered incorrectly. The format or value of at least one date in the file is wrong; dates must use the W3C date and time format.
  • Invalid tag value. At least one tag in the sitemap file has an empty or invalid value.
  • Invalid URL. An invalid address is specified, possibly containing invalid characters or errors.
  • Incomplete URLs. Some files in the Sitemap index do not have full addresses.
  • Invalid XML: there are too many tags. It means that there are duplicate tags in the sitemap file.
  • Missing XML attribute. It means that at least one of the tags does not have a required attribute.
  • Missing XML tag. At least one of the entries in the Sitemap does not have a required tag. It can also mean that there is no video icon URL.
  • There is no video title.
  • Attached Sitemap Index Files. It means that at least one entry in the Sitemap index file contains its address or the address of another Sitemap index file.
  • Parse error. It means that the search engine has not parsed the XML content of the sitemap file. The most common reason is the presence of unescaped characters in the web address. Therefore, in all XML files, the symbols &, ‘, “, <, > , and some others must be escaped in any data values.
  • Temporary error. The sitemap could not be processed due to a temporary system error. You usually don’t need to send the file again.
  • Too many Sitemap files in the index file. There should be no more than 50,000 of them.
  • Too many addresses in the Sitemap file. Similarly to the previous point, there should be no more than 50,000 addresses.
  • Unsupported file format.
  • Invalid path: missing www prefix. The addresses listed in the sitemap are missing the www prefix that the sitemap file’s own path contains.
  • Incorrect namespace. It means that the namespace in the root file is not set correctly or is missing, the URL is incorrect, or there is a spelling mistake.
  • The sitemap file starts with a space. XML files must begin with an XML declaration that indicates which version of XML is being used.
  • HTTP error [error code]. It occurs when trying to load a sitemap file.
  • Too big video icon. Allowed sizes: up to 160×120 pixels.
  • The addresses of the video and the playback page are the same. The player page URLs and video URLs in the sitemap must not match.
  • The video address points to the playback page. The URL in the <video:content_loc> tag in the Sitemap file for the video points to the page with the player.
  • The sitemap contains addresses that are blocked in the robots.txt file.
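
As promised above, sitemaps can also be submitted and monitored programmatically through the sitemaps endpoint of the Search Console API. A minimal sketch under the same placeholder assumptions as the earlier API examples; note that submitting requires write access, so the read-only scope is not enough:

```python
# Submit a sitemap and read back its processing status via the API.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder key file (assumption)
    scopes=["https://www.googleapis.com/auth/webmasters"],  # write access
)
service = build("searchconsole", "v1", credentials=creds)

site = "https://example.com/"                # placeholder property
sitemap = "https://example.com/sitemap.xml"  # placeholder sitemap URL

# Submit (or resubmit) the sitemap for processing.
service.sitemaps().submit(siteUrl=site, feedpath=sitemap).execute()

# Fetch its status: submission time, error and warning counts.
info = service.sitemaps().get(siteUrl=site, feedpath=sitemap).execute()
print(info.get("lastSubmitted"), info.get("errors"), info.get("warnings"))
```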

Removals feature

This tool allows you to quickly remove unnecessary pages from the index. It is especially useful for removing old 404 pages that may have been in the index for a long time. You can remove the site address, the current page description, or the cached version from the search. To delete an address, create a request in the Temporary removals tab, enter the page address, select Remove only this URL, and confirm these actions.

Page quality

This section analyzes how simple and comprehensible a page is from the search engine’s point of view. It covers mobile usability, Core Web Vitals, security, HTTPS usage, and more.

Core Web Vitals

These indicators help you understand how quickly site pages load and how smoothly users can interact with them.

 

  • LCP (Largest Contentful Paint) — the time it takes to render the largest content element on the first screen of the page.
  • CLS (Cumulative Layout Shift) — a measure of how much the layout (blocks of text and other elements) shifts during loading and subsequent reading of the page.
  • FID (First Input Delay) — the time between the user’s first interaction with the page (a click on a button, link, etc.) and the browser’s response.

 

In the section, you can see effective and ineffective addresses and the number of pages that need improvement.
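
The same field metrics can be queried outside the console, for example through the PageSpeed Insights API, which returns Chrome UX Report data where enough of it has been collected. A minimal sketch (requests assumed; the URL is a placeholder, and for regular use you would add an API key, which is an assumption here):

```python
# Query field data (LCP, CLS, FID) for a URL via the PageSpeed Insights API.
import requests

endpoint = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
resp = requests.get(endpoint, params={"url": "https://example.com/"}, timeout=60)
data = resp.json()

# Field data collected from real users, when available for this URL.
metrics = data.get("loadingExperience", {}).get("metrics", {})
for name in ("LARGEST_CONTENTFUL_PAINT_MS",
             "CUMULATIVE_LAYOUT_SHIFT_SCORE",
             "FIRST_INPUT_DELAY_MS"):
    m = metrics.get(name)
    if m:
        # category is e.g. FAST, AVERAGE, or SLOW
        print(name, m.get("percentile"), m.get("category"))
```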

Mobile Device Compatibility

Here you can see how convenient the site is to browse on smartphones and other mobile devices. Errors are marked in red on the charts, and all existing problems are collected in a separate Help block.

What errors should you pay attention to

  • Unsupported plugins are used. These are plugins that are outdated and incompatible with most mobile browsers. We recommend using modern technologies with wide support.
  • The viewport meta tag is not set. This meta tag tells the browser how page elements should resize to fit different screen sizes. To fix the error and configure the correct display of the site on mobile devices, you need to add the viewport tag properties to the code (see the sketch after this list).
  • The viewport meta tag does not have a device width set. Because of the fixed viewport width specified in the code, the page does not adapt to screens with different diagonals. In this case, use responsive design by letting pages scale to fit the screen size.
  • Content is wider than the screen. The user should not have to scroll horizontally while browsing the site. To resolve this issue, check the scaling of the images and make sure that the CSS width and positioning values are relative.
  • The font is too small. Users have to zoom in, which is inconvenient. Specify font sizes after setting the viewport so that text displays correctly on mobile devices.
  • Interactive elements are too close together. It is difficult for the user to tap one element without hitting another. Think through the optimal sizes of buttons, navigation elements, and the like, and the distance between them, so that users can comfortably interact with the site.
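
For the viewport problems described in this list, a quick automated check is possible: fetch the page and confirm the viewport meta tag exists and uses device-width. A minimal sketch (requests and beautifulsoup4 assumed, example.com as a placeholder):

```python
# Check the viewport meta tag that responsive pages should declare, e.g.
# <meta name="viewport" content="width=device-width, initial-scale=1">
import requests
from bs4 import BeautifulSoup

def check_viewport(url: str) -> str:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("meta", attrs={"name": "viewport"})
    if tag is None:
        return "no viewport meta tag set"
    content = (tag.get("content") or "").lower()
    if "width=device-width" not in content:
        return f"viewport has a fixed width: {content}"
    return f"viewport looks responsive: {content}"

if __name__ == "__main__":
    print(check_viewport("https://example.com/"))  # placeholder URL
```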

Improvement

This section shows what rich-result markup the search engine found on your pages, along with error information, for the following types:

 

  • Breadcrumbs
  • Sitelinks search box
  • Q&A page
  • FAQ
  • Job postings
  • How-to
  • Review snippets
  • Image license
  • Video
  • Logo
  • Products
  • Recipe
  • Dataset
  • Fact check
  • Event
  • Special announcement

Security and manual actions

This section shows the issues that caused the search engine to sanction the website. These can be security problems as well as the actions of users or site administrators. Manual actions include:

 

  • User-generated spam.
  • Spam on a free hosting server.
  • Unnatural links to and from the site.
  • Cloaking or sneaky redirects, including for mobile devices.
  • Cloaked images.
  • Thin content with little or no added value.
  • Keyword stuffing or hidden text.
  • A mismatch between the AMP version of a page and its canonical.
  • Violation of content policies in Google News or Discover.

 

The following security issues are flagged:

 

  • Hacking: malicious software, code, content, or URLs placed on the site.
  • Deceptive pages and embedded deceptive resources.
  • Downloads of unknown or malicious software.
  • Links to malware downloads.
  • Charging money to a mobile phone bill without warning.

 

If you see anything from this list, your site is under Google sanctions for security or policy violations. A complete list of sanctions and the reasons for their application is available in Google Help.

Outdated tools and reports

These are tools from the old version of the console that are not present in the new one:

 

  • robots.txt checker. It verifies that robots.txt works correctly — the file syntax and address blocking.
  • Targeting by country and language. It checks multilingual settings on a resource. A site will rank better if you link it to a specific country.
  • URL parameters. The function is useful for passing special information about addresses; it can prohibit crawling of pages with certain parameters.
  • Messages. All the messages Google has sent to the site’s owners appear here.
  • Connection with Google Analytics and other services. To set up the connection, go to Settings — Connections.
  • Crawl rate. Allows you to reduce the frequency of crawling by search engines to lessen the load on the resource.
  • Data Highlighter (Marker). The tool allows you to pass detailed structured data about the site to Google. It can be used only after the robot has crawled the pages.
  • Web Tools. The advertising experience report, testing tools, abuse and inappropriate-message reports, and more.

Links

The report includes:

 

  • a list of backlinks to the site, including anchors;
  • referring domains;
  • linked pages;
  • list of internal links;
  • the most common link texts.

 

This report is useful for analyzing the site’s link mass without separate SEO services and for finding out which pages to add links to.

Settings

The section consists of several blocks. The first is resource settings. The following data are indicated here:

 

  • verified site owner and other users;
  • what type of rights these users have;
  • resource-related services;
  • site address changes (required when moving to another domain).

 

The second block, Crawling, shows general data on the number of Googlebot requests to the resource over the last 90 days. The third block is the About section: here you can see the type of Googlebot used for indexing and how long the site has been included in the Search Console account. You can also remove the property here.
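
Ownership and permission data can also be read through the API: the sites endpoint lists every property your account can see, along with your permission level on each. A minimal sketch, with the same placeholder key file as in the earlier examples:

```python
# List Search Console properties and the caller's permission level on each.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder key file (assumption)
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

for entry in service.sites().list().execute().get("siteEntry", []):
    # permissionLevel is e.g. siteOwner, siteFullUser, or siteRestrictedUser
    print(entry["siteUrl"], entry["permissionLevel"])
```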

Google Search Console Insights

This tool, currently in beta testing, allows you to evaluate the quality of your content in Google search. The key evaluation parameter is the number of page views over all time. It provides an overview of site statistics with a summary of views and sessions, displays new content and the most popular content from the last 28 days, and indicates the main traffic channels, how users find the site in search and on social networks, and referrals from other sites.

Conclusions and recommendations

Using GSC, you can get acquainted with all the information needed to improve and develop a site. For an SEO specialist, it is an indispensable tool for:

  • tracking search traffic, impressions, clicks, and positions;
  • finding and fixing indexing and crawl errors;
  • checking mobile usability and Core Web Vitals;
  • monitoring security issues and manual actions.

We advise you to check all reports in Google Search Console once every 7-10 days so you can correct errors promptly, and to regularly check your email, where Google sends information about the site’s performance.
