The Google search team answers questions like 'How can I get featured on Discover?' and 'Is it OK to split up long articles?'

The Q&A session held by the Google search team at the November 2022 "Google SEO Office Hours" has been published. Gary Illyes and others from the Google search team answer questions from website operators about Google's search systems.

November Google SEO Office Hours | Google Search Central Blog | Google Developers


Is it possible to use multiple comma-separated values in one schema field? For example, 'GTIN equals value1, value2'.

Check the documentation for specific features. Guidance may vary by feature. But in general, it's good markup practice to specify one value per field. A GTIN is a unique product identifier, so there should be one value in this case. When specifying a GTIN and ISBN, use the GTIN property and then the ISBN property so you know which value applies to which property.
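As a sketch of the "one value per field" practice, the snippet below builds a hypothetical Product node in which the GTIN and ISBN each get their own single-valued property, rather than a comma-separated list in one field (the product name and identifier values are made up):

```python
import json

# Hypothetical Product structured data: one value per property.
# gtin13 and isbn are real schema.org properties; the values are placeholders.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Book",           # placeholder name
    "gtin13": "9780000000000",        # a single value in the GTIN property
    "isbn": "978-0-00-000000-0",      # a single value in the ISBN property
}

# This JSON-LD would be embedded in the page inside
# <script type="application/ld+json">...</script>.
print(json.dumps(product, indent=2))
```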

Google Search Console's disavow tool is not currently available for domain properties. How should I use it?

If you have verified at the domain level, you can verify a prefix-level property without needing additional tokens, so verify that host. Also, remember that randomly disavowing links that look odd or have been flagged by some tool is a waste of time; it won't change anything. Use the disavow tool only if you paid for links and can't get them removed afterwards.

How does the 'Helpful Content Update' affect sites that accept guest posts for financial gain?

Our systems can identify sites with low-quality content or content created only for search engines. Sites that accept guest posts for monetary gain without carefully vetting the content and links risk being demoted in search results, not only by the Helpful Content Update but also by other systems we already have in place.

What should I do if Google doesn't detect canonical tags correctly?

Canonicalization is based on more than just the rel="canonical" element. When Google finds remarkably similar URLs, it tries to pick the one that best represents the content, considering not only rel="canonical" but also redirects, sitemaps, internal links, external links, and so on. If you have a strong preference for which URL should be used, make sure all of these signals agree. Remember, canonicalization is mostly about 'which URL do you want shown?'; it does not affect how the content ranks.

If the content of the site is short, is Google more likely to crawl and index the page?

That's an interesting question. Content length does not affect how often a page is crawled or whether it is indexed, nor does it contribute to the crawl rate of a URL pattern. Niche content can be indexed too, and such pages are not penalized. In general, though, content that is popular on the internet, such as content many people link to, is easier to crawl and index.

Is the dynamic sorting of the list causing the product images not to be indexed?

Dynamic sorting of the list is unlikely to be the reason the product images are not indexed. Product images are referenced from the product description page so that you can see which product the image is for. Optionally, create a sitemap file or provide a Google Merchant Center feed so Google can find all of your product pages independently of your listing page.

Is there a typical timeframe for site migrations? We are currently migrating a large site to a new domain. After four months, there is no sign that the new domain will rank where the old domain did in search results. What should I do?

It's perfectly normal to see ranking fluctuations with such a big change, and there is no set timeframe for when things settle down. If things are still moving, they will continue to fluctuate. It's hard to say what to do next without looking at the site itself. For site-specific help, please post on the forums so that we can understand your particular situation and give more specific advice.


HTTP/3 will probably improve performance, even indirectly, so could it improve SEO?

Google currently does not use HTTP/3 as a ranking factor, and as far as I know, we don't use it for crawling either. In terms of performance, I don't think the gains users get from HTTP/3 will be significant enough to impact Core Web Vitals. Having a fast server is always a good idea, but I doubt HTTP/3 has any direct relevance to SEO. Similarly, it would be hard to see a direct link between using faster RAM in your server and SEO.

Why does Google keep using backlinks as a ranking factor? Link building campaigns should be prohibited.

First, backlinks are a far less influential signal than they were many years ago when Google Search began. Google has hundreds of robust ranking signals that help rank the most relevant and useful results for every query. As for full-scale link building campaigns, these are essentially link spam under Google's spam policies. We have many algorithms that detect and neutralize unnatural links at scale. If you build spammy links, they are very likely neutralized as soon as our systems discover them, so spammers and SEOs who spend money on links are simply wasting their money.

Is it a problem if most of the anchor links are the same?

No, it's not a problem; it's quite normal. It's normal for products within an e-commerce site to always be linked with the same URL. From an SEO point of view, there is nothing to worry about.

If you add a schema for your website, do you usually add a schema for your software application as well?

It depends. If your site is about software applications, you can certainly add structured data for software applications as well. Just nest everything so that there is one website node on the home page instead of multiple website nodes. That's the key point in this matter.
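A minimal sketch of that nesting, assuming hypothetical names and URLs: the home page carries a single WebSite node, with the SoftwareApplication nested inside it (here via the `about` property, one plausible choice) instead of being emitted as a second top-level node:

```python
import json

# Hypothetical home-page JSON-LD: one WebSite node, with the
# SoftwareApplication nested inside it rather than listed separately.
# All names and URLs are made up for illustration.
home_page_jsonld = {
    "@context": "https://schema.org",
    "@type": "WebSite",
    "name": "Example App Site",
    "url": "https://www.example.com/",
    "about": {
        "@type": "SoftwareApplication",
        "name": "ExampleApp",
        "operatingSystem": "Android",
        "applicationCategory": "UtilitiesApplication",
    },
}

print(json.dumps(home_page_jsonld, indent=2))
```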

Will too many noindex pages cause problems during discovery and indexing?

noindex is a very powerful tool that allows you, the site owner, to keep your content out of the index. Because of this, it won't have any unintended effects on you when it comes to crawling and indexing. For example, having lots of noindexed pages won't affect how Google crawls and indexes your site.

If the URL and page do not use the same language, will it affect my ranking?

From an SEO point of view, it doesn't hurt if the URL and page content are in different languages. Users, on the other hand, may care, especially if the URL is shared with others.

How should content creators respond to sites that use AI to plagiarize or alter content and display it high in search results?

Scraping content, even with minor modifications, is against our spam policy. We have many algorithms that pursue such behavior and demote sites that are scraping content from other sites. If you see a site repeatedly scraping content appearing high in search results, please feel free to report it via the spam reporting form.

Is it true that Google rotates indexed pages? Because the site I work on has page A indexed Monday through Thursday and not indexed Friday through Sunday.

No, not at all. We are not rotating the index based on the day of the week.

Should we look at the ratio of pages that are indexed versus those that are not? Should we be aware of potential wasted crawl budget on unindexed pages?

No, there is no magic ratio to watch out for. Also, unless you have a huge site of 1 million pages or more, you probably don't need to worry about the so-called 'crawl budget' of your website. Removing unnecessary internal links is nice, but for small and medium-sized sites, it's more about site quality than SEO.

How do I get my content shown in 'Discover'?

You don't need to take any action to get your content ready for Discover; Google handles it automatically. Google uses different criteria than for search results to decide whether to show content in Discover, so getting traffic from Search doesn't guarantee you'll get traffic from Discover.

Many SEOs are frustrated by Google Search Console flagging millions of URLs as noindex excluded. These are all pointless internal search pages linked from spam sites. Is this a crawl budget issue?

noindex exists to keep content out of the index, and as mentioned above, it has no unintended side effects. If you don't want a particular page or URL to be indexed by Google, keep using noindex, and don't worry about crawl budget.

If I break an article covering a long topic into smaller articles and link them together, does that count as thin content?

It's hard to say without seeing the content, but word count alone doesn't determine whether content is thin. A thorough article that delves into a topic in depth is good, and so is breaking it down into easy-to-understand sub-topics; it depends on the topic and the page, and you know your readers best. It's all about what works best for your users, and whether each page delivers enough value, whatever the topic.

According to the HTTPS Reports help center documentation, a high number of 404 pages can cause Google to stop crawling and processing URLs. How many 404 pages do you mean? Also, would having a lot of 404 links in the website affect this?

Hmm, this is a tricky one; thank you for asking. The documentation originally stated that HTTP 404 status codes would stop crawling at the site level, but that was a typo that has since been corrected: it should have said HTTPS certificate errors, not 404 errors. It doesn't matter how many 404 errors your site has; 404 errors do not affect crawling of the website as a whole. 404 pages are a very healthy part of the internet.

What is the current state of Key Moments video markup? I feel like this snippet only works for YouTube videos.

Key Moments markup is live and used by various video providers. It's not YouTube specific. If you feel that our online documentation isn't enough, try participating in our public forums.

My new website 'Weird All' is not showing up in searches even when I enter the full site name; only the singer Weird Al is displayed. What should I do?

Most people who type 'weird all' are probably trying to reach Weird Al, and it's really hard to differentiate between the two. Even if it's not a typo, our systems treat it as one and try to guide people to what they are probably looking for. For this kind of SEO, it's important to choose a site name that doesn't look like a typo of a well-known name.

Is it possible to get the FAQ snippet with plain HTML alone, as simple as possible, with no schema markup?

If you're talking about FAQ rich results, you currently need FAQ schema markup. Some features can automatically pick up content published on web pages, but you should check the documentation for each feature to see whether that's possible. For FAQ rich results, markup is absolutely necessary.
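For illustration, a minimal FAQPage structure could look like the following (the question and answer text are made up); the resulting JSON-LD would be embedded in the page inside a script tag of type application/ld+json:

```python
import json

# Hypothetical FAQPage JSON-LD with one question/answer pair.
# FAQPage, Question, and Answer are real schema.org types; the text is made up.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Do I need schema markup for FAQ rich results?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes, FAQ rich results currently require FAQPage markup.",
            },
        }
    ],
}

print(json.dumps(faq, indent=2))
```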

Does self-referencing canonical help with deduplication?

In my view, they do nothing on their own. However, pages are sometimes served at other URLs, for example with UTM tracking parameters appended. In those cases the canonical is no longer self-referential, and that's where it becomes practically helpful.

It seems that search engines are promoting strong sites more and more and lowering weak sites more and more. Don't you think there should be some degree of randomness?

Our primary goal is to provide the most useful results to people searching.

Why doesn't Google take measures against copy sites?

Thank you for your report. We are aware of such attempts and are investigating. In general, sites with spammy scraped content violate Google's spam policies, and Google's algorithms successfully demote them in search results.

Will comment pages that are internally linked with 'rel='ugc'' be excluded from indexing?

No, not at all. Pages are indexed for a variety of reasons. rel='ugc' is treated differently from, for example, a rel='nofollow' link, but it does not harm the linked page. If you don't want a page to be indexed, be sure to use the noindex robots meta tag instead.

How does Google evaluate product reviews for digital products that don't have affiliate links?

Our systems focus on physical products available for purchase, but they may also evaluate content about digital products. Providing an affiliate link is not mandatory when reviewing a product, but including links to other resources your readers will find useful is a best practice.

My site has over 10,000 pages. Writing meta tags takes a long time. Are there any shortcuts?

The meta description documentation has guidance on this subject. Especially for large database-driven sites like product aggregators, it might be a good idea to generate meta descriptions programmatically. But make sure what is produced is of good quality. The description should be unique and specific, relevant to each page. Don't use the same meta description over and over again.
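A minimal sketch of what programmatic generation could look like, assuming a hypothetical product database; the records, fields, and template are made up, and the point is that each description is unique and built from page-specific data:

```python
# Hypothetical product records from a database-driven site.
products = [
    {"name": "Blue Widget", "category": "Widgets", "price": "9.99"},
    {"name": "Red Gadget", "category": "Gadgets", "price": "14.50"},
]

def meta_description(product: dict) -> str:
    # Combine page-specific fields so no two pages share a description.
    return (f"{product['name']}: a {product['category'].lower()} item "
            f"from ${product['price']}. See photos, specs, and availability.")

descriptions = [meta_description(p) for p in products]

# Each page gets its own unique, relevant description tag.
for d in descriptions:
    print(f'<meta name="description" content="{d}">')
```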

I have 10,000 pages of thin content on my site. How can I remove it?

Simply delete the pages; you don't need to do anything else. But removing thin pages doesn't automatically make your site more valuable. Whether pages are thin or deleted, what matters is that the remaining pages provide value.

I'm new to SEO, but I see many websites and videos recommending buying backlinks. Are your backlinks strong or should you focus on maximizing the quality of your site?

There are always people looking for shortcuts and tricks to manipulate search rankings, or spending money to make their site look important to search engines. Link spam via backlinks is one example of these tricks. But compared to 20 years ago, such links no longer confer an advantage. We also run many algorithms to defeat link spam, so you probably shouldn't waste your money on it. That money is better spent creating a great website with a great user experience and useful content.

Do I have to have a cached copy of the page to show up in search results?

No, you don't need a cached copy of your pages. The caching system is somewhat independent of search indexing and ranking. A page not having a cached copy is not a signal of page quality.

Why is Google Search Console reporting an unknown internal URL as the referring URL for 400 series errors?

Google does not index every page it crawls, so the URL may be unknown: the crawler may have come across the page but not indexed it, in which case it is reported as unknown.

Should I mark automatically created pages as nofollow on my WordPress website or blog, such as 'abc.com/page/one' or 'page/two'?

If you want to restrict crawling or indexing of specific pages on your site, using robots.txt disallow rules is better than nofollowing URLs pointing to those pages. It's a lot less work and probably more robust that way.
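As a sketch, a disallow rule like the one the question describes can be checked locally with Python's standard-library robots.txt parser (the /page/ path mirrors the question; the user agent and URLs are illustrative):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt blocking the auto-generated /page/ URLs.
rules = """\
User-agent: *
Disallow: /page/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Blocked and allowed paths under the rule above.
print(parser.can_fetch("Googlebot", "https://example.com/page/one"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))   # True
```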

How do I understand Search Console performance reports?

We blog about performance data and more. There is a difference between looking at the data by query and by page, and the per-query data has some degree of privacy filtering applied. In many cases the trends are the same, so the details matter less, but we have a lot of documentation for those who want to know more.

How can I prevent paid content from appearing in search results and Discover for non-paying users? I've properly implemented the markup for paid content, but I'm concerned that some users aren't satisfied.

That's the method I'd recommend: if you don't want your paywalled content to appear in search, you can opt the page out of indexing.

Will my spam score affect my site's ranking?

Google does not use third-party SEO tools to score individual pages. So if a tool somewhere flags a website's spam score, keep in mind that Google won't use it. That doesn't mean the tool isn't useful, though: try to understand what the tool is telling you, and if there is something you can act on, do it.

If a product is sold out and will never be restocked, what should I do with that product page? Should I remove the product page or should I redirect to a specific page?

From a search perspective, deleting the page is fine. From a site usability point of view, it may be better to keep the page up for a while or redirect it, in case the old page is referenced by a third-party blog or bookmarked by a customer.

Will an hreflang annotation still be used even if there is no return tag?

All valid hreflang annotations are taken into account. A broken annotation with no return link from the individual page removes just that connection. So if a page has three valid hreflang annotations and one broken one, only the broken annotation is ignored; the other annotations specified on that page are still used. Also, hreflang is a per-page annotation, so if some annotations across your website work and some don't, we simply take into account what works and ignore what doesn't. That said, if you find broken annotations like this while testing your site, my recommendation is still to fix them.

How should a website implement hreflang when you have no control over a branded site in many countries?

Great question. hreflang is an important and complicated topic, and implementing it for many variations of a site is genuinely difficult. But there is an easy way to control all of your hreflang from one place: the sitemap. Search sitemaps.org for 'sitemap cross submits' to learn how to set it up, and add hreflang to your sitemap instead of your HTML pages. This can greatly simplify the process.
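A sketch of what such a sitemap could look like, generated here with plain Python string building; the domains and language codes are made up, and each URL entry lists all of its alternates via xhtml:link elements:

```python
from xml.sax.saxutils import escape

# Hypothetical language alternates for one page set.
alternates = {
    "en": "https://example.com/en/",
    "de": "https://example.com/de/",
}

def url_entry(loc: str, alts: dict) -> str:
    # Each <url> entry lists every alternate, including itself.
    links = "\n".join(
        f'    <xhtml:link rel="alternate" hreflang="{lang}" href="{escape(href)}"/>'
        for lang, href in alts.items()
    )
    return f"  <url>\n    <loc>{escape(loc)}</loc>\n{links}\n  </url>"

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"\n'
    '        xmlns:xhtml="http://www.w3.org/1999/xhtml">\n'
    + "\n".join(url_entry(loc, alternates) for loc in alternates.values())
    + "\n</urlset>"
)

print(sitemap)
```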

Can hreflang sitemaps be placed in any folder?

Yes, there is nothing special about hreflang sitemaps; they are traditional sitemaps with hreflang annotations added, and you can place one on your site like any other sitemap file. If it's referenced from your robots.txt file, it can be placed anywhere; once submitted through Search Console, it can be placed anywhere within a verified site. The same applies to sitemaps that contain image and video annotations.

What could be causing my website not to be indexed?

I would first check Google Search Console to see if there are any errors blocking crawling of your site. But Google doesn't index everything on the web, so you also need to make sure the quality of your content stands out.

I run sites in 13 languages. When someone leaves the company, the articles in that person's language stop being updated. The client has given me two options: either remove the x-default or use the last translated language as the x-default. Which is better?

This is entirely up to you. You can specify x-default on a per-page-set basis; it doesn't have to be the same across your entire website. One set might default to English, another to Japanese, and so on, wherever users find value. But keep in mind that x-default means the page will be shown to users who aren't searching in any of the specified languages.

Why do product review updates affect non-review content?

If you're seeing widespread impact across your site, it's probably not caused by product reviews. Other updates may be the cause.

What's the best way to remove old content so it's not indexed? Is it a redirect? If so, what page would be best to redirect to?

When you want to permanently delete and retire a page, you can provide a 404 or 410 status code for the location. Alternatively, you can redirect to another page. How and what you do is up to you. Ultimately, it should be meaningful to you, not search engines.

Does text in images affect image search rankings?

Text around an image certainly helps in understanding it, and Google can also extract information from the image's caption and title. So first, make sure the image is near relevant text and that the text is descriptive.

Is it beneficial for local businesses to use local listing websites?

Don't think of adding your site to reputable local listings as a way to improve your SEO. Use them as a possible way to get more traffic, independent of search. Local search is a separate matter.

Is it bad for SEO to add query parameters like 'add to cart' to my site's URL?

No, it's not bad for SEO per se. However, on very large websites, unnecessarily adding query parameters to URLs can have an impact in terms of crawl budget. Look at your website and ask yourself, 'Is this really a large website?' And is the extra parameter you're adding to your site's internal links causing the number of discovered URLs to explode? If so, try to reduce the number of parameters you add to your URLs. If you have a small to medium-sized website, though, you can wait until it becomes large before worrying about managing these parameters.

In addition, Google is soliciting questions for the December 2022 office hours, and encourages anyone with specific questions about their website to consult Google Search Central.

in Web Service, Posted by log1p_kr