Google's John Mueller answered a question about why Google indexes pages that are disallowed from crawling by robots.txt, and why it's safe to ignore the related Search Console reports about those crawls.

Bot Traffic To Query Parameter URLs

The person asking the question documented that bots were creating links to non-existent query parameter URLs (?q=xyz) pointing to pages with noindex meta tags that are also blocked in robots.txt. What prompted the question is that Google is crawling the links to those pages, getting blocked by robots.txt (without seeing a noindex robots meta tag), then getting reported in Google Search Console as "Indexed, though blocked by robots.txt."

The person asked the following question:

"But here's the big question: why would Google index pages when they can't even see the content? What's the advantage in that?"

Google's John Mueller confirmed that if they can't crawl the page, they can't see the noindex meta tag. He also makes an interesting mention of the site: search operator, recommending to ignore the results because the "average" user won't see them.

He wrote:

"Yes, you're right: if we can't crawl the page, we can't see the noindex. That said, if we can't crawl the pages, then there's not a lot for us to index. So while you might find some of those pages with a targeted site:-query, the average user won't see them, so I wouldn't fuss over it. Noindex is also fine (without robots.txt disallow), it just means the URLs will end up being crawled (and end up in the Search Console report for crawled/not indexed; neither of these statuses causes issues for the rest of the site).
The important part is that you don't make them crawlable + indexable."

Takeaways:

1. Mueller's answer confirms the limitations of using the site: advanced search operator for diagnostic purposes, because it is not connected to the regular search index; it is a separate thing altogether.

Google's John Mueller discussed the site: search operator in 2021:

"The short answer is that a site: query is not meant to be complete, nor used for diagnostics purposes. A site query is a specific kind of search that limits the results to a certain website. It's basically just the word site, a colon, and then the website's domain. This query limits the results to a specific website. It's not meant to be a comprehensive collection of all the pages from that website."

2. A noindex tag without a robots.txt disallow is fine for these kinds of situations where a bot is linking to non-existent pages that are getting discovered by Googlebot.

3. URLs with the noindex tag will generate a "crawled/not indexed" entry in Search Console, and those won't have a negative effect on the rest of the website.

Read the question and answer on LinkedIn:

Why would Google index pages when they can't even see the content?

Featured Image by Shutterstock/Krakenimages.com
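The behavior Mueller describes can be illustrated with Python's standard-library robots.txt parser: a compliant crawler checks robots.txt before fetching, so a disallowed page is never downloaded and any noindex meta tag in its HTML is never seen. This is only a sketch; the robots.txt rules and the example.com URL below are hypothetical, not taken from the site in the question.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block the pages that the bogus ?q= links
# point at. (example.com is a placeholder domain.)
robots_txt = [
    "User-agent: *",
    "Disallow: /search",
]

parser = RobotFileParser()
parser.parse(robots_txt)

url = "https://example.com/search?q=xyz"

# Because fetching is disallowed, the page's HTML, including any
# <meta name="robots" content="noindex"> tag, is never retrieved.
# This is exactly why Google cannot see the noindex on blocked URLs.
if not parser.can_fetch("Googlebot", url):
    print(f"{url} is blocked, so a noindex meta tag on it is invisible")
```

Dropping the disallow instead (and keeping only the noindex meta tag) lets Googlebot fetch the page, see the tag, and report it as crawled/not indexed, which, as Mueller notes, causes no issues for the rest of the site.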