Hello,
My presumption was that enabling “hide this page from search results” would, in effect, tell bots and crawlers to buzz off and avoid any page with that option turned on. However, when I check https://www.google.com/webmasters/tools/robots-testing-tool?hl=en&siteUrl=https://www.alexthedefender.com/ I can see that every single page on my site comes back as ALLOWED for Googlebot.
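For context, here is roughly the kind of robots.txt entry I assumed that setting would generate (the /hidden-page path is just a made-up example, not one of my actual pages):

    User-agent: *
    Disallow: /hidden-page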
Now, I thought I understood this correctly earlier. Still, it would be great if somebody there could explain exactly how all of this is handled from a back-end standpoint, and also whether there is a way to use code to modify or even replace the robots.txt file. I see we have the ability to access some .js files and whatnot, and I started wondering about this after running into the above while auditing my own work and site on day 1 of a soft launch, to see if there are any issues that still need fixing.
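In case it helps show what I mean, below is a rough sketch of the sort of thing I was picturing dropping into one of those .js files: injecting a robots noindex meta tag into the page head. (I'm assuming here that a crawler would have to render the JavaScript for the tag to be seen at all, so I genuinely don't know whether this is a reliable approach; that's part of my question.)

    // Hypothetical sketch: inject a robots "noindex" meta tag into the current page.
    // Assumes this script runs in the browser on every page I want hidden from search.
    const meta = document.createElement('meta');
    meta.name = 'robots';
    meta.content = 'noindex, nofollow';
    document.head.appendChild(meta);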
Thanks
Omid