My client asked me to prevent Googlebot from indexing the website, so I added the following meta tag to the head of my main layout, which (in theory) is included in the head of every page:
<meta name="googlebot" content="noindex">
This should prevent Google from indexing any pages, but it is not working. My client has apparently observed that Google is still indexing the site (I do not know how he verified this; as usual, the client was not very descriptive), so the tag does not seem to be solving the problem.
In short, I wanted to prevent Google from indexing the site by adding a meta tag to the head of the template used by all pages. Why is this not enough to stop Google from indexing the site, and how should I fix it?
Thank you very much.
TL;DR: Google may simply not have recrawled the site since you added the meta tag; the noindex directive only takes effect once Googlebot recrawls each page and sees it. If you want to block all search bots, not just Google, use:
<meta name="robots" content="noindex">
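To double-check that the layout is actually applying the tag, view the rendered source of a live page: the tag must appear inside <head>, roughly like this (a minimal sketch; the title and other head elements here are placeholders, not your actual markup):

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Example page</title>
    <!-- Tells all well-behaved crawlers not to index this page,
         not only Googlebot -->
    <meta name="robots" content="noindex">
  </head>
  <body>
    ...
  </body>
</html>
```

Note that if a cache or CDN sits in front of the site, the old HTML without the tag may still be served until the cache is purged, so check the source as it is delivered to visitors, not just your template files.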
This behavior is also described in Google's official documentation on blocking search indexing with noindex.
Also, please note that your client may be confusing Googlebot with some other web crawler. In that case, I recommend adding:
<meta name="robots" content="noindex">
to the HTML document so that no web crawlers can index the site, not just Googlebot.