Everyone asks the same question: how can I be found on Google when someone searches for my name or my keywords? But what if you don't want to be found? Interesting question, right?
There are legitimate reasons you might want to maintain a great website yet stay off Google's radar. (It sounds strange to SEO professionals, but valid cases exist.)
Whatever the purpose, the answer is simple: make yourself not findable.
Solution #1 – Use the Robots.txt file with Disallow: /
# robots.txt for http://www.yoursite.com
User-agent: *
Disallow: /
Name this file robots.txt and upload it to the root of the server, and it will tell search engine robots not to crawl any of your pages. For rule-following robots, this is the door lock that puts your entire site out of view. Note that you might need an FTP tool to upload the file to the server.
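If you want to sanity-check the rule before uploading, Python's standard library ships a robots.txt parser that answers "may this bot fetch this URL?" the same way a compliant crawler would. A minimal sketch (the user-agent names and paths here are just illustrative):

```python
# Sketch: verify that a robots.txt with "Disallow: /" blocks all crawling,
# using Python's standard-library robots.txt parser.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# With "Disallow: /" under "User-agent: *", no compliant bot
# may fetch any URL on the site.
print(parser.can_fetch("Googlebot", "/"))          # False
print(parser.can_fetch("Googlebot", "/any/page"))  # False
```

If either call printed True, your rules would not be doing what you intended.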
Solution #2 – Use the meta robots tag on each page
In theory, the meta robots tag is unnecessary once the robots.txt file above is in place, since Googlebot will never reach the page to read the meta tags at all. However, if you are the just-in-case type, implement the following meta tag as well:
<meta name="robots" content="noindex, nofollow">
This line of code inside <head> … </head> is by itself sufficient to tell search engine bots not to index the page and not to follow any links on it. Note that this is a page-level command, whereas robots.txt is a site-wide command.
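To see the page-level mechanism from the bot's side, here is a rough sketch of how a crawler might pull the robots directives out of a page's HTML using Python's standard-library parser (the sample page markup is hypothetical):

```python
# Sketch: how a crawler might read the page-level meta robots directive.
from html.parser import HTMLParser

PAGE = (
    '<html><head>'
    '<meta name="robots" content="noindex, nofollow">'
    '</head><body><p>Hidden page</p></body></html>'
)

class MetaRobotsParser(HTMLParser):
    """Collects the directives from any <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives = [d.strip() for d in attrs.get("content", "").split(",")]

p = MetaRobotsParser()
p.feed(PAGE)
print(p.directives)  # ['noindex', 'nofollow']
```

A bot that finds "noindex" drops the page from its index, and "nofollow" tells it to ignore the links on the page.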
If there is more than one instruction for search engine bots on blocking or restricting access, bots are generally supposed to follow the more restrictive one.
Hiding from Google – Mission Accomplished
Implement the code above at the site level or the page level and you will NEVER be found via organic search on Google. Then again, think about why you would want to do this – other than to handle duplicate content from pagination, tags in WordPress, and the like. If you can think of such use cases, feel free to share those scenarios below.