Google has announced a new “unavailable_after” meta tag that lets webmasters tell Google when a specific webpage should no longer be indexed. An extension of the familiar robots meta tag conventions, it means webmasters can now effectively remove their webpages from search results after a set date and time.
How can it be used?
The new tag has a variety of applications; here are two examples I’ve quickly put together:
You have a special deal that expires
If you want to avoid disappointing visitors by sending them to a page with an expired special offer, you can choose to have these pages removed from the index when the offer ends.
Free content becomes paid
Content that was provided free on your website goes into an archive that users must pay to access. You can remove this content from search results using the unavailable after tag.
How can I implement the tag?
From the Google blog:
To specify that an HTML page should be removed from the search results after 3pm Eastern Standard Time on 25th August 2007, simply add the following tag to the head section of the page:
<meta name="GOOGLEBOT" content="unavailable_after: 25-Aug-2007 15:00:00 EST">
The date and time are specified in the RFC 850 format.
This information is treated as a removal request: it will take about a day after the removal date passes for the page to disappear from the search results. We currently only support unavailable_after for Google web search results.
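If you generate pages dynamically, you’ll probably want to build the tag from a date rather than hard-code it. Here’s a minimal sketch in Python of how that might look; the helper function name is my own, and for simplicity the timezone abbreviation is passed in as plain text rather than derived from a timezone object:

```python
from datetime import datetime

def unavailable_after_tag(expiry: datetime, tz_abbr: str = "EST") -> str:
    """Build a Googlebot unavailable_after meta tag.

    Formats the timestamp in the RFC 850-style layout used in
    Google's example (DD-Mon-YYYY HH:MM:SS TZ). The timezone
    abbreviation is appended as-is, so the caller must ensure it
    matches the datetime supplied.
    """
    stamp = expiry.strftime("%d-%b-%Y %H:%M:%S")
    return (f'<meta name="GOOGLEBOT" '
            f'content="unavailable_after: {stamp} {tz_abbr}">')

# Reproduce the tag from Google's announcement:
print(unavailable_after_tag(datetime(2007, 8, 25, 15, 0, 0)))
# <meta name="GOOGLEBOT" content="unavailable_after: 25-Aug-2007 15:00:00 EST">
```

You could call this from your templating layer whenever a page has a known expiry, such as the special-offer example above.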
So far the tag is supported only by Google, but it has been well received by the web community. Do you think other search engines should follow suit? Have your say in the blog comments!