I'm trying to work out how many cases I need to cover in robots.txt

How does robots.txt handle URLs that are redirected?

Let's say I have a link on my homepage:

<a href="https://example.com/disallow-1"> Disallowed link </a>

And an .htaccess rule that permanently redirects the target:

Redirect 301 /disallow-1 /disallow-2

And a robots.txt rule:

Disallow: /disallow-2

Would that alone be enough to suggest that the link target not be crawled?

Or would I need to cover both URLs?

Disallow: /disallow-1
Disallow: /disallow-2

What if, instead of the redirect, the page at /disallow-1 had /disallow-2 as its canonical?

<link rel="canonical" href="https://example.com/disallow-2" />
