Will using #! Ajax crawling on a non-JS-accessible site generate duplicate content on search engines?


I have a website that works with both JS on and off. All links on the page are in the regular format <a href="/pagename">, but if a visitor reaches the site with JS available they are rewritten to <a href="#/pagename"> and handled via the hashchange event.

This results in two possible URLs pointing to the same content (www.site.com/pagename and www.site.com/#/pagename).

Note: If you reach www.site.com/pagename with JS on, you will be automatically redirected to www.site.com/#/pagename.
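The rewrite-and-redirect scheme described above can be sketched roughly as follows. This is a minimal illustration, not the site's actual code; `toHashUrl`, `fromHashUrl`, and `loadContent` are hypothetical names:

```javascript
// Convert a regular path ("/pagename") to its hash form ("#/pagename").
function toHashUrl(path) {
  return '#' + path;
}

// Extract the path from a hash fragment, tolerating both the plain
// hash ("#/pagename") and hashbang ("#!/pagename") variants.
function fromHashUrl(hash) {
  return hash.replace(/^#!?/, '');
}

// In the browser, this would be wired up roughly like:
//
//   document.querySelectorAll('a[href^="/"]').forEach(function (a) {
//     a.setAttribute('href', toHashUrl(a.getAttribute('href')));
//   });
//
//   window.addEventListener('hashchange', function () {
//     loadContent(fromHashUrl(location.hash)); // loadContent: hypothetical Ajax loader
//   });
```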

Now I'm wondering whether I should implement the hashbang format (www.site.com/#!/pagename), since I don't know whether this will result in duplicate content when the site is crawled by bots. Google's FAQ wasn't much help on this specific subject.


1 answer

John Conde On BEST ANSWER

This will probably cause duplicate content issues, but it's hard to say for sure since crawlable Ajax is new. You can easily solve it by using canonical URLs, so every URL variant points search engines at the one version you want indexed.
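The canonical-URL fix amounts to emitting a `<link rel="canonical">` tag in the page `<head>` that always references the plain, non-hash URL. A small sketch, assuming the hypothetical domain www.site.com from the question; `buildCanonicalTag` is an illustrative helper for markup you would ideally render server-side:

```javascript
// Build the canonical <link> tag for a given page path. Both
// www.site.com/pagename and www.site.com/#/pagename should carry
// the same tag, so crawlers treat them as one page.
function buildCanonicalTag(pagePath) {
  return '<link rel="canonical" href="https://www.site.com' + pagePath + '">';
}
```

Note that everything after `#` is a fragment and is not sent to the server, so the hash variant serves the same HTML document anyway; the canonical tag simply makes the preferred URL explicit to crawlers.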