Google let us know this week that the canonical tag is now functional across domains. I think that's a fine feature (not sure why it didn't roll out that way the first time). I personally don't have any reason to manage cross-domain canonical issues, but I can imagine several different applications. Maybe it's an offering for the spammer who wants to go straight! Nah.
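For anyone who hasn't used it, the cross-domain version looks just like the same-domain canonical tag, only the href points at another host. A quick sketch (the domains here are made-up examples, not real sites):

```html
<!-- Placed in the <head> of the duplicate page, e.g. a syndicated copy
     at http://partner-site.example/some-article -->
<!-- Tells search engines the preferred version lives on another domain -->
<link rel="canonical" href="http://original-site.example/some-article" />
```

Same hint-not-directive caveat as always: the engines say they treat it as a strong suggestion, not a command.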
But it made me want to look at the canonical tag today. About 4-5 months ago I checked and didn't see an effect. I heard online buzz that it wasn't doing much for anyone at that point. OK – it's new, I'll wait.
But now it's been long enough. I was prompted to check again, but alas, the canonical tag still hasn't been shown to do anything. It's been months (February???) since this thing was rolled out and touted as the end of duplicate content issues, but I haven't seen any decrease in my indexed pages. In fact, I'm up about 10,000 pages in Yahoo and about the same in Google (in Bing I'm up off the charts, but that's Bing for you). I just reviewed 2008 notes from a site with more than 30,000 pages; the site's actual page count hasn't increased or decreased drastically in a year. This is pretty annoying.
Is it just that the site: operator is inaccurate? Or is the canonical algorithm run so infrequently that it hasn't permeated my client's site yet? (Unlikely – it's a hugely popular commerce site, but maybe there are just too many pages to consolidate.) It's hard to help search engines with duplicate content issues when things aren't working or aren't reported accurately. It makes me want to recommend hash tags in URLs, expensive meta robots implementations, or other nofollow tricks.
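For what it's worth, the meta robots fallback I'm grumbling about looks like this – the markup itself is trivial; the "expensive" part is the engineering work of deciding, template by template, which duplicate URLs get it:

```html
<!-- Placed in the <head> of a duplicate URL you want out of the index -->
<!-- noindex: don't show this page in results; follow: still crawl its links -->
<meta name="robots" content="noindex, follow" />
```

Unlike the canonical tag, engines actually obey this one – which is exactly why it's tempting when the "hint" isn't doing anything.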