And they did. The tests in the footer of this site showed it. Not only could Google index my pages that had the anchor text in them, but they could also index the thin destination pages. And they did so within 3 hours! Hey, they did say they’re obsessed with speed this year.
I also tried a link written out entirely by JavaScript – something along these lines (an illustration; the page name and anchor text are placeholders, not the exact snippet from the test):
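    <script type="text/javascript">
    // The anchor only exists after this script runs, so a crawler
    // that doesn't execute JavaScript never sees the href.
    document.write('<a href="/js-test-two.html">unique anchor text two</a>');
    </script>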
Finally I tried a link with no crawlable href at all – just an onclick handler, along these lines (again, an illustrative sketch with placeholder names):
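    <!-- The destination lives only in the JavaScript; the href itself goes nowhere. -->
    <a href="#" onclick="window.location='/js-test-three.html'; return false;">unique anchor text three</a>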
Again, success! I’m pretty satisfied with Google here. Bing and Yahoo? Not so much – they were only able to index the page with the anchor text. And even then, not every time. But they never claimed to be able to (that I’m aware of).
Now this isn’t surprising to some groups of SEOs, but it really is interesting how often I still hear old SEO recommendations presented as critical today. Granted, this test isn’t exhaustive (whether PageRank actually passes through JS links wasn’t tested – just crawlability), but it’s valid. I think some SEOs really need to catch up to Google, and start implementing what really matters – user value, context, authority, recommendation, and community. Whatever you want to call it (SEO 2.0 or not), the wave is starting to build right now – get in front of it, and downshift on the old-school SEO tactics.
Oh no – not another guy trying to create another marketing acronym!
Well, I care less about the acronym and the ‘coolness’ of labeling something than I do about the real principle behind it. As an SEO who has been at this for 10 years, I’ve realized I’ve taken a different path than many. I don’t get excited by algorithm manipulations anymore. I don’t really get involved in the forum arguments over SEO minutiae. I started my professional life as a marketing guy, in love with the art of thought and context, and somehow deviated into web and graphic design. SEO was a chance to connect it all together. Now I think I’ve changed in the same direction that search engines have changed (or will continue to change). It’s not about “original content” as much as it’s about “original, valid, creative, editorial content with a purpose.”
Algorithms are headed in the direction of trust, reputation, and influence. Google wants to rank pages the way people would rank them if asked. Of course, there’s no way for humans to hand-rank billions of pages, so Google’s algorithms will have to grow. And based on the progress (and patented algorithms) we’ve seen in the last year or two, it’s really likely they’ll get closer to achieving that goal. Is SEO dead? The odds of it dying are as likely as search dying – nada. It will just change, even if that means another acronym.
If you don’t have anything good to say, don’t say it at all. Don’t create noise. There’s enough of that. But if you have a passion and a purpose, sing it from the rooftops. Defend it like it’s your child. Specialize in your vertical, and be an authority. Care less about the algorithm and more about your niche and the people you can connect with. Make the content easily available. Make it readable and crawlable (= searchable), and groom it to be your voice. Then, market the hell out of that content. Set it to the top of your hierarchy and speak to it from your other pages, other venues, other channels.
For me, SEO is art. And for me, in 2010, it is more art than science – and the balance keeps tipping further that way, in my opinion. If I had to pick an area to focus solely on, this would be it.
For as long as I can remember, going to Google, Yahoo and Bing (or MSN, or Live), you could use the site operator to find out how many pages you have indexed.
Go to your engine, and type:
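    site:yourdomain.com

(with your own domain in place of yourdomain.com, of course)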
Check out the results. It’s interesting to see what they give you. But the problem is, this is sort of bunk data. See, search engines don’t crawl all the pages they know about. They don’t index all the pages they crawl. And they don’t publish all the pages in their index through the site operator. Google once said they prefer not to display this data because it’s not really valuable to the average site owner, and not necessarily worth the processing power.
Google’s response to webmasters (and SEOs) is to give you a better, more accurate count through Webmaster Tools. But it’s not as accessible as going to Google.com and typing “site:” into the engine.
SEOmoz put out an article about using Google Analytics to get a better view not of the pages Google knows about, but of the pages Google serves. Now that is actionable!
It’s a must-read article. Knowing which pages serve and which don’t helps you identify the pages that need the most attention.
Don’t have Google Analytics on your site? Hopefully you have some kind of sophisticated web analytics package that is configured to retrieve this type of page-level data. The more data you have, the less guessing you’re doing within your SEO strategies.
Google is smart. They want to be smarter. Every search engine dreams of developing an algorithm that actually predicts a searcher’s intent, and if anyone out there might solve that puzzle, it will probably be Google. I appreciate it when Google can look at a search I make and understand that I misspelled something. But what happens when I’m trying to rank something that is intentionally spelled incorrectly?
Well, Google hasn’t quite planned for that. At least their algorithm doesn’t address it.
I’m working with two companies now that have altered, cute spellings for their brand. One ranks properly for the misspellings, the other doesn’t. The first has been in existence for a while with a lot of links, the second is equally old, but very small – it doesn’t have a large link portfolio. Both have the intentionally misspelled terms in their URL.
I’m not alone. Other webmasters frequent the Google help forums with the same issue. We do not want Google auto-correcting our spelling. Frankly, I don’t even want a “did you mean…” link in the result pages, but I understand the option. If you’re a company with an intentionally misspelled name or product, it’s pretty important to hammer home your intent with your content and your links. You have to work harder to get Google to notice, just like you do with your consumers. Led Zeppelin and Def Leppard didn’t get their names spelled the “official” way until they were household names. The same concept applies.
So I’m not sure if this made the SEO rounds (possibly I’m just too busy hiding from my Christmas shopping responsibilities), but it was news to me. Google is playing with removing the displayed URL and replacing it with a breadcrumb-trail-style display.
It doesn’t matter if you’re logged in or not. It doesn’t just happen if you’re ranked in the top position (like sitelinks). Honestly, it seems pretty random. I haven’t figured out what triggers it, but it does seem like a useful variation on the horizontal sitelinks. Is it better than seeing the usual display URL? Maybe, since keyworded URLs still aren’t completely prevalent on the web. Function over form?
Try it – I Googled transformers toys but it’s happening elsewhere. All the more reason to have a good site hierarchy… now it’s actually marketing for you in the search engine result pages.
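To make it concrete (a made-up example, not an actual result), a clean hierarchical URL like

    http://www.example.com/toys/transformers/optimus-prime/

can surface in the listing as something like

    www.example.com › Toys › Transformers

so your category structure is doing the selling right there on the result page.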
Google let us know this week that the canonical tag is now functional across domains. I think that’s a fine feature (not sure why it didn’t roll out that way the first time). I personally don’t have any reason to control cross domain canonical issues, but I can imagine several different applications. Maybe it’s an offering for the spammer who wants to go straight! Nah.
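For reference, the mechanics are the same as the single-domain version – the duplicate page just points its canonical at a page on another domain (example.com and example.net here are placeholders):

    <!-- In the <head> of the duplicate page on example.net -->
    <link rel="canonical" href="http://www.example.com/original-page.html" />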
But it made me want to look at the canonical tag today. About 4-5 months ago I checked, and didn’t see an effect. I heard online buzz that it wasn’t doing much for anyone at that point. OK – it’s new, I’ll wait.
But now it’s been long enough. I was prompted to check again, but alas, once again the canonical tag hasn’t been shown to do anything. It’s been months (February???) since this thing was rolled out and touted as the end of duplicate content issues, but I haven’t seen any decrease in my indexed pages. In fact, I’m up about 10,000 pages in Yahoo and about the same in Google (in Bing I’m up off the charts, but that’s Bing for you). I just reviewed 2008 notes from a site with more than 30 thousand pages. The actual site’s page count hasn’t increased or decreased drastically in a year. This is pretty annoying.
Is it just that the site: operator is that inaccurate? Or is the canonical algorithm run so infrequently that it hasn’t permeated my client’s site yet? (Unlikely – it’s a hugely popular commerce site, but maybe there are just too many pages to consolidate.) It’s hard to help search engines with duplicate content issues when things aren’t working or aren’t reported accurately. It makes me want to recommend hash tags in URLs, expensive meta robots implementations, or other nofollow tricks.
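If I did fall back on the meta robots route, it’s one line in the head of every duplicate page – which is exactly why it gets expensive at scale (a generic sketch):

    <!-- Keep this page out of the index, but still follow its links -->
    <meta name="robots" content="noindex, follow" />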