With the surprise news that Adobe hooked Google and Yahoo up with a special reader for the spiders (which allows the engines to parse .swf files and index/follow deeper content), does that mean the SEO's special weapon of progressive enhancement (PE) can be abandoned?
I’m still going to stick with it for a while for my SEO blog and my clients’ sites. Google is adopting the reader first, and has technically been lightly reading some flash files already, but anyone who’s been in the game long enough knows that a lot of these properties launch with half-powered products all the time. Their track record isn’t stellar, so why not a ‘better safe than sorry’ approach? I don’t think I would consider dropping PE until at least a few more months after MSN jumps aboard. I’m not sure I would cease building products for the PE method (depending on the cost vs. value), and simple on-page coding is so easy – it seems like a no-brainer.
What about the other engines that will never be this advanced? Do you care about them? In preparation for vertical and social searching, I think it's wise to consider what they could become. I think this news is going to spark a huge influx of flash sites, but that still seems like a bad idea to me.
Yahoo’s Search Marketing Blog recently posted an article called The Top Of The Page. Granted, this is a blog mostly about paid search (not so much an SEO blog), and thus, promotes spending money. The article was written by iProspect, and the SEO in it was very basic, but this paragraph just bugged me. It reminds me of many, many SEM posts I’ve read from agencies and search engines:
In organic search results you may be competing with competitors that have several years’ head start, thousands of pages of optimized content, thousands of incoming links, and thousands of digital assets that they’ve distributed all over the Internet. But paid search (like Yahoo! Sponsored Search), using compelling ads and strong calls to action, can be used as a great equalizer to overcome any advantages they have in the algorithmic results.
Let’s read into the messages these kinds of statements project to readers: “So give up on SEO!”, “Buy your way to the top!”, “You can’t beat competitors who are years ahead”, etc. Really? Is this good advice? Uh, no. I think this paragraph is a little over-targeted, but then again, look at the source. The truth is, good SEO can beat these aged sites by creating authority. Engines care about new, timely information dissemination. Engines care about user value. Engines are adapting to the idea of sorting the stale, trusted content from the new and pertinent content.
Don’t get me wrong – paid search has its place and power. I would never knock it. But in my opinion, SEO, when done right and constantly tag-teaming with paid search, is the only attack plan. Monitoring the ebb and flow is key, but so is recognizing the full capacity of SEO as it really is today, and not necessarily believing the impressions the Yahoo Search Marketing Blog is providing.
I use image search often, mainly with Google. I use it to find inspiration in design or better clipart for this SEO blog, but occasionally as a content searching feature. If I’m looking to find info about a new guitar, I might use the image search, then once located, continue on to the content around that picture.
If you use image search, you probably already realized that it fails compared to text search. The first handful of results are often in the ballpark, but it quickly goes extremely south from there. Unlike text search, where a few content-rich sites can often suffice (depending on the specifics of your search), in image search a much larger set of similar results is usually desired. I can almost guarantee, though, that without the ‘adult content’ filter on, you’re going to get some kind of completely unrelated adult picture a third of the time. When using image search at work, be careful that nobody is standing behind you. Those pictures can be freaky!
Obviously, it’s hard to favor image search and really back it if it’s so wonky.
But the future of image search always sounded interesting. The idea of engines using apps to map the parts of a picture even better, determining the shapes (i.e., faces), and using it more confidently as part of the algorithm sounds pretty darn groovy to me. Imagine – no matter what the image file was named, or the content around the image file on websites, by image searching for “Frank Zappa”, the maestro might show up consistently, instead of unrelated R-rated images.
Or what about giving an engine a picture and asking it to return similar pictures? This technology is being used commercially at Like.com (like that red shirt on your friend’s myspace page? Just show it to Like.com and they’ll present similar items you can buy). In my opinion, this would really improve search for the better. Describing what you want in words doesn’t typically fine-tune the search the way it should, in part because of blended search; ultimately, I find it makes the search process longer. But if I could show the engine what I was looking for, well, that would be swell.
Google can read. Google can listen (1-800-GOOG411). So why not see?
Update: Good article over at Google’s blog – New Search By Style Feature.
In any case, the spam you’re seeing may not be doing what you think.
So what prompted my colleague’s email? He was calling out a site called Moosejaw (I really don’t want to give them a link… just add a ‘.com’, and you’ll get there). This ecommerce site seems pretty legit initially; decent products, good enough design, and a feeling of order security. Scrolling to the bottom, though, displays a lot of obvious SEO influence in the form of huge amounts of keywords and anchor text.
The site has healthy backlinks, a good internal structure, and other legit factors that I believe offset any negative judgement this anchor text stuffing might have caused, even though it is an obvious SEO effort. Upon seeing this site, my first thought was, “eh… fine, I’m sure it’s only ranking for some really long-tail, obscure queries”. Nope – it ranks pretty well for some popular product queries, too. It looks like a quality retailer, but Google’s whole stance is that they don’t want to serve sites that are obviously manipulating the algos. If this site was adding user value instead of SE value, then I’d be all for its high rankings. But really, what customer on the planet is going to even spend 1 millisecond on the bottom text? Are they going to follow any one of 50 links there? No. Hell no.
Google has cleaned up the index a lot since I was posting to my last blog. I remember writing a post calling out lots of front page results for simple queries, like “photoshop tricks”, “mixed drink recipes”, and “video game cheats”. There was a lot of affiliate trash and ‘made-for-Adsense’ sites. Recently there was an illness in my family, and I found most of my results were pretty fulfilling, unlike previous years where I just became frustrated. The webspam still shows, but I find it’s most often for more obscure, longer-tailed keywords. I still believe that engines know when a page has been excessively SEO’d, and may flag certain heavy-handed techniques (the number of these flags, weighed against the determined value of the page, makes it hard to figure out exactly which heavy-handed actions were too much). In the past, when I’ve tried to break semantic markup and put header tags around keywords on my smaller sites, or get crazy with nofollows, I didn’t get a gain – in fact, I think I lost a step or two. When Overstock.com does it (they even put an <h2> on their breadcrumb trail), it’s clear it doesn’t negatively affect their rankings.
I’ve had discussions with other colleagues about keyword density. The SEO community generally agrees that it doesn’t exist, at least not as an absolute level across all pages. A particular conversation I remember having is when a colleague Googled a keyword, and the snippet showed a handful of those terms bolded. Clearly, the snippet was just trying to show keyword relation to a user – in fact, all the SERP results had bolded keywords – but that does not necessarily suggest these bolded words had an effect on the ranking. The correlation seems obvious, but based on what SEOs have learned, it just doesn’t appear to be the case. In reviewing some of these served pages, as exemplified by the snippet, there was definitely some keyword stuffing on these authoritative pages. Was this enough to punish the page? Apparently not. They ranked great. Did it help them? Probably not – the rankings were more likely due to all the other things the site was doing right, and the page authority. So it looks like Google ignored it and judged the page overall as valuable. All part of the spam score, I imagine.
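The "density" in question is trivial to compute, which is partly why the metric is so seductive. Here's a quick toy sketch (my own illustration of the calculation people argue about, not anything an engine actually runs):

```python
import re

def keyword_density(text, keyword):
    """Percentage of words in `text` that exactly match `keyword` (case-insensitive)."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return 100.0 * hits / len(words)

page = "Cheap guitars! Buy cheap guitars here. Our cheap guitars beat other guitars."
print(round(keyword_density(page, "guitars"), 1))  # 4 of 12 words -> 33.3
```

A page stuffed to a third of its words with one term reads like spam to a human long before any algorithm weighs in, which is the real point: there's no magic percentage to hit.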
Is this what is happening for Moosejaw? Are they flagged for the heavy-handed links? Yeah, possibly. But maybe the overall value of the site offsets that spam. I’m more in agreement with that theory. Ultimately, I don’t think Google loves spam – I just think you get away with spam a little more if you’re doing everything else right. I believe that’s what’s giving them the rankings, even though it’s easy to call out the obvious spam techniques.
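To make that theory concrete, here's a purely hypothetical scoring model – my speculation only, nothing like Google's actual math: each detected heavy-handed technique adds a penalty, and the page's legitimate value works against it.

```python
def net_quality(value_signals, spam_flags, flag_weight=1.0):
    """Toy model of 'value offsets spam'.

    value_signals: list of positive scores (backlinks, structure, etc.)
    spam_flags:    list of detected heavy-handed techniques
    flag_weight:   penalty per flag (made-up knob)
    """
    value = sum(value_signals)
    penalty = flag_weight * len(spam_flags)
    return value - penalty

# A Moosejaw-style site: strong legit signals, obvious footer stuffing.
score = net_quality([8.0, 6.5, 7.0], ["anchor_stuffing", "keyword_block"])
print(score)  # 19.5 -- the value comfortably outruns the penalty
```

In this (invented) framing, a thin affiliate site with the same two flags but little value would go negative, which matches what we actually see ranking.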
I love SEO blog posts where SEOs talk about their findings. This industry is full of testers that still can’t provide empirical data, no matter how hard they try. For every tester, there’s always someone with a reason to bust the results with their own tests or philosophy (these ‘busters’ are most often the same SEO personalities, unfortunately). Sometimes, though, despite the ‘busters’, these SEOs can make a compelling case. I thought Branko Rihtman did a great post called Anatomy of a Google Filter/Penalty over at SEO Scientist. I’ve often speculated about NoFollows, since they’ve recently become one of the new “SEO flavors of the year”. I do believe engines – mainly Google – try to recognize SEO efforts, and have thresholds. Do ill-timed or excessive NoFollows trip this scrutiny?
Granted, I wasn’t first (generation) to the Twitter party. I’m pretty sure I wasn’t even third or fourth. But I’ve been here about 6 months, and still don’t know if it’s living up to the hype (yet)… Kind of like Second Life, despite shout-outs on The Office. But I DO think there is something to Twitter, and life streaming in general. I love the idea. I love taking this online networking web… 0′s and 1′s, and continually turning it into something that becomes more valuable than the telephone ever was. There’s art, love, life, philosophy, zen, existentialism, religion, and a lot more on Al Gore’s Interweb. I love the idea of this mind/matter technical extension, and being able to one day say more with less in the cyber age. Is Twitter the way to do it? It’s a way. Is Twitter for everyone? No, but neither is poetry. Is it even wise to expect that computers can take our intellectual depths, or profound realities, to an evolved level?
I just re-read that first paragraph. No, I am not smoking pot.
Though I’m letting stream of consciousness run this post, I do think there’s way more to the internet than what we see now. I think it will grow in the next 5 years into something less “computer”, and more “human”. Not Terminator-type human, but more of a cerebral type of thing. Then again, I’m 33, and I don’t know what a 15 year old is experiencing at this point, between the computer lab at school and a handful of social network profiles.
I’m hopeful. Without growth, there is no evolution. Add me on Twitter so I can continue to be part of the wave. Bill Sebald.
Relatively recently, Google started to incorporate ‘memory’ into their AdWords system. A user who first searched for “hockey”, then for “sticks”, would ultimately be served ads for hockey sticks. Not a bad idea, since about 80% of all unsuccessful searches result in keyword refinement (e.g., going back and trying a different query). Since the first and second searches are probably related, this looks to be good for publishers, and gives the keywords more play.
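The refinement idea is simple enough to sketch. Assume (hypothetically – Google hasn't published how it actually works) that the engine just carries the prior query's terms forward and merges them with the new search:

```python
def refine_query(previous, current):
    """Toy 'previous query' model: merge the prior search's terms with the
    new one, preserving order and dropping duplicate words."""
    seen, merged = set(), []
    for term in previous.split() + current.split():
        t = term.lower()
        if t not in seen:
            seen.add(t)
            merged.append(t)
    return " ".join(merged)

print(refine_query("hockey", "sticks"))  # hockey sticks
```

Under this naive model, "hockey" followed by "sticks" behaves like one "hockey sticks" query – the real system presumably weights the prior terms rather than hard-concatenating them, but the effect for publishers is similar.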
In April, at SMX Sydney, it was relayed by Marissa Mayer (VP of user experience – don’t know who she is? Look her up. She’s good people!) that this memory is coming to natural search as well. It was even given a name – “Previous Query”. Is this going to be in the form of personalized search when you’re logged in?
Sounds good. I’m all for a better search experience. Google is the bandleader, but other technologies and verticals are encroaching. If Google can make their ‘best-of-breed’ general engine better, then they had better do it fast.