Truth is, “improve” is what they did. They had to get better at this if they really wanted to be able to catch spamming.
So the http://www.greenlaneseo.com/&force=1 (a URL that appears nowhere else on my site or in the code of the linking page) was causing a 404 to show in my Webmaster Tools report.
Ah, the SEO report. Sometimes the bane of our existence. Some agencies spend the majority of their time creating detailed monthly monstrosities, while others might send quick, white-labeled exports. Meanwhile, smart companies (like Seer) look for ways to use APIs and programming to speed up data pulling. At Greenlane, we took this approach as well; Keith, my partner and incurable data nerd, created our out-of-the-box reports to pull API data on traditional SEO metrics like rankings (yes – we still believe in their value), natural traffic (at the month-over-month and year-over-year level), natural conversions (same ranges), and every necessary target landing page metric we could think of. Then, after discussing each client’s own KPIs, we add more obligatory reports to our default set.
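Those month-over-month and year-over-year pulls boil down to simple deltas once the numbers are out of the API. Here’s a minimal sketch of that roll-up in Python, assuming monthly organic-session counts have already been exported (the numbers and function names are illustrative, not our actual reports):

```python
# Hedged sketch: compute month-over-month and year-over-year change
# from monthly organic-session counts. In practice these numbers
# would come from an analytics API export; this data is made up.

def pct_change(current, previous):
    """Percentage change from the previous period to the current one."""
    if previous == 0:
        return None  # no baseline to compare against
    return round((current - previous) / previous * 100, 1)

def summarize(monthly_sessions, month):
    """Return MoM and YoY change for a 'YYYY-MM' month key."""
    year, mm = month.split("-")
    prev_month = f"{year}-{int(mm) - 1:02d}" if mm != "01" else f"{int(year) - 1}-12"
    prev_year = f"{int(year) - 1}-{mm}"
    return {
        "mom": pct_change(monthly_sessions[month], monthly_sessions.get(prev_month, 0)),
        "yoy": pct_change(monthly_sessions[month], monthly_sessions.get(prev_year, 0)),
    }

sessions = {"2013-05": 12000, "2014-04": 15000, "2014-05": 18000}
print(summarize(sessions, "2014-05"))  # {'mom': 20.0, 'yoy': 50.0}
```

The point isn’t the arithmetic – it’s that once this is scripted, the hours saved can go toward actually interpreting the numbers.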
But pulling data is only a means to an end. Data exports – especially the scheduled kind – are huge time savers. However, the downside to these automatic data pulls is that you lose the necessity of going into analytics platforms to “poke around”. Simply put, you need to look for trends, see how metrics correlate with each other, and investigate why things are (and are not) happening as expected. You need to keep notes on what you want to check each month when you pull your reports. You need to let data inspire questions and direct you to answers. This data is what should be driving your day-to-day optimizations.
No child ever wanted to be a “report monkey” when they grew up. You shouldn’t be one either.
I’m guilty. In a past life, I was part of a company that spent 10+ hours a month – by hand – downloading Omniture reports, copying and pasting cells, customizing charts, running formulas, and beautifying spreadsheets. I can make a spreadsheet look like a work of art (though Annie will always have me beat). Looking back, this was a total waste of clients’ money. That’s not what we were hired to do, yet we got away with it. Granted, I do believe the aesthetics of an attractive report can at least semi-consciously suggest to the recipients that your agency has talent and the money to invest in quality output (where that “money” implies success), but that’s only going to help you for so long. It’s like seeing a beautiful deck at a conference presentation – like it or not, it does give the perception of capability. This is the marketing industry after all.
But when you’re spending so much time pulling, shaping, tweaking, and formatting, you’re spending less time being a marketing detective.
I’m the guy in the company who (probably annoyingly) squawks about fonts, consistency, aesthetics, etc. – all for the reasons above. But Keith and I both feel that the reports have value not only to the client, but to our team as well. These reports ultimately make our job easier. The process of creating these timely reports – believe it or not – is what makes us better at our jobs:
I’ve worked agency-side for most of my professional life. I did, however, have a brief stint as a client. It was very useful, as it helped me understand the daily challenges of an in-house marketer – especially the many directions they are often pulled in. When I first got reports from our PPC vendor or social marketing vendor, I wanted to tear into them. Talk strategy. Get the learnings. But I was busy as hell. Eventually I just wanted the most impactful highlights.
An executive summary or a quick blurb of succinct natural-language explanation can go a long way, especially in companies where these reports get passed around. You know the frustration you feel when you see a slide deck on Slideshare but can’t make any sense of it? Missing the accompanying presentation sometimes leaves you more confused than ever as you click through. It doesn’t mean the slides were bad or valueless – it just means the context wasn’t there. A good executive summary provides the context.
Here’s an example of something a client might see at the bottom of one of our spreadsheet reports:
However, executive summaries can be dangerous for clients if executed poorly. Many clients tend to accept the executive summary without questioning it. Whether you have a client who uses the executive summary to dig into your brain, or one who just accepts it as is, you owe it to them as hired contractors to provide the information they really need. Don’t let their lack of questions lead you into creating valueless executive summaries.
Clearly I think natural language is extremely important in telling a marketing (and data) story. Another option we recently discovered (and strongly recommend you check out for yourself) is Wordsmith For Marketing, a new service that can actually write textual reports based on data, saving your team time. We’ve started working with them and are really blown away by the exports. How a computer is writing reports like these is beyond me:
This is just a part of the long, detailed PDF. The service pulls the data from a Google Analytics connection, and lets you go in and move items and add your own content. See the summary above? That was completely written by the computer, using words like “moderate loss” and “conversion rate also slipped.” Pretty incredible, with a very cool roadmap of features to come.
This is the first “push-button” report I’ve seen that actually provides contextual value, but we still encourage our team to take it further. Since Wordsmith easily allows you to add bullets and more context, we ask our team to fill in any gaps by affixing more observations and recommendations right into the report. For example, did we work on a specific campaign last month (with or without goal tracking)? Wordsmith won’t know, so our account managers must include all that. It’s a very useful merging of technology and manual digging that still cuts down a ton of hours.
Imagine a client running eCommerce product pages on a modern JS framework. It’s responsive and sexy, but it’s not drawing search traffic. Data could suggest an evaluation of the code, where you might find AngularJS – something you can drive to fix with proxies. Alternatively, imagine a client has tons of duplicate product pages – immediately your instinct is to pull the pages and put in robots/noindex solutions and canonical tags. Yet the data could suggest that Google already figured out the duplication issue and is still driving good traffic to the dupe pages regardless. Finally, imagine a client got a little too aggressive with a former link campaign and suddenly got stuck with an algorithmic penalty on a deeper landing page only. Digging deeper into a site’s analytics can quickly help you pinpoint the problem, give you a course of correction, and help develop the priority.
These are examples you don’t get from just topical exports. The data can help you develop, prioritize, and execute all day long. Sure, it’s a pain losing natural search keyword data to [not provided], but while that adds complexity to the keyword work in SEO, there are still plenty of other SEO initiatives and experiments you can easily create just by making deeper data dives an important part of your day-to-day, or by providing reports that you and your clients truly find valuable.
Embrace and optimize (see what I did there?) your SEO reports, but make sure you’re keeping the goals of these reports in mind all along. Once completed, the time should be spent analyzing the data and creating strategies, not creating the reports themselves. If your goals aren’t to empower your clients and empower yourselves, while holding your own feet to the fire to achieve results, you’re probably doing it wrong. Creating the right reports should be for educating both you and your clients, thus helping you really learn your chops as a marketer, while allowing the client to see the benefits of your great work.
Entity optimization as a big SEO play isn’t quite upon us yet. It’s a slow, growing Google addition. I know – it frustrates me too. So much potential, which I believe will greatly improve search results in the future. Google isn’t nearly showing the fruits of everything it knows through entities, whether through cards or search results – at least not relative to the way they rank on keywords alone.
But can knowledge cards help bring qualified traffic while considering searcher intent? SEOs always talk about searcher intent. Anyone who’s been doing SEO for a while knows that building for intent can be a challenge.
Take a query like “batman the dark knight”. Was the searcher looking for the 2008 movie? The graphic novel? The upcoming game? Were they looking to buy something, or just curious about a release date? What the hell were these people thinking? This is certainly very top of the funnel stuff, and would normally yield lower conversions, but it is where many Google non-power users would start.
Google knows these searchers expect it to be a mind reader. They’re keenly aware of this. They may be working on mind-reading devices in their labs (at which point I’ll finally invest in the tin-foil hat – I’ve got a lot of junk swimming in my head that should stay hidden). But in the meantime, through their results they give us personalized search, or this cute little cluster of links, though I doubt many click on anything here:
But if you properly create an entity, you can get better “related results” in the knowledge graph:
Pop into Freebase and look up either of these entities, and you’ll see the details above listed out. Coincidence? Probably not. The data could have come from there. We know the Google-owned Freebase is part of their brain now. Unfortunately, this huge database of great information (which, granted, needs to be checked against other sources) simply isn’t producing results yet. Whether that’s a limitation of the knowledge card product or a limitation in processing the data, I’m not sure – but I’m always hopeful Google steps it up soon.
Of course I recommend optimizing now and getting your entities in place for when Google pushes the pedal to the metal.
But for those who are working on campaigns where entities are being shown, you’re in luck. Google’s using your search history and their knowledge cards to personalize the results – sometimes in a more valuable way than the general results.
If I were doing SEO for Jaguar, a well-known luxury brand car, I already have the benefit of Google knowing what my product is. They show some of it in their knowledge card with a simple “jaguar” search:
Obviously this isn’t all Google knows – just what they feel like showing at the present time. They’re getting this from Google+, Wikipedia and Freebase at a minimum.
Since Ralph Speth can’t go back in time and choose a new name for the company, they have to compete for search result real estate and millions of monthly searches for the term “Jaguar” – against other pages that want to rank, like the Jacksonville Jaguars, the animal, the Atari Jaguar, comic book characters, and movie titles.
Now, if I were doing SEO for Defenders of Wildlife, and I wanted this top-of-the-funnel term to potentially bring me traffic and awareness, the default (above) results suck for me. It’s all cars, football teams, or pictures.
But Google does something cool…
Search history plays a role in results. Google uses keywords, and ideally entities, to see relationships through queries. Queries like “animal,” “panthera,” and “wild animal” are related to jaguar. Specifically, a query like “panthera,” followed by a new search for “jaguar,” gives a different result. The Jaguar car listings, ads, and knowledge card are suppressed in favor of an option where one can click to refine their search. This isn’t even slightly hidden. See the difference between the below results and the above example?
Clicking the link (pointed to by the red arrow) shows a new refined search where defenders.org has a listing (at the time of this writing). The query has been changed to “jaguar animal” but, through a new click-path, defenders.org has the opportunity to benefit from this “jaguar” head term. I believe this is at least partially entity driven. And, I believe this is a small example of how entities can be used in the future as Google’s products become more robust.
What do you think? Am I seeing a connection where there isn’t one?
I read – and commented on – a great post called Panda 4.0 & Google’s Giant Red Pen by Trevin Shirley. Panda 4.0 just hit; the SEO space is hiding under its desk, with some reacting either out of panic or for show.
It’s definitely news, but at this point, I don’t see any reason to scream from the rooftops at Google. It’s what we should be expecting by now.
In 2011, the first Panda showed us Google is not afraid to drop atom bombs. Panda opened the door for Penguin, and many updates have come since. Matt Cutts said he wished Google had acted sooner, and in his shoes, I’d probably agree.
Let’s not forget how spammy the results used to be:
I can imagine the conversation at the Googleplex between the webspam and search team:
“Man, how did you let this get so bad?”
“Me? I thought you were paying attention…”
“Look – we need to fix this. But the algorithm can only be tweaked so hard. I mean, it’s not Skynet yet.”
“But people think it is…”
“We’re going to lose our shirts if we don’t act quick. How about we take drastic measures?”
“But the SEO community will have a cow.”
“But hopefully the rest of the world won’t notice and just start loving, trusting, and using a cleaner Google!”
“Agreed. Hey Navneet Panda… do you have any ideas?”
Maybe they should have named these things Godzilla instead of Panda or Penguin. The battles that ensued after the birds and the bears arrived were nasty. Some search results were leveled. I’m not being dramatic for the sake of a metaphor – I’m pretty sure we can all agree the results have never been the same. Some SEOs were/are slow to give up the fight. Some agencies still sell SEO that doesn’t work. Others, however, have realized the new rules – while different – still offer great opportunity.
Google declares their war on spammers a victory, noting black hat forums have slowed down. They’ve admitted to throwing some FUD into the mix, like Kim Kardashian’s publicist might do, but for the greater good of their mission – to fix the results and uphold their “reputation.” All the hatemail and tweets to Matt Cutts aren’t going to change this. I’m pretty sure he’s holding steadfast. While Google won’t nod to the fact that some good got swept up with the bad, they obviously know it.
But honestly, I think it works for me. I think the changes, and casualties, were necessary. Were they supposed to wait until they were perfect? Plus I was getting tired of the lack of imagination… not that some of the dark arts weren’t brilliantly designed and executed. But in some sectors, SEO is very slow to change.
What I mean is, I was missing the marketing. In 2007 I was in a full-service agency’s marketing department doing SEO. Yet, SEO didn’t feel like marketing then. It was still firmly planted in web development. But in my situation, marketing and web development were siloed. Our departments weren’t friends (some internal politics at play). As asinine as that sounds now, I learned it wasn’t uncommon in big agencies back then. So, to make our SEO offering work, I had to tie “marketing” and “technical” together.
As evolution would have it, there’s no doubt that SEO is a marketing channel now… so I kind of lucked out by getting an early jump on it. The more I tied the two together, the more long-lasting the results were. Even today. It’s the only real Panda/Penguin-proof strategy I’ve seen.
Like many rock bands, Google has changed their formula. I agree – relatively speaking, Google now works pretty well. Or at least they’re finally poised to substantially improve. And that’s from me – a guy who hates change. Update your website or UI and I throw a temper tantrum. But realistically, has anything ever stayed the same? Did David Bowie not continue to produce great music, albeit different? Did Empire Strikes Back not kick more ass after changing directors?
Did Windows 8 not improve upon Windows 7?
Granted, it’s still Google’s property, and they can do with it as they please, so if they only want to represent a portion of the web, I suppose they have that right. Maybe in hindsight it was kind of ambitious to attempt to organize all the world’s webpages. Ah, the dreams of two bright-eyed Stanford students.
In his post, Trevin quoted something from Hacker News that I found very interesting: “We are getting a Google-shaped web rather than a web-shaped Google.” I sat with this for a few days. Ultimately I don’t think we’re getting a Google-shaped web or a web-shaped Google. I understand the concern, especially when Google is a massive part of discovering new content and a provider of big revenue. But the web is much larger than Google. The citizens who create on the web, outside of the SEO bubble, are very much their own people, inspired by anything and everything. Alternatively, a web-shaped Google – which I argue was their first attempt – was a bit unrealistic.
When I worked with a client who was an innocent casualty of an update, I used to get angry. I used to think Google was a bunch of jerks. Then, I got creative, and found ways to get the client back onto Google’s radar – usually to a larger traffic and brand-recognition increase. Plus, I started relying on some of the other valuable internet marketing tools and channels. Talk about silver linings.
But honestly, no client I’ve ever had who got hurt by a Google update was a true victim. Google always told us they wanted to rank the best, most useful content for their users. I’ve worked with some clients who got the traffic, but only because Google didn’t realize they weren’t the best. I’ve seen sub-par, homogenized content ranking well, and thought, “meh – might as well ride it while Google is still dumb.”
Now, looking back, if they got swept up in an update, it’s because they really weren’t doing more than the bare-bones basics – Google simply stepped up its game. These sites weren’t the originators of content, topics, and incredible ideas. They were just going through the motions.
Maybe it’s time to accept Google has graduated from grade-school.
In another post I wrote about lazy SEO. The more I think about it, old-school SEO is lazy SEO, because it simply doesn’t move the needle enough to justify hitching your wagon to it. I truly think if you haven’t moved on by now, you’re only going to be playing catch-up in the next couple of years.
So what do you think? Am I right? Or have I misguided myself?
Sometimes desperate times call for desperate measures. This post is about a desperate measure.
We had a client with a manual link penalty. We did some work (using my outline from this post). Rankings started going up, and traffic and conversions started climbing. Then, a few days later, the next Google notification came in. It’s like playing digital Russian roulette with those things – you’ll either be thrilled or in a lot of pain.
This time Google said they “changed” our penalty, as there were still some spammy links out there.
Remember, not all penalties have the same impact. Clearly ours was lessened (which was continually proven in the weeks to follow), but our client – rightfully so – wanted the whole penalty removed. The problem was we couldn’t find any more bad links. Everything from Ahrefs, OSE, Google Webmaster Tools, Bing Webmaster Tools, Majestic, etc., was classified and handled appropriately.
Google’s notifications sometimes show additional samples of poisonous links. This time we were shown only two forum spam links, something we had found zero instances of previously. Old school, dirty forum spam is usually belched out in huge, automated waves. We asked the client, who asked their previous vendors, if they had any knowledge of the link spamming. Nobody knew anything about it, so any chance of getting a list of these URLs (which was probably very low anyway) was now nil. But how did we miss all of it?
The problem was, this forum spam was so deep in the index that the major tools couldn’t find it. Even Google’s Webmaster Tools report didn’t reveal it. That’s right – Google’s notification was showing us that the links existed, but wasn’t even giving us insight into those links through Webmaster Tools. They never got any clicks, so we weren’t finding them in Google Analytics. Google’s vague link reporting functions and vague, boilerplate notifications weren’t helping us help them.
The only way to find these deep links was through Google’s search engine itself. Unless you have a staff of hundreds and nothing but time to manually pull results and analyze them one by one, this didn’t seem possible. But we came up with a reasonably easy process using Cognitive SEO, Scrapebox, Screaming Frog, and good old Excel to emulate this activity with at least some success.
Note: I feel obligated to tell you that this is not going to be an exhaustive solution. I don’t think there is one. There are limitations to what Google will actually serve and what the tools listed in this post can actually do. To give you some good news, Google will likely release you from a penalty even if you haven’t cleaned up every single spammy link. All the clients I’ve gotten out of the doghouse still had some spam out there we weren’t able to find. To Google’s credit, at least they seem to understand that. Hopefully this process will help you out enough to get the job done when your repeated reinclusion requests are denied (even after really, really trying).
We’re going to have to beat Google into giving us opportunity. The problem is, we’re going to get a serious amount of noise in the process.
We know the inanchor: operator can be helpful. It’s not as powerful as we’d like, but it’s the best we have. A search in Google like inanchor:”bill sebald” will ask Google to return sites that link using “bill sebald” as anchor text. This will be very valuable… as long as we know the anchor text.
Step 1. Get the anchor text
This can be done in a few ways. Sometimes your client can reveal the commercial anchors they were targeting, sometimes they can’t. All the major backlink data providers give you anchor text information. My favorite source is Cognitive SEO, because they give you a nice Word Cloud in their interface right below their Unnatural Link Detection module (see my previous post for more information on Cognitive).
Collect the anchor text, paying special attention to any spammy keywords you may have. I would recommend you review as many keywords as possible. Jot them down in a spreadsheet and put them aside. Don’t be conservative here.
You also want to be collecting the non-commercial keywords. Like, your brand name, variations of your brand name, your website URL variations, etc. Anything that would be used in a link to your website referencing your actual company or website.
Together you’ll get a mix of natural backlinks and possibly over-optimized backlinks for SEO purposes. We need to check them all, even though the heavily targeted anchors are probably the main culprit here.
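If you want to script the query-building part, it’s just string assembly. A minimal sketch, assuming you’ve collected your anchor phrases into a list (the anchors below are illustrative):

```python
# Hedged sketch: turn a collected list of anchor-text phrases into
# inanchor: queries. Quoting keeps multi-word anchors together as a
# single phrase in the search.

def build_queries(anchors):
    """Build one inanchor: query per non-empty anchor phrase."""
    return ['inanchor:"{}"'.format(a.strip()) for a in anchors if a.strip()]

anchors = ["bill sebald", "cheap blue widgets", "greenlaneseo.com"]
for q in build_queries(anchors):
    print(q)  # e.g. inanchor:"bill sebald"
```

Paste the output straight into your scraping tool of choice.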
This is where Scrapebox comes in. I’m not going to give you a lesson (that’s been done quite well by Matthew Woodward and Jacob King). But if you’re not familiar, this powerful little tool will scrape the results right out of Google, and put them in a tabular format. You will want proxies or Google will throw captchas at you and screw up your progress. Set the depth to Scrapebox’s (and Google’s) max of 1,000, and start scraping.
Step 1: Enter your queries
In the screenshot example below, I entered one. Depending on results, and how many commercial anchor text keywords you’re looking for, you’ll want to add more. This might require a bunch of back and forth, and exporting of URLs, since you’re limited in how much you can pull. I like small chunks. Grab a beer and put on some music. It helps ease the pain.
But don’t just do inanchor: queries. Get creative. Look for your brand names, mentions, anything that might be associated with a link.
Step 2: Choose all the search engines as your target
In most cases you’ll get a lot of dupes, but Scrapebox will de-dupe for you. In the errant case where Bing might have some links Google isn’t showing, it may come in handy. Remember – Google doesn’t show everything it knows about.
Step 3: Paste in your proxies
It seems Google is on high alert for advanced operators en masse. I recommend getting a ton of proxies to mask your activities a bit (I bought 100 from squidproxies.com, a company I’ve been happy with so far – H/T to Ian Howells).
Step 4: Export and aggregate your results
After a few reps, you’re going to get a ton of results. I average about 15,000. Scrapebox does some de-duping for you, but I always like to spend five minutes cleaning this list, filtering out major platforms like Youtube, Yahoo, Facebook, etc, and removing duplicates. Get the junk out here and have a cleaner list later.
Got a huge list of webpages that may or may not have a link to you? Wouldn’t it be great to find any links without checking each page one by one? There’s a way. Screaming Frog to the rescue.
Copy and paste your long list out of Excel and into a notepad file. Save as a .txt file. Then, head over to Screaming Frog.
Choose: Mode > List
Upload your recently created .txt file.
Then choose: Configuration > Custom
Enter in just the SLD and TLD of your website. See below:
Now when you click start, Screaming Frog will only fetch the exact URLs in your text file, checking the source code of each for any mention of yoursite.com (for example). In the “custom” tab, you can see all the pages where Screaming Frog found a match. Be careful: sometimes it will find mentions that aren’t actually hyperlinked, email addresses, or hotlinked images.
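To make those caveats concrete, here’s a rough approximation of that custom-source check in Python: it only counts the domain when it appears inside an href, so unlinked mentions and mailto: addresses don’t produce false positives. (This is a simplified stand-in for what Screaming Frog does, not its actual logic; the HTML snippets are made up.)

```python
# Hedged sketch: does this page's HTML actually *link* to the domain,
# as opposed to an unlinked mention or a mailto: address?

import re

def links_to(html, domain):
    """True if an href (other than mailto:) contains the domain."""
    hrefs = re.findall(r'href=["\']([^"\']+)["\']', html, re.I)
    return any(domain in h and not h.startswith("mailto:") for h in hrefs)

page = '<p>Visit <a href="http://yoursite.com/page">here</a></p>'
mention = '<p>I like yoursite.com but will not link it.</p>'
print(links_to(page, "yoursite.com"))     # True
print(links_to(mention, "yoursite.com"))  # False
```

It still won’t distinguish a hotlinked image from a real link without checking the tag, which is exactly why the “custom” tab results deserve a manual skim.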
Boom. I bet you’ll have more links than you originally did, many of which are pulled from the supplemental hell of Google’s index. Many of these are in fact so deep that OSE, Ahrefs, Majestic, etc., don’t ever discover them (or they choose to suppress them). But, odds are, Google is counting them.
Remember earlier when I said this wasn’t a perfect solution? Here’s the reason. Some of these pages that Google shows for a query are quite outdated, especially the deeper you go in the index. In many cases you could grab any one of the URLs that you found that did not have a link to your site (according to Screaming Frog), and look at the Google cache, then find the link. Did Screaming Frog fail? No. The link has vanished since Google last crawled the URL. Sometimes these deeply indexed pages don’t get crawled again for months. In a month the link could have been removed or been paginated to another URL (common in forum spam). Maybe the link was part of an RSS or Twitter feed that once showed in the source code but has since been bumped off.
The only way I know to overcome this takes a lot of processing – more than my 16GB laptop even had. Remember the part where you upload the full list of URLs into Screaming Frog in list mode? Well, if you wanted to pull off the governors, you could actually crawl these URLs and their connected pages as well by going to Configuration > Spider > Limits and removing the limit search depth tick, which applies a crawl depth of 0 automatically when switching to list mode. I was able to find a few more links this way, but it is indeed resource intensive.
This is an extreme measure for rare cases.
Yesterday we had a prospect call our company looking for a second opinion. Their site had a penalty from some SEO work done previously. The current SEO agency’s professional opinion was to burn the site. Kill it. Start over. My gut second opinion was that it should (and could) probably be saved. After all, there’s branding on that site. The URL is on their business cards. It’s their online identity, and worth a serious attempt at rescue. In this case I think extra steps like the above might be in order (if it should come to that). But if it’s a churn-and-burn affiliate site, maybe it’s not worth the effort.
Post-Penguin, we find that removing the flagged links – combined with links simply becoming less and less valuable as the algorithm refines itself – does keep rankings from bouncing completely back to where they were before, in most (but not all) cases. That’s a hard pill for some smaller business owners to swallow, but I have never seen a full penalty removal – where every level of rank-affecting penalty was lifted – keep a site from succeeding again in time. Time being the keyword.
So yeah, maybe it really has “come to this,” if your site is worth saving. At the very least you’ll be learning your way around some incredibly powerful tools like Scrapebox, Cognitive SEO, and Screaming Frog.
I’m excited to see if anyone has a more refined or advanced way to achieve the same effects!
There must be thousands of SEO tools. While many tools are junk, a few great tools rise up each year and grab our attention. They’re often built for some very specialized needs. Of all the industries these brilliant developers could build in, they chose SEO. I’m always thankful and curious. As a fan of SEO tools, both free and paid, I’m excited to learn about new ones.
A few months ago I got an email from François of Linkody asking for some feedback. It did a nice job of link management and monthly ‘new link’ reporting. Pricing was very low, it’s completely web-based, and is very simple and clean. It pulls from the big backlink data providers, and even has a free client-facing option (exclusively using Ahrefs) at http://www.linkody.com/en/seo-tools/free-backlink-checker. Great for quick audits. I’ve used it quite a bit myself, and was happy to give a testimonial.
The link management function isn’t new to the SEO space. Many tools do it already, like Buzzstream and Raven – and they do it quite well. Additionally, link discovery is an existing feature of tools like Open Site Explorer, yet this is an area where I see opportunity for growth. I love the idea of these ‘new link’ reports, but honestly, haven’t found anything faster than monthly updates. I know it’s a tough request, but I mentioned this to François. By tracking “as-it-happens” links, you can jump into conversations in a timely manner, start making relationships, and maybe shape linking-page context. You might even be able to catch some garbage links you want to disassociate yourself from quicker.
The other day I received a very welcome response: “I wanted to inform you of that new feature I’ve just launched. Do you remember when you asked me if I had any plan to increase the (monthly) frequency of new links discovery? Well, I increased it to a daily frequency. Users can now link their Linkody account with their Google Analytics account and get daily email reports of their new links, if they get any of course.”
Sold. That’s a clever way to report more links, and fill in gaps that OSE and Ahrefs miss.
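Conceptually, that kind of daily discovery is a set difference: compare the referring pages seen today against everything already known, and report only the new ones. A minimal sketch (the GA plumbing is omitted, and these referrer lists are illustrative – this is the idea, not Linkody’s implementation):

```python
# Hedged sketch of daily new-link discovery: diff today's referring
# pages (e.g. from an analytics referral report) against the set of
# links already being monitored.

def new_referrers(known, todays):
    """Return referring pages seen today that aren't already known."""
    return sorted(set(todays) - set(known))

known = {"blog.example/review", "news.example/story"}
today = ["blog.example/review", "forum.example/thread-9"]
print(new_referrers(known, today))  # ['forum.example/thread-9']
```

The catch, as noted below, is that a referrer only shows up in analytics once someone actually clicks the link.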
Upon discovering the new URL, you can choose to monitor it, tag it, or export.
The pros: Linkody picks up a bunch of links on a daily basis that some of the big link crawlers miss. You can opt for daily digest emails (think, Google Alerts). Plus it’s pretty cheap!
The cons: It needs Google Analytics. Plus, for the Google Analytics integration to track the link, the link has to actually be clicked by a user. However, for those who have moved to a “link building for SEO and referral traffic generation” model (like me), this might not be much of a con at all.
As François told me, “next is displaying more data (anchor text, mozrank…) for the discovered link to help value them and see if they’re worth monitoring. And integrating social metrics.” Good stuff. I’d like to see more analytics packages rolled in, more data sources, and maybe even its own spider.
If you’re a link builder, in PR, or a brand manager, I definitely recommend giving Linkody a spin. It’s a great value. Keep your eye on this tool.
I remember a few years ago blowing the mind of a boss with a theory that Google would eventually rank (in part) based on their own internal understanding of your object. If Wikipedia could know so much about an object, why couldn’t Google? In the end, I was basically describing semantic search and entities, something that had already lived as a concept on the fringe of the mainstream.
In the last year Google has shown us that they believe in the value of a semantic web and semantic search engines. With their 2010 purchase of Metaweb (the company behind Freebase), the introduction of the knowledge graph, the creation of schema, and the sudden delivery of a new algorithm called Hummingbird, Google is having one hell of a growth spurt. It’s not just rich snippets we’re talking about, or results that better answer Google Now questions.
We used to say Google had an elementary school education. They understood keywords and popularity. Now it can be argued Google has graduated, and is now enrolled in Silicon Valley Jr. High School. Comprehension has clearly improved. Concepts are being understood and logical associations are being made. A person/place/thing, and some details about them (as Google understands it), are starting to peek through in search results.
Yesterday was my birthday. Yesterday was also the day I became Google famous – which to an SEO geek is kind of awesome. I asked Google a couple questions (and some non-questions), and it showed me I’m an entity (incognito and logged in):
This produced a knowledge result (like we’ve seen a couple times before). Details on how I got this are illustrated deeper in this post:
The comprehension level has its limits. Ask Google “when was bill sebald born” or “what age is bill sebald” or “when is bill sebald’s birthday,” and no such result appears. For some reason an apostrophe throws off Google – query “bill sebald’s age” instead of the version bulleted above and there’s no knowledge result. Also, reverse the word order of “bill sebald age” to “age of bill sebald” and there’s no result.
Then, ask “bill sebald birthday” and you’ll get a different knowledge result apparently pulled from a Wikipedia page. This doppelganger sounds a lot more important than me.
We know Google has just begun here, but think about where this will be in a few years. At Greenlane, we’re starting entity work now. We’re teaching our clients about semantic search, and explaining why we think it’s got a great shot at being the future. Meh, maybe social signals and author rank didn’t go the way we expected (yet?), but here’s something that’s already proving out a small glimpse of “correlation equals causation.” It doesn’t cost much, it makes a lot of sense for Google’s future, and seems like a reasonable way to get around all the spam that has manipulated Google for a decade.
I’m not into creating a label. Semantic SEO isn’t a necessary term. You might have seen it in some recent presentations or blog post titles, but to me this is still old-fashioned SEO simply updating to Google’s growth. This is the polar opposite to the “SEO is dead” posts we laugh at. Someone’s probably trying to trademark the “semantic SEO” label right now, or at least differentiate themselves with it. To me, as an SEO and marketer, we always cared about the intent of a searcher – semantic search brings us closer to that. We always cared about educating Google about our values, services, and products. We always wanted to teach Google about meaning (at least for those who were doing LSI work and hoping it would pay off). If this architecture becomes commonplace, it becomes part of any regular old SEO’s job duties. Forget a label – it’s just SEO.
The SEO job description doesn’t change. Only our strategies, skills, and education. We do what we always do – mature right along with the algorithms. We will optimize entities and relationships.
Semantic search isn’t a new concept.
I think the knowledge graph was one of the first clear indications of semantic search. Google is tipping its hand and showing some relationships it understands. Look at the cool information Google knows about Urban Outfitters. This suggests they also know, and can validate this information – like CEO info, NASDAQ info, etc. Google’s not quick to post up anything they can’t verify.
Click through some of the links (like CEO Richard Hayne) and you’ll get more validated info.
These are relationships Google believes to be true. For semantic search to work, systems need to operate seamlessly across different information sources and media. More than just links and keywords, Google will have to care about citations, mentions, and general well-known information in all forms of display.
Freebase, as expected, uses a triple store. This is a great user-managed gathering of information and relationships. But like any human-powered database or index, bad information can get in – even with a passionate community policing the data. Thus, Google usually wants other sources. Wikipedia helps validate information. Google+ helps validate information.
The results I got for my age (from Google above) probably came from an entry I created for myself in Freebase. The age is likely validated by my Google+ profile where I listed my birthdate. Who knows – maybe Google also made note of a citation on Krystian Szastok’s post about Twitter SEO’s Birthdays where I’m listed there too. I’m sure my birthday is elsewhere.
But what about my height? Google knows that too, and oddly enough, I’m fairly sure the only place on the web I posted that was in Freebase:
But I also added information about my band, my fiancée, my brother, and my sister – none of which I can seem to get a knowledge listing for. However, Google seems to have arbitrarily given one to my parents, who as far as I know are “off the grid.”
Another knowledge result came in the form of what I do for a living. This one is easy to validate (in this case helped only by several relevant links I submitted through Freebase):
This is really the exciting part for me. When I first saw the knowledge graph in early 2013, it wasn’t just a “that’s cool – Google’s got a new display interface” type of thing. This was my hope that my original theory might be coming true.
In fact, in a popular Moz Whiteboard Friday from November 2012 called Prediction: Anchor Text is Weakening…And May Be Replaced by Co-Occurrence, I was hopeful again. There was a slight bit of controversy here on how a certain page was able to rank for a keyword without the traditional signs of SEO (the original title mentioned co-citation, and Bill Slawski and Joshua Giardino brought some patents to light – see the post for those links). My first thought – and I can’t bring myself to rule it out – was that it’s none of the above; instead, this is Google ranking based on what it knows about the relations of the topic. Maybe this was a pre-Hummingbird rollout sample? Maybe this is the future of semantic search? Certainly companies buy patents to hold them hostage from competitors. Maybe Google was really ranking based on internal AI and known relationships?
Am I a fanboy? You bet! I think the idea of semantic search is amazing. SEO is nothing if not fuzzy, but imagine what Google could do with this knowledge. Imagine what open graph and schema can do for feeding Google information to build deeper relationships. Couldn’t an expert (ala authorship) feed trust in a certain product? Couldn’t structured data improve Google’s trust of a page? Couldn’t Google more easily figure out the intent of certain searches, and provide more relevant results based on your personalization and those relationships?
What if it could get to the point where I could simply Google the term “jaguar.” Google could know I’m a guitarist, I like Fender guitars, and I’m a fan of Nirvana (hell – it’s a lot less invasive than the data Target already has on me). Google could serve me pages on the Fender Jaguar guitar, the same guitar Kurt Cobain played. Now think about how you could get your clients in front of search results based on their relationships to your prospective searchers’ needs. Yup – exciting stuff.
An entity is an entity. Do this for your clients as well. The entries in Freebase ask for a lot of information that could very well influence your content production for the next year. Make your content and relationships on the web match your entries. At Matt Cutts’ keynote at Pubcon, he mentioned how they’re just scratching the surface on authorship. But I think authorship is just scratching the surface on semantic search. I think the big picture won’t manifest for another few years – but, no time like the present to start optimizing for relationships. At Greenlane we’re pushing all our chips in on some huge changes this year, and trying to get our clients positioned ASAP.
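If you want to start feeding engines this kind of entity data from your own pages, schema.org markup is one obvious lever. Below is a minimal sketch of a JSON-LD Person block – the name, job title, and profile URLs are all made-up placeholders, and exactly which properties Google consumes is its secret, so treat this as a starting point rather than a recipe:

```html
<!-- Hypothetical example: a Person entity with "sameAs" pointers to
     profiles that can corroborate the same facts elsewhere on the web. -->
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Person",
  "name": "Jane Example",
  "jobTitle": "Partner",
  "worksFor": {
    "@type": "Organization",
    "name": "Example Agency"
  },
  "sameAs": [
    "https://plus.google.com/+JaneExample",
    "https://twitter.com/janeexample"
  ]
}
</script>
```

The sameAs property is the interesting one for entity work: it explicitly ties the page to profiles the engines already know, which is exactly the kind of cross-source validation described above.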
On a side note, I have a pretty interesting test brewing with entities, so watch this spot.
For one reason or another, plenty of sites are in the doghouse. The dust has settled a bit. Google has gotten more specific about the penalties and warnings through their notifications, and much of the confusion is no longer… as confusing. We’re now in the aftermath – the grass is slowly growing again and the sky is starting to clear. A lot of companies that sold black hat link building work have vanished (and seem to have their phone off the hook). Some companies who sold black hat work are now even charging to remove the links they built for you (we know who you are!). But at the end of the day, if you were snared by Google for willingly – or maybe unknowingly – creating “unnatural links,” the only thing to do is get yourself out of the doghouse.
Occasionally we have clients that need help. While it’s not our bread and butter, I have figured out a pretty solid, quick, and accurate method for when I do need to pry a website out of the penalty box. It requires some paid tools, diligence, a bit of Excel, and patience, but it can be done in a few hours.
The tools I use (in order of execution):
To get the most out of these tools, you do need to pay the subscription costs. They are all powerful tools. They are all worth the money. For those who are not SEOs and are reading this post for some clarity, let me explain:
To truly be accurate about your “bad links,” you need as big a picture as possible of all the links coming to your site. Google Webmaster Tools will give you a bunch for free. But, in typical Google fashion, they never give you everything they know about in a report. Hell – even their Google Analytics is interpolated. So, to fill in the gaps, there are three big vendors: Open Site Explorer by Moz, Majestic SEO, and Ahrefs.
Wait – so why isn’t Ahrefs and Majestic SEO on my numbered list above? Because Cognitive SEO uses them in their tool. Keep reading…
Note: Click any of the screenshots below to get a larger, more detailed image.
1. Download the links from Google Webmaster Tools.
Click Search Traffic > Links To Your Site > More > Download More Sample Links. Choose a CSV format.
Don’t mess with this template. Leave it as is. You’re going to want to upload this format later, so don’t add headers or columns.
2. Download all individual links from Open Site Explorer to a spreadsheet.
3. Copy only the links out of OSE, and paste under your Webmaster Tools export.
At this point you should have a tidy list of each URL from Google Webmaster Tools and Open Site Explorer. Only one column of links. Next, we head over to Cognitive SEO.
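If you’d rather not copy and paste by hand, the merge step is trivial to script. Here’s a minimal sketch in Python, assuming single-column CSV exports (the filenames are made up, and the two tiny demo files are written out only so the example runs end to end):

```python
import csv

def merge_link_exports(paths, out_path):
    """Combine several one-column link exports into one deduped list,
    preserving the order links were first seen."""
    merged, seen = [], set()
    for path in paths:
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.reader(f):
                # Keep only cells that actually look like URLs
                if row and row[0].strip().startswith("http"):
                    url = row[0].strip()
                    if url not in seen:
                        seen.add(url)
                        merged.append(url)
    with open(out_path, "w", newline="", encoding="utf-8") as out:
        csv.writer(out).writerows([u] for u in merged)
    return merged

# Demo with two stand-in exports (your real GWT/OSE files go here).
for name, rows in [("gwt_links.csv", ["http://a.com/1", "http://b.com/2"]),
                   ("ose_links.csv", ["http://b.com/2", "http://c.com/3"])]:
    with open(name, "w", newline="") as f:
        csv.writer(f).writerows([r] for r in rows)

merged = merge_link_exports(["gwt_links.csv", "ose_links.csv"],
                            "combined_links.csv")
print(merged)  # the shared b.com link appears only once
```

The output file is the same one-column, headerless format described above, so it drops straight into the next step.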
There are a number of SaaS tools out there to help you find and classify URLs and create disavow lists. I’ve heard great things about Sha Menz’s rmoov tool. There’s also SEO Gadget’s link categorization tool (everything they build is solid in my book). I once tried Remove’em with OK results. Recently Cognitive SEO entered the space with their Unnatural Link Detection tool. With a little bit of input from you, it has its own secret-sauce algorithm. I found the system to be quite accurate in most cases, classifying links into three buckets: OK, suspect, and unnatural. More info on the AI here. Also, if you read my blog regularly, you might remember my positive review of their Visual Link Explorer.
First you tell Cognitive what your brand keywords are. Second, you tell it what the commercial keywords are. Typically, when doing disavow work for a client, they know what keywords they targeted. They know they were doing link building against Google guidelines, and know exactly what keywords they were trying to rank for. If the client is shy and doesn’t want to own up to the keywords – or honestly has no idea – there’s a tag cloud behind the form to help you locate the targeted keywords. The bigger the word, the more it was used in anchor text; thus, is probably a word Google spanked them over.
A note about the links Cognitive provides: Razvan from Cognitive tells me the backlink data is aggregated mainly from Majestic SEO, Ahrefs, Blekko, and SEOkicks. That’s a lot of data alone!
Below I’ve used Greenlane as an example. Other than some directory submissions I did years ago, unnatural link building wasn’t an approach I took. But, looking at my keyword cloud, there are some commercial terms that I want to enter just to see what Cognitive thinks. Note, the more you fill in here, the better the results. The system can best classify when at least 70% of anchor text is classified as brand or commercial.
Click submit, and Cognitive quickly produces what it thinks are natural and unnatural links.
Cognitive produces nice snapshot metrics. I can quickly see what links I need to review (if any). In my case, Cognitive marked the directory work I did as suspect. Since I don’t have a manual or algo penalty, I’m not going to worry about work I did when I was a younger, dumber SEO.
But, for a client who has a high percentage of bad links, this is super helpful. Here’s an example of results from a current client:
“This site has a highly unnatural link profile and is likely to be already penalized by Google.” This happens to be an all-too-true statement.
Next, Cognitive added a layer of usability by extending with the Unnatural Links Navigator.
This tool basically creates a viewer that lets you quickly toggle through all your links and (with some defined hotkeys) tag a site as “disavow domain” or “disavow link”. You get to look at each site quickly and make a judgment call on whether you agree with Cognitive’s default classification. 9 times out of 10 I agree with what Cognitive thinks. Once in a while I’d see a URL labeled “OK” where it really wasn’t; I would simply mark it to disavow.
What should you remove? Here’s a page with great examples from Google. Ultimately, though, this is your call. I recommend to clients that we do a more conservative disavow first, then move to a more liberal one if the first fails. Typically I remove links that look like they belong on Google’s examples page. I also remove pages with spun content, forum board spam, xrumer and DFB stuff, obvious comment spam, resource page spam, and completely irrelevant links (like a Viagra link on a page about law). PR spam, directories, and those sites that scrape and repost your content and server info have been around forever – currently I see no penalty from these kinds of links, but if my conservative disavow doesn’t do the job, then my second run will be more liberal and contain these. 9 times out of 10 my conservative disavow is accepted.
This part of the process might take a couple hours depending on how many links you need to go through, but this is obviously much faster than loading each link manually, and a lot more thorough than not loading any links at all. I believe if you’re not checking each link out manually, you’re doing it wrong. So turn on some music or a great TV show, grab a beer, tilt your chair back, and start disavowing.
Once complete, you’ll have the option to export a disavow spreadsheet and a ready-made disavow .txt file for Google.
Here are the full steps to make the most out of Cognitive SEO.
Google wants you to make an effort and reach out to the sites to try and get the link removed. Painful? You bet. But some SEOs swear it doesn’t need to be done (exclaiming that a simple disavow is enough).
To disavow, take the .txt file you exported from Cognitive, and add any notes you’d like for Google. Submit through your Google Webmaster Tools at https://www.google.com/webmasters/tools/disavow-links-main
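For reference, the disavow file itself is just plain text: lines starting with # are comments for your own notes, “domain:” lines disavow an entire domain, and bare URLs disavow individual pages. The domains below are made up, but the format is what Google expects:

```text
# Contacted webmaster on 3/1, no response after two follow-ups
domain:spammy-directory-example.com

# Single comment-spam link; the rest of the site looks fine
http://blog-example.com/some-post/#comment-99
```

Disavowing at the domain level is usually the safer bet for truly junky sites, since spam rarely lives on just one of their URLs.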
But, if you want to attempt to get the links removed, Buzzstream can help! Buzzstream is like a CRM and link manager tool for inbound marketers – easily in my top 3 SEO tools. For prospecting, one (of several) things Buzzstream can do is scan a site and pull contact information. Whether it’s an email address buried deep in the site or a contact form, Buzzstream can often locate it.
By creating an account with Buzzstream, you can upload your spreadsheet of links into it, forcing Buzzstream to try to pull contact information. Choose “match my CSV” in the upload, and tell Buzzstream your column of links should be categorized as “linking from.”
Here’s a sample. Notice the email, phone, and social icons? This is a huge help in contacting these webmasters and asking for the link to be removed.
That’s all there is to it. For anyone who has done disavows in the past, and found it excruciating (as I used to), this will hopefully give you some tips to speed up the process. Of course, if you’re not in the mood to do any of this yourself, there are certainly SEO companies happy to do this work for you.
Any questions with these steps? Email me at firstname.lastname@example.org or leave a comment below.
Ever wonder how powerful some of the oldest SEO recommendations still are? With the birds and the bears (and a little caffeine) changing so much in SEO since 2011, I wanted to see firsthand some of the results we can get from moves like internal linking and title tag optimization. Using my own site as the proving ground, and moving quickly between tweaks and first results to try to exclude any other ancillary update or change, I decided to test some optimizations I still see recommended or used in the field. The set of competing pages I chose below doesn’t move very often, so I thought this might be a good group to experiment with.
Note: It’s important to understand that this is not a controlled test at all. Any single domain I’m competing against could be making some changes at the same time which would naturally skew my results. Let’s take this with a grain of salt and consider all of this directional. This is not advice, this is merely my experience and thoughts. If I get hammered on this in the comments, so help me…
Truthfully I think the tl;dr can be summed up pretty well in a single statement:
[rant] See, the results of these tests turned out as I (and probably most of you) expected. Virtually no gains from the thinnest of tests. There were very few surprises below. Yet these recommendations still show up all the time on lesser-quality blogs – or worse, sometimes from agencies and consultants.
Last week I walked into a pitch where the prospect showed me some of the projects his current neighborhood SEO company is working on. He candidly told me he didn’t know what the SEO company was doing for him (which is why he was entertaining new vendors). With the draft of this post in my head, he started sharing some of the recommendations he was given – some of which coincidentally are listed below. Other recommendations included press release links and quickly churned video production.
Now I’m not one to “negative sell” over a competitor (ie, downplay someone else’s service to promote my own), and I was extremely respectful to this vendor, but I left the meeting really frustrated for this business owner. It took everything I had to keep from blasting this vendor. The business owner is clearly the victim of lazy SEO. He was a great guy trying to run a business and relied on the company to be his SEO hero. I respectfully gave him my different opinion on tactics and strategies without truly speaking my mind. I’m still not sure I shouldn’t have been more truthful.
In case you’re wondering, none of the local services in my screenshots below is the vendor I’m reluctantly protecting. [/rant]
Updated 2-20-2014: Lia Barrad made a great point in the comments that I feel should be added here. Unfortunately I couldn’t persuade a client to allow me to display bigger data, so I was limited to running the tests on our own site. The amount of traffic and testing options I had on this relatively small Greenlane site didn’t give me much opportunity to also show a lift/loss in traffic. I really wanted to share that as well, because I truly think qualified, converting traffic is way higher on the list of valuable SEO KPIs. Instead I was relegated to using garbage keywords like “Philadelphia SEO” that don’t bring much good traffic (I used to rank extremely well for the term and eventually abandoned it because it wasn’t worth the effort in my case).
Enjoy the test!
Situation: On November 17 2013, using Chrome Incognito, my site ranks #11 for a geo-targeted keyword (see graphic below – in this case I don’t want to muddy this test by adding the keyword anywhere on this website except in the testing page).
Click images for larger view
The strongest page on my site is my homepage (which is currently ranking for the keyword above). It has a PA of 52.58, with 420 external links passing link equity from 51 external domains.
My second strongest page is my Outdated Content Finder tool. It got mentions in Moz, Search Engine Land, Search Engine Journal, and picked up from mentions at Mozcon. It has a page authority (PA) of 49.07, with 89 linking root domains, for a total of 100 external, equity passing links. There are already 40 outbound links from this page, with two being to external domains.
On my Outdated Content Finder page, there isn’t a reference to the homepage using any anchor text but “home” in the navigation.
Test: To see if I could pass better PageRank to my homepage, using an exact match anchor text, I implemented the following:
Expectation: In many cases, the Fetch As Google URL submission works really fast (I’ve seen it add a new URL in less than 10 minutes), but I’m not really expecting a jump in rank. Because of the sitewide navigation, where a home link is already embedded, I think this second link may not have much power.
Result: It took a few days, but there was a single-position gain on 11/18 (same as the new cache date). The bump went from position 11 to position 10. Nothing to hang my hat on normally, but for a page jump, I’m somewhat satisfied in this case.
To push the rankings a little higher, let’s try a partial-sitewide, exact anchor link to the same homepage.
Test: 11/19 – My blog has a different sidebar than my non-blog pages. With a widget in WordPress I can add a simple piece of copy with an exact anchor text link:
This isn’t a true fully-sitewide link, and is all one level deeper into the site (http://www.greenlaneseo.com/blog/) but for this experiment I think it’s good enough.
Expectation: I have a number of blogs with a wide variety of backlinks. I still believe sitewide links have power (though limited), and expect to possibly see another position bump.
Result: On 11/24 (6 days after the change), the keyword actually dropped two spots to position 12 (page 2). From what I can observe, no new sites have entered the set.
Since that sitewide link didn’t work too well, I reversed it. Actually, I updated it to push all the links into the Outdated Content Finder page. Maybe consolidating into my second most powerful page will have a positive effect on the same target keyword.
Test: 11/24 – Updated the site-wide copy as follows:
Expectation: Truth is, I expected more from Test 2. With Test 2.1, I’m even less optimistic there will be a positive change. At the least, I’m expecting my target keyword to fall back to position #10.
Result: Apparently better than expected. Now appearing in position 9 for my target keyword since 11/27.
The domains in this set stayed relatively constant throughout this 10 day experiment. Again, I make no claim to this being the results everyone should expect, since we must consider competition, possible backend algorithm changes, and (especially since these are all SEO companies) possible changes by the websites themselves. But, my theories are as follows:
Situation: Thousands of SEOs, websites, and audit tools suggest these two best practices for title tags:
Personally, I’ve rejected this for the last 8 years. Here’s why: I believe Google is more sophisticated, and realizes the target keyword being first in the title isn’t always natural. In Google’s younger days, sure – it’s a signal they could code to capture, but I think it’s too limiting to be a signal today. It’s a common SEO recommendation that surely Google knows about. Second, if a title tag is too long, it gets truncated. That’s not a great user experience, but I’ve never seen evidence of truncated text not helping rank – I’ve only seen the opposite.
Test: To test this, I updated my title tag for my blog homepage on 11/29/2013. Target keyword is in the middle of the title tag. I intentionally caused the tag to truncate. This is a pretty terrible title, but suits the experiment:
On a side note, after creating this terrible title tag, I submitted to Fetch as Google. Within 60 seconds this title tag showed in an incognito search, despite an outdated “Nov 18, 2013” date. That’s remarkable.
On 11/30 through 12/02, I’m ranking position 271 for my keyword. It seems pretty settled there. On 12/03, I have updated the title tag to this:
Expectation: I don’t think the ranking will move. I don’t think keyword position matters.
Result: On 12/7 my current rank for the keyword was still 271. On 1/4/2014, it flopped down to 284. No positive change.
I wholeheartedly believe the volatility of a change is different when a rank is in the hundreds vs. in the tens. Let’s revise the same test with a keyword that is already ranking well. For the keyword Philadelphia SEO, my homepage ranks #6. The title tag is Greenlane SEO – Search, Analytics, and Strategy Services Since 2005. A Philadelphia SEO Company.
Test: On 1/8 let’s see what happens if I change it to Philadelphia SEO Company – Greenlane SEO – Search, Analytics, and Strategy Services Since 2005.
Expectation: I don’t think the ranking will move. I don’t think keyword position matters in this case either.
Result: On 1/12 my current rank for the keyword was still 6. No positive change (but I’m reverting immediately – that’s a terrible title tag just for a supposed SEO value).
I don’t believe that a title needs to be under 70 characters for SEO value to take hold. As mentioned earlier in this post, a truncated title is not great from a marketing perspective. Surely there are better things a user can see than an ellipsis in the SERP link, but when trimming to 70 characters is recommended in order to rank better, I call “shenanigans”.
Test: I’m not going to work too hard on this test because I’ve tested this before. On 1/8/2014, on a blog post called Review of Repost.us, I rank #1 for “review of repost.” The title tag is simply Review of Repost.us. I am changing the title to past 70 characters: Review of Repost.us – A Review By Bill Sebald – Is Repost.Us SEO Friendly? Let’s Find Out! Greenlane Search Marketing
Expectation: I’m expecting no drop in rank whatsoever.
Result: On 1/13/2014, no drop with new ugly truncated title tag.
As expected, tweaking the title tags with these old-school recommendations didn’t do anything. It’s not 2007 anymore.
I do hope you enjoyed the tests. As stated at the beginning of the post, this is not scientific. Take this as directional and do what you may with the information. But for those who still rely solely on these kinds of recommendations to provide clients with SEO services: please consider recommending things that have a bigger impact. If you’re a business person yourself, and you get recommendations like this, please don’t drink the Kool-Aid.
I stumbled upon an interesting service I don’t think many SEOs know about – at least, not the few I’ve asked. It’s called repost.us. Looks like it’s about 2 years old.
Simple premise: Add your site to the database, and others can republish your content. They say, “It’s the wire service reinvented for the web.”
Click any image to enlarge
A user of repost.us can login, search for content, and simply copy and paste the blue embed code (with a couple checkbox options) right into their website. See below – one of my articles, straight from this blog, has been added to their database. This is how a user sees it:
Notice above, circled in red, there is an Adsense block as part of the copied code. This isn’t my Adsense code; instead it appears to be added there by the repost.us team, and does appear to wind up in your posted article. This gives repost.us a chance to monetize for the service. This also gives a publisher, who embeds Adsense, a chance to swing their publisher ID over as well. Interesting way to earn more Adsense clicks.
Right. The dreaded D word. Here’s a site that took my content and reposted it:
Did you notice the attribution links (in red) at the bottom? These particular links don’t show in the source code either (but others do – read on).
<div class="rpuArticle rpuRepost-7af546614f6b5e93c9c6053b466c1a0f-top" style="margin:0;padding:0;">
Let’s face it – the SEO industry has a tendency to stomp a tactic into the ground. Some of us even get lazy (plenty of this kind of junk around). Directory submissions were once wildly valuable, then SEOs started creating directories by the thousands…
</div><!-- put the "tease", "jump" or "more" break here --><hr id="system-readmore" style="display: none;" /><!--more--><!--break--><hr class="at-page-break" style="display: none;"/><div class="rpuEmbedCode">
<div class="rpuArticle rpuRepostMain rpuRepost-7af546614f6b5e93c9c6053b466c1a0f-bottom" style="display:none;"> </div>
<div style="display: none;"><!-- How to customize this embed: http://www.repost.us/article-preview/hash/4917fea1ea6f6df42de6a8f3d7cb3d4d --></div>
See the links in red above? The “The Kind Of SEO I Want To Be (via http://www.greenlaneseo.com/)” links? Those are the only two links that appear to link back to my original, canonical blog post. They live in the source code behind the full injected content. Sadly, they are both the same shortened URL (in this case http://s.tt/1MWo1), but they are at least 301 redirects. If you believe 301s dampen PageRank more than straight links, despite statements from Matt Cutts, then this is probably disappointing.
In my experience, this small amount of duplicate content, with one or two links back to the original document (including 301s), doesn’t seem to cause any duplicate content issues. I’ve had my content posted on Business 2 Community in full with an attribution link, and Google still seems to figure it out. My posts still wind up ranking first – even if it takes a few weeks.
I emailed the team at repost.us and asked for a user count and activity. CEO John Pettitt kindly responded:
“We don’t give exact numbers but you can assume between 10K and 100K sites embed content in any given month. There are over 5000 sites contributing content. We have not quite 4 million articles in the system and we republish between 50 and 200K articles a month.
The average reposted article gets ~150 views per post; that goes up a lot for new content, where it runs ~2,000, and we regularly see content getting 20-50K views for an article if a bigger site picks it up. The usage is very quality sensitive: if it’s content-farm-quality ‘seo bait’ it probably won’t do well. If it’s original, well-written content it will do better.”
Pretty awesome numbers! Unfortunately, I didn’t fare so well.
After running for at least 3 months, with only 6 domains republishing my articles (one apparently being repost.us itself), I received a total of 40 impressions (disregard the chart above that suggests 21; that figure covers just the few reposts shown in the summary). Still, that’s 6 links I got without really doing anything but writing for my own blog.
Also, out of all the posts on my blog (I have posts dating back to 2007), only 6 different posts were shared across the 6 different sites. I did see one year-old post, but for the most part, all the content that got republished was newer. I don’t know whether their system suppresses old posts or it’s just a coincidence.
Finally, after spot-checking the 6 sites that hosted at least one article, all but the repost.us domain were extremely poor: DA under 15 with virtually no external links, according to Moz. Now I’m much less excited about the handful of links I received.
So it wasn’t a success for me, but in light of the numbers John (from repost.us) shared, I could very well be unlucky or simply out of line with what the user base is looking for. I write for the SEO industry, and the users of this service may have no interest in SEO at all. Or maybe I’m just not writing interesting stuff (but I refuse to believe that!).
But I do believe in the power of reposting content. I’m not so afraid of duplicate content that I’d pass up getting more eyeballs on a piece of my content strategy. At the end of the day, republishing for eyeballs – even in traditional print media – has always been a marketing goal. Again, I believe Google is good enough at eventually sorting out most light duplicate content, and repost.us takes precautions to avoid adding noise that could mislead the algorithm about which URL is canonical. We actually just started using repost.us for some of our clients as well, taking note of the different categories the service supports.
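One way to verify those canonical precautions yourself: if a republished page emits a standard `<link rel="canonical">` tag pointing at the original post, a few lines of standard-library Python can pull it out. A minimal sketch, assuming the republisher uses that standard tag (the class name is my own; the greenlaneseo.com URL is from this post):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical" and self.canonical is None:
            self.canonical = a.get("href")

# Hypothetical snippet of a republished page's <head>:
html = '<head><link rel="canonical" href="http://www.greenlaneseo.com/"></head>'
parser = CanonicalFinder()
parser.feed(html)
print(parser.canonical)  # http://www.greenlaneseo.com/
```

If the tag is missing, `canonical` stays `None`, which is exactly the case where you’d worry about the republished copy competing with your original.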
My only concern with the service is that, based on an unfair sample of 6, there may be a lot of spammers republishing content in an article-marketing model (i.e., post everything, monetize with ads). Could the spam links hurt? Probably not, but I would definitely keep my eyes open as an SEO.
My one-sentence bottom-line review: absolutely worth a try. It could yield some great SEO and marketing results, especially when/if the service grows.