Founded in 2005, we're a boutique SEO consulting group with deep experience and
industry recognition. We invite you to browse the site and learn more about who we are,
and more importantly, what we can bring to your business. Partner with us.
 
 
 

Review of Linkody (Daily Link Discovery)

Author: Bill Sebald

There must be thousands of SEO tools. While many tools are junk, a few great ones rise up each year and grab our attention. They’re often built for some very specialized needs. Of all the industries these brilliant developers could build in, they chose SEO. I’m always thankful and curious. As a fan of SEO tools, both free and paid, I’m excited to learn about new ones.

A few months ago I got an email from François of Linkody asking for some feedback. The tool did a nice job of link management and monthly ‘new link’ reporting. Pricing is very low, it’s completely web-based, and it’s very simple and clean. It pulls from the big backlink data providers, and even has a free client-facing option (exclusively using Ahrefs) at http://www.linkody.com/en/seo-tools/free-backlink-checker – great for quick audits. I’ve used it quite a bit myself, and was happy to give a testimonial.

The link management function isn’t new to the SEO space. Many tools, like Buzzstream and Raven, already do it – and they do it quite well. Additionally, link discovery is an existing feature of tools like Open Site Explorer, yet this is an area where I see opportunity for growth. I love the idea of these ‘new link’ reports, but honestly, I haven’t found anything faster than monthly updates. I know it’s a tough request, but I mentioned this to François. By tracking “as-it-happens” links, you can jump into conversations in a timely manner, start building relationships, and maybe shape linking-page context. You might even be able to catch some garbage links you want to disassociate yourself from more quickly.

The other day I received a very welcome response: “I wanted to inform you of that new feature I’ve just launched. Do you remember when you asked me if I had any plan to increase the (monthly) frequency of new links discovery? Well, I increased it to a daily frequency. Users can now link their Linkody account with their Google Analytics account and get daily email reports of their new links – if they get any, of course.”

Sold. That’s a clever way to report more links, and fill in gaps that OSE and Ahrefs miss.

[Screenshot: Linkody’s daily link discovery view]

Upon discovering the new URL, you can choose to monitor it, tag it, or export.

The pros: Linkody picks up a bunch of links on a daily basis that some of the big link crawlers miss. You can opt for daily digest emails (think, Google Alerts). Plus it’s pretty cheap!

The cons: It needs Google Analytics. Plus, for the Google Analytics integration to track the link, the link has to actually be clicked by a user. However, for those who have moved to a “link building for SEO and referral traffic generation” model (like me), this might not be much of a con at all.

What’s on the roadmap?

As François told me, “next is displaying more data (anchor text, mozrank…) for the discovered links to help value them and see if they’re worth monitoring. And integrating social metrics.” Good stuff. I’d like to see more analytics packages rolled in. More data sources? Maybe its own spider?

Conclusion

If you’re a link builder, in PR, or a brand manager, I definitely recommend giving Linkody a spin. It’s a great value. Keep your eye on this tool.

 

 

 





Optimize NOW For Entities and Relationships

Author: Bill Sebald

I remember, a few years ago, blowing the mind of a boss with a theory that Google would eventually rank (in part) based on its own internal understanding of your object. If Wikipedia could know so much about an object, why couldn’t Google? In the end, I was basically describing semantic search and entities, a concept that had already been living on the fringe of the mainstream.

Sketching It Out

Sketching out relationships on a whiteboard

In the last year, Google has shown us that they believe in the value of a semantic web and semantic search. With their 2010 purchase of Metaweb (the company behind Freebase), the introduction of the knowledge graph, the creation of schema.org, and the sudden delivery of a new algorithm called Hummingbird, Google is having one hell of a growth spurt. It’s not just rich snippets we’re talking about, or results that better answer Google Now questions.

We used to say Google had an elementary school education. They understood keywords and popularity. Now it can be argued that Google has graduated and enrolled in Silicon Valley Jr. High School. Comprehension has clearly improved. Concepts are being understood and logical associations are being made. A person/place/thing, and some details about them (as Google understands it), are starting to peek through in search results.

Yesterday was my birthday. Yesterday was also the day I became Google famous – which to an SEO geek is kind of awesome. I asked Google a couple questions (and some non-questions), and it showed me I’m an entity (incognito and logged in):

  • how old is bill sebald
  • what is bill sebald’s age
  • bill sebald age
  • birthday of bill sebald

This produced a knowledge result (like we’ve seen a couple times before). Details on how I got this are illustrated deeper in this post:

[Screenshot: knowledge result for “bill sebald age”]

The comprehension level has its limit. Ask Google “when was bill sebald born” or “what age is bill sebald” or “when is bill sebald’s birthday,” and no such result appears. For some reason an apostrophe throws off Google – query “bill sebald’s age” instead of the version bulleted above, and there’s no knowledge result. Also, reverse the word order of “bill sebald age” to “age of bill sebald” and there’s no result.

Then, ask “bill sebald birthday” and you’ll get a different knowledge result apparently pulled from a Wikipedia page. This doppelganger sounds a lot more important than me.

 

[Screenshot: knowledge result for “bill sebald birthday”]

We know Google has just begun here, but think about where this will be in a few years. At Greenlane, we’re starting entity work now. We’re teaching our clients about semantic search, and explaining why we think it’s got a great shot at being the future. Meh, maybe social signals and author rank didn’t go the way we expected (yet?), but here’s something that’s already proving out a small glimpse of “correlation equals causation.” It doesn’t cost much, it makes a lot of sense for Google’s future, and seems like a reasonable way to get around all the spam that has manipulated Google for a decade.

A new description of SEO services?

I’m not into creating a label. Semantic SEO isn’t a necessary term. You might have seen it in some recent presentations or blog post titles, but to me this is still old-fashioned SEO simply adapting to Google’s growth. This is the polar opposite of the “SEO is dead” posts we laugh at. Someone’s probably trying to trademark the “semantic SEO” label right now, or at least differentiate themselves with it. As SEOs and marketers, we always cared about the intent of a searcher – semantic search brings us closer to that. We always cared about educating Google about our values, services, and products. We always wanted to teach Google about meaning (at least those of us who were doing LSI work and hoping it would pay off). If this architecture becomes commonplace, it becomes part of any regular old SEO’s job duties. Forget a label – it’s just SEO.

The SEO job description doesn’t change. Only our strategies, skills, and education. We do what we always do – mature right along with the algorithms. We will optimize entities and relationships.

Where have we come from, and where are we going?

Semantic search isn’t a new concept.

I think the knowledge graph was one of the first clear indications of semantic search. Google is tipping its hand and showing some relationships it understands. Look at the cool information Google knows about Urban Outfitters. This suggests they also know, and can validate, this information – CEO info, NASDAQ info, etc. Google’s not quick to post up anything they can’t verify.

[Screenshot: Urban Outfitters knowledge panel]

Click through some of the links (like CEO Richard Hayne) and you’ll get more validated info.

[Screenshot: Richard Hayne knowledge panel]

These are relationships Google believes to be true. For semantic search to work, systems need to operate seamlessly across different information sources and media. More than just links and keywords, Google will have to care about citations, mentions, and general well-known information in all forms of display.

Freebase, as expected, uses a triple store. This is a great user-managed gathering of information and relationships. But like any human-powered database or index, bad information can get in – even with a passionate community policing the data. Thus, Google usually wants other sources. Wikipedia helps validate information. Google+ helps validate information.
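
To make “triple store” concrete, here’s a toy sketch in Python – purely illustrative, not how Freebase is actually implemented. Facts live as (subject, predicate, object) rows, and answering an entity question is just pattern matching against them. The facts below mirror examples from this post, not real Freebase data.

# Toy triple store: every fact is a (subject, predicate, object) row.
triples = [
    ("Urban Outfitters", "ceo", "Richard Hayne"),
    ("Urban Outfitters", "stock exchange", "NASDAQ"),
    ("Bill Sebald", "profession", "SEO consultant"),
]

def ask(subject, predicate):
    # A question like "who is the CEO of Urban Outfitters" becomes the
    # pattern (subject, predicate, ?) matched against every stored fact.
    return [obj for s, p, obj in triples if s == subject and p == predicate]

print(ask("Urban Outfitters", "ceo"))  # ['Richard Hayne']

A knowledge panel is essentially the answer to queries like this, cross-checked against other sources before Google is willing to display it.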

The results I got for my age (from Google above) probably came from an entry I created for myself in Freebase. The age is likely validated by my Google+ profile, where I listed my birthdate. Who knows – maybe Google also made note of a citation on Krystian Szastok’s post about Twitter SEOs’ birthdays, where I’m listed too. I’m sure my birthday is elsewhere.

But what about my height? Google knows that too, and oddly enough, I’m fairly sure the only place on the web I posted that was in Freebase:

[Screenshot: knowledge result for “bill sebald height”]

But I also added information about my band, my fiancée, my brother, and my sister – none of whom I can seem to get a knowledge listing for. However, Google seems to have arbitrarily given one to my parents, who as far as I know are “off the grid.”

[Screenshot: knowledge result for Bill Sebald’s parents]

Another knowledge result came in the form of what I do for a living. This one is easy to validate (in this case helped along by several relevant links I submitted through Freebase):

[Screenshot: knowledge result for Bill Sebald’s profession]

This is just what Google wants to show, not all it knows

This is really the exciting part for me. When I first saw the knowledge graph in early 2013, it wasn’t just a “that’s cool – Google’s got a new display interface” type of thing. It was my hope that my original theory might be coming true.

In fact, a popular Moz Whiteboard Friday from November 2012 called Prediction: Anchor Text is Weakening…And May Be Replaced by Co-Occurrence made me hopeful again. There was a slight bit of controversy on how a certain page was able to rank for a keyword without the traditional signs of SEO (the original title mentioned co-citation, and Bill Slawski and Joshua Giardino brought some patents to light – see the post for those links). My first thought – and I can’t bring myself to rule it out – was that it’s none of the above; instead, this is Google ranking based on what it knows about relationships around the topic. Maybe this was a pre-Hummingbird rollout sample? Maybe this is the future of semantic search? Certainly companies buy patents to hold them hostage from competitors. Maybe Google was really ranking based on internal AI and known relationships?

Am I a fanboy? You bet! I think the idea of semantic search is amazing. SEO is nothing if not fuzzy, but imagine what Google could do with this knowledge. Imagine what open graph and schema can do for feeding Google information and creating deeper relationships. Couldn’t an expert (à la authorship) feed trust in a certain product? Couldn’t structured data improve Google’s trust of a page? Couldn’t Google more easily figure out the intent of certain searches, and provide more relevant results based on your personalization and those relationships?
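
As a concrete sketch of “feeding Google information,” here’s one way to build schema.org-style structured data about a person entity. The property names follow schema.org/Person; the values and URLs are placeholders I made up, not real profile data.

import json

# Hypothetical person entity described with schema.org/Person properties.
person = {
    "@context": "http://schema.org",
    "@type": "Person",
    "name": "Bill Sebald",
    "jobTitle": "SEO Consultant",
    "worksFor": {"@type": "Organization", "name": "Greenlane"},
    "sameAs": [
        "http://example.com/google-plus-profile",  # placeholder URL
        "http://example.com/freebase-entry",       # placeholder URL
    ],
}

# The JSON output is what you'd embed in the page (for example inside a
# <script type="application/ld+json"> tag) for engines to consume.
print(json.dumps(person, indent=2))

The more of these machine-readable statements you publish, and the more they agree with other sources, the easier you make it for Google to trust the relationship.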

What if it could get to the point where I could simply Google the term “jaguar”? Google could know I’m a guitarist, I like Fender guitars, and I’m a fan of Nirvana (hell – it’s a lot less invasive than the data Target already has on me). Google could serve me pages on the Fender Jaguar guitar, the same guitar Kurt Cobain played. Now think about how you could get your clients in front of search results based on their relationships to your prospective searchers’ needs. Yup – exciting stuff.

Google is just getting started

An entity is an entity. Do this for your clients as well. The entries in Freebase ask for a lot of information that could very well influence your content production for the next year. Make your content and relationships on the web match your entries. In his Pubcon keynote, Matt Cutts mentioned how they’re just scratching the surface on authorship. But I think authorship is just scratching the surface on semantic search. I think the big picture won’t manifest for another few years – but there’s no time like the present to start optimizing for relationships. At Greenlane we’re pushing all our chips in on some huge changes this year, and trying to get our clients positioned ASAP.

On a side note, I have a pretty interesting test brewing with entities, so watch this spot.





Step-By-Step Google Disavow Process (Or, How To Disavow Fast, Efficiently, and Successfully)

Author: Bill Sebald

For one reason or another, plenty of sites are in the doghouse. The dust has settled a bit. Google has gotten more specific about penalties and warnings through their notifications, and much of the confusion is no longer… as confusing. We’re now in the aftermath – the grass is slowly growing again and the sky is starting to clear. A lot of companies that sold black hat link building work have vanished (and seem to have their phone off the hook). Some companies who sold black hat work are now even charging to remove the links they built for you (we know who you are!). But at the end of the day, if you were snared by Google for willingly – or maybe unknowingly – creating “unnatural links,” the only thing to do is get yourself out of the doghouse.

Occasionally we have clients that need help. While it’s not our bread and butter, I have figured out a pretty solid, quick, and accurate method for when I do need to pry a website out of the penalty box. It requires some paid tools, diligence, a bit of Excel, and patience, but can be done in a few hours.

The tools I use (in order of execution):

1. Google Webmaster Tools
2. Open Site Explorer
3. Cognitive SEO
4. Buzzstream

To get the most out of these tools, you do need to pay the subscription costs. They are all powerful tools. They are all worth the money. For those who are not SEOs but are reading this post for some clarity, let me explain:

To truly be accurate about your “bad links,” you need as big a picture as possible of all the links coming to your site. Google Webmaster Tools will give you a bunch for free. But, in typical Google fashion, they never give you everything they know in a report. Hell – even their Google Analytics data is interpolated. So, to fill in the gaps, there are three big vendors: Open Site Explorer by Moz, Majestic SEO, and Ahrefs.

Wait – so why aren’t Ahrefs and Majestic SEO on my numbered list above? Because Cognitive SEO uses them in their tool. Keep reading…


Step 1 – Gather The Data

1. Download the links from Google Webmaster Tools.

Click Search Traffic > Links To Your Site > More > Download More Sample Links.   Choose a CSV format.

Google Webmaster Tools Links - Step 1

Google Webmaster Tools Links - Step 2

Don’t mess with this template. Leave it as is. You’re going to want to upload this format later, so don’t add headers or columns.

2. Download all individual links from Open Site Explorer to a spreadsheet.

3. Copy only the links out of OSE, and paste them under your Webmaster Tools export.

4. Remove any duplicate URLs.

At this point you should have a tidy list of each URL from Google Webmaster Tools and Open Site Explorer. Only one column of links. Next, we head over to Cognitive SEO.
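
If you’d rather script steps 2 through 4 than do them by hand in Excel, a minimal sketch looks like this. The file names are placeholders, and you may need to adjust the column index and header handling to match your particular exports.

import csv

links = []

# Google Webmaster Tools sample links export: one URL per row
# (drop a header row first if your export includes one).
with open("gwt_sample_links.csv", newline="") as f:
    for row in csv.reader(f):
        if row:
            links.append(row[0].strip())

# Open Site Explorer export: skip the header row, keep the URL column.
with open("ose_export.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader, None)
    for row in reader:
        if row:
            links.append(row[0].strip())

# De-duplicate while preserving order, then write a single column with
# no headers, matching the Webmaster Tools template.
unique = list(dict.fromkeys(links))
with open("combined_links.csv", "w", newline="") as f:
    csv.writer(f).writerows([link] for link in unique)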

Step 2 – Cognitive SEO Unnatural Link Detection

There are a number of SaaS tools out there to help you find and classify URLs and create disavow lists. I’ve heard great things about Sha Menz’s rmoov tool. There’s also SEO Gadget’s link categorization tool (everything they build is solid in my book). I once tried Remove’em with OK results. Recently Cognitive SEO entered the space with their Unnatural Link Detection tool. With a little bit of input from you, it applies its own secret sauce algorithm. I found the system to be quite accurate in most cases, classifying links into three buckets: OK, suspect, and unnatural. More info on the AI here. Also, if you read my blog regularly, you might remember my positive review of their Visual Link Explorer.

First you tell Cognitive what your brand keywords are. Second, you tell it what the commercial keywords are. Typically, when doing disavow work for a client, they know what keywords they targeted. They knew they were doing link building against Google’s guidelines, and know exactly what keywords they were trying to rank for. If the client is shy and doesn’t want to own up to the keywords – or honestly has no idea – there’s a tag cloud behind the form to help you locate the targeted keywords. The bigger the word, the more it was used in anchor text; thus, it’s probably a word Google spanked them over.

A note about the links Cognitive provides: Razvan from Cognitive tells me the backlink data is aggregated mainly from Majestic SEO, Ahrefs, Blekko, and SEOkicks. That’s a lot of data alone!

Below I’ve used Greenlane as an example. Other than some directory submissions I did years ago, unnatural link building wasn’t an approach I took. But, looking at my keyword cloud, there are some commercial terms that I want to enter just to see what Cognitive thinks. Note, the more you fill in here, the better the results. The system can best classify when at least 70% of anchor text is classified as brand or commercial.

Cognitive SEO screenshot 1

Click submit, and Cognitive quickly produces what it thinks are natural and unnatural links.

Cognitive SEO Screenshot 2

Cognitive produces nice snapshot metrics. I can quickly see what links I need to review (if any). In my case, Cognitive marked the directory work I did as suspect. Since I don’t have a manual or algorithmic penalty, I’m not going to worry about work I did as a younger, dumber SEO.

But, for a client who has a high percentage of bad links, this is super helpful. Here’s an example of results from a current client:

Cognitive SEO Screenshot 3

“This site has a highly unnatural link profile and it’s likely to be already penalized by Google.” This happens to be an all-too-true statement.

Next, Cognitive added a layer of usability with the Unnatural Links Navigator.

 Cognitive SEO screenshot 4

This tool is basically a viewer that lets you toggle through all your links and quickly (with some defined hotkeys) tag a site as “disavow domain” or “disavow link”. You get to look at each site and make a judgement call on whether you agree with Cognitive’s default classification. Nine times out of ten I agree with what Cognitive thinks. Once in a while I’d see a URL labeled “OK” that really wasn’t; I would simply mark it to disavow.

What should you remove? Here’s a page with great examples from Google. Ultimately, though, this is your call. I recommend to clients that we do a more conservative disavow first, then move to a more liberal one if the first fails. Typically I remove links that look like they belong on the examples page linked above. I also remove pages with spun content, forum board spam, XRumer and DFB stuff, obvious comment spam, resource page spam, and completely irrelevant links (like a Viagra link on a page about law). PR spam, directories, and those sites that scrape and repost your content and server info have been around forever – currently I see no penalty from these kinds of links, but if my conservative disavow doesn’t do the job, my second, more liberal run will contain them. Nine times out of ten my conservative disavow is accepted.

This part of the process might take a couple hours depending on how many links you need to go through, but it’s obviously much faster than loading each link manually, and a lot more thorough than not loading any links at all. I believe if you’re not checking each link out manually, you’re doing it wrong. So turn on some music or a great TV show, grab a beer, tilt your chair back, and start disavowing.

Once complete, you’ll have the option to export a disavow spreadsheet and a ready-made disavow .txt file for Google.

Here are the full steps to make the most out of Cognitive SEO.

  1. Create a campaign with your site or client’s site.
  2. Once in the inbound link analysis view, click IMPORT on the top right. Choose Google Webmaster Tools as the Import Format. Choose the Google Webmaster Tools / Open Site Explorer .csv file. Click Import.
  3. Once links are appended, click “start the automatic classification now” and follow the steps.
  4. Click “Launch The Unnatural Links Navigator”, and click the “link” column to sort alphabetically.
  5. Toggle on each link to disavow individually, or choose one link per domain and disavow the domain. This will make sense once you’re in the tool.

Step 3 – Submit Disavow To Google – or – Do Outreach To Remove Links

Google wants you to make an effort and reach out to the sites to try and get the link removed. Painful? You bet. But some SEOs swear it doesn’t need to be done (exclaiming that a simple disavow is enough).

To disavow, take the .txt file you exported from Cognitive, and add any notes you’d like for Google. Submit through your Google Webmaster Tools at https://www.google.com/webmasters/tools/disavow-links-main
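
For reference, the disavow file itself is just plain text: one entry per line, a “domain:” prefix to disavow an entire domain, a bare URL to disavow a single page, and “#” for comment lines. A made-up example:

# Contacted the owner of spamdomain1.example on 3/1/2013 to ask for
# removal but received no response.
domain:spamdomain1.example

# Page-level disavow for a single bad link.
http://spamdomain2.example/some-page-with-a-paid-link.html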

But, if you want to attempt to get the links removed, Buzzstream can help you! Buzzstream is like a CRM and link manager tool for inbound marketers. Easily in my top 3 SEO tools. For prospecting, one (of several) things Buzzstream can do is scan a site and pull contact information. From an email that appears deep in the site, to a contact form, Buzzstream can often locate it.

By creating an account with Buzzstream, you can upload your spreadsheet of links into it, forcing Buzzstream to try to pull contact information. Choose “match my CSV” in the upload, and tell Buzzstream your column of links should be categorized as “linking from.”

Here’s a sample. Notice the email, phone, and social icons? This is a huge help in contacting these webmasters and asking for the link to be removed.

buzzstream screenshot

Conclusion

That’s all there is to it. For anyone who has done disavows in the past, and found it excruciating (as I used to), this will hopefully give you some tips to speed up the process. Of course, if you’re not in the mood to do any of this yourself, there are certainly SEO companies happy to do this work for you.

Any questions with these steps? Email me at bill@greenlaneseo.com or leave a comment below.

 

 

 





Review of Repost.us

Author: Bill Sebald

I stumbled upon an interesting service I don’t think many SEOs know about – at least, not the few I’ve asked.  It’s called repost.us.  Looks like it’s about 2 years old.

Simple premise: Add your site to the database, and others can republish your content.  They say, “It’s the wire service reinvented for the web.”


Repost.us Homepage

A user of repost.us can log in, search for content, and simply copy and paste the blue embed code (with a couple checkbox options) right into their website.  See below – one of my articles, straight from this blog, has been added to their database.  This is how a user sees it:

Repost.us Sample

Notice above, circled in red, there is an AdSense block as part of the copied code.  This isn’t my AdSense code; it appears to be added there by the repost.us team, and it does wind up in your posted article.  This gives repost.us a chance to monetize the service. It also gives a publisher who embeds the content a chance to swing their own AdSense publisher ID in as well. Interesting way to earn more AdSense clicks.

What About Duplicate Content?

Right.  The dreaded D word.  Here’s a site that took my content and reposted it:

Sample republished SEO post

Did you notice the attribution links (in red) at the bottom?  These particular links don’t show in the source code either (but others do – read on).

Now here’s the cool part.  Search in the source code for this specific body of copy and you won’t find it.  Neither will Google.  JavaScript injects this copy through the embed code, which looks like this:

<div class="rpuEmbedCode">
<div class="rpuArticle rpuRepost-7af546614f6b5e93c9c6053b466c1a0f-top" style="margin:0;padding:0;">
<script src="https://1.rp-api.com/rjs/repost-article.js?3" type="text/javascript" data-cfasync="false"></script><a href="http://s.tt/1JrGh" class="rpuThumb" rel="norewrite"><img src="//img.1.rp-api.com/thumb/6754722" style="float:left;margin-right:10px;" /></a><a href="http://s.tt/1JrGh" class="rpuTitle" rel="norewrite"><strong>I&#8217;m Not Afraid Of A Google Update Against Guest Posting</strong></a> (via <a href="http://s.tt/1JrGh" class="rpuHost" rel="norewrite">http://www.greenlaneseo.com/</a>)<p class="rpuSnip">
Let's face it - the SEO industry has a tendency to stomp a tactic into the ground. Some of us even get lazy (plenty of this kind of junk around). Directory submissions were once wildly valuable, then SEOs started creating directories by the thousands&hellip;
</p>
</div>
</div><!-- put the "tease", "jump" or "more" break here --><hr id="system-readmore" style="display: none;" /><!--more--><!--break--><hr class="at-page-break" style="display: none;"/><div class="rpuEmbedCode">
<div class="rpuArticle rpuRepostMain rpuRepost-7af546614f6b5e93c9c6053b466c1a0f-bottom" style="display:none;">&nbsp;</div>
<div style="display: none;"><!-- How to customize this embed: http://www.repost.us/article-preview/hash/4917fea1ea6f6df42de6a8f3d7cb3d4d --></div>
</div>

Natively the code will only show a snippet, like something you might see in a <noscript>. This is all Google will see (which I confirmed by turning off JavaScript in my browser):

[Screenshot: the snippet as seen with JavaScript disabled]

See the links in red above?  The “The Kind Of SEO I Want To Be (via http://www.greenlaneseo.com/)” links?  Those are the only two links that appear to link back to my original, canonical blog post.  They live in the source code behind the full injected content. Sadly they are both the same shortened URL (in this case http://s.tt/1MWo1), but they are at least 301 redirects.  If you believe 301s dampen PageRank more than straight links, despite statements from Matt Cutts, then this is probably disappointing.
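
If you want to check what a non-JavaScript crawler can see without toggling browser settings, a quick sketch like this works: fetch the raw HTML and search it for a phrase from the article body. The URL here is a placeholder; the phrase comes from the snippet above.

import urllib.request

url = "http://example.com/page-that-reposted-my-article/"  # placeholder
phrase = "the SEO industry has a tendency to stomp a tactic"

# Fetch the raw HTML, the way a crawler that doesn't execute
# JavaScript would receive the page.
html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")

if phrase in html:
    print("Body copy is in the raw source and crawlable.")
else:
    print("Body copy is injected client-side; only the snippet and links are crawlable.")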

In my experience, this small amount of duplicate content, with one or two links back to the original document (including 301s), doesn’t seem to cause any duplicate content issues.  I’ve had my content posted on Business 2 Community in full with an attribution link, and Google still seems to figure it out.  My posts still wind up ranking first – even if it takes a few weeks.

Seems SEO Friendly Enough… But How Were The Results?

I emailed the team at repost.us and asked for a user count and activity.  CEO John Pettitt kindly responded:

“We don’t give exact numbers but you can assume between 10K and 100K sites embed content in any given month. There are over 5,000 sites contributing content. We have not quite 4 million articles in the system, and we republish between 50 and 200K articles a month.

The average reposted article gets ~150 views per post; that goes up a lot for new content, where it runs ~2,000, and we regularly see content getting 20-50K views for an article if a bigger site picks it up. The usage is very quality sensitive: if it’s content-farm-quality ‘SEO bait’ it probably won’t do well. If it’s original, well-written content it will do better.”

Pretty awesome numbers!  Unfortunately, I didn’t fare so well.

My Repost.us data

After running at least 3 months, with only 6 domains republishing my articles (one apparently being repost.us itself), I received a total of 40 impressions (disregard the chart above, which suggests 21 – that’s just for the few posts they show in the summary).  Still, that’s 6 links I got without really doing anything but writing for my own blog.

Also, out of all the posts on my blog (I have posts dating back to 2007), only 6 different posts were shared through the 6 different sites.  I did see a year-old post, but for the most part, all the content that got republished was newer.  I don’t know if that’s because their system chose to suppress old posts, or just a coincidence.

Finally, after spot-checking the 6 sites that hosted at least one article, all but the repost.us domain were extremely poor – DA of less than 15, with virtually no external links according to Moz.  Now I’m much less excited about the handful of links I received.

Conclusion

So it wasn’t a success for me, but in light of the numbers John (from repost.us) shared, I could very well be unlucky or simply not in line with what the user base is looking for.  I write for the SEO industry.  The users of this service may very well not have any interest in SEO.  Or, maybe I’m just not writing interesting stuff (but I refuse to believe that!).

But I do believe in the power of reposting content.  I’m not afraid of a little duplicate content if it gets more eyeballs onto a piece of my content strategy.  At the end of the day, republishing for eyeballs – even in traditional paper media – was always a marketing goal.  Again, I believe Google is good enough at sorting out most light duplicate content eventually, and repost.us has taken precautions to avoid adding noise that could mislead the algorithm about the canonical URL.  We actually just started using repost.us for some of our clients as well, taking note of the different categories the service supports.

My only concern with the service is that, based on an unfair sample of 6, there may be a lot of spammers republishing, looking to achieve an article-marketing type of model (i.e., post everything, monetize with ads).  Could the spam links hurt?  Probably not, but I would definitely keep my eyes open as an SEO.

My one sentence bottom line review: Absolutely worth a try.  It could yield some great SEO and marketing results, especially when / if the service grows.





An SEO Fail For Candie’s – A Big Brand Audit

Author: Bill Sebald

My fiancée loves shoes.  Like the stereotype, she has a closet full.  I don’t get it.  I own three pairs, and one of them is flip-flops.  One afternoon she came into my office and said, “You’re the Google geek – find me these shoes.” She had a pair of Candie’s that broke right in the middle of the sole. It turns out she had searched for about 45 minutes looking for these shoes, to no avail – I suspect much longer than the average person would.

Challenge accepted – I’m supposed to be good at this, but within 30 minutes I couldn’t find anything either.  I used everything from reverse image search to the most descriptive keywords I could imagine.  I looked in CSEs, Pinterest, Amazon, and a few major shoe retailers. I went deep into supplemental pages of smaller retail sites (which is a very scary place).

The entire internet could not find a pair of these shoes.  It’s like they never existed.

If you’re a retailer in this day and age, and you’re making your loyal customers work this hard, you’re doing something seriously wrong. Let’s review some of the areas Candie’s could improve, and maybe relate this to your own business.  All along, think about the post-sale customer journey as well (which is the category we fit into, where my fiancée started out with brand loyalty and slowly lost it by the end of the event).

Doing a quick content audit

Candie’s is a brand started in the 1980s.  Kohl’s has exclusive rights to all their different product lines except shoes.  Thank you, Wikipedia.

A quick browse of results in Google’s blog search shows a brand that’s interested in being associated with celebrity.  They pursue (and promote) famous spokeswomen.  So far this doesn’t bring much topical diversity.   Every headline I found features a blurb about Hayden Panettiere or Britney Spears, etc.  Candie’s got a link from USA Today (sort of), but the only thing the piece spoke about was how Vanessa Hudgens is the new face of Candie’s.  Too bad the link only went to Kohls.com and didn’t mention anything related to the value of the products, or this could have been a much more valuable link.

Why didn’t the post say more?  If what exists on the web is a clue, it might be because there’s no content strategy helping out the cause.  For all I know, USA Today got a press release about the Vanessa Hudgens news and, with a quick Google search, couldn’t find anything else to say either.  Yes – news outlets do work like that.  They use Google too. Poor Candies.com got the shaft.

I checked forums and Twitter chatter (using Candie’s related keywords) – but didn’t find much. With all these endorsements I assumed there’d be more fan chatter.  Do the customers really have nothing to pine about?

So let’s try Facebook.  Candie’s has 669,000 likes.  This looks promising.  Lots of likes, and, well… few comments.  One post asked the question “what do you love most about fall?”  There were only 14 comments.  952 likes, but 14 comments.  A low engagement rate even for a blasé, phoned-in question.

There are more celebrities on the Facebook page, and a few links, but most go to Kohl’s shopping pages or other Facebook pages.  This looks like an intentional closed loop, which doesn’t seem to be generating any critical mass whatsoever.  I have to think those celebrities weren’t cheap, and they’re underused.

How about Twitter?  They have the obligatory Twitter account.  However, the tweets are quite busy being promotional, like the Facebook page (not a best practice these days), with some additional use of cutesy Instagram pictures.

During our scavenger hunt, we wanted to see if their social team would handle a request:

[Screenshot: our tweet asking Candie’s for help finding the shoes]

Amy (my fiancée) even jumped in.  Neither of us got a reply.  It doesn’t look like they use Twitter for customer service, which is really pretty disappointing.  The web – and your customers – expect this of you.  It’s 2013.

If I were working with Candie’s, I’d be all over engagement and content creation.  I’d want more out there than just which celebrity we have in our campaigns.  I’d want people to start associating my products with quality, design, or trendsetting.  I’d want to start showing off how well my shoes work with day and night outfits.  I’d want a resource center for my shoppers to see how shoes look with certain styles.  I’d want my shoes to rank for something more than just the product description of an ecommerce site.  Hell, I’d even be pleased to see an outdated infographic at this point.  I’m sure the C-suite at Candie’s has plenty of adjectives and analogies for their shoes that they toss out in the board room, as well as qualities they boast about and want to represent.  I certainly couldn’t figure it out from their online output.

To do more than rank for the brand name, Candie’s needs a voice.  Candie’s needs topics and expertise.  There’s certainly not much now to pull in any long-tail search traffic.  There’s brand authority and the likelihood of ranking for shoe-related queries, but they just don’t have the context.  Checking out a tool like BuzzSumo, it’s pretty clear people are sharing articles Candie’s could have been writing – but hasn’t tried to.

Content marketing would only grow their share.  It would make the consumer persona(s) much more valuable.

Old Fashioned SEO

OK, so Candie’s doesn’t succeed on the content or social front, in my opinion.  How about good old traditional SEO?  How are the rankings?  Not good, according to SEMrush.  A total of 7 keywords in their database, and all brand terms (save one).

[Screenshot: SEMrush keyword data for Candies.com]

Take a look at the homepage.

Right off the bat, they hide the shoes section (it took me past my 3-second “catch me” time frame), but let’s just look at the basic SEO.

[Screenshot: Candies.com homepage]

Now take a look at the homepage naked (CSS off):

Candie's no CSS

This could very well be the least SEO friendly homepage of a major brand I have ever seen.

In fact, the whole website is really just one hero graphic after another, ultimately funneling traffic back into Kohl’s. Is this mandated by the Kohl’s ecommerce agreement? Maybe, but if Wikipedia is right, that doesn’t apply to shoes – the hidden category.  Once I got to http://www.candies.com/spring2012/weHeart.asp, I couldn’t even find my way back to shoes.

I found 16 title tags.  All but three said nothing more than “Candie’s”.  Virtually no spiderable text anywhere.  This site just isn’t trying.  The competitors must be loving that!
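
Spot-checking titles like this is easy to script, by the way. A minimal sketch in Python – the URL list is a placeholder for whatever pages your crawl turns up:

import re
import urllib.request

urls = [
    "http://www.candies.com/",
    "http://www.candies.com/spring2012/weHeart.asp",
]

for url in urls:
    # Pull each page and extract the contents of its <title> tag.
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    match = re.search(r"<title>(.*?)</title>", html, re.I | re.S)
    print(url, "->", match.group(1).strip() if match else "(no title tag)")

Run that over a full crawl export and the thin titles jump right out.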

What about backlinks?  263 linking root domains, and no real footprints of SEO work here.

I’m starting to wonder if a heads-down presence is intentional.  Since they’re not an ecommerce store (and divert most of their retail traffic to http://www.kohls.com/feature/candies.jsp), maybe they’re afraid to rank too well, as it may take away from a shopping page.  That would be “tinfoil hat” behavior the likes of which I’ve never seen before.

What about the Kohl’s site?

The landing page on the Kohl’s site isn’t much better (cached here for  posterity).  Look at the adorable SEO copy under the 70% graphic spots.  At least the anchor text matches the destination pages’ title tags.

But I can’t wrap my head around the hero-spot video.  At this stage in the game, does Kohl’s really want to be doing the branding for Candie’s?  Hit play, and prepare to be distracted from your purchase for 60 seconds watching behind-the-scenes footage of a photo shoot.  Kohl’s makes no other attempt to build the brand or tie this video to products.  I stick by my ecommerce rule of thumb – if the graphics/video take up 75% of your real estate, they’d better be prepared to drive 75% of your page sales.

With precious time to encourage the interest of a searcher, where the back button is ingrained in every shopper’s brain, I would be promoting more shopping funnels.  Not distractions.

What else could they do (aside from the obvious)?

If we were hired to do Candie’s SEO, while assuming probable bureaucracy and politics the brand may have with their ecommerce agreement, I would certainly work to fix the items I cried about above.  That’s a no-brainer.  But that would only get them to the stadium.  If they wanted to compete, they’d have to keep swinging.

They need some big ideas and quick.

What better ways can they use the celebrities?  How can we see them in less phoned-in campaigns?  How about bringing the controversial ads back!  The web has only gotten more interactive – get creative (again).
How can they better serve those looking for old shoes (like my fiancée)?  What about a simple blog highlighting each shoe with descriptive, representational content?  So when someone searches for “black Candie’s shoes with buckles,” something from Candies.com might actually show up.  If the products are out of stock or discontinued, the site could recommend alternatives.
  • Nowadays you need to prove to Google that you’re an expert. Does Candie’s have any experts?  Get them front and center.  Let’s meet them, and learn from them.  Get them writing.  Do your research to learn all the questions your potential customers have, and answer the hell out of them.
How about something playful – a contest where shoppers share their styles?  They take photos of themselves in Candie’s products and compete for a prize.  Candie’s could post these photos (with links to the product pages). Think that might earn some links from proud nominees?  It will, while doing an awesome job with deep linking.

Summary

This post assumes the Candie’s brand actually wants to be bigger online than they currently are.  That may be erroneous.  Also take into consideration that this is not an exhaustive audit by any means (I run an SEO company, but Candie’s is not a client).  If I’m wrong in this audit, then enjoy the lessons anyway.  The truth is, from only a few hours of scanning, the opportunities for SEO leap out at you.

Amy (my fiancée) never did hear from Candie’s.  She never found a single instance of the shoe, but her diligent searching led her to discover another brand that had similar shoes. They got the sale.  If this is happening on a large scale, it’s scary to think how many sales Candie’s has lost in the last decade.

On the contextual front, SEO is more than just keyword-level stuff (as Hummingbird reminded some of us with a baseball bat).  While brands may have a certain upper hand, they still need to do search marketing. Candie’s is an example of a brand that’s getting buried for anything but their own brand terms. With great power comes great responsibility – aid your customers post-sale, and they’ll reward you.  There must be other customers (in addition to my fiancée) who love Candie’s shoes.  I struggle to believe all the followers they have are there for Carly Rae Jepsen pictures.  In terms of topics, there’s not much to read, so there’s not much to spread virally.  If I were a brand advocate, I’d be pretty hungry for something more from the website.  I’d be flatly disappointed.

The truth is, Candie’s is not alone.  I’ve done a lot of big brand work, and while big brands usually have a big backlink portfolio due to being top of mind for a lot of bloggers, there are many instances where the content output is very low.  Attempts were never made to be more than a couple of marketing campaigns or a logo.  While Candie’s certainly works with ad agencies, a company or specialist who knows search would be an amazing addition to their arsenal.

By the way, Amy also recently found the box.  For any women wondering what shoes these are, or if they were actually real (as I began to question), here you go.  Dazz black, baby. I’m probably going to get in trouble for showing her shoe size.

Dazz Black Candie's Shoes





Google+ SEO Tips For The Local Business Owner

Author: Bill Sebald

Nobody loves Google+ as much as Google. So much so, they’re using Google+ Pages as the destination URLs in the packs now. Looks like you don’t even need a website anymore.


[Screenshot: local pack result with a Google+ page as the destination URL]

 

What do you notice from this screenshot? First, I apparently need Google’s help spelling “collision.” Second, if you click the blue link or the Google+ page link, they both go to the same place – a very thin Google+ business listing:


[Screenshot: the thin Google+ business listing]

This page may not have even been claimed. No reviews. And Google thinks this is a better result than the other, fleshed-out Google+ local pages and direct websites? Google doesn’t even have any entities showing for this listing.

This isn’t the first time SEOs have seen this.

Local Search Is The Land Of The Lost

Google has a lot of products. I can only imagine how difficult it is to manage them all internally. I have no idea how big their local team is (likely smaller than the “web” team, which is already surprisingly small), or what their company goals are with this product, but this is a vital vertical for many small businesses that just doesn’t seem to get the love. This whole integration feels like web search from 2002 – very little made sense there either.

Since Google+ and Google Places merged, forming this mess called Google+ Local, every SEO has been recommending you flesh out a Google+ business page. Our recommendations were to flesh out your Google+ and Places pages, even if you’re already having trouble finding time to tend to your Facebook page. We said, “Don’t complain, just do it – Google needs your information to rank you in the packs.”

From the looks of these screenshots, it doesn’t look like we were necessarily right, eh? If Google does indeed have an algorithm biased toward any Google+ site, then maybe you don’t need to do the work? Read on…

Can You Understand Google’s Local Algorithm?

I don’t really try. I’m not sure Google really understands it either. I suspect their hands are full trying to tame the jungle. For almost a decade I’ve described the “Google (web) algorithm” as a rope. A rope has hundreds of threads woven in (all the algorithms working together to make the big algorithm). Google Local seems more like a bag of hair. But to be fair, Google web search turned into a bag of hair in the early 2000s as well. They’re only now starting to braid it.

With local and Google+, we have a business page, a local page, maps, and pack listings. They just don’t all tie together nearly as well as they should.

Wut Do? – Do Marketing!

SEO is still marketing. I’m frustrated to see Google+ being so awful, but I believe it will get better. I have local clients I adore, and seeing things like this makes me mental. Google doesn’t always reward content; Google doesn’t always reward your support. Google has made many local business owners I speak to feel jaded by “failed” SEO. To be honest, sometimes an SEO can’t hit a specific goal if Google simply doesn’t want it to be so. You have to give it months – sometimes years – to see results. If you want the internet to work for you, you have to accept it could take a long time.

But if and when Google does shift in your favor, your customers will benefit from your hard work. When you’re doing SEO (or having a firm do it for you), make sure you’re doing marketing too.

Despite the bag-of-hair algorithm throwing a few freebies to local companies who didn’t do any real marketing, there’s a lot of gold for the business owners who did find time in their busy day to keep the content river flowing through their Google+ account.

Mini Case Study

Michele H., local wedding photographer (asked to remain private – apparently a competitive field).
Goal: fill up the fall with jobs, with no expenses.

I have a friend in the local photography space. Her name is Michele. She moved to the Philadelphia suburbs right before Christmas 2012 to stay with her sick mother, and wasn’t really set up financially. As a wedding photographer, she didn’t have a strong word-of-mouth network in Philly, something many local services rely heavily upon.

In such a short window, I figured a social content strategy and local search was the way to go (forgoing general web SEO). I helped her get her photography service up and running with a quick, clean, SEO-friendly platform (simply WordPress), and pushed Google+ on her hard (as an experiment on my end). She spends most of her day retouching photos, and naturally didn’t want to do any more on a computer than she had to. Still, we created a balanced plan to create engagement with only a few hours a month. This included:

  • Minimum 5 shares a week on Google+
  • Minimum 5 original thought pieces
  • Citation building
  • Post anything created solely for the blog
  • 10 interactions on relevant Google+ pages
  • Always driving Google+ traffic to her blog, and blog traffic to her Google+ (create a loop)
  • Respond to all comments quickly and humbly
  • Soft promotion of services

All the authorship stuff was also put in place. It took about 3 weeks in May to start showing her photo in the search engine result pages.

The Waiting Game

For months nothing came of it. I was rarely involved, assuming she was following the steps. I didn’t do any other SEO work for her.

The content she created sat around on Google+. She wasn’t getting into the packs, and more importantly, she wasn’t getting any pack traffic or Google+ referrals. Everything she did on Google+ she mirrored on Facebook, which was semi-active (helped mitigate any feeling of it being a huge waste of time), but let’s just keep this mini case study on Google+ and related website content.

She was worried and a little stressed.

Then suddenly she got a few followers in April who started sharing her stuff. More saw it and circled her. In March, 29 people had her in their circles. In July, two thousand. By checking out the most-shared stuff in Analytics, we knew what flavor of content she needed to keep writing (in her case, what wedding photographers can use to differentiate themselves, and unique wedding photo ideas). She was becoming a brand on Google+.

She started taking small jobs when weddings weren’t happening, and asked those clients to consider leaving a review on her Places page. Happily for her, they were all favorable.

She also got a few organic links and upward trending traffic to her blog (located on a folder off the root domain). Things were starting to happen slowly. 10 more visits here, 20 more visits there, with a low bounce rate. Not big numbers, but to a local wedding photographer, this was helpful. 45% of her closed leads came from this traffic from April to July.

In her vertical, her Google+ may not have been ranking well at first, but it was a vital social component and cause of the informational searches she was now receiving. Attribution reports showed some decent interplay. Impressions and actions started to go up. It’s all connected.

The Result

She had one goal: fill up the fall with wedding shoots in a new town. She succeeded last week. Added benefit: zero cost. Minor effort leading to a big win for a small business.

The opportunity is there for a business of any size – the bigger you are, the more work you need to put into it. In this day of big brands getting the lion’s share of the rankings and traffic, the small business can still rock long-tail and local search. It’s not hard or expensive – just awkward and confusing… but completely valid.

Protip: Click the Sleestak.






Building Links With Content Refresh

Author: Bill Sebald

I enjoy footprints, advanced operators, and link building with content.  I like the personalized, conversational nature of the link building I do.  Though not fast, it’s fun and very impactful. It’s like a cannon versus an AK47.

Naturally I was drawn to broken link building.  Garrett French’s ridiculously cool Broken Link Building tool is a great resource.  While some are great at this, my success rate is unfortunately low with this tactic.  My assumption is that the resources are usually so old (hence, the link breaks because the page is abandoned) that the webmaster doesn’t even care that it’s broken.  Thus, no response from the webmaster following my inquiries.

But they do seem to respond more often when they have wrong information on their site.

I’ve always been able to use the Broken Link Building tool to get content ideas and find good blogs, but thought, “Instead of fixing the link by suggesting my own content, why not produce content that fixes a blogger’s on-page content?”

Time Is The Enemy Of Information

In time, things become outdated.  Data refreshes.  Ideas expire.  Studies prove other studies wrong.  Trends, interests, and feelings change.  The problem with the web is that you’re hard-pressed to keep your website 100% current.  How often have you searched for the answer to a question only to find a 4-year-old, outdated article?  Google does a poor job with QDF, and simply needs help detecting the latest, most accurate information.

That’s where this tactic kicks in.

Each of my clients (or past employers) is an expert in something.  Once I figure out what these strengths are, and identify who can write the content, I search for sites that have wrong information.

Here are a couple of opportunities I was able to “refresh” with this tactic:

  • 5 great iOS apps that help you manage your time (updated a 4-year-old article with something much more current; 2 of the apps were not even in the App Store anymore).
  • Fracking does not cause groundwater contamination (one site had this claim, whereas an article we produced had proof of contamination in the Marcellus Shale region from fracking).

Each new article we placed had a link to our site either in the byline or in the body itself. This tactic works great with a content strategy.  Throw the results into Buzzstream and you’re on your way.

Introducing The Outdated Content Finder

I liked this idea so much that I wanted a tool to quickly find these opportunities.  I asked Mike Angstadt, a great Philadelphia developer and SEO, if he thought he could help me build it.  In 24 hours, the Outdated Content Finder was born.  Mike is the man, so hit him up on Twitter.

Give it a spin.  Click the logo below:

It’s still in beta, and will grow to include more features.  I’d love your feedback in the comments below.





How Did I Beat The Duplicate Content Game?

Author: Bill Sebald

(There have been a few updates to this article at the end; the title of this article has been changed to reflect all the data.  I highly recommend you read the comments as well).

Yesterday I posted an article on quick link wins from Moz’s new Fresh Web Explorer.  I happened to catch the announcement of the tool and tested it immediately.  I wrote up a quick post about an hour later.  There were comments on Twitter, inbound.org, and my own blog about how fast I produced the article.

So, being that it was quick, I wondered if I had a shot at ranking for “Fresh Web Explorer” this morning. After all, Google’s all about speed and QDF now, right?

[Screenshot: the SERP for the target term]

 

Unfortunately, my domain didn’t make the first page.  But two sites that republished my article did.  My post was the canonical version – Google is supposed to figure that out, right? Especially since my page was indexed before the other two.  Let’s look at this deeper.

I get republished by Business 2 Community.  They hand-pick posts from my feed that might suit their members.  Yahoo is a publishing partner of B2C, so they in turn publish some of B2C’s posts. If you look at the image above, both of those domains are ranking for my article. Authorship didn’t help me here (not that I expected it to), and the links back to my site didn’t clue Google in.  Nor is there a canonical tag in place on B2C or Yahoo.  From the looks of it, I appear to have been beaten by sheer domain authority.  Not only that, I appear to have been completely filtered out of the first 100 results.

To me, this is Google doing a poor job.

So it got me thinking – what else can I do to signal to Google that my original post should be shown in place of one of these re-publishers?  I could ask B2C to remove my posts, citing duplicate content issues, but I like the visibility I get there (and on Yahoo).

The Long Shot

If you look at my single post pages, my template actually removes the time stamp.  It has the date, but not the actual hour the post went live.  Could that be the magic bullet to get Google to value my original post higher?

[Screenshot: post meta showing the date but no time stamp]

As of 10:20am (of day 2), I have coded the time stamp into my WordPress single-post template. Again, I think this is a long shot.  Because it’s easily faked, would Google actually factor that in?

[Screenshot: post meta with the time stamp added]

Now we wait to see if Google actually pays attention to the posted time.  I’m also going to “fetch as google” and submit to the index again, since some think that might work as an old-school ping.  Can’t hurt.

Day 2 

Success.  Google decided to list me on the first page today (a fresh cache is listed for today, March 8th), right under a great post that came out by Rhea at Outspoken Media. The Yahoo listing still exists, but the blended News listing (Business 2 Community) has dropped.

Updated SERPs

So other than adding the time stamp (my long shot), what changed?

Well, let’s check FWE to start.  According to the tool, I got two new linking root domains (aside from the Yahoo and B2C links).  One is from the result right above me, the strong Outspoken Media. Clearly, even as I sing FWE’s praises, I know it can’t catch all the links out there – there may be more.  Additionally, Yahoo and B2C probably received links too (at this time, it’s still too soon to see them in OSE, Majestic, or Ahrefs).

Second, since the news vertical dropped off, it could specifically have been my barrier to entry. While that algorithm runs differently from Google’s general search algorithm, I could see an IFTTT-type rule at work. Possibly Google says, “if three copies of the same post appear on a page, kill the least authoritative.” When the freshness of the news vertical times out, maybe my site is granted its appropriate return. This still doesn’t speak highly of Google’s internal canonicalization abilities.

So What’s My Best Guess?

Correlation doesn’t equal causation, so I have to go with my gut until I can get more information. Currently I suspect the answer lies in one of the above three explanations.

I’m publishing this post now, but expect to come back to it as I think it through a little more. Would love to see your thoughts in the comments!

Update 3/28/2013: Well, it’s been about a month, and my page no longer ranks for the term. The Yahoo duplicate content listing still does (on the first page as of this writing). It looks like the QDF boost and any internal canonicalization Google may do have worn off. Some of the pages now dominating are strong, unique pieces; some are low quality.

Quite disappointing. FAIL… and I’ve updated the title of this post accordingly :(

At the very least, hopefully this post is useful for someone in the same situation to understand more about how Google is currently handling this issue. I urge you to read the comments, as more information is contained there.

Update 11/17/2013: Much time has passed. I’ve been noticing that duplicate content issues have seemed less and less dangerous for some of my clients. In the past couple of months I saw Google start getting it right for two clients in particular, who struggled with some of the same issues I noted above.

I remembered this post and decided to do the query again.  Now the duplicate pages are completely out of the index, and my URL is the first (and only ranking) piece.  It came back.  I’m quite pleased, actually.

It looks like Google may have gotten its act together a bit more in recent months.

Once again, I updated the title of this post accordingly :)



Like this post? Share it!


Related Posts


Getting Low Hanging Links Using Fresh Web Explorer

Author : Bill Sebald


Today SEOmoz announced a new tool – Fresh Web Explorer (in beta, available to Pro accounts).

It’s fast. It’s big. It’s sexy. It’s simple. It tracks links and mentions in aggregate, and so far has proven to be faster than Google Alerts and Topsy. This is especially cool for SEOs banking on co-occurrence and citations in the future of ranking. Plus, it has a feed authority feature (in the vein of the defunct AlertRank), which could be pretty useful for many.

Example 1:

It provides a legend of search operators, most of which we can guess if we’re fans of the operators that work in Google. Quotes, minus signs, “OR” – they work great. I picked a few terms that I know get used in conjunction with my blog:

  • bill sebald
  • greenlaneseo
  • greenlane.wpengine.com
  • green lane seo

I have the option to input them one at a time, or all in one string like “bill sebald” OR “greenlaneseo” OR “greenlane.wpengine.com” OR “green lane seo,” depending on whether I want an aggregate or comparison view. I can also scan web mentions as far back as “last four weeks.”
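
If you’re tracking more than a handful of terms, generating that string beats retyping it. A trivial Python helper of my own (not a tool feature), using the quote and OR operators described above:

    def fwe_query(terms):
        """Quote each term and join with OR for an aggregate mention search."""
        return " OR ".join(f'"{t}"' for t in terms)

    print(fwe_query([
        "bill sebald",
        "greenlaneseo",
        "greenlane.wpengine.com",
        "green lane seo",
    ]))
    # "bill sebald" OR "greenlaneseo" OR "greenlane.wpengine.com" OR "green lane seo"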


[Screenshot: Fresh Web Explorer results]

Not only did I find a post that linked to my site just today (this tool is fast!), but I also found a page that mentioned me but didn’t link. I’ll be emailing the author shortly to see if we can’t turn that co-occurrence into a link.

Protip – this tool also lets you export; after a little tweaking of the CSV, it makes for a juicy import into Buzzstream for even better link management.
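
The “little tweaking” amounts to mapping the export columns onto whatever your Buzzstream import template expects. A rough sketch – the column names on both sides are assumptions, so check your actual files first:

    import csv

    with open("fwe_export.csv", newline="") as src, \
         open("buzzstream_import.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst)
        writer.writerow(["URL", "Notes"])  # hypothetical import columns
        for row in reader:
            url = (row.get("URL") or "").strip()  # hypothetical export column
            if url:
                writer.writerow([url, "found via Fresh Web Explorer"])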

Example 2:

I’m usually very successful at finding and connecting my good content with relevant posts – one reason I love broken link building tactics. I recently wrote a review of Visual Link Explorer from Cognitive SEO. I saw State of Search did a write-up on the tool within an hour of my post coming out. I wrote to them and asked if they’d like to link to my review as more context for their readers. Unfortunately there was no response (hey, it happens to the best of us), but this tool makes that tactic even more likely to succeed.

I entered “Visual Link Explorer” into the tool, and had a couple of nice hits. I could easily contact all of these sites with my review and try to negotiate a link. Think about the variety of keywords you could enter here to find timely posts and content that is still within the webmaster’s attention span. I’ve always found it’s much easier to get a link on fresh content than on something that’s been long forgotten by the webmaster.

[Screenshot: Fresh Web Explorer results for “Visual Link Explorer”]

Is it missing content, links, and citations? Yes. But this is a really great start. These tricks worked great for me in my first hour of playing with it. Would love to hear what you can come up with in the comments.

Oh, and check this out too – Fresh Web Explorer Bookmarklets

Updated 3/21/2013 – On the heels of Fresh Web Explorer, SEOmoz has rolled the concept into Open Site Explorer with “Just Discovered.” This new tab shows the freshest links discovered by Open Site Explorer, found by scanning pages recently shared through social media. It appears pretty accurate; unfortunately, some of the links it just discovered for me are year-old links on popular websites.



Like this post? Share it!


Related Posts


Review Of Visual Link Explorer

Author : Bill Sebald


Before starting this review, I want to highlight some good prospecting by Razvan Gavrilas. He read a comment I left on a post from Seer about data visualization and Google Fusion Tables, and reached out to me (for those who disagree with me about the power of comments, here’s more proof of value). Razvan emailed me through one of my e-mail accounts, which I unwisely mistook for a vendor looking to pitch. He then hit me on Twitter, which I unwisely ignored, thinking it was also a vendor pitch. He then added me on LinkedIn, and finally got my attention. His persistence was impressive, and my ignorance was shameful. I wish I had taken notice sooner, as he was offering me a demo of a really incredible tool. Semi-serendipitously, I offered to do a review, and recommended the company to a few of my friends – one of whom was Mike King, who also shared it; he has much more amplification than I do. This is more proof that persistent, smart, personal outreach may not be scalable, but it’s still incredibly powerful. Now, on to the review…

[Image: cognitiveSEO]

I’m a very right-brained, visual person. I really like data visualization. The critique I left on the Seer blog about Google Fusion Tables was that the functionality wasn’t there to click through and look at specific data points. As an answer to that, the Visual Link Explorer by Cognitive SEO was born. In addition to the Visual Link Explorer, my demo gave me a huge array of link slicing tools, with a lot of filters and features. Unlike many of its link tool predecessors, this toolset was clearly created to serve the masses, who may each be looking to gather different link metrics. On many reports you can filter on link strength, citation flow, count, etc. Unlike some simpler link reporting and analysis tools, there’s a learning curve here – but as with any robust analysis platform (Omniture, for instance), it may take some time to learn. I see this being valued most by the enterprise agencies or in-house SEOs who are held to higher reporting and analysis standards.

I tinkered. I created a campaign and ran an audit on my company’s services domain and another Philadelphia SEO company’s domain. I already had a fair sense of their linking tactics – they have a lot of exact match anchor footer links embedded in clients’ websites. I wanted to see how the two link profiles compared. The campaign wizard prompted me through the initial steps (where I deepened the data pull), and returned massive digital reports within 7 minutes (which the system saves for later review). That was impressive considering how much sliced-and-diced data I had at my fingertips, right in my browser.

So jumping into the new Visual Link Explorer feature specifically, this was really the most impressive of all. A fully navigable, functional, clickable visualization of my link graph:

click image below to open larger in new window

[Screenshot: link graph visualization of my domain]

Now here’s the comparison of my SEO competitor, which was just as easy to pull up:

click image below to open larger in new window

[Screenshot: competitor’s link graph]

Right off the bat it’s pretty clear that we have two completely different link building, content marketing, and site architecture strategies. By examining the cluster above, I confirmed what I suspected about my competitor. They have hundreds of links pointing directly to their homepage, with very little variation in exact match anchor text – terms like Philadelphia SEO Company and Philadelphia SEO. Surprisingly, while Google spanked a lot of this with the Penguin updates, this company still remains strong for these keywords. They rank very well, and this visualization helps me recognize their apparent exemption (in seconds), and possibly put together a plan to match them at their own game. In my opinion, that’s the biggest value of data visualization – the ability to “snapshot” the landscape quickly, and start driving actionable strategies. With a lot of clients or busy days, this is incredibly important.
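
That concentration of exact-match homepage anchors is also easy to confirm from any raw backlink export, if you’d rather see numbers than a picture. A hypothetical Python sketch – the file and column names are my assumptions, not CognitiveSEO’s format:

    import csv
    from collections import Counter

    HOMEPAGE = "http://example.com/"  # placeholder for the competitor's homepage

    anchors = Counter()
    with open("competitor_backlinks.csv", newline="") as f:
        for row in csv.DictReader(f):
            # count anchors only for links pointing at the homepage
            if row.get("target url", "").rstrip("/") == HOMEPAGE.rstrip("/"):
                anchors[row.get("anchor text", "").strip().lower()] += 1

    total = sum(anchors.values())
    for anchor, count in anchors.most_common(10):
        print(f"{count:5d}  {count / total:6.1%}  {anchor or '(empty)'}")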

Zooming into the interactive interface, I’m able to see links much closer (the scroll wheel on the mouse is heavenly for this).  I’m also able to toggle Link Trust Flow, Domain Trust Flow, Link Citation Flow, Domain Citation Flow, and Link Rating.

click image below to open larger in new window

[Screenshot: closeup of links]

I’m able to click through each of the data points to get more information (in the form of a knowledge box), a fix for one of my biggest criticisms of other data visualization tools:

click image below to open larger in new window

[Screenshot: closeup of a data point’s knowledge box]

It’s really pretty amazing, and I’m just tapping into it. My only criticism (and I shared this with Razvan) is that it’s missing some definitions – by that I mean clearly descriptive labels of what all the amazing data means. Novice link builders will get lost in this data, so I’d like to see it cater to them more. This is a powerful tool, and it should be clearer so all SEO clients can benefit from an empowered (and fully comprehending) SEO service provider.

I would be shocked if this doesn’t quickly become part of an SEO’s regular arsenal.

More coming – I hope to create a video tour soon. In the meantime, to see some of the other reports from Cognitive SEO’s great tool, here are a few more resources:

http://cognitiveseo.com/blog/2228/visual-link-explorer-visualize-backlinks/

http://www.stateofsearch.com/visualising-the-link-graph



Like this post? Share it!


Related Posts