How to Find Old Redirect Opportunities & Reclaim Links (w/ The Wayback Machine)

Read Full Article

Necessity is the mother of invention. Many years ago, one of our clients bought a popular, content-rich website and redirected it to their current domain. SEO (and retaining the backlinks) was not on their radar at the time. Upon learning about the migration, we asked whether they had redirected the old site at the page level or simply pointed everything at their homepage. The client had no idea how the redirection was done, and they didn't have a redirect list (a list of the old, legacy URLs) to work from. We needed to invent a plan to gather up that data.
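The full article walks through the plan. As a rough illustration of the starting point, here is a minimal Python sketch (the domain is a placeholder) that pulls a legacy URL list from the Wayback Machine's CDX API, giving you something to build a page-level redirect map from:

```python
# A minimal sketch of pulling a legacy URL list from the Wayback Machine CDX API.
# "example-old-domain.com" is a placeholder; swap in the acquired domain.
import requests

def get_archived_urls(domain):
    """Return a de-duplicated list of URLs the Wayback Machine captured for a domain."""
    params = {
        "url": f"{domain}/*",          # everything under the domain (prefix match)
        "output": "json",              # JSON output: first row is a header
        "fl": "original",              # only return the original URL field
        "collapse": "urlkey",          # collapse duplicate captures of the same URL
        "filter": "statuscode:200",    # skip captures that were errors or redirects
    }
    resp = requests.get("https://web.archive.org/cdx/search/cdx", params=params, timeout=60)
    resp.raise_for_status()
    if not resp.text.strip():
        return []                      # no captures found
    rows = resp.json()
    return [row[0] for row in rows[1:]]  # skip the header row

if __name__ == "__main__":
    for url in get_archived_urls("example-old-domain.com"):
        print(url)
```

From there, each archived URL can be mapped to its closest equivalent on the current domain and turned into a page-level 301 rule.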

Republishing Your Content May Still Be Dangerous

Read Full Article

In the past, I enjoyed republishing content to sites like Business 2 Community and Yahoo Small Advisor. I also experimented with reposting on platforms like LinkedIn. Typically (as expected), my content would rank quickly upon publishing on my own site. Then, the much higher DA sites would republish shortly thereafter. As you might expect, Google would start ranking those domains above mine. But within a few weeks, Google would suppress those big domains and my post would remain the victor for years to come (without the use of cross-domain canonical tags). Why? Because it's the canonical article. It makes sense. Google can figure that out. I wrote about this experience in 2013. I was fine with this situation. I enjoyed the traffic and visibility I would get from these other sites. It's simply smart marketing. But lately I've been noticing an inconsistent change. Unfortunately, I'm finding Google isn't performing as expected in most cases these days. For me, the original has come out on top less than half the time.

All Your Content Doesn’t Matter Without Meaning

Read Full Article

I've heard the best time to write is very early in the morning, when you're still in sleep mode. It may help with creativity or in developing concepts. It might even help you spend less mental energy (who couldn't use more battery life?). Not to mention, the only likely distractions are roosters, and those are only a problem for marketers working on farms. For our SEO clients, I often write my titles after the piece is written, but I never go into a content piece without a purpose. And not just a fluffy idea, but an idea I can qualify as valuable.

3 Ways To Monitor For SEO Disasters

Read Full Article

We recently had a client launch a new site in WordPress. It was appropriately kept in a staging area before launch. Instinctively, upon hearing the news of the launch, we decided to look for a robots meta tag. Sure enough, every page was marked "noindex, nofollow". The client was able to make the change before Google crawled the new site. I said "instinctively" above because, well, this isn't the first time I've seen a site launch set to block search engines. It's probably not even the 50th. I worked on an eCommerce platform where many sites launched with this issue. WordPress, as fine a platform as it is, makes it super easy to launch set to noindex. Since developers often build sites in staging areas, they're wise to block bots from inadvertently discovering their playground. But in the hustle to push a new design or update live, they can forget a tiny (yet crucial) checkbox. I've gathered up three different ways you can monitor your clients' sites, or even your own, without the use of server logs or an education in server administration. There are different kinds of website monitoring (e.g., active, passive), but I'm keeping it simple and applicable for anyone. I wanted to pick a few options that are diverse and either free or affordable.
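The three methods are in the full article. As one hedged illustration of the general idea (not necessarily one of those three), here is a small Python sketch, with placeholder URLs, that assumes the requests and beautifulsoup4 libraries and flags a "noindex" meta tag or X-Robots-Tag header on pages you care about:

```python
# A minimal sketch: periodically fetch key pages and warn if a "noindex"
# robots meta tag (or X-Robots-Tag header) shows up. URLs are placeholders.
import requests
from bs4 import BeautifulSoup

PAGES_TO_WATCH = [
    "https://www.example.com/",
    "https://www.example.com/blog/",
]

def is_noindexed(url):
    resp = requests.get(url, timeout=30)
    # Check the HTTP header first -- a noindex can live here too.
    if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        return True
    soup = BeautifulSoup(resp.text, "html.parser")
    for tag in soup.find_all("meta", attrs={"name": "robots"}):
        if "noindex" in tag.get("content", "").lower():
            return True
    return False

if __name__ == "__main__":
    for page in PAGES_TO_WATCH:
        if is_noindexed(page):
            print(f"WARNING: {page} is set to noindex")
```

Wire a script like this into a daily cron job or a simple alert email and the "forgotten checkbox" gets caught before Google crawls the live site.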

How To Audit Your Canonical Tags

Read Full Article

The reason Google doesn't accept the canonical tag as a directive is probably because they know many webmasters will screw it up. If you have a massive, database-driven eCommerce site and you've tried to get a development team to implement the tag, you've seen how it can ultimately launch with a ton of unexpected results. Examples I've seen: via templates, products suddenly "canonicalizing" to the homepage, or page 4 of a collection canonicalizing to page 1 of the collection. Crazy, random results are always likely if the tag isn't implemented and QA'd properly. When the tag was announced in February of 2009, I worked for one of the largest eCommerce platforms at the time. We wanted to be the first to offer it, and we rushed it out with many, many problems. I've always had a love/hate relationship with this tag.
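As a hedged illustration of what a basic canonical audit can look like (not necessarily the process in the full article), here is a short Python sketch, with placeholder URLs, that fetches each page and reports where its canonical tag points:

```python
# A minimal sketch of a canonical tag audit: fetch a list of URLs and flag
# canonicals that point somewhere unexpected (e.g., every product
# canonicalizing to the homepage). The URLs are placeholders.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

URLS_TO_AUDIT = [
    "https://www.example.com/collections/widgets?page=4",
    "https://www.example.com/products/blue-widget",
]

def get_canonical(url):
    resp = requests.get(url, timeout=30)
    soup = BeautifulSoup(resp.text, "html.parser")
    link = soup.find("link", rel="canonical")
    if link and link.get("href"):
        return urljoin(url, link["href"])  # resolve relative canonicals
    return None

if __name__ == "__main__":
    for url in URLS_TO_AUDIT:
        canonical = get_canonical(url)
        if canonical is None:
            print(f"{url} -> no canonical tag")
        elif canonical != url:
            # Not automatically wrong, but worth a human review.
            print(f"{url} -> canonicalizes elsewhere: {canonical}")
```

Products canonicalizing to the homepage, or page 4 of a collection canonicalizing to page 1, should jump out immediately in output like this.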

How To Check To See If Blocked Pages Are Indexed

Read Full Article

You put a robots.txt on your site expecting it to keep Google out of certain pages. But you worry: did you do it correctly? Is Google following it? Is the index as tight as it could be? Here's a question for you. If you have a page blocked by robots.txt, will Google put it in the index? If you answered no, you're incorrect. Google will indeed index a page blocked by robots.txt if it's linked from one of your own pages (one without a rel="nofollow") or from another website. It usually doesn't rank well, because Google can't see what's on the page, but it does get PageRank passed through it.
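The full article covers how to check whether those blocked pages made it into the index. As a hedged sketch of the first half of that check, here is a short Python script (placeholder domain and URLs) that uses the standard library's robotparser to confirm which URLs Googlebot is actually disallowed from crawling:

```python
# A minimal sketch for the "is it actually blocked?" half of the check:
# parse your robots.txt and report which URLs Googlebot would be disallowed from.
# Whether those URLs are *indexed* still needs a manual site: or URL inspection check.
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"          # placeholder domain
URLS_TO_CHECK = [
    f"{SITE}/private/report.html",
    f"{SITE}/blog/",
]

robots = RobotFileParser(f"{SITE}/robots.txt")
robots.read()

for url in URLS_TO_CHECK:
    blocked = not robots.can_fetch("Googlebot", url)
    print(f"{'BLOCKED' if blocked else 'allowed'}  {url}")
```

Once you know which URLs are blocked, you can spot-check whether any of them are sitting in the index anyway.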

How To Flush Pages Out Of Google En Masse

Read Full Article

Google gives you a few ways to “deindex” pages. That is, kick pages out of their index. The problem is, despite some serious speed improvements in crawling and indexation, they’re pretty slow to deindex and act upon canonical tags. This quick trick can help you isolate and remove pages en masse.
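The trick itself is in the full article. As a separate, hedged illustration of one common companion step (not necessarily the article's method), the sketch below builds a temporary XML sitemap of the URLs you want recrawled, so Google re-visits them and sees the noindex, 404, or 410 sooner; the URLs and filename are placeholders:

```python
# A hedged sketch of one common approach: generate a temporary XML sitemap
# listing URLs you want Google to recrawl and drop. URLs and filename are placeholders.
from xml.sax.saxutils import escape

URLS_TO_FLUSH = [
    "https://www.example.com/old-page-1",
    "https://www.example.com/old-page-2",
]

def build_sitemap(urls):
    entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>\n"
    )

with open("flush-sitemap.xml", "w", encoding="utf-8") as f:
    f.write(build_sitemap(URLS_TO_FLUSH))
```

Submit the temporary sitemap in Search Console, wait for the pages to drop out, then remove it.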

Find and Fix “Index Bloat” SEO Issues

Read Full Article

If a website is a mess of URLs and duplicate content, Google will throw their hands up in frustration. This is a bad spot to be in. You'll find your traffic and rankings drop while your indexation becomes bloated. Your crawl rate (which we've found correlates with traffic) will be curbed. It can seem very sudden, or it can be gradual. Every case is different, but it's always a nightmare. Keeping track of website changes is critical in SEO. The other day I peeked into our own Google Webmaster Tools account and saw something pretty alarming in the "index status" report.
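As a hedged, minimal way to put a number on the problem (not the full process from the article), the Python sketch below counts the URLs in your XML sitemap, i.e., the pages you actually want indexed, so you can compare that figure against the index status count; the sitemap URL is a placeholder:

```python
# A minimal sketch for sizing up index bloat: count the URLs you *intend* to have
# indexed (from your XML sitemap) and compare against the index status number
# reported in Google's tools. The sitemap URL is a placeholder.
import requests
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.example.com/sitemap.xml"
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

resp = requests.get(SITEMAP_URL, timeout=30)
root = ET.fromstring(resp.content)
urls = [loc.text for loc in root.findall(".//sm:loc", NS)]

print(f"{len(urls)} URLs in the sitemap")
# If the index status report shows several times this number indexed, start
# hunting for parameterized URLs, duplicate paths, and thin pages inflating the index.
```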

HTTP vs. HTTP2

Read Full Article

HTTP/2, the new Web protocol slated to go live any day now, aims to be a faster, more efficient protocol. Its predecessor, HTTP/1.1, is the current standard and has been around for about 15 years. The problem with HTTP/1.1 is that it can only load requests one at a time, one request per TCP connection. Basically, this forced browsers to open multiple parallel TCP connections to the same host just to load a page's assets. That clogs up "the wire" with duplicate connection overhead, and performance can suffer if too many requests are made.
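If you want to see which protocol version a given server actually negotiates, here is a hedged Python sketch; it assumes the third-party httpx library installed with its http2 extra (pip install "httpx[http2]"), and the URL is a placeholder:

```python
# A small sketch that reports the HTTP version negotiated with a server.
# Assumes httpx installed with the http2 extra; the URL is a placeholder.
import httpx

def protocol_version(url):
    with httpx.Client(http2=True) as client:
        resp = client.get(url)
        return resp.http_version  # e.g. "HTTP/1.1" or "HTTP/2"

if __name__ == "__main__":
    print(protocol_version("https://www.example.com/"))
```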

SEO and Multiple H1 Tags

Read Full Article

The "official rollout" of HTML5 in October 2014 reignited interest in an old SEO debate: whether or not using multiple H1 tags on a single page is bad for SEO. Depending on the school of thought, some designers debated the true use case. Likewise, some SEOs had a similar debate. We know H1 tags have value, which is why some SEOs try desperately to insert several H1 tags on a page (usually with target keywords). I've seen H1 tags in breadcrumb trails, hidden behind wordless graphics, and pushed to the margin with CSS. But other SEOs, worried about being seen as spammy, go with the "one H1 per page" rule of thumb. When one of our clients recently asked this question, we found ourselves reevaluating and realigning our multiple-H1 best practices. We had to establish where we stand on the answer.
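Before settling on a best practice, it helps to know what your pages are doing today. Here is a minimal Python sketch (placeholder URL, assuming the requests and beautifulsoup4 libraries) that counts and prints the H1 tags on a page:

```python
# A minimal sketch for auditing H1 usage: fetch a page and report how many H1
# tags it carries and what they say. The URL is a placeholder.
import requests
from bs4 import BeautifulSoup

def audit_h1s(url):
    resp = requests.get(url, timeout=30)
    soup = BeautifulSoup(resp.text, "html.parser")
    h1s = [h1.get_text(strip=True) for h1 in soup.find_all("h1")]
    print(f"{url}: {len(h1s)} H1 tag(s)")
    for text in h1s:
        print(f"  - {text!r}")

if __name__ == "__main__":
    audit_h1s("https://www.example.com/")
```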

Like our posts, case studies, and experience? Think we're a good fit for your company? Contact Us Now