The Wayback Machine is well known as a useful tool for viewing the way websites looked in the past. It’s always fun to pop in URLs from your favorite websites to see how far they’ve come since the early days of the internet (and maybe make fun of them a little). But the Wayback Machine happens to be a pretty helpful tool for SEO as well. Here are ten ways we’ve found you can use the Wayback Machine to improve your SEO strategy.
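If you’d rather pull snapshots programmatically than browse one URL at a time, the Internet Archive exposes a public CDX API for exactly this. Here’s a minimal Python sketch; the target URL and date range are placeholders, not anything from the post:

```python
import requests

# Ask the Wayback Machine's CDX API for archived captures of a URL.
# The target URL and date range are placeholders; swap in your own.
resp = requests.get(
    "http://web.archive.org/cdx/search/cdx",
    params={
        "url": "example.com/",
        "from": "2010",
        "to": "2016",
        "output": "json",
        "filter": "statuscode:200",
        "collapse": "digest",  # skip back-to-back identical captures
    },
)
rows = resp.json()  # first row is the field names, the rest are captures

if rows:
    header, captures = rows[0], rows[1:]
    for capture in captures:
        row = dict(zip(header, capture))
        # Every capture is viewable at web.archive.org/web/<timestamp>/<url>
        print(f"https://web.archive.org/web/{row['timestamp']}/{row['original']}")
```

A list like this makes it easy to diff how a page’s title tags, copy, or internal links changed over time, which is where most of the SEO value hides.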
The complexities of SEO are exacerbated by the limited visibility we have into Google’s algorithm. Google claims more than 200 signals make up its main web search algorithms. When Google told us that content, links, and RankBrain are the biggest contributors, it didn’t exactly unravel the mystery of how to improve rank.
I’m an admitted analytics junkie, and I’m going to rant about something I’m starting to see too often in the industry. Why are we now seemingly okay with using filters to cover up inconsistent tracking? Where did the education of our clients go, or the diligence in maintaining tracking standards?
If you’re writing blog posts, or any kind of copy, on behalf of a client, you need to know them so well that you can (quite literally) finish their sentences. As an outside writer, that’s a hard but necessary task. Great writers are plentiful, but writing in someone else’s voice – even a company’s voice – is the real challenge. I compare it to a comic doing impressions. (Or maybe that’s just my excuse to classify watching SNL as “research”.) Someone like Dana Carvey carefully studies the quirks and habits of how an entity presents itself. Sure, it’s about the words they say, but it’s also about how and why they say them.
Q&A sites are a huge source of information and insight. We use them to discover searchers’ interests and pain points, which in part can drive a content strategy. Beyond research, Q&A sites are also great for establishing yourself as an authority on a given topic. I’m sure you’ve heard the advice – if you want to drive new leads or position yourself as a thought leader, hang out on a Q&A site. It’s solid advice.
There are many XML sitemap generators available for purchase, or even for free. They do what they’re supposed to do – they crawl your site and spit out a properly formatted XML sitemap. But there’s often a problem with these XML sitemap generators: they don’t know which URLs should (or should not) be in the XML sitemap. Sure, you can tell some of them to obey directives and tags like robots.txt and canonical tags, but unless your site is perfectly optimized, you’ll need to do some work by hand.
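That hand-work is mostly filtering: dropping URLs that are blocked by robots.txt, that return errors, or that canonicalize elsewhere, before the sitemap is written. Here’s a minimal Python sketch of that idea; the domain, URL list, and file name are placeholders, and a real audit would also check noindex tags, redirects, and more:

```python
import requests
from urllib import robotparser
from urllib.parse import urljoin
from bs4 import BeautifulSoup

urls = ["https://example.com/", "https://example.com/blog/"]  # placeholder list

# Respect robots.txt the same way a crawler would.
robots = robotparser.RobotFileParser("https://example.com/robots.txt")
robots.read()

keep = []
for url in urls:
    if not robots.can_fetch("*", url):
        continue  # blocked by robots.txt: leave it out of the sitemap
    resp = requests.get(url)
    if resp.status_code != 200:
        continue  # broken or redirected URLs don't belong in a sitemap
    soup = BeautifulSoup(resp.text, "html.parser")
    canonical = soup.find("link", rel="canonical")
    if canonical and canonical.get("href") and urljoin(url, canonical["href"]) != url:
        continue  # page canonicalizes elsewhere: list the canonical URL instead
    keep.append(url)

# Write the filtered list as a bare-bones sitemap
# (a production version should entity-escape the URLs).
with open("sitemap.xml", "w") as f:
    f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
    f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
    for url in keep:
        f.write(f"  <url><loc>{url}</loc></url>\n")
    f.write("</urlset>\n")
```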
We all know that links are an important part of SEO. They help users and bots navigate a site, and they give search engines information about its quality and authority. With links confirmed as one of Google’s top three ranking factors, we’ve all been reminded of the importance of quality, relevant backlinks. To earn those backlinks, we have to put a good amount of effort into link building, and that often proves to be a big challenge: there are scaling issues, and there are research and outreach management challenges.
Last week I did a Mozinar on content purging and how it can improve your SEO. If that sounds interesting to you, check out the recording. In that webinar, I shared a Google Sheets tool we built to help pull website data fast. Paste in a list of your URLs, and voilà – your data is available in Google Sheets (which you can easily export to Excel if you want). For different URL groupings and date ranges, pull your sessions, pageviews, conversions, etc. It’s not unlike what you can get from URLprofiler or Screaming Frog, but if you like this alternative, it’s yours to use.
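Under the hood, that kind of lookup maps onto the Google Analytics Reporting API: feed it page paths, get metrics back. Here’s a rough Python sketch of the same idea, not the actual tool; the key file, view ID, and paths are placeholder assumptions:

```python
from googleapiclient.discovery import build
from google.oauth2 import service_account

# Placeholders: your own service-account key file and GA view ID.
creds = service_account.Credentials.from_service_account_file(
    "keyfile.json",
    scopes=["https://www.googleapis.com/auth/analytics.readonly"],
)
analytics = build("analyticsreporting", "v4", credentials=creds)

paths = ["/blog/post-1/", "/blog/post-2/"]  # the URL list you'd paste in

response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": "12345678",
        "dateRanges": [{"startDate": "90daysAgo", "endDate": "today"}],
        "metrics": [{"expression": "ga:sessions"},
                    {"expression": "ga:pageviews"}],
        "dimensions": [{"name": "ga:pagePath"}],
        "dimensionFilterClauses": [{
            "filters": [{
                "dimensionName": "ga:pagePath",
                "operator": "IN_LIST",
                "expressions": paths,
            }],
        }],
    }],
}).execute()

# Each row pairs a page path with its metric values for the date range.
for row in response["reports"][0]["data"].get("rows", []):
    path = row["dimensions"][0]
    sessions, pageviews = row["metrics"][0]["values"]
    print(path, sessions, pageviews)
```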
If you’re not checking your client’s HTTP headers, you’re not giving them good service. I’m not talking about the stuff in between the <head> and </head> tags, either. I’m talking about the server response that you get before you get all that nice HTML, or that fancy PDF, or whatever else your client’s website is slinging. That’s because, well, your client’s website isn’t slinging anything. It’s being slung by a server, and the server’s HTTP response is the first thing a web browser – or a web robot like Google’s crawler – will see.
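Seeing what a crawler sees takes one request. Here’s a minimal Python sketch (the URL is a placeholder) that prints the status line and response headers before any HTML arrives:

```python
import requests

# Placeholder URL: point this at your client's pages.
resp = requests.head("https://example.com/", allow_redirects=False)

# The status line and headers are sent before any body content.
print(resp.status_code, resp.reason)
for name, value in resp.headers.items():
    print(f"{name}: {value}")
```

One caveat: some servers answer HEAD requests differently than GET, so if the results look off, fall back to `requests.get(url, stream=True)` and read the headers without downloading the body.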