If a website is a mess of URLs and duplicate content, Google will throw its hands up in frustration. That’s a bad spot to be in: your traffic and rankings drop while your index becomes bloated, and your crawl rate (which we’ve found correlates with traffic) gets curbed. It can seem very sudden, or it can be gradual. Every case is different – but it’s always a nightmare. That’s why keeping track of website changes is critical in SEO. The other day I peeked into our own Google Webmaster Tools account and saw something pretty alarming in the “index status” report.
HTTP/2, the new Web protocol slated to go live any day now, aims to be a faster, more efficient protocol. Its predecessor, HTTP/1.1, is the current standard and has been around for about 15 years. The problem with HTTP/1.1 is that it can only handle one request at a time per TCP connection. As a workaround, browsers open multiple parallel TCP connections to the same server. This clogs up “the wire” with multiple duplicate data requests, and can hurt performance if too many requests are made.
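That one-request-at-a-time behavior is easy to see in a quick sketch. The local server and paths below are invented purely for the demo; Python’s standard library is used for illustration, not because any browser works this way internally:

```python
import http.client
import http.server
import threading

# A tiny local HTTP/1.1 server so the sketch runs without touching the network.
class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enable keep-alive on a single TCP connection

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo output quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# HTTP/1.1 serializes requests: each response must be fully read
# before the next request can go out on the same connection.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
statuses = []
for path in ["/a", "/b", "/c"]:
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()  # drain the body, or the connection can't be reused
    statuses.append(resp.status)

conn.close()
server.shutdown()
print(statuses)  # three requests over one connection, strictly one at a time
```

HTTP/2 removes this bottleneck by multiplexing many requests over a single connection.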
The “official rollout” of HTML5 in October 2014 reignited an old SEO debate: whether or not using multiple H1 tags on a single page is bad for SEO. Designers debated the true use case, and SEOs had a similar argument among themselves. We know H1 tags have value, which is why some SEOs try desperately to insert several H1 tags on a page (usually with target keywords). I’ve seen H1 tags in breadcrumb trails, hidden behind wordless graphics, and pushed to the margin with CSS. Other SEOs, worried about looking spammy, go with the “one H1 per page” rule of thumb. When one of our clients recently asked this question, we found ourselves reevaluating our multiple-H1 best practices. We had to establish where we stand.
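If you want to check how many H1s a page actually carries, a few lines of standard-library Python will do it. This is a minimal sketch; the sample markup is invented for the demo:

```python
from html.parser import HTMLParser

# Count H1 tags using only the standard library's HTML parser.
class H1Counter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "h1":  # tags arrive lowercased
            self.count += 1

# Invented sample page: one top-level H1 plus one inside an
# HTML5 sectioning element, the pattern at the heart of the debate.
page = """
<body>
  <h1>Main headline</h1>
  <section><h1>Section headline</h1></section>
</body>
"""

parser = H1Counter()
parser.feed(page)
print(parser.count)  # → 2
```

Run it against real page source and you’ll quickly spot the breadcrumb and CSS-margin tricks mentioned above.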
Last year I introduced The Simple SEO Site Audit Tool to quickly get a sense of your entire site’s tags, complete with a schema audit – right out of Google Sheets. It was a glorious collaboration between Sean Malseed (of RankTank) and me. Now check out the newest tool, for seeing how your SERPs look.
A rose by any other name may smell just as sweet, but good luck finding good results for “thorny red flower” on the first page of the SERPs! Until Google gets even better at relationship-mapping (thanks, Hummingbird, you’re a great start!), long-tail optimization will remain a huge part of SEO. Whether your goal is to optimize content for the search engines or to get fresh new content ideas, using the right keywords and phrases is a big factor in success. The trick is knowing which keywords to incorporate in your content. If you’ve done a Google search for “keyword research,” you probably know there are tons of methods, from hardcore SEO tricks to simple keyword research tools. The most popular keyword tool by far is Google’s own Keyword Planner. Conceived originally for AdWords, this once-external keyword tool is loved by SEOs for the metrics it provides. Its biggest shortcoming? It mostly reveals only head or body keywords, and you have to wrestle it out of an AdWords account. But what about those times you need more detailed, long-tail keywords?
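One low-tech way to surface long-tail variations is the “alphabet soup” trick: append each letter to a seed term and feed the variations to a suggestion tool or autocomplete. A minimal sketch (the seed term is invented, and this only builds the query list – it doesn’t call any API):

```python
import string

seed = "keyword research"  # invented seed term for the demo

# Build "alphabet soup" variations: seed + each letter of the alphabet,
# ready to paste into autocomplete or a suggestion tool.
queries = [f"{seed} {letter}" for letter in string.ascii_lowercase]
print(len(queries), queries[:2])  # → 26 ['keyword research a', 'keyword research b']
```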
Twitter isn’t just an interactive platform where everyone and their mother can go to speak their mind in 140-character chunks—it’s a powerful, dynamic tool that can be used for a variety of functions: as a communication tool, real-time news feed, trolling, you name it! For the sake of today’s piece, I’ll give some basic tips on how to use Twitter as a vehicle to learn about SEO. This 101 is for anyone who has either never used Twitter or is simply a dabbler.
At the time this post was written, Twitter reported 284 million monthly active users sending more than 500 million Tweets per day (unfortunately we have no real stat on how many accounts are fake or abandoned, but I digress). That’s a staggering amount of content and information passing through the firehose. But, like most things in life, it’s not about consuming all the content – it’s about consuming the best content. The bigger the “signal,” the higher the likelihood of “noise.”
Last week our team attended a local (Philadelphia) SEO meetup, where local whiz Sean Malseed of Circlerank gave a presentation called “Build Your Own Damn (SEO) Tools With Google Apps.” He showed us how to use Google Sheets to scrape and pull API data to build your own custom tools. He also shared his own site, which has some really incredible tools free to use: http://www.ranktank.org.
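As a taste of the approach, Google Sheets can scrape page elements with its built-in `IMPORTXML` function, which takes a URL and an XPath query. The URL and XPaths below are placeholders for illustration:

```
=IMPORTXML("https://example.com", "//title")
=IMPORTXML("https://example.com", "//meta[@name='description']/@content")
```

Drop formulas like these into cells and the sheet fetches the live values – the building block behind most of these homemade tools.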
The folks at Mountain View made the conscious decision that keywords alone couldn’t deliver the results they wanted to see (ahem, “their users wanted to see”). Google tried some different modeling, but ultimately came around to semantic search – that is, using semantic technology to refine query results. Now, I said much of the industry has picked up on it. Not all. I still see a lot of pretending that Panda, Penguin and Hummingbird never happened. That’s unfortunate for innocent clients around the world. But most of us reading this are students of a new lexicon, with words like “triples” and “entities” and “semiotics” and “topic modeling.”
Ah, the SEO report. Sometimes the bane of our existence. Some agencies spend the majority of their time creating detailed monthly monstrosities, while others might send quick, white-labeled exports. Meanwhile, smart companies (like Seer) look for ways to use APIs and programming to speed up data pulling. At Greenlane, we took this approach as well; Keith, my partner and incurable data nerd, created our out-of-the-box reports to pull API data on traditional SEO metrics: rankings (yes – we still believe in their value), natural traffic (at the month-over-month and year-over-year level), natural conversions (same ranges), and every necessary target landing page metric we could think of. Then, after discussing each client’s own KPIs, we add whatever additional reports those demand to our default set.
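Those month-over-month and year-over-year comparisons boil down to simple percent-change math. A sketch with invented sample numbers (not our actual report code):

```python
def pct_change(current, previous):
    """Percent change, e.g. for MoM / YoY traffic comparisons."""
    return round((current - previous) / previous * 100, 1)

# Invented sample data: natural sessions this month vs. last month,
# and vs. the same month a year ago.
mom = pct_change(12_400, 11_800)
yoy = pct_change(12_400, 9_700)
print(mom, yoy)  # → 5.1 27.8
```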