Twitter isn’t just an interactive platform where everyone and their mother can speak their mind in 140-character chunks—it’s a powerful, dynamic tool that serves a variety of functions: communication channel, real-time news feed, trolling outlet, you name it! For today’s piece, I’ll share some basic tips on using Twitter as a vehicle to learn about SEO. This 101 is for anyone who has never used Twitter or is simply a dabbler.
At the time this post was written, Twitter reported 284 million monthly active users sending more than 500 million Tweets per day (unfortunately we have no real stat on how many accounts are fake or abandoned, but I digress). There’s a staggering amount of content and information passing through that firehose. But, like most things in life, it’s not about consuming all the content—it’s about consuming the best content. The bigger the “signal,” the greater the likelihood of “noise.”
Last week our team attended a local (Philadelphia) SEO meetup where there was a presentation from local wiz Sean Malseed of Circlerank. The presentation was called “Build Your Own Damn (SEO) Tools With Google Apps.” He showed us how to use Google Sheets for scraping and pulling API data to build your own custom tools. He also shared his own site that has some really incredible tools ready for free use: http://www.ranktank.org.
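To give a flavor of the “build your own damn tools” idea outside of Sheets (where you’d typically reach for IMPORTXML), here’s a minimal Python sketch of the same building block: fetch a page and scrape its title tag. The URLs, user agent string, and function names are my own illustrative assumptions, not anything from Sean’s talk.

```python
import re
import urllib.request

def parse_title(html):
    """Pull the contents of the first <title> tag out of raw HTML."""
    match = re.search(r"<title[^>]*>(.*?)</title>", html, re.I | re.S)
    return match.group(1).strip() if match else None

def fetch_title(url):
    """Fetch a page and return its title -- a tiny DIY rank-tool building block."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(req, timeout=10).read().decode("utf-8", "ignore")
    return parse_title(html)
```

From a scraped title (or any other on-page element) it’s a short hop to the kinds of custom rank and audit tools the presentation covered.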
The folks at Mountain View made the conscious decision that keywords alone couldn’t deliver the results they wanted to see (ahem, “their users wanted to see”). Google tried some different modeling, but ultimately came around to semantic search (that is, using semantic technology to refine query results). Now, I said much of the industry has picked up on it. Not all. I still see a lot of pretending Panda, Penguin and Hummingbird never happened. That’s unfortunate for innocent clients around the world. But most of us reading this are students of a new lexicon, with words like “triples,” “entities,” “semiotics,” and “topic modeling.”
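For anyone new to that lexicon, a “triple” is just a subject–predicate–object statement, the basic unit a semantic search system reasons over. A quick sketch in Python, with entirely made-up example facts:

```python
# Each triple is a (subject, predicate, object) statement about an entity.
# These specific facts are illustrative examples, not pulled from any real graph.
triples = [
    ("Philadelphia", "isA", "City"),
    ("Philadelphia", "locatedIn", "Pennsylvania"),
    ("Liberty Bell", "locatedIn", "Philadelphia"),
]

def facts_about(entity, triples):
    """Return every triple that mentions the entity as subject or object."""
    return [t for t in triples if t[0] == entity or t[2] == entity]
```

Stack up enough of these statements and an engine can answer questions about an entity without matching a single keyword—which is the whole point of semantic search.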
Ah, the SEO report. Sometimes the bane of our existence. Some agencies spend the majority of their time creating detailed monthly monstrosities, while others might send quick, white-labeled exports. Meanwhile, smart companies (like Seer) look for ways to use APIs and programming to speed up data pulling. At Greenlane, we took this approach as well; Keith, my partner and incurable data nerd, created our out-of-the-box reports to pull API data on traditional SEO metrics like rankings (yes – we still believe in the value), natural traffic (at the month-over-month and year-over-year level), natural conversions (same ranges), and every necessary target landing page metric we could think of. Then, after discussing each client’s own KPIs, we add any additional reports to our default set.
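The report math itself is simple once the API has handed you the numbers. Here’s a hedged sketch of the month-over-month and year-over-year piece—the field names and figures are made up for illustration, and any real report would pull `monthly_sessions` from an analytics API rather than a hardcoded list:

```python
def pct_change(current, previous):
    """Percentage change, guarding against divide-by-zero on new properties."""
    if previous == 0:
        return None
    return round((current - previous) / previous * 100, 1)

def mom_yoy(monthly_sessions):
    """monthly_sessions: organic sessions ordered oldest -> newest (>= 13 months)."""
    this_month = monthly_sessions[-1]
    return {
        "mom": pct_change(this_month, monthly_sessions[-2]),   # vs. last month
        "yoy": pct_change(this_month, monthly_sessions[-13]),  # vs. same month last year
    }
```

Wire a few helpers like this to your API pulls and the “monstrosity” report becomes a script you run, not a day you lose.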
Entity optimization as a big SEO play isn’t quite upon us yet. It’s a slow-growing Google addition. I know – it frustrates me too. So much potential, which I believe will greatly improve search results in the future. Google isn’t yet showing the fruits of everything it knows through entities, whether through cards or search results – at least not relative to the way it ranks on keywords alone. But can knowledge cards help bring qualified traffic while considering searcher intent? SEOs always talk about searcher intent. Anyone who’s been doing SEO for a while knows that building for intent can be a challenge.
I read – and commented on – a great post called Panda 4.0 & Google’s Giant Red Pen by Trevin Shirley. Panda 4.0 just hit; the SEO space is hiding under its desk, with some reacting either out of panic or for show. It’s definitely news, but at this point, I don’t see any reason to scream from the rooftops at Google. It’s what we should be expecting by now. In 2011, the first Panda showed us Google is not afraid to drop atom bombs. Panda opened the door for Penguin, and many updates have come since. Matt Cutts said he wished Google had acted sooner, and in his shoes, I’d probably agree.
Sometimes desperate times call for desperate measures. This post is about a desperate measure. We had a client with a manual link penalty. We did some work (using my outline from this post). Rankings started going up and traffic and conversions started climbing. Then, a few days later, the next Google notification came in. It’s like playing digital Russian roulette with those things – you’ll either be thrilled or be in a lot of pain. This time Google said they “changed” our penalty, as there were still some spammy links out there. Remember, not all penalties have the same impact. Clearly ours was lessened (which was continually proven in the weeks to follow), but our client – rightfully so – wanted the whole penalty removed. The problem was we couldn’t find any more bad links. Everything from Ahrefs, OSE, Google Webmaster Tools, Bing Webmaster Tools, Majestic, and the rest was classified and handled appropriately.
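A practical note on that classification step: each tool exports its links in a slightly different shape, so before anything gets reviewed, the lists need to be merged and deduped. Here’s a hedged sketch of how that merge might look—the URL normalization rules and example URLs are my own assumptions, not a description of any one tool’s export:

```python
from urllib.parse import urlparse

def normalize(url):
    """Strip scheme, leading www, and trailing slash so URL variants match."""
    parsed = urlparse(url.strip())
    host = parsed.netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    return host + parsed.path.rstrip("/")

def merge_exports(*exports):
    """Each export is an iterable of linking-page URLs from one backlink tool.
    Returns one deduped list so every found link is classified exactly once."""
    seen = {}
    for export in exports:
        for url in export:
            seen.setdefault(normalize(url), url)  # keep first-seen original form
    return list(seen.values())
```

With one clean master list, you can be reasonably confident that “we handled everything the tools could find” actually means everything.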
The link management function isn’t new to the SEO space. Many tools do it already, like Buzzstream and Raven – and they do it quite well. Additionally, link discovery is an existing feature of tools like Open Site Explorer, yet this is an area where I see opportunity for growth. I love the idea of these ‘new link’ reports, but honestly, I haven’t found anything faster than monthly updates. I know it’s a tough request, but I mentioned this to François. By tracking “as-it-happens” links, you can jump into conversations in a timely manner, start building relationships, and maybe shape linking-page context. You might even be able to catch garbage links you want to disassociate yourself from more quickly.
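In the meantime, a rough version of that “as-it-happens” report can be faked by diffing backlink snapshots yourself. A minimal sketch, assuming you’ve exported yesterday’s and today’s link lists from whatever tool you use (the URLs below are invented):

```python
def diff_links(previous, current):
    """Compare two backlink snapshots and surface new and lost linking pages."""
    prev, curr = set(previous), set(current)
    return {
        "new": sorted(curr - prev),    # jump on these conversations early
        "lost": sorted(prev - curr),   # or investigate why a link vanished
    }
```

Run it daily and you close most of the gap between a monthly index refresh and the real-time alerts I wish these tools offered.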
I remember a few years ago blowing the mind of a boss with a theory that Google would eventually rank (in part) based on its own internal understanding of your object. If Wikipedia could know so much about an object, why couldn’t Google? In the end, I was basically describing semantic search and entities, something that had already lived as a concept on the fringe of the mainstream.