All the SEO gurus suggest there are over 200 ranking factors. But none know the truth. Only Google does.
And on top of that, no one except Google knows the true weighting of each of those factors, let alone any machine-learned relationships between them (if those even exist) or the factors no one has yet guessed (the unknowns we don't know we don't know).
What has been done are after-the-fact experiments, where known rankings are checked again and again against perceived or announced ranking factor changes.
But whichever way you look at it, it is mostly putting a finger in the air and guessing which way the wind is blowing.
And this is good news, because the less anyone knows about these factors, the better the content will be – which is good for you, your company, your customers – and Google.
But having said all that, let’s explore what various experts say on all this, and sprinkle in some good old common sense and logic.
The first premise is that Google want to stay in business, and if they're to do that, they are going to do everything possible to deliver the best set of results they can.
This is why they employ more AI engineers than anyone else on the planet. It's the least they can do for their shareholders as well as their audience (i.e. us).
The second premise is that a search engine that delivers bad results is a) pointless, and b) won’t last long.
We all know that HTML is tag based. Tags help browsers format the data in human readable form.
And that means tags also help search engines understand the importance or relevance of tagged content.
Some tags also have attributes. The anchor tag <a> is a good example. As well as having the 'href' attribute for the link's destination, it also has a 'rel' attribute. And one use of that attribute is to state that a link is 'nofollow'.
A rel attribute of 'nofollow' in theory means that no link juice is passed through that link – and conversely, no rel attribute means (in theory) link juice is passed.
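As a concrete illustration (the URLs here are placeholders, not real sites), the two link styles look like this in HTML:

```html
<!-- A normal 'follow' link: in theory, link juice passes through -->
<a href="https://example.com/relevant-article">A relevant resource</a>

<!-- A 'nofollow' link: in theory, no link juice passes -->
<a href="https://example.com/paid-listing" rel="nofollow">A sponsored listing</a>
```

Note that Google have since added rel="sponsored" and rel="ugc" as more specific variants for paid and user-generated links.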
So spammers of the past used to stack up thousands of ‘follow’ links in the hope that all that link juice would artificially raise the ranking of the page they all pointed to.
This activity still goes on today, and one form of it is the use of so called Private Blog Networks (PBNs).
Google have been aware of this, and have been squashing it, for well over a decade, yet the practice still persists, with many claiming that it still works.
That may be so for some, but they are up against machines that are already so much better than your average human being at spotting patterns that it is extremely unlikely they will get away with it for very much longer.
Then there’s the tricky problem of less than honest businesses paying for services to spam their competitors out of business.
It’s obvious and logical that Google have dealt with that sticky problem too (their top evangelist at the time – Matt Cutts – was spammed this way).
No one knows for sure, but it makes complete sense that if Google detect link spamming of any sort, and they cannot attribute that spamming with 100% certainty to the owners or operators of the page or site being spammed, they are simply going to ignore the links.
Which means that it’s almost certain that the rel=nofollow element has little value anyway.
After all, if you have a page, and that page includes an outbound link to some other page, that other page should be relevant to whatever the topic is, and should therefore be a normal 'follow' style link. Why would anyone not do that? The simple answer is greed: they want to hoard their link juice because they read somewhere that every outbound link loses them some of it.
Do you think that really matters to Google? Of course not. Go back to the first premise – Google and all search engines need to deliver good content. Period.
And so we come to all the links on a page. If you were writing a search engine and ranking pages, you would expect the chain of incoming, outgoing and internal links to be similar in context.
That’s all that matters.
If all the pages that link together are all themselves good quality pages including the pages they link to as well, you have the start of a half decent link quality assessment algorithm.
But stick in millions of spammy pages, sites and directories, plus all the myriad linking schemes that come with them (e.g. silos, rings and PBNs), and your algorithm is going to find it hard to differentiate between the good, the bad and the ugly.
And so you bring in pattern recognition, semantic indexing, machine learning and user profiling, and you eventually reach a position far beyond merely figuring out links.
You start to be able to read the value of a page purely on its own merit. And that is the one fundamental thing that’s happening right now.
Great pages, full of the right content (the content that readers are searching for), are now ranking despite what the backlink heroes of old are claiming. I know this to be true because of the sheer number of pages my own sites are now ranking for (all without a single artificial backlink – every backlink has happened naturally).
Another SEO myth is the URL or page link. I’m not saying your link shouldn’t have some direct or semantic meaning – it should – that’s useful to the reader.
It's just that so many experiments have been done comparing relevant URLs against non-relevant ones that it's now quite rare to see any link that isn't human readable.
And yet, completely unreadable links still hit the top spot on Google.
So if you're in any doubt about what URL to give your newly published article, using the article's title is the right thing to do – simply because it makes sense.
The same applies to the domain name myth. I’ve bought countless domains in the hope that, for example, accountants.com will rank in the accounting niche better than, say, blogsandco.com – but have found it makes no difference at all.
Having said that, it will help your customers understand if you use the domain blogsandcoaccountants.com – but don’t do that just to game the system. You’ll be wasting your money.
Speed matters – but not just for Google, it matters for your visitors. Why wouldn't you spend some time or money ensuring your site loads as fast as possible? Anything else is madness.
Using WordPress is fine by the way, just don't install too many plugins. Every plugin adds more overhead. Also, only choose official WordPress plugins – that is, those that are registered with WordPress and appear in their plugin directory.
There are exceptions, such as ThriveThemes (and many others), but there are countless hundreds of rogue plugins or simply outdated plugins with security vulnerabilities. Don’t take any chances.
Ensuring your pages work on the mobile platform matters a lot too. But how much that matters depends more on your audience than it does on Google right now.
But don’t ignore it. Google have very specific guidelines about how pages should work on mobile, and again it’s about quality assurance and delivering the right content to users that matters.
You can bet that if two pages give the same information to the user, then Google will rank the one that works better on mobile higher when viewed on a mobile device. This is worth some thought when you’re looking at competing pages using a tool such as SEO Roadmaps.
Every site should have a sitemap. It goes without saying that anything that can help a search engine find all your pages with ease is worth having.
As they said in Toy Story – if you haven’t got one, get one!
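For reference, a minimal sitemap follows the standard sitemaps.org XML protocol. The URL and date below are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/my-article-title/</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
</urlset>
```

Most platforms will generate this for you (WordPress does, and plugins such as YoastSEO offer more control), and you can submit the sitemap URL to Google via Search Console.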
A robots.txt file tells search engine crawlers which pages to ignore. Unless you've got pages you really don't want a search engine to list, there is little need to bother with this.
Also bear in mind that robots.txt is a voluntary convention – no search engine is obliged to obey it anyway.
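If you do need one, a minimal robots.txt looks like this (the paths here are hypothetical examples):

```
# Applies to all crawlers
User-agent: *
# Ask crawlers to skip these sections
Disallow: /admin/
Disallow: /thank-you/

# Optional: point crawlers at your sitemap
Sitemap: https://www.example.com/sitemap.xml
```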
Adding meta information to your pages is a good thing, because it allows search engines to decipher the meaning of certain parts of your page.
But if you use it to game the system – such as trying to get Google to use your page as the one displayed in a carousel at the top of search – then you had better make damn sure your page IS the best page, and not one trying to spam its way there.
Google understands patterns better than any collection of human brains can, so trust me when I say they know what you’re doing.
Visit Schema.org to find out more about this subject.
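As a sketch of what this looks like in practice, here is a minimal structured-data block using the Schema.org Article type (all the values are made-up placeholders). It goes in the <head> or <body> of your page:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "An example article title",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2024-01-15"
}
</script>
```

JSON-LD, as shown here, is the format Google's own structured data documentation recommends.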
About the best thing you can do when researching your article is to make a list of the headings and subheadings used on all the top competing pages for your article.
This is not just good research, it will also inspire you to go down new paths as well as give you ideas for things you may not have previously thought about.
And better still, Google is flagging up what they think matters, so you can do no worse than follow their lead here. The SEO Roadmaps app makes this a doddle to do.
The same applies to the lists and bullet points you find on top competing pages. Stand on the shoulders of giants and take advantage of this simple way to do research.
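If you want to automate that research, a small script can pull the headings out of a competing page's HTML. This is a minimal sketch using only Python's standard library, and it assumes you have already fetched the page HTML yourself (the sample page below is invented):

```python
from html.parser import HTMLParser


class HeadingExtractor(HTMLParser):
    """Collects the text of h1-h6 tags from an HTML document."""

    def __init__(self):
        super().__init__()
        self.headings = []    # list of (tag, text) tuples, in page order
        self._current = None  # tag name while inside a heading, else None

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self._current = tag
            self.headings.append((tag, ""))

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        # Only record text that sits inside an open heading tag
        if self._current:
            tag, text = self.headings[-1]
            self.headings[-1] = (tag, (text + data).strip())


def extract_headings(html: str):
    parser = HeadingExtractor()
    parser.feed(html)
    return parser.headings


page = "<h1>SEO Basics</h1><p>Intro</p><h2>Ranking Factors</h2>"
print(extract_headings(page))  # → [('h1', 'SEO Basics'), ('h2', 'Ranking Factors')]
```

Run this over each top-ranking page for your topic and you have an instant outline of what the competition covers.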
Many SEO experts say you should embolden and highlight words and phrases in your articles where they have semantic context. The idea here is that Google will then weight them higher.
The reality is that it probably makes no difference whatsoever. Put yourself in Google’s shoes and ask yourself if a highlighted or italicised word should really make one page rank higher than another?
And if it did (and people say it did), then you can see how easy it would be to game the system (which many people did), and so it becomes obvious that it’s in everyone’s interest not to weight highlighted words above others.
This equally applies to headings and subheadings, but what is different about those, is that they give an article a flow and a certain semantic meaning, and will only rank anyway if the article itself has proper meaning (and is not more spam).
The most common question in SEO is how many words an article should have.
And the answer is: as many as it needs to answer the question the searcher needs answering.
There is no specific figure – despite what you may have read elsewhere.
People have researched the subject and use statistics such as top pages having between 2,000 and 3,000 words.
Others such as the YoastSEO plugin state an article must have at least 300 words.
They're both right and equally wrong. The point is, the number of words is NOT relevant to anything – with the exception of ONE, and only one, metric.
And that metric is latent semantic indexing (LSI), also known as Latent Semantic Analysis (LSA).
Suppose Google could only figure out a page's value by the number of semantically related phrases on it, and that number could be determined as, say, 954. Then 954 would be the minimum number of words you should go for. But bear in mind that number would have been determined from a specific, already-ranking page – and you're not going to be able to do the same unless you copy it verbatim, which of course means your page would end up in the supplemental index, which means no one would see it anyway.
Use the SEO Roadmaps app to get these figures auto calculated for you.
This is the same argument as the myth about the number of words you need in order to rank an article.
It’s the same thing. It’s not relevant.
In fact, the only real difference between long and short copy comes down to the value, and the general familiarity, of the information or thing you are trying to sell.
If you’re selling a cheap commodity, then it’s not going to take many words to explain it.
If it’s rare, scarce, expensive or some brand new idea, it will take more words to explain, and hence a longer article.
Books have dense text, with very little white space, and in the case of novels, usually no images at all.
And yet people read all of the 70,000 plus words with no complaints at all. So it becomes a matter of expectation too.
15 years ago people started adding meta keywords to their articles in the hope Google would rank them higher because of the relevance.
Then they started spamming those keywords not only in all the available meta tags – but also in the body text.
The result was swift action by Google and a whole bunch of de-indexed pages.
Google said long ago that they have never looked at the keywords meta tag anyway, so that has simply been another myth spread by so called SEO gurus.
Everything is about context and semantic meaning these days, so articles regularly rank for keywords that they don’t even contain. This is because the similes, metaphors and synonyms that make up semantic meaning now count for at least as much meaning as direct use of specific keywords.
Which means you can happily craft excellent prose without worrying about stuffing it with keywords anymore.
However, a word of warning. If you do use keywords, and tend to overuse them for no good reason, you may still get a penalty.
To avoid that, use a keyword density checker such as the one in the SEO Roadmaps app to ensure you haven't accidentally overdone it (you can use it to check the top 10 sites' densities to ensure you're in the right ballpark too).
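For the curious, a keyword density check is nothing more mysterious than counting occurrences against total word count. Here is a minimal single-word sketch in Python (the sample text is invented, and real tools also handle multi-word phrases):

```python
import re


def keyword_density(text: str, keyword: str) -> float:
    """Return the keyword's share of total words, as a percentage."""
    # Lowercase everything and split on word characters
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word == keyword.lower())
    return 100.0 * hits / len(words)


sample = "SEO tips: write for readers first. Good SEO follows good writing."
print(round(keyword_density(sample, "SEO"), 1))  # → 18.2 (2 of 11 words)
```

There is no magic threshold, but a figure wildly above what the top-ranking pages show is a sign you have overdone it.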
On page factors that may help when it comes to ranking pages include the context of headings and subheadings as well as lists and bullet points.
They may also include the number and quality of internal and external links on a page, including images, their links, and any alt text associated with them.
Other factors include keyword density, video and downloadable resources.
But whether these have any direct influence on ranking is a matter of debate, as mentioned at the start of this article, and since there is no definitive answer, the golden rule is to always write for your audience first.
But ensure you don’t miss out anything that matters to Google by deeply researching what’s already ranking (SEO Roadmaps does this for you).
Off page factors include backlinks. That is, all the links that point to your page.
Note that a search result can only point to a single page – that is what a link does – so search engines are really not interested at all in any site's home page unless that home page has the solution on it that people are searching for.
If this were not the case, then every search would simply show the root domain, and that would be the most useless set of search results ever!
Remember this when writing articles. Each page ranks on its own merit and the association of that page to the domain’s home page is completely meaningless.
Which brings us nicely to what I call the backlinking myth.
Backlinks matter. That's fine. But only in the context of whether they are really truly relevant – and not placed simply to get better link juice (whatever that may mean!).
Google understands links. They know precisely whether a link is natural, relevant and good. And if it fails any of those tests, it counts for nothing.
There was a time when students sold links from their university/educational personal profiles to site owners looking to get some authority from being ‘recognised’ by a university.
The same happened with government sites. You can imagine what Google thought about that, and yet people are still selling this idea today.
Once again, don’t be sucked in. These short cuts waste your time and money.
If you write an article that really is useful as an educational tool or help in some way, there is nothing wrong with contacting the webmaster of a relevant educational establishment and letting them know about the article, but if they place a link at all, you had better hope they place it from an equally relevant page (or again, you’re wasting your time).
The one backlink that will do you some good comes from guest blogging on a high-ranking site. The backlink, if you are allowed one, will be placed in your mini bio at the foot of your guest post.
And whether you get kudos for that from Google is less important than the extra visitors (and increased celebrity status) you may get from the referral.
So to sum this up, don't think about SEO at all when writing an article, but DO do your research into what Google and other search engines are already telling you about the subject (by looking at what's already ranking), and do use a tool such as SEO Roadmaps to speed that process up.
Once your article is written and published, I highly recommend returning to it every few months to check its position and make sure you’re not missing out on new research done on the topic.
You can also then add an update note at the top of the article to let Google and your readers know they're reading the latest information – and not some years-old blog post that's completely out of date.
You can also create a video to complement your article, post it on your YouTube channel and link back to your page from there.
I’m sure you can think of many more ways to extend an existing article and help it rise further up the ranks using tactics like this. Just keep it all relevant and well written.
Keep on doing more of the same. We also run a 30 day SEO Beginner to Expert workshop, which you can see a fast track version of on the SEO Roadmaps link here.