Links are nowhere near as important as they used to be (and anyone pushing any type of link generation service is a danger to your business – proof below)
Content must align perfectly with search intent (delivering the best possible content is the only way Google stays in business)
Google is the Gatekeeper (don’t trust anyone else – read why below)
Firstly, it’s the shortest search engine optimisation guide you will ever read (or need).
Secondly, it’s been written using real data supplied by Google through its search engine. And since Google dictates what pages rank where, there can be no better source.
You need to know right now that everything important in SEO started changing in 2016 with the announcement from Google that their PageRank algorithm results would no longer be made public.
Up to that point, anyone who claimed to know anything about SEO used their original patented algorithm (PageRank) to determine whether a page would rank highly (if at all).
And that meant that everyone could (if they wanted to) game the system by comparing real rankings with PageRank scores. But no more – and that’s a very good thing for you and me.
Any guide to SEO (especially a ‘definitive’ one) must therefore rely solely on Google’s output rather than its algorithm. That is, enter a search phrase in Google and see which pages are displayed first.
Whatever comes up is what you have to beat. Whatever those pages contain and link from and to is your benchmark. You can see that any meaningful analysis of this is going to be a big task.
So a definitive guide to SEO (if it’s to be definitive) has got to tackle every aspect known to Google (and commonsense) if it is to be of any use. And that’s why this guide has been written – but it’s not a ‘get and forget’ guide – it’s a living and breathing SEO testimony (with the emphasis on testing).
The internet and the world of search has changed so much, perhaps it should be called New SEO.
Google realises that to survive (which of course it very much wants to do) it must strive to deliver better and better results. If it fails to do that, and in particular gets the ratio of ads to content wrong, we will start to look for alternatives.
And absolutely for certain, an alternative will be waiting. This is in spite of the fact that Google now employs more PhDs than any other company on the planet.
In other words, they must (at all costs) keep on improving and delivering what we want if they are to stay in business. And that is exactly what we want as marketers.
Google will reward top content producers with better placings in the SERPs (Search Engine Results Pages) provided we keep on giving. This change started with the Hummingbird algorithm in 2013 and accelerated when RankBrain was introduced in 2015.
But it is only from late 2016 onwards that the changes these updates made have started to become obvious.
To get the Moz bar, you first need to register a free account with them. Then you can install it in your browser (you need to be logged in to your Moz account to turn it on once it’s installed).
You can see the number of links pointing to each page in Google’s search results for any search term as well as a few other pointers to popularity.
The idea is to give you some ideas on how hard or easy it will be to rank for your term. However, you will notice that the results can be extremely erratic. This is due to those changes I mentioned right at the start of this SEO guide.
Most noticeable is the number of links. Sites with no links at all are now capable of being ranked on page 1. This was very different a few years ago.
It’s all part of Google’s long-standing crackdown on spammy sites and bad SEO practices including link farms, private blog networks, keyword spamming and a whole host of other so-called ‘black hat’ techniques.
But none of that matters right now. What we’re interested in is exactly why some pages rank and some do not. That is the holy grail of all SEO professionals.
And up until 2017 it was extremely hard to get right. Google has filed over 14,000 patents. Amidst those patents are the key components of their search algorithm, including the original patent – PageRank (read all about the PageRank patent here).
Google say they use over 200 signals to determine the rank of any page for the term being searched for. Trying to understand which of those signals matter the most is impossible unless you work for Google.
However, they have publicly stated that the top signal is now the quality of a page’s content. Quality means: does the content answer the search query? This is easy to determine from the searcher’s subsequent actions after clicking the link to that page in the search results.
If the searcher clicks a search result link and promptly clicks their browser’s back button, and then clicks another link on the search page, it’s common sense to assume the first result clicked did not give them the answer they were looking for.
But it’s not always the case. Anyone researching facts (such as product pricing) may go back and forth a number of times. It does not always mean a less than adequate search result.
Google needs to know and understand the differences better than anyone else. What may seem to be a ‘bounce’ in one context may be entirely different in another.
And for Google, delivering the best possible search results is imperative. If they fail more times than they succeed to deliver the right search results, people will eventually seek out a better search engine.
But for now, just looking at pages and some of their stats using the Moz bar will help you see just a little bit more than the average searcher does.
It starts with the title. The length doesn’t matter provided it shows up in full when searched for (in other words, if it’s very long, the end will be truncated).
Moz suggests a maximum of 60 characters will ensure 90% of people searching see it in full (50 should get 100%). Here’s what Moz says about it.
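As a rough illustration of that 60-character guideline, here’s a minimal Python sketch (the helper names are my own, not from Moz or any SEO tool) that checks whether a title is likely to display in full:

```python
# Hypothetical helpers: check a page title against Moz's suggested
# 60-character display limit (Google truncates longer titles).
def title_fits(title: str, limit: int = 60) -> bool:
    """Return True if the title should display without truncation."""
    return len(title) <= limit

def preview(title: str, limit: int = 60) -> str:
    """Show roughly how a too-long title would appear, ellipsis and all."""
    if len(title) <= limit:
        return title
    return title[:limit - 1].rstrip() + "…"
```

Run your draft titles through something like this before publishing and you’ll know at a glance which ones will get cut off in the results.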
Use plenty of subtitles to split up the content. This helps readers understand the big picture as well as making the content easier to read.
Unless you know what Google are already ranking for the topic you are writing about, you won’t have any idea how to get their attention (remember Google is the Gatekeeper).
To see what they think is semantically related, analyse the top 10 sites (the relevant ones only – Google still makes mistakes).
Use the alt attribute on all your images and add relevant text describing the image. This is good web protocol to help users, and Google will see it that way too (in other words, don’t follow old-hat techniques and stuff your image alt text with keywords).
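If you want to audit your own pages for missing alt text, a short sketch using Python’s standard-library HTML parser might look like this (the class name and sample markup are illustrative only):

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collect the src of every <img> tag missing alt text."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):  # attribute absent or empty
                self.missing.append(attrs.get("src", "(no src)"))

checker = AltChecker()
checker.feed('<img src="a.jpg" alt="A red bicycle"><img src="b.jpg">')
print(checker.missing)  # → ['b.jpg']
```

Feed it your page’s HTML and anything in `missing` is an image your visitors (and Google) can’t make sense of.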
Always include links to other relevant pages (both internal and external) if it will help your visitors get more from your article. It’s another good practice. Not sure which links to use? That’s another problem SEO Roadmaps software will solve for you.
The hard part is the analysis. Manually it will take you hours to do the research properly (and if you don’t, you will never know what the triggers are for getting any article to rank on page 1).
Read more about how to rank articles over here.
Before I show you how to organise and create the perfect website and structure your pages properly (so users get what they want, Google respects what you do, and you get more traffic) I need to break a very commonly held belief (and tell you why that belief messes up a lot of sites). Read on…
The most common myth about websites is that the home page is the most important page on the site.
It isn’t. It isn’t even mildly important. It is, in fact, just another page on the internet.
The reason we believe the home page is important is because we think a site is like a book (because on the surface, it kind of is).
Before you buy a book, there’s a great chance the first thing you’ll see is the book’s cover – its ‘home’ page if you like.
So we make this assumption that people only get into our site via the home page. This is rarely the case – yes they can access the site via the home page and a navigation bar, but that’s not how Google and other search engines deliver results.
Google doesn’t show us the home page in every result it displays. In fact, it rarely shows the home page of any site at all.
And this is because the internet consists ONLY of pages. Pages with answers, solutions and other information.
So unless a home page has a solution, it provides little value, and Google is not going to display it.
People want answers when they search, not business cards.
The web is a huge list of pages, not a list of domains.
Yes, there are of course domains, but to think that a domain matters is not to understand the value of the pages a domain contains.
A domain serves NO PURPOSE to the searcher.
A searcher wants an answer to a question or a solution to a problem.
They don’t care if that answer can be found at https://quentinpain.com or at https://quentinpain.com/ideas/successful-business-5-easy-steps/
They only care that they are pointed to A PAGE that contains the answer or solution they want.
Every page on the internet has an address called a Uniform Resource Locator (URL).
The fact that one URL (the root domain for example) just happens to be a subset of another URL (a so called ‘inner’ page) is of no consequence to the searcher.
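You can see this “a URL is just a URL” point with Python’s standard `urlparse`: the root and an inner page differ only in their path component, nothing more.

```python
from urllib.parse import urlparse

home = urlparse("https://quentinpain.com/")
inner = urlparse("https://quentinpain.com/ideas/successful-business-5-easy-steps/")

# To a search engine both are just addresses of pages;
# the "home" page is simply the one whose path happens to be "/".
print(home.netloc == inner.netloc)  # → True (same domain)
print(home.path)                    # → /
print(inner.path)                   # → /ideas/successful-business-5-easy-steps/
```

There is no special “home page” field in a URL – only a path, and `/` is just one path among many.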
Even if a home page contains the complete answer to every question on the topic the website was set up for (i.e. it answers one or more questions right there on the home page), it does not change the fact that it is just another page (and therefore nothing special).
When we search on Google or Bing (or any other search engine), they bring up a list of PAGES. They don’t bring up a list of websites or domains.
(sure, you can see that each page belongs to a domain – but that’s a human thing – NOT a search engine thing – we just love automatically categorising everything we see because it’s how we make sense of the world – but that has no bearing on the usefulness of a search on a search engine).
To put it another way, if Google thought just listing the root domain (the ‘home’ page) was useful, every informational search would result in https://wikipedia.com (and you’d then have to go into Wikipedia and use their search to actually find what you were looking for in the first place – heck, even I could invent a search engine that was more useful than that – i.e. a ‘content’ page).
Every page on the internet should serve a purpose. That’s the whole point of the web.
If a page serves no purpose, then there’s no point.
So ask this question, what is the purpose of your home page?
Whatever the answer is, it should be written to reflect that answer.
When you do that, you accomplish perhaps the most overlooked thing of all about the web – it’s purpose is to serve its audience’s needs (whatever they may be).
If you ONLY want people to “Enter” your site via its home page, then its home page becomes a search engine. So if that’s the intent (and it would be a serious error of judgement if it was), then the home page is going to need a Search button added to it.
Because if that’s not done, then all you have is a really hard-to-use directory of links (and it was that problem back in the mid-1990s that made organisations like Yahoo, AltaVista and AOL wake up to the fact that they needed a search engine to replace all those really hard-to-use sites).
If we truly care about our customers’ experience on our site, then we don’t care if we add internal or external links. We just want to make sure we give them exactly what they need.
We don’t care about so called ‘link juice’ either. If we do, then we’re not giving 100% of our attention to our audience.
However, there’s nothing wrong at all with creating a NEW page on your site that keeps people there by ensuring your new page is BETTER than all the external pages you would have pointed people to otherwise.
That’s the single best tip you will ever hear on Search Engine Optimisation. It means you care more about your readers than anyone else (because you’re always delivering the best content).
It means you care about search engines and what they deliver (because you want them to always be delivering the best content).
And you care about your business and the people that work for you by ensuring it always delivers the best possible solutions.
If you’re worried about how hard it is to navigate your site from the home page, then you’ve missed the point about how the web has evolved.
At the start of the world wide web, every site with more than one page was a mini search engine.
Each page had links pointing to other pages. All the pages were related in some way so as to form a web of related useful information.
As the web expanded, it got impossible to find the information we wanted easily, so someone suggested the idea of a universal index, and the concept of the search engine was born.
Now it no longer mattered whether a site had any navigation menus at all – just so long as it had links somewhere that connected it to other pages that mattered given the context of that page.
But legislation (plus business sense) kicked in, and we had to start adding things such as Privacy Pages to our sites. And of course, they needed to be added to every page – just in case.
And then it made sense to add an ‘About Us’ page – and have that available from every page too – just in case someone visited the page, liked what they saw, and wanted to find out more about the company.
And it also made sense to have a contact us page (not least because it’s also necessary for the privacy stuff). And that too needed adding to every page.
And that really is all that matters when it comes to navigational menus.
Except for one single case.
What happens when someone is interested in what you do, but only has 1 URL – and that URL just happens to be your root domain?
Then you’re going to need a way to categorise your whole site into the smallest number of general links possible (so you don’t overwhelm the home page with thousands of links).
And each of those category style links will take them to a page containing a more detailed set of links. And they in turn will link to more pages until at last the searcher ends up on the one single page they were looking for in the first place.
And with that said, that’s precisely why we have search engines – to get people as fast as possible to the right PAGE and not website – because no one cares about a website – they just want a solution.
So we know that a home page is just another page on the web.
We know its most likely purpose is to serve people who have ACCIDENTALLY landed there (because no decent search engine is going to send anyone to the root of a domain UNLESS that page has the BEST answer on it).
And we know that the ONLY links on ANY page should be relevant and useful for the intent of that page (the exceptions being Privacy, Contact, T&C, and Disclaimer pages, which should be linked from every page because the law says so).
We know that if a site is a business site, then it’s probably going to be a good idea to have an ‘About Us’ page linked to from every page – so the user can decide if they like what we do before they make a purchase or get in touch.
And we now know categorically that having a link to the HOME page on every page is for 99% of the time, completely useless.
In fact it is ONLY useful IF the site’s page to page navigation is so bad, the user gets completely lost.
So having said all that, what does the perfect web page look like?
That is really all you need to know about websites – it’s never about the Home Page, it’s about Every Page.
Stop Press: 2018 Update. Nothing important has changed since the major shift in 2017 with Google AI changes and RankBrain, so read on with confidence if ranking pages on Google is important to you.
Keywords used to be the most important consideration for ranking a page. Now they’re not. It’s all about content, context and the way phrases and words connect together to give meaning (in other words, semantics).
Having said that, it’s still an interesting exercise to use the Google keyword tool to discover estimated traffic – but trust me – those traffic figures can be extremely misleading – and worse, they say nothing about the most important thing of all – which is search intent.
Google’s single mission is to deliver the right content for the intent of the searcher, and boy do they know a lot about what you’re searching for. Enough said. Read on…
Google has changed, and for the better. Quality is at last starting to count in the race up the rankings, and you’ve found the right page to let you in on the secrets – the first one of which is simply to write great content, and the second one is to find a way to get Google’s attention – and there lies the problem. Read on…
Let’s start with the biggest secret of all: Google is the Gatekeeper.
People don’t understand this simplest of all secrets for ranking on page 1.
Google is the ONLY entity that decides who ranks where. So your top tool in understanding why some pages rank and most don’t, is to go straight to Google and type in whatever it is you are trying to rank for.
That could be an article title, a search phrase used by your target audience, or (as in the old SEO days) a keyword.
The point is, keywords are no longer relevant – at least not in the way they used to be. It’s now about key phrases and the intent of those phrases.
It’s about latent semantic indexing, artificial intelligence, Google’s RankBrain, and most important of all – Google wanting to stay in business.
If they don’t deliver top quality results, then they will lose market share – and that could happen faster than you may realise, with voice-activated search (dominated by Amazon) rapidly increasing.
So there’s no point in studying last year’s metrics about the number and quality of backlinks, number of words, domain authority, page authority – or really, very much of what used to be considered rank analysis.
Instead, the new game in town is to analyse pages that are actually ranking for the words and phrases you want to rank for.
And there’s only one player in town when it comes to that analysis – and it’s Google.
Back in the day, everyone was worried about the Google “slap”. You wrote an article, stuffed it full of keywords, backlinked it like crazy using so-called silos for internal links and private blog networks (PBNs) for external incoming links.
Your pages rose to the top in one week, then a month later they disappeared forever as they were moved to the dustbin of search once Google’s algorithm spotted what you were doing.
And so today, if you’re still paying people on Fiverr for cheap links, you’d better cancel that fast – not because your site will go down quicker than the Titanic, but because you’re wasting every single penny you pay – you’re literally throwing money overboard.
So how do you write articles that rank on page 1 without any of this old black/grey/white hat shenanigans?
You write the best possible articles you can. Articles based on real facts and solutions that people are searching for right now.
“Write it and they will come” works, except for one small problem – they won’t come unless your page is found by Google in the first place.
And that’s the secret. You need your page to be seen by Google for something that tags it as being related to whatever it is you want it to be found for.
In other words, every article must have relevancy, quality and some attraction factor.
This is why you must always start with the article’s title.
The article title is the first thing people see. It either hits the mark or it doesn’t. If you write it carefully, with a lot of thought about your target audience, then you may just get a few more eyeballs too.
But that’s not what really matters here. It’s the fact that a single article can and will rank for hundreds of keyphrases over time once Google has found its natural place in the rankings.
And as you’re writing your article, make sure you add internal and external links to other relevant articles.
It’s not just your readers that will appreciate this extra help, it’s Google too. The thing is, whilst backlinks still do count, they only count if they’re natural and they make complete sense for the phrase being searched for.
It’s really obvious when you think about it. A page that has a link to another page that continues a part of the story on another angle, or that goes a little deeper than the main article, or that explains some fact in more details, is as natural a link as you can possibly get.
This passes all the algorithm tests of natural linking – you’ve a great article on a topic, and it links in exactly the right places to other relevant great articles on the same topic.
That’s natural linking.
If you want to get your page penalised or banned, add irrelevant links to other pages. Add lots of links. Create other sites on free platforms such as blogger.com and spam links from there back to your site.
Keep doing it. As you do, you will leave a trail of telltale hints that Google and other search engines will soon realise is all being created from the same source. And that’s when they’ll strike, and your page will disappear.
Private Blog Networks are private because the hope is that no one can ever find out if the blog is connected with you or not.
People who sell and condone this sort of thing insist that you will be fine, but you won’t be. The only people getting rich from this sort of thing are the people selling you schemes on how to do it, plus the people who sell you the domain names and the hosting.
Don’t get sucked in. Ever.
There’s only one type of link that’s acceptable, and that is natural linking as explained earlier.
You will know if you’re breaking any rules simply by asking whether a particular link is there to legitimately help the reader or not. Any link that doesn’t pass this test is a bad link.
It’s not really a myth. Search engine optimisation is a manipulation. It always has been. Google tell us not to manipulate search, yet they do it all the time by placing ads where better content could have been placed.
They measure each ad’s response using a quality score, and if the quality goes down (i.e. fewer clicks than other ads), they keep charging the advertiser more until the advertiser turns off the ad (or Google does).
So what is SEO Friendly Content? It’s any content that gets Google’s attention. And once it’s got that attention, and Google ranks it with some experimental traffic, then, and only then do you have any chance at all of it ranking for the words and phrases you want.
Here’s the second big secret. Google are the gatekeeper as we know. But they’re also the judge and jury of where something ranks.
So if you want to know why one page ranks and another doesn’t, you are going to have to analyse each competing page. There’s no other way to do this now.
Gone are the days of adding a ton of backlinks (which is a good thing). In are the days of adding quality content.
If you enter a phrase into Google and rip apart the top 10 results that appear, you are on the first step of the ladder to get Google’s attention.
And at this point, you are already miles ahead of every other content writer out there.
You’re in the unique position of seeing what Google is experimenting with and what matters to them in terms of quality.
Analyse it further by checking out all the backlinks on those competing pages (both internal and external).
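A hedged sketch of that link check, using only Python’s standard library (the class name and the sample HTML are illustrative – this is not a real crawler, just the sorting logic):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Split a page's links into internal and external for a given domain."""
    def __init__(self, domain):
        super().__init__()
        self.domain = domain
        self.internal, self.external = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if not href or href.startswith("#"):
            return  # skip anchors and empty hrefs
        netloc = urlparse(href).netloc
        if netloc in ("", self.domain):  # relative links are internal too
            self.internal.append(href)
        else:
            self.external.append(href)

collector = LinkCollector("example.com")
collector.feed('<a href="/about">About</a><a href="https://other.org/page">Ref</a>')
print(collector.internal)  # → ['/about']
print(collector.external)  # → ['https://other.org/page']
```

Run this over each of the top 10 pages for your phrase and you get a quick map of who links where – the raw material for the semantic picture described next.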
You’re now going to see a completely different picture of the semantics of an article.
This is the ONLY insight you will ever get into Google’s algorithm. You can run test after test after test and come up with a bunch of probabilities and statistics, but that’s all they are.
And worse, they’re all based on historical values and not today’s values.
Whereas any search done right now tells you exactly what Google thinks is the right sort of thing to rank RIGHT NOW.
This insight is huge and worth a small fortune just to know. And that’s because it’s simple, obvious, and above all, logical. There’s no theory here. Something ranking on page 1 is ranking on page 1. Period.
We’ve all heard of the long tail ever since Chris Anderson wrote about it in 2006.
If we’d all taken that advice back then, we would all have top ranking pages today – and sites so authoritative that the traffic and value of those domains would be astronomical.
When Peter Drucker said we overestimate what we can do in a year and underestimate what we can do in 5 or 10 years, he was not wrong.
We’re always after the short cut. The quick buck. The silver bullet. When we should be looking for the Golden Long Service Award.
Ranking a site for just the article title is the first step to greatness and recognition on the web. Nothing much else happens until that happens.
That is how Google knows what your page is about. Google put it there because it saw something in it. So short article titles are useless (unless it’s interconnected with something Google is already engaged with).
And once you have your page ranking, then Google will start experimenting with other related keyphrases. Google wants to give the user the best experience (every user makes Google money and keeps their investors happy).
And Google will continue to experiment with phrases for as long as it’s getting good feedback from its experiments.
At some point or another, there will be nothing further for it to experiment with, and your article at that time and place will have found its natural ranking position.
And that’s when most people (who even know about this) stop. They think the article is done. Nothing more to do.
Is that useful to their customer? Yes, for a time it is. But at some point, things will move on – Google included. Others will join in and start to produce better, more up to date articles on something similar.
And Google will carry on experimenting, and that original article will start to slip down the rankings.
And if there’s any decent traffic to be had for that article and all its derivative keyphrases, then it’s going to slip down a lot faster as competitors start to understand the gap that’s slowly opening up.
But those in the know, know this very well, and so they stay on top by ensuring their customers are always at the top of the agenda. Their articles are updated on a regular basis so it’s not just the customer who is kept informed, it’s Google too.
And they know it because the Google bot will come back to their pages time and time again to see what’s new simply because those pages are being updated and added to regularly.
Article extension happens when you go and do the research again. It gives you a chance to add anything new. And that keeps you ahead of the competition. Because if you do this right, you will ensure that your page is not just unique, it incorporates anything useful that your competitor pages have too.
Standing on the Shoulders of Giants is the tactic (or even strategy) that needs to be adopted from now on. There’s a very good reason professional authors do all their research first. They want to be known as the best.
Google filed their original patent in 1998. They’ve filed numerous other patents since. No one except Google knows how their algorithm works.
It’s all guesswork, some of it well educated guesswork, but most of it is simply made up or copied from other people who made it up.
And the best bit is we don’t care a jot. It’s of no consequence and a complete waste of time.
That’s because whatever test you carry out today, and whatever conclusions you draw from it, are always yesterday’s news – and Google know it.
Some say it involves over 200 separate metrics. Great. Prove it.
Some say backlinks are still as important as ever. Great. Prove it.
Some say it’s the number of words that decides whether a page is page 1 material. That’s partially true, in the trivial sense that every page on page 1 has a word count, and pooling those counts with every other possible metric gives you a rough target length for an article with that title.
At least that’s getting somewhere towards some kind of truth. It’s fact based at any rate.
But does any of it matter? No. Not any more. All that counts now is quality and natural backlinks.
And they’re all voted on by searchers every time they click and every time they click back to the search engine. And as I’ve said, none of that happens unless Google give the thumbs up to an article in the first place and start experimenting by sending traffic to it.
This is another pointless exercise easily figured out, but ultimately a waste of time.
What help is the URL of a page to the reader or searcher? None. Except it may be easier to remember, and therefore type or pass on verbally. But that has nothing to do with search.
Search is just clicks or voice. The length of anything has no relevance to what the searcher wants with the exception of an article that does not give the answer they’re looking for as quickly as possible.
If you take this article, which is certainly long, the searcher gets the basic answer in the first paragraph (the answer to how to rank any article on page 1 of Google is to write great content, and make sure you write it in a way that gets Google’s attention).
The fact that Google announced years ago that things like meta keywords in an article count for nothing tells you all you need to know.
Many tests have been done on the correlation between having a keyword in the title and not having one – and just as many variable results have come up.
If you include a specific keyword in an article title in the hope that article will rank for that keyword, you are wasting your time with the exception of one point: if a searcher sees your article in the rankings, and they are specifically after something to do with that particular keyword, then they are logically more likely to click on your article instead of someone else’s.
But that makes no difference to the search results because Google sees every article as a complete piece. It’s not about details, it’s about the overall semantic value of the article.
If you want to find out how to analyse pages and track your articles, join our FREE SEO Workshop below.
LSI was the big buzzword of 2014. It’s kind of old hat for most search engine optimisation specialists these days, because it’s been swallowed up in Google’s RankBrain AI machine-learning code.
Latent means hidden or dormant or not yet revealed, so the idea is that any keyword (or key phrase) has other meanings that may not be so obvious, but are connected to their surrounding sentences and article topic.
In other words, if I use the word ‘pants’, it could have multiple meanings depending on the topic – or surrounding references – or the way it’s being used figuratively.
From this, you can start to appreciate just how clever we humans are! We can decipher all sorts of meanings even from a simple expression on someone’s face without a word being spoken.
So the idea of LSI and why it matters is to make some sense of the context of a piece of writing in order to decide its category and sub category, and in particular, where it may be useful in terms of an answer for some search query.
Using our example word ‘pants’, if someone searches for ‘incontinence pants’ and Google shows up a page entitled “Why these new bicycle tyres are pants”, you can see there’s going to be a mismatch (and Google will get voted down for this answer by the searcher – who at some point may decide Google is just no good anymore).
When it comes to writing articles that answer problems, you need to make quite sure that every word you use is semantically correct for the topic you are writing about.
Having said that, there’s no problem using figurative words, metaphor or crazy synonyms (such as ‘sick’ meaning ‘good’, for example); it’s just that you may confuse Google and other search engines in the process – and so your article may not rank as highly as some competitor’s.
And from that, it’s simple to deduce that using semantically linked words and phrases that make sense is going to help you optimise your articles far more than poetic prose just for the sake of it.
But that doesn’t mean losing the writing “voice” you may have spent years developing. It just means ensuring that you sprinkle the right LSI keywords and phrases throughout your articles where it makes sense to do so – and perhaps replacing some less obvious phrases with something more helpful, at least from the reader’s perspective.
We’ve covered some examples already, but what do LSI keywords and phrases look like?
To answer that, you need to know something about maths and vectors, and how an algorithm can be generated that will make some sense of a whole bunch of connected words in an article.
And the best way to start that is to think about intent. What is an article trying to do? Who is it trying to serve? What form is it taking to do this?
All these things matter. But what matters more is clarity. As we’ll see in a moment.
The bottom line is to ensure your sentences make sense. That they answer the problem that the article promises to answer in its title.
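Under the hood, the academic version of LSI starts by treating each document as a vector of word counts and comparing those vectors by the angle between them. Here’s a toy sketch in plain Python – the two ‘pants’ documents are invented for illustration, and this is only the crudest cousin of whatever Google actually runs:

```python
from collections import Counter
import math

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine of the angle between two word-count vectors (1.0 = identical topic mix)."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Two tiny made-up "documents" about very different kinds of 'pants'
doc1 = Counter("incontinence pants for adults comfortable absorbent pants".split())
doc2 = Counter("these bicycle tyres are pants terrible grip in the wet".split())
query = Counter("incontinence pants".split())

print(cosine_similarity(query, doc1))  # higher: topically related
print(cosine_similarity(query, doc2))  # lower: only shares the word 'pants'
```

The query scores much closer to the first document even though both documents contain the word ‘pants’ – which is exactly the mismatch problem described above.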
Before we go down that path, let’s see what Google has to say about it using their Keyword Planner tool.
Or rather, let’s not!
The problem with Google’s planner tool is that it’s all about advertising, not article writing. Yet it’s the only thing left in search that still has some meaning when it comes to keywords.
And that’s because it’s easier for Google right now to have people bid on specific sets of keywords than it is to hope that somehow LSI related keywords will get picked up when advertising using CPC (cost per click).
Which is why Google try to be helpful by offering suggestions, but you will find in practice it’s better to figure out the whole journey of your prospect in the first place, then ask them what they’re actually searching for, and use that data to plan your paid advertising campaigns.
As Google says, their keyword planner is only a guide. No results can be guaranteed. And worse still, every competitor is also using it.
It’s vital that you understand the difference between ranking an article and getting traffic from it.
It’s not hard to rank an article, it just depends on what it is ranking for.
And every article that’s worth reading isn’t just ranking for one key phrase – it’s often ranking for hundreds of phrases.
If your article is perfectly optimised using LSI keywords and phrases, then it has the best chance of ranking for hundreds of keywords. And if your intent in the article is obvious – e.g. to get someone to buy something – then a search engine like Google will pick up on this semantically, and start showing ads on the search results for it.
Discovering your LSI status for an article is vital if you’re going to understand what Google thinks of it (and remember that it’s Google that decides where your article fits semantically in its index – not you).
So the ideal article will rank at the top of Google for its main content, and also rank for 100 to 500 other words as well. It’s these other words (that may or may not be long tail) that bring in the traffic.
LSI Keyword Examples
Let’s use the article you’re reading right now (i.e. this one) to see some examples.
We know the article is all about Latent Semantic Indexing. We also know that it is abbreviated LSI (which you can see in the very first headline).
Google understands that LSI = Latent Semantic Indexing – the hidden (latent) meaning in that abbreviation is already in Google’s database of synonyms, polysemes, acronyms and abbreviations.
And you can see that the above paragraph is very clear about its meaning. There are also alternative words and phrases that can connect the contents of an article together even further.
For example, Latent Semantic Analysis (LSA) is another term used, which may or may not be similar depending on certain academic criteria. And using the phrase ‘academic criteria’ builds yet more context around the term LSA.
In short, as an article is built up using everything we’ve talked about so far – acronyms, alternative words, synonyms, polysemes, related phrases, figurative speech, and some new concepts such as colloquialisms – we build up a strong case for whatever it is we are trying to convey.
The page you’re reading now is ranking #1 on Google for its title text: “The Definitive Guide to Latent Semantic Indexing”. Yet it has zero links and far below average domain authority according to MOZ.
This is because it precisely answers the promise posed in the title, and uses Google to verify that promise in its content (using the SEO Roadmaps app).
Google is now ranking articles on merit alone where that merit is justified. This is good news for all of us. As I’m always saying, writing good stuff now gets rewarded.
Now you know all about LSI basics, why not generate a few LSI terms and phrases yourself.
A great and simple tool to do this is LSIGraph over here. (press the Ctrl key at the same time you click if you want this to open in a new tab – it’s Cmd + click on a Mac).
But knowing a bunch of LSI related words is one thing – knowing whether Google is even remotely interested is quite another. To find that out, you can get a subscription to the SEORoadmaps app over here.
The whole point of ensuring your article can be found and indexed semantically by Google is to get you more visitors. And if your site is commercial, then more visitors usually means more sales.
Whilst I always recommend working out how to use paid advertising to ensure a continuous stream of traffic to your site, why not ensure your content fits with Google’s aim of delivering high quality solutions to its searchers as well, so you can benefit from tons and tons of free traffic.
That is why LSI ultimately matters. It’s getting you noticed by Google for the right reasons.
Mark Zuckerberg, founder of Facebook, was the first person to publicly announce his company was going to adopt a mobile first strategy.
And he was dead right to do so (notice I’ve used the phrase ‘dead right’ here – this is pointless prose from an LSI point of view, but it adds ‘voice’ to my writing – and over time that voice may well be picked up by Google’s AI machines to mean something relevant to intent – or something else we don’t yet understand).
Mobile is the fastest growing medium by far. And it’s mobile that first drove the concept of search using Voice. Right now, 1 in every 4 searches on mobile are done using voice. And it’s rising fast.
That means, search engines need to understand speech from a semantic point of view as well as the written word – and they are very different things due to the way we are taught to write in school.
This is worth bearing in mind as you develop your writer’s voice. Speaking out aloud whatever you’ve written is the simplest way to understand whether you’re writing from your voice or from some academic voice. This is going to matter a lot more in the future (TOP TIP: your voice will become infinitely more important than an academic voice over time).
Hashtags give people quick references to things. But their biggest use is in search. The clearer the tag, the better the search. But the longer the tag, the harder it is to remember.
Right now, it serves very little use in terms of ranking, but a lot of use in terms of specific result delivery.
The key abuse in search over the last decade has been the overuse of keywords. And that applies to overuse of LSI just as much, with the only difference being that the more synonyms you use, the less clear it is whether a keyword is being stuffed or not.
At least that’s the opinion of many SEO gurus, but I disagree with this. Google has more PhD engineers working on AI than any other company, so it makes complete sense to guess that they understand this better than us.
There is a 99.99% chance that if people start using LSI related phrases to keyword stuff, it will be obvious what’s happening, and a Google penalty will ensue.
Start by checking your keyword density. Then compare the combined densities of related LSI oriented phrases. That’s a really tough analytical job because no one knows exactly how Google do this in the first place.
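As a starting point, a basic density check is easy to script yourself. This sketch (plain Python – the sample text is invented, and no claim is made about what density Google considers ‘safe’) works out what share of an article’s words belong to a given phrase:

```python
import re

def keyword_density(text: str, phrase: str) -> float:
    """Percentage of the words in `text` that belong to occurrences of `phrase`."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    phrase_words = phrase.lower().split()
    n = len(phrase_words)
    hits = sum(
        1 for i in range(len(words) - n + 1)
        if words[i:i + n] == phrase_words
    )
    return 100.0 * hits * n / len(words)

sample = "LSI keywords help. Sprinkle LSI keywords naturally, never stuff LSI keywords."
print(round(keyword_density(sample, "LSI keywords"), 1))  # 54.5 – clearly stuffed!
```

Comparing your own figure against the same calculation run over the top-ranking pages at least tells you whether you’re in the same ball park as what already ranks.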
Which takes us nicely on to the final subject of how to write articles using LSI in order to give you a lift up the rankings.
And the answer is quite simply, don’t! Instead, do the analytics so you understand the space in which you are writing. Ensure you cover all the bases as dictated by Google (using the SEORoadmaps app), then write the best article you can using your research plus your own knowledge and voice.
Google will then reward you for good stuff. That’s the bottom line!
Read the following article on page 1 ranking to find out more about what you need to do next if you want a bigger audience, more followers, and more sales.
All the SEO gurus suggest there are over 200 ranking factors. But none know the truth. Only Google does.
And on top of that, no one except Google knows the true weighting of each of those factors, let alone any machine-learned relationship weighting (if that even exists) or other factors no one has yet guessed (the unknown unknowns).
What has been done are post factor experiments, where known rankings are checked again and again against perceived or announced ranking factor changes.
But whichever way you look at it, it is mostly putting a finger in the air and guessing which way the wind is blowing.
And this is good news, because the less anyone knows about these factors, the better the content will be – which is good for you, your company, your customers – and Google.
But having said all that, let’s explore what various experts say on all this, and sprinkle in some good old common sense and logic.
The first premise on that point, is that Google want to stay in business, and if they’re to do that, they are going to do everything possible to deliver the best set of results they can.
This is why they employ more AI engineers than anyone else on the planet. It’s the least they can do for their shareholders as well as their audience (i.e. us).
The second premise is that a search engine that delivers bad results is a) pointless, and b) won’t last long.
We all know that HTML is tag based. Tags help browsers format the data in human readable form.
And that means tags also help search engines understand the importance or relevance of tagged content.
Some tags also have attributes. The anchor tag <a> is a good example. As well as the ‘href’ attribute holding the link destination, it also has the ‘rel’ attribute. And one use of that attribute is to state that a link is ‘nofollow’.
A nofollow rel attribute in theory means that no link juice is passed through that link – and conversely, no rel attribute means (in theory) link juice is passed.
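You can audit your own pages for nofollow links with nothing more than Python’s standard library. The HTML fragment below is made up purely for the demonstration:

```python
from html.parser import HTMLParser

class LinkAudit(HTMLParser):
    """Collect (href, is_nofollow) pairs from every anchor tag in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            d = dict(attrs)
            rel = (d.get("rel") or "").lower().split()
            self.links.append((d.get("href"), "nofollow" in rel))

# A made-up fragment: one normal link, one marked nofollow
page = (
    '<p><a href="https://example.com/guide">A relevant guide</a> '
    '<a href="https://example.com/ad" rel="nofollow sponsored">An ad</a></p>'
)
audit = LinkAudit()
audit.feed(page)
print(audit.links)
# [('https://example.com/guide', False), ('https://example.com/ad', True)]
```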
So spammers of the past used to stack up thousands of ‘follow’ links in the hope that all that link juice would artificially raise the ranking of the page they all pointed to.
This activity still goes on today, and one form of it is the use of so called Private Blog Networks (PBNs).
Google have been aware and have been squashing this for well over a decade, yet the practice still persists, with many claiming that it still works.
That may be so for some, but they are up against machines that are already so much more powerful than your average human being at spotting patterns, that it is extremely unlikely they will get away with it for very much longer.
Then there’s the tricky problem of less than honest businesses paying for services to spam their competitors out of business.
It’s obvious and logical that Google have dealt with that sticky problem too (their top evangelist at the time – Matt Cutts – was spammed this way).
No one knows, but it makes complete sense that if Google detect link spamming of any sort and they cannot 100% associate that spamming with the owners or operators of the page or site being spammed, they are simply going to ignore the links.
Which means that it’s almost certain that the rel=nofollow attribute has little value anyway.
After all, if you have a page, and that page includes an outbound link to some other page, that other page should be relevant to whatever the topic is, and should therefore be a normal follow style link – why would anyone not do that? (the simple answer is because they want to be greedy with their link juice because they read somewhere that every outbound link loses them some of it).
Do you think that really matters to Google? Of course not. Go back to the first premise – Google and all search engines need to deliver good content. Period.
And so we come to all the links on a page. If you were writing a search engine and ranking pages, you would expect to see the chain of links incoming, outgoing and internally linking to be similar in context.
That’s all that matters.
If all the pages that link together are all themselves good quality pages including the pages they link to as well, you have the start of a half decent link quality assessment algorithm.
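The original patented PageRank algorithm is essentially this idea iterated until the scores settle. Here’s a toy version – the four-page graph is invented, the 0.85 damping factor is the commonly cited default, and real ranking moved far beyond this long ago:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: `links` maps each page to the pages it links out to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: share its rank with everyone
                for p in pages:
                    new[p] += damping * rank[page] / len(pages)
            else:  # pass rank along each outbound link equally
                for target in outgoing:
                    new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# Made-up graph: every page links to 'home', which links on to 'guide'
graph = {
    "home": ["guide"],
    "guide": ["home", "contact"],
    "contact": ["home"],
    "spam": ["home"],
}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # 'home' accumulates the most link equity
```

Notice that the ‘spam’ page can push rank towards ‘home’ simply by linking to it – which is exactly why a links-only algorithm is so easy to game, and why the extra layers described next became necessary.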
But stick in millions of spammy pages, sites and directories, plus all the myriad linking schemes that come with them (e.g. silos, rings and PBNs), and your algorithm is going to find it hard to differentiate between the good, the bad and the ugly.
And so you bring in pattern recognition, semantic indexing, AI machine learning and user profiling, and you eventually reach a position far beyond merely figuring out links.
You start to be able to read the value of a page purely on its own merit. And that is the one fundamental thing that’s happening right now.
Great pages, full of the right content (the content readers are searching for), are now ranking despite what the backlink heroes of old are claiming. I know this to be true because of the sheer number of pages my own sites are now ranking for (all without a single artificial backlink – every backlink has happened naturally).
Another SEO myth is the URL or page link. I’m not saying your link shouldn’t have some direct or semantic meaning – it should – that’s useful to the reader.
It’s just that so many experiments have been done with relevant links, and so much has been written against non-relevant link URLs, that it’s quite rare to see any link that isn’t now human readable.
And yet, completely unreadable links still hit the top spot on Google.
So if in any doubt about what link you should give your newly published article, using the article’s title is the right thing to do – simply because it makes sense.
The same applies to the domain name myth. I’ve bought countless domains in the hope that, for example, accountants.com will rank in the accounting niche better than, say, blogsandco.com – but have found it makes no difference at all.
Having said that, it will help your customers understand if you use the domain blogsandcoaccountants.com – but don’t do that just to game the system. You’ll be wasting your money.
Speed matters – but not just for Google, it matters for your visitors. Why wouldn’t you spend some time or money ensuring your site loads as fast as possible. Anything else is madness.
Using WordPress is fine by the way, just don’t install too many plugins. Every plugin adds more overhead. Also, only choose official WordPress plugins – that is those that are registered with WordPress and appear in their plugin directory.
There are exceptions, such as ThriveThemes (and many others), but there are countless hundreds of rogue plugins or simply outdated plugins with security vulnerabilities. Don’t take any chances.
Ensuring your pages work on the mobile platform matters a lot too. But how much that matters depends more on your audience than it does on Google right now.
But don’t ignore it. Google have very specific guidelines about how pages should work on mobile, and again it’s about quality assurance and delivering the right content to users that matters.
You can bet that if two pages give the same information to the user, then Google will rank the one that works better on mobile higher when viewed on a mobile device. This is worth some thought when you’re looking at competing pages using a tool such as SEO Roadmaps.
Every site should have a sitemap. It goes without saying that anything that can help a search engine find all your pages with ease is worth having.
As they said in Toy Story – if you haven’t got one, get one!
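If your CMS doesn’t generate one for you, a minimal sitemap is trivial to build. This sketch uses only Python’s standard library and placeholder URLs – see sitemaps.org for the full protocol:

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls):
    """Return a minimal sitemap.xml string listing the given page URLs."""
    ET.register_namespace("", NS)
    urlset = ET.Element(f"{{{NS}}}urlset")
    for page in urls:
        url = ET.SubElement(urlset, f"{{{NS}}}url")
        ET.SubElement(url, f"{{{NS}}}loc").text = page
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            + ET.tostring(urlset, encoding="unicode"))

# Placeholder URLs – use your real page addresses
sitemap_xml = build_sitemap([
    "https://www.example.com/",
    "https://www.example.com/definitive-guide-to-lsi/",
])
print(sitemap_xml)
```

Save the output as sitemap.xml at the root of your site and submit it via Google Search Console.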
Robots.txt files tell search engines which pages to ignore. Unless you’ve got pages you really don’t want a search engine to list, there is little need to bother with this.
Also bear in mind that no search engine has to obey a robots.txt file anyway.
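If you do keep a robots.txt file, it’s worth sanity-checking that it isn’t blocking pages you want indexed. Python’s standard library parses the format the same way well-behaved crawlers do (the rules below are made up):

```python
from urllib.robotparser import RobotFileParser

# A made-up robots.txt: block the /private/ area, allow everything else
rules = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "https://example.com/definitive-guide/"))   # True
print(rp.can_fetch("*", "https://example.com/private/notes.html"))  # False
```

Point RobotFileParser at your live file (set_url plus read) to run the same check against your actual site.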
Adding meta information to your pages is a good thing, because it allows search engines to decipher the meaning of certain parts of your page.
But if you use it to game the system – such as trying to get Google to use your page as the one to display with a carousel at the top of search, then you better make damn sure your page IS the best page and not trying to spam itself there.
Google understands patterns better than any collection of human brains can, so trust me when I say they know what you’re doing.
Visit Schema.org to find out more about this subject.
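Schema.org markup is usually embedded as a JSON-LD script tag in the page head. Here’s a hedged sketch of an Article snippet – every value is a placeholder, and Google’s own structured data documentation lists which properties they actually use:

```python
import json

# Placeholder values: swap in your real article details
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "The Definitive Guide to Latent Semantic Indexing",
    "author": {"@type": "Person", "name": "Your Name"},
    "datePublished": "2020-01-15",
}

# Wrap the JSON in the script tag that goes in your page's <head>
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(article, indent=2)
    + "\n</script>"
)
print(snippet)
```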
About the best thing you can do when researching your article is to make a list of the headings and subheadings used on all the top competing pages for your article.
This is not just good research, it will also inspire you to go down new paths as well as give you ideas for things you may not have previously thought about.
And better still, Google is flagging up what they think matters, so you could do a lot worse than follow their lead here. The SEO Roadmaps app makes this a doddle to do.
The same applies to the lists and bullet points you find on top competing pages. Stand on the shoulders of giants and take advantage of this simple way to do research.
Many SEO experts say you should embolden and highlight words and phrases in your articles where they have semantic context. The idea here is that Google will then weight them higher.
The reality is that it probably makes no difference whatsoever. Put yourself in Google’s shoes and ask yourself if a highlighted or italicised word should really make one page rank higher than another?
And if it did (and people say it did), then you can see how easy it would be to game the system (which many people did), and so it becomes obvious that it’s in everyone’s interest not to weight highlighted words above others.
This equally applies to headings and subheadings, but what is different about those, is that they give an article a flow and a certain semantic meaning, and will only rank anyway if the article itself has proper meaning (and is not more spam).
The most common question in SEO is: how many words should an article have?
And the answer is, as many as it needs to answer the problem the searcher needs answering.
There is no specific figure – despite what you may have read elsewhere.
People have researched the subject and use statistics such as top pages having between 2,000 and 3,000 words.
Others such as the YoastSEO plugin state an article must have at least 300 words.
They’re both right and equally wrong. The point is, the number of words is NOT relevant to anything – with the exception of ONE, and only one, metric.
And that metric is latent semantic indexing (LSI), also known as Latent Semantic Analysis (LSA).
If Google can only figure out a page’s value from the number of semantically related phrases on it, and that number can be determined as, say, 954, then 954 would be the minimum number of words to aim for. But bear in mind that number would have been derived from a specific, already-ranking page – and you won’t be able to do the same unless you copy it verbatim, which of course means your page would end up in the supplemental index, and no one would see it anyway.
Use the SEO Roadmaps app to get these figures auto calculated for you.
This is the same argument as the myth about the number of words you need in order to rank an article.
It’s the same thing. It’s not relevant.
In fact, the only thing that should decide between long and short copy is how widely understood the information or thing you are trying to sell already is.
If you’re selling a cheap commodity, then it’s not going to take many words to explain it.
If it’s rare, scarce, expensive or some brand new idea, it will take more words to explain, and hence a longer article.
Books have dense text, with very little white space, and in the case of novels, usually no images at all.
And yet people read all of the 70,000 plus words with no complaints at all. So it becomes a matter of expectation too.
15 years ago people started adding meta keywords to their articles in the hope Google would rank them higher because of the relevance.
Then they started spamming those keywords not only in all the available meta tags – but also in the body text.
The result was swift action by Google and a whole bunch of de-indexed pages.
Google said long ago that they have never looked at the keywords meta tag anyway, so that has simply been another myth spread by so called SEO gurus.
Everything is about context and semantic meaning these days, so articles regularly rank for keywords that they don’t even contain. This is because the similes, metaphors and synonyms that make up semantic meaning now count for at least as much meaning as direct use of specific keywords.
Which means you can happily craft excellent prose without worrying about stuffing it with keywords anymore.
However, a word of warning. If you do use keywords, and tend to over use them for no good reason, you may still get a penalty.
To avoid that, use a keyword density checker such as the one in the SEO Roadmaps app to ensure you haven’t accidentally overdone it (you can use it to check the top 10 sites’ densities to ensure you’re in the right ball park too).
On page factors that may help when it comes to ranking pages include the context of headings and subheadings as well as lists and bullet points.
They may also include the number and quality of internal and external links on a page, including images, their links, and any alternative text associated with them.
Other factors include keyword density, video and downloadable resources.
But whether these have any direct influence on ranking is a matter of debate, as mentioned at the start of this article. And since there is no definitive answer, the golden rule is to always write for your audience first.
But ensure you don’t miss out anything that matters to Google by deeply researching what’s already ranking (SEO Roadmaps does this for you).
Off page factors include backlinks. That is, all the links that point to your page.
Note that a search result can only point to a single page – that is what a link does – so search engines are really not interested at all in any site’s home page unless that home page has the solution on it that people are searching for.
If this were not the case, then every search would simply show the root domain, and that would be the most useless set of search results ever!
Remember this when writing articles. Each page ranks on its own merit and the association of that page to the domain’s home page is completely meaningless.
Which brings us nicely to what I call the backlinking myth.
Backlinks matter. That’s fine. But only in the context of whether they are really, truly relevant – and not placed simply to gain link juice (whatever that may mean!).
Google understands links. They know precisely whether a link is natural, relevant and good. And if it fails any of those tests, it counts for nothing.
There was a time when students sold links from their university/educational personal profiles to site owners looking to get some authority from being ‘recognised’ by a university.
The same happened with government sites. You can imagine what Google thought about that, and yet people are still selling this idea today.
Once again, don’t be sucked in. These short cuts waste your time and money.
If you write an article that really is useful as an educational tool or help in some way, there is nothing wrong with contacting the webmaster of a relevant educational establishment and letting them know about the article, but if they place a link at all, you had better hope they place it from an equally relevant page (or again, you’re wasting your time).
The one backlink that will do you some good is the one you get from guest blogging on a high-ranking site. The backlink, if you are allowed one, will be placed in your mini bio at the foot of your guest post.
And whether you get kudos for that from Google is less important than the extra visitors (and increased celebrity status) you may get from the referral.
So to sum this up, don’t think about SEO at all when writing an article, but DO do your research into what Google and other search engines are already telling you about the subject (by looking at what’s already ranking), and do use a tool such as SEO Roadmaps to speed that process up.
Once your article is written and published, I highly recommend returning to it every few months to check its position and make sure you’re not missing out on new research done on the topic.
You can also then add an update note at the top of the article to let Google and your readers know they’re reading the latest information – and not some years-old blog post that’s completely out of date.
You can also create a video to complement your article, post that on your YouTube channel and link back to your page from there.
I’m sure you can think of many more ways to extend an existing article and help it rise further up the ranks using tactics like this. Just keep it all relevant and well written.
Keep on doing more of the same. We also run a 30 day SEO Beginner to Expert workshop, which you can see a fast track version of on the SEO Roadmaps link here.
It's a methodology, when done properly, that gives a signal to Google (and other search engines) to pay attention to a particular page on a website.
What it's not is a way to game search engines to artificially rank one page above another. Why not? Because that page will eventually get de-indexed, and if there's a lot of evidence, the whole site the page is on can be de-indexed too.
Google has been telling us this for years. Never ever pay anyone to add backlinks to your site. Google understands natural backlinking profiles.
So if you want the best possible profile, publish good stuff and share it everywhere.
I show you how to do that in my SEO Expert course over here.
The great thing about SEO is that you don't need to worry about robots, crawlers and spiders - or all the other wizardry of search engines.
It's fine to geek out on them, but unless you have something called a robots.txt file on your website, you needn't worry about it.
(and if you do have a robots.txt file on your site, just make sure it's not blocking the content you want to rank).
There's only one hat worth wearing when it comes to SEO and all the tomfoolery that takes place in the great race to the top, and that is THE WRITE HAT (click to join my Facebook Group).
What I mean by that is don't game the search engines. They're all getting significantly better at detecting bad SEO practices, so the less of it that's done, the better off everyone will be.
Of course, that's never going to stop people from trying - that's life, but I feel sorry for the innocent people who get duped into buying some rubbish service that promises the world and delivers a deindexed site.
Having said all that, pretty much everything on the internet is manipulated in some way - from asking a friend to link back, to placing an affiliate link somewhere, to getting an entry on Wikipedia etc.
So it's all pretty much grey hat, but that's fine just so long as you don't stray into private blog networks (PBNs) or paid links to high-PageRank sites (not least because Google themselves say a page's actual PageRank is no longer published - and no longer relevant anyway).
GEEKNOTE: PageRank is the name of the original patented algorithm used by Google.
It's never been about getting a site indexed (ie. getting it on Google), it's always been about individual pages.
And this is because searchers can only read 1 page at a time. So each page should give some information on a specific topic so that Google and other search engines can understand what it's about and rank it appropriately.
More on that later though. For now what matters is getting Google to see that you have just published a page.
If your site already has pages indexed by Google, just sit back and wait. Google will send its crawlers over to you at some point.
What that point is depends on how often you update your site. If it's every day, then Google learns that and visits every day. If it's every month, then guess what!
Having said that, you can force Google to index a page using the Google Search Console tool.
Facebook announced very early on that they were a Mobile First company.
What they meant by that was that all their user interfaces would be architected from a mobile phone screen size point of view.
It makes complete sense, since the world had been moving rapidly in that direction ever since Steve Jobs came out with the iPhone and iPad.
But it also means there are going to be a lot of design restrictions, not least of which is speed.
Google have followed suit and produced the AMP standard for mobile usability.
AMP = Accelerated Mobile Pages and you can find out all about that over here on the AMP site.
The long and short of it is that unless your pages operate at a reasonable speed on mobile networks, and that they're just as easy to use on mobile as they are on desktop, then Google states they may punish you by not displaying your pages so prominently.
It makes sense since Google want the user to get the best experience on their platform, or they just might go and try another one - and if that happens with enough people, Google will fail fast.
Which brings us neatly on to another awesome Google tool that does far more than tell you your site is pants when it comes to speed.
It's also going to be rubbished if your buttons are too close together, or your CSS takes a little too long to load - and the same goes for your various JavaScripts.
If you want to check that out, pop along to the Page Speed Insights page and pop your web address URL in there and check your scores for both mobile and desktop.
This is a truly great tool if you're about to pay a fortune to a web development company, because you can audit their own website first, and if it sucks, well, you'll know what to do.
Here's the link: Google Page Speed Insights Tool.
This is another area Google say you need to pay attention to.
If you place an image that's say 2000 x 1000 pixels in size in a placeholder designed to take an image of 500 x 400, then you're going to be wasting a lot of time loading the oversized image.
And that means a worse user experience, and that means Google are going to penalise you. No surprise there for all the reasons we've already discussed.
So what to do about it? Well, first off, there's a good chance the image has not been fully compressed in the first place, and secondly, it should be resized to fit before you bother compressing it.
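The arithmetic behind that advice is worth seeing. The dimensions come from the example above; the bytes-per-pixel figure is a rough assumption for a reasonably compressed JPEG, not a measured value:

```python
# Dimensions from the example above
original = 2000 * 1000   # pixels actually uploaded
displayed = 500 * 400    # pixels the placeholder can show

wasted_ratio = original / displayed
print(wasted_ratio)  # 10.0 -> shipping 10x the pixels the page can use

# Rough assumption: ~0.25 bytes per pixel for a well-compressed JPEG
bytes_per_pixel = 0.25
saved_kb = (original - displayed) * bytes_per_pixel / 1024
print(round(saved_kb))  # roughly the KB that resizing alone would save
```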
The Page Speed Insights will let you know if that's a problem by the way, so use that as your first line of defence when auditing your site.
The point is that SEO is far more than putting up good content and getting a bunch of other recognised sites to link to it.
There's a whole bunch of compression software out there, but if you use WordPress as your CMS, then you're in luck, as the free version of the Jetpack plugin (made by the WordPress people) will compress your images for you - albeit only the new ones you add to your site.
This is a hugely underused feature of Search Engine Optimisation.
Since Google owns YouTube and you can upload an unlimited number of videos to your own dedicated channel on YouTube, you have an opportunity to create a video for every article you publish.
The reason that's a GREAT idea is that Google also ranks YouTube videos on Google!
And if you check out any phrase by entering that into Google, you can immediately see if they reckon it's the sort of phrase that a video would be perfect for simply by seeing if a video is ranking for it.
And on the other side, if there's no video in the top 10, then why not create one and see if it ranks. There's a chance that there's no ranking video because there are currently no videos it makes any sense to rank!
But it's not just ranking in Google that matters here. There's the vast opportunity that YouTube has in its own right. In fact YouTube is the second largest "search engine" after Google itself.
Everyone uses it to find visual "how to" videos so they can do something themselves that they would otherwise have had to pay a small fortune for.
If you do nothing else with SEO, start producing a video every week and build your YouTube channel and following. It will pay dividends in the future - and you can put a link back to your site in the video's description area too, thus bringing in even more traffic. What's not to like!
I will continue to expand this deep look into all things SEO on a weekly basis, so please do come back often and also bookmark it so you don't forget.
No one knows how Google really works. All we can see are the results it brings up.
And that is the source we use for all our SEO Roadmaps. It starts with Google. It could never be any other way.
But it’s what happens after Google that makes us stand out so tall from the crowd.
We use the latest analysis tools, both commercial and private to produce a blueprint that works.
A blueprint designed to generate traffic 24/7 that converts for whatever purpose you need.