Types of SEO
Search Engine Optimisation (SEO) is all about trying to rank highly on search engine results pages (SERPs). There are many ways of doing this, but there are two main categories of SEO practice: organic and paid. There are also black hat techniques, which might look like an “organic” approach, but their dishonesty produces completely unnatural results that tend to be very short-lived.
Organic SEO
This method takes time, as it relies upon gaining traffic to the site naturally, through delivery of relevant content that people find and use.
Paid SEO
This can be used to quickly boost traffic to a site by bidding on keywords through a pay-per-click (PPC) system: essentially paying the search engine to put the site at the top of SERPs for chosen keywords until the link has been clicked enough times to exhaust your budget.
Black Hat SEO
There are plenty of ‘black hat’ techniques – attempts to organically force improvements to page rankings – but these are not advisable. Practices such as filling a page with hidden text to trick algorithms into identifying more content than there really is, or stuffing the content unnaturally with keywords, might deceive the search engine initially, but users will not be fooled and will leave the site very quickly. Search engines do consider user engagement duration times, so where a site is dishonest about what it actually offers or simply doesn’t deliver any meaningful content, visitors will pick up on that and the page ranking in search results will reflect a lack of engagement.
Content is King
In terms of SEO, the most important aspect of a website is its content: it’s why users visit in the first place. Google particularly places a lot of emphasis on the relevance of content, and this is best determined by observing the behaviour of real users. Real users do not visit websites to read your advertising, your keywords or your sales pitch, to test its speed or admire its design; users want real information, goods or services. Usually quite specific information, goods or services. If your site does not deliver what they are searching for, users will feel like they have been tricked, and just leave.
Sites that attempt to trick users will take a hit to their reputation, and search engines like Google are very sensitive to that. They monitor the engagement of users to help determine the relevance of a site’s content.
Things to bear in mind when considering the relevance of your site’s content:
- Freshness – If content has not been updated recently, it’s likely less useful than it was a few months ago. Search engines will deem it less relevant than similar content that was just published today.
- Engage visitors with real information – High bounce rates lower SEO scores. The page’s keywords must be related to the page’s actual content. Content containing nothing but buzzwords and key search terms might attract visitors, but those visitors will not be fooled into staying on the site.
Be compelling.
Be trustworthy.
It might seem easy to trick the search engines, and doing so might even land you a few hits at first, but it’s much more difficult to trick a human into thinking your content is worth their time. It either is or it isn’t, and a visitor knows this instinctively. Once they feel they have been duped, they’ll likely just bounce away from your site and never visit again. For this reason, trying to game the search engines’ algorithms is a lazy and useless tactic. The engines will more than likely tag you as spam, or just lacking in content. Either way, your SEO score will take a hit. The best thing to do is to deliver quality, up-to-date, interesting and useful content in an accessible way, so that real people want to visit your site. Search engines monitor user engagement, and it contributes heavily to your SEO score.
It’s best to play the long game. It can take a while to build up your reputation and start appearing on the first page of Google searches, but the quick fixes never work in the long run.
How Code Impacts SEO
Obviously website content is essentially made of code, so the code itself is crucial to how a search engine will ‘crawl’ it and decide where it ranks against certain keywords. There’s a lot to consider when coding: of course we want to make the code “work”, but we also have to consider how the code will affect SEO. This extra dimension to coding is definitely more art than science, but there are some good rules of thumb we all should follow.
Important Tags
- Title tags – Hugely important for SEO, as this is given centre stage on SERPs. Google results pages will display up to around 70 characters from the title tag. It’s useful to include your chosen keywords, but it’s more important to be transparent to users about where the link will take them.
- Meta tags – While Google does not consider meta tags in their page ranking algorithm, they are still important as they are often used in a variety of places. For example, the description meta tag is featured prominently in two important places: SERPs, and shared links on social media.
- Header tags – The <header> tag identifies content that precedes the primary content of the web page and often contains website branding, navigation elements, search forms, and similar content that is duplicated across all or most pages of a website.
- ALT tags – Also known as the “alt attribute” or “alt description,” an alt tag is an HTML attribute applied to image tags to provide a text alternative for search engines. Because machines cannot interpret the subject of an image – it is just a collection of pixels to a computer – search engines rely on alt tags to find images relevant to a search. Alt tags are also used by browsers and screen readers to describe an image to users where it has failed to load, or where the user relies on assistive technology.
- Anchor tags – These are links on a webpage that can prompt an action within the browser, usually directing the browser to a new webpage or downloading a file. It is important not to use generic wording for the anchor text, so that both screen readers and search engines can assess the relevance of the link. A link that simply reads “Click here” is not helpful to either a screen reader or a search engine, as it does not communicate what the link actually does.
- Canonical tags – These are used to identify to search engines which copy of duplicated content is to be considered “canon”, i.e. the original. A combined example showing where several of these tags sit on a page follows after this list.
Incorrect/incomplete tags, duplicate code/tags, too much code, text hidden by code, and deprecated or unnecessary code can all negatively affect the SEO score.
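The sketch below is entirely hypothetical – the URLs, title, description and filenames are invented purely to illustrate where each of the tags above might sit on a typical product page:
<head>
  <title>Blue-laced Pumps | Men's Sneakers | Shoe Shop</title>
  <meta name="description" content="Classic blue-laced canvas pumps with free UK delivery from Shoe Shop.">
  <link rel="canonical" href="https://www.shoeshop.com/mens-shoes/sneakers/blue-laced-pumps">
</head>
<body>
  <header>
    <!-- branding, navigation and search form, duplicated across the site -->
  </header>
  <h1>Blue-laced Pumps</h1>
  <img src="blue-laced-pumps.jpg" alt="Pair of blue canvas pumps with blue laces, side view">
  <a href="/mens-shoes/sneakers/">Browse all men's sneakers</a>
</body>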
Site Architecture
When considering SEO, it is important to be aware of the structure of a site. Search engines are intelligent enough to assign values to the different parts of a URL, so the deeper within the site a page is, the less important it is considered to be. Obviously the homepage at the root domain level is deemed to be the most important, but once you start getting into categories the importance diminishes. For example, in the URL ‘www.shoeshop.com/mens-shoes/sneakers/blue-laced-pumps’ the ‘Shoe Shop’ itself is the most important, followed by the ‘Men’s Shoes’ category. Slightly less important again is the ‘Sneakers’ subcategory, and finally the actual product itself, the ‘Blue-laced Pumps’, is given the lowest priority of all. To find this product in a SERP, you’d need to use some very specific keywords in your search, but even then you’re far more likely to see ‘Sneakers’ or ‘Men’s Shoes’ nearer the top of the results.
Breadcrumbs are another helpful mechanism to improve SEO rankings. By displaying on the page itself exactly where it sits within the site’s structure, they make the site easier to navigate, helping to reduce the bounce rate when a user finds themselves on a page low down the hierarchy and wants to move back up. Google even use breadcrumbs in their search results, where they are provided, giving a visitor an idea of how the site is structured before they even visit, and – hopefully – making the link more likely to be clicked.
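As a rough sketch, a breadcrumb trail for a product page on the shoe shop example might be marked up as a simple ordered list of links (the URLs are invented for illustration):
<nav aria-label="Breadcrumb">
  <ol>
    <li><a href="/">Shoe Shop</a></li>
    <li><a href="/mens-shoes/">Men's Shoes</a></li>
    <li><a href="/mens-shoes/sneakers/">Sneakers</a></li>
    <li aria-current="page">Blue-laced Pumps</li>
  </ol>
</nav>
For breadcrumbs to appear in Google’s own results they generally also need to be described with structured data (schema.org’s BreadcrumbList), but plain markup like this is the starting point and already helps visitors navigate.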
Duplicate Content
There may be instances where content on a website is considered duplicate by a search engine. For example, the ‘Blue-laced Pumps’ I mentioned previously might also be categorised under ‘Plimsolls’ or ‘Women’s Pumps’, in which case they could appear under multiple different URLs while having identical content. If this page has been indexed, the SEO value could potentially be split across all the ‘duplicates’, lowering its overall score. To rectify this, it’s important to nominate one of the copies as being the origin of the content by using a canonical tag on all duplicates, like this:
<link href="the-url-of-the-original-content" rel="canonical">
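So if, say, the men’s sneakers URL were nominated as the original, each of the duplicate listings would carry something like this in its <head> (the URL is hypothetical):
<link rel="canonical" href="https://www.shoeshop.com/mens-shoes/sneakers/blue-laced-pumps">
It’s also common for the original page to carry a self-referencing canonical tag pointing at its own URL, which makes the intended version unambiguous.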
Page Structure
As well as the general architecture of the site, we need to consider the structure of each individual page, for both SEO and accessibility reasons. Proper use of heading tags is very useful here, as it helps to break the page up into sections, giving importance ratings to each section. The <h1> tag should ideally contain content that matches the page’s <title> tag, capturing the overall theme of the page. The theme should then be reinforced or refined using subheadings, using <h2> tags, and so on. It is helpful to think of them as levels of grouping information, like so:
<h1>The Known Universe</h1>
<h2>The Milky Way Galaxy</h2>
<h3>Earth</h3>
<h4>United Kingdom</h4>
<h5>West Yorkshire</h5>
<h6>Leeds</h6>
It can also be beneficial to use anchor tags to allow a visitor to navigate to different parts of the page by clicking a link. These interlink content in much the same way as breadcrumbs do, with similar SEO benefits.
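As a quick sketch, in-page navigation just needs an id on the target element and a link pointing at that fragment (the id and heading here are only examples, borrowed from the hierarchy above):
<a href="#west-yorkshire">Jump to West Yorkshire</a>
<!-- further down the page -->
<h5 id="west-yorkshire">West Yorkshire</h5>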
Hiding Information from Search Engines
There may be sections of a website that should not be indexed by search engines. It might be a page for employees only, or a page that shows a browser version of an email sent to customers in the event their email client cannot display it properly. Form completion pages are another example. It is often important that parts of a site do not appear in SERPs so that they cannot be found by accident, or maliciously – a good example might be a WordPress ‘wp-admin’ page. It’s definitely not advisable to have just anyone in the world stumbling across it with a Google search.
To hide a page from SERPs, a single line of code within the <head> of the page will do it:
<meta name="robots" content="noindex">
This does not necessarily prevent search engines from crawling the page, or even caching it, but it does prevent it appearing in SERPs. It’s important to be careful with this functionality: don’t add it to page templates in WordPress unless you’re certain the template will only be used for pages you want to remove from SERPs.
A ‘robots.txt’ file can also be used to hide specific URLs from SERPs, but again this must be carefully handled. Don’t ever hide the root domain in this way, as it will hide the entire site from SERPs. It’s also important to bear in mind that this file is effectively public. You can visit practically any website and append “/robots.txt” after the domain to view which pages are being hidden from view. Seriously. Try visiting Amazon’s robots.txt file to see what I mean.
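For reference, a robots.txt file sits at the root of the domain and might look something like this – the paths are examples only:
# rules below apply to all crawlers
User-agent: *
Disallow: /wp-admin/
Disallow: /email-preview/
Worth noting: robots.txt blocks crawling rather than indexing, so a disallowed URL can still show up in SERPs if other sites link to it. For pages that really must never appear, the noindex approach above is the safer option.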
Search engines also pay close attention to external links on a webpage. Having just linked to Amazon a moment ago, this could be interpreted by Google as meaning that I endorse Amazon in some way, or even that I might have been paid by Amazon to link to their site via my own. Adding the rel="nofollow" attribute to a link will remove this perceived relationship, so the link looks like this:
<a href="https://www.amazon.co.uk/robots.txt" target="_blank" rel="nofollow">Amazon's robots.txt</a>
This also means that the link on my website will not contribute to Amazon’s SEO ranking, as search engines consider inbound links when evaluating a page’s ranking. Although honestly, I’m such a small fish that even if I left the nofollow off entirely (there is no real rel="dofollow" value – links are simply followed by default) it would barely matter. If, on the other hand, Amazon had a link to Spittoon on their site and they didn’t use rel="nofollow", that could massively boost my SEO score!
Website Speed
Visitors hate a website that is slow to load, and are likely to bounce if there is a delay of more than a couple of seconds or so. And search engines give preferential treatment to sites that get a lot of engagement from visitors, as that is how they determine that it provides relevant content. So it follows that a slow site will suffer in its SEO score. In fact, search engines consider a site’s speed when calculating a score. Add that to the fact that user engagement is also considered, and straightaway you’re looking at taking double the hit to your site’s SEO score if it’s running slowly. Tools such as Pingdom and Google’s PageSpeed Insights can help to assess the speed of a site, and even determine specific aspects that might be slowing things down.
Testing your SEO
There are many tools available to check how effective your SEO is likely to be, including WooRank and Google’s Lighthouse audits, which can be run in-browser. You can check for invalid code using the W3C’s Markup Validator.
To analyse some of the more technical aspects of a site, there’s BuiltWith, which can determine all kinds of information about how a site has been built, such as its hosting provider, or whether it has SSL enabled.
It is important to regularly monitor your SEO results in order to keep your understanding of the needs of your visitors, and the search engines themselves, up-to-date.