Technical SEO is one of the most complex and misunderstood aspects of online marketing. There are a lot of acronyms, jargon, and technical terms that can be confusing for beginners. Don’t worry: below we define some of the most common technical SEO terms so that you can start to understand this important part of your website’s marketing strategy.
Crawl – The process of a search engine “spider” going through your website’s pages and links to collect data about your site. Crawling is how a search engine builds its index of pages to show in search results.
Crawlability – How easy it is for a search engine to crawl and index your website. Crawlability is affected by factors like broken links, duplicate content, robots.txt directives, and more. The more problems on your website, the harder it is for search engines to find and index your content.
Index – The database of all the pages a search engine has found and determined to be relevant for its search results. When you type a query into a search engine, it searches its index for pages that match your intent.
Ranking – The position of a website on a search engine results page (SERP). Generally, the higher up on the SERP, the more traffic a website will get.
Organic Search – When a user types in a query and clicks on one of the non-paid results; these results are determined by the search engine’s algorithm.
Paid Search – When a user types in a query and clicks on one of the paid results; these results are determined by how much the advertiser is willing to pay per click, along with factors like ad relevance and quality.
Spider – Also called a web crawler or web robot, this is a program that browses the internet to create an index of all the websites it can find. For example, Google uses spiders to find new websites and determine how relevant they are to users’ queries.
SERP – The Search Engine Results Page is the page that a user sees after they enter a query into a search engine; the SERP includes both paid and organic results.
Keyword – A word or phrase that a user enters into a search engine to get relevant results. For example, if you sell shoes, some relevant keywords might be “shoes,” “sneakers,” or “athletic footwear.”
Robots.txt – A file that tells web robots (spiders) which pages on a website to crawl and which pages to ignore. For instance, if you have pages that waste crawl budget, such as internal search results, you can use robots.txt to tell spiders not to crawl them. Note that robots.txt is not a privacy tool: a blocked page can still be indexed if other sites link to it, so use a noindex tag or password protection for pages you truly want kept out of search.
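As a simple illustration, here is what a minimal robots.txt might look like. The domain and paths are placeholders, not recommendations for any particular site:

```txt
# Applies to all crawlers
User-agent: *
# Ask crawlers to skip internal search result pages
Disallow: /search/
# Everything else may be crawled
Allow: /

# Point crawlers at the sitemap (hypothetical URL)
Sitemap: https://www.example.com/sitemap.xml
```

The file lives at the root of the domain (e.g. example.com/robots.txt), and well-behaved crawlers check it before fetching other pages.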
Sitemap – A sitemap is an XML file that lists the important URLs on your website. It’s like a table of contents that helps search engines find and index your pages. A sitemap isn’t strictly required, but without one, search engines may take longer to discover your content, especially on large sites or pages with few internal links.
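For reference, a bare-bones XML sitemap follows the sitemaps.org format; the URLs and dates below are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/shoes/</loc>
  </url>
</urlset>
```

Each `<url>` entry needs at least a `<loc>`; optional tags like `<lastmod>` give crawlers hints about when a page last changed.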
Canonicalization – This is the process of choosing the best URL when there are multiple options. For example, if you have two pages with the same content, Google will typically index only one of them; the other will be treated as a duplicate and won’t show up in search results. You can signal your preferred version with a rel="canonical" tag.
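In practice, the canonical signal is a single line in the page’s `<head>`. The URL here is a placeholder:

```html
<link rel="canonical" href="https://www.example.com/shoes/" />
```

Adding this tag to every variant of a page (e.g. versions with tracking parameters) tells search engines which URL should receive the ranking signals.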
Schema Markup – This is structured data, based on the schema.org vocabulary, that you can add to your website to help search engines understand your content better. It can help your pages appear in rich results, which are enhanced listings that include things like images, ratings, and FAQs.
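Schema markup is commonly added as a JSON-LD block in the page’s HTML. The product name and price below are invented for illustration:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Running Shoe",
  "offers": {
    "@type": "Offer",
    "price": "79.99",
    "priceCurrency": "USD"
  }
}
</script>
```

Search engines read this block alongside the visible content, so the structured data should always match what the page actually shows.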
If you need help with anything SEO, feel free to contact https://kingkong.co/seo-agency/ today!