Search Engine Optimization History

A Brief History of SEO

What is the history of search engines and SEO?

Let’s take a short journey through time to understand how SEO has changed and how it became what it is today. The history of SEO is truly fascinating.

The Prehistory of the Web

When the Internet first appeared, there were no search engines. The number of websites was relatively small, and we were really in the infancy of what the Web is today. So, for a site to appear in any index, you just had to… get it indexed!

The process was relatively simple: you would submit your site somewhere, and it would be added to a big list. It was the ancestor of the directories we find today, and it was quite difficult to find a specific site without knowing its URL.

The robots’ work was very simple: a site was submitted to them, they browsed it, and they refreshed the index. Organic search was therefore virtually impossible, and this had to be remedied with a novelty: the first meta tags.


The appearance of the first meta tags

What is a tag? Just a keyword (and also a huge milestone in SEO history)! To specify the subject of a page to the robots browsing the web, webmasters simply inserted a tag indicating what the page contained. It must be said that at that time, the robots understood nothing of the text they found, so this tag had to contain a string of keywords that would, in principle, match what Internet users searched for.
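To make this concrete, here is a minimal Python sketch of how an early robot could have read the meta keywords tag to classify a page. This is purely illustrative (no specific engine's actual code), and the sample HTML page is made up.

```python
from html.parser import HTMLParser

class MetaKeywordsParser(HTMLParser):
    """Collects the content of <meta name="keywords"> tags from a page."""

    def __init__(self):
        super().__init__()
        self.keywords = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "keywords":
            # Early engines trusted this string blindly, which is
            # exactly what made keyword stuffing so effective.
            self.keywords.extend(
                kw.strip() for kw in attrs.get("content", "").split(",") if kw.strip()
            )

# Hypothetical page a 1990s webmaster might have submitted.
page = """
<html>
  <head>
    <meta name="keywords" content="plumber, plumbing, emergency plumber">
    <meta name="description" content="24/7 plumbing services.">
  </head>
  <body>Call us any time.</body>
</html>
"""

parser = MetaKeywordsParser()
parser.feed(page)
print(parser.keywords)  # ['plumber', 'plumbing', 'emergency plumber']
```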

The beginnings of the Black Hat

The introduction of the first meta tags, and their use to classify sites, was the beginning of black hat SEO. A black hat webmaster is a person who will use every means at his disposal to push a site up in search results by breaking the rules set by indexers or early search engines (sometimes even by breaking the law). The spectrum is wide, and opinions differ on the morality of each practice. Black hat was critical in the development and history of search engines.

The introduction of the first meta tags was therefore the birth of the stuffing we would encounter massively for about ten years. The meta-keywords tag just came out? Fine, webmasters filled it in. The meta-description tag was just implemented? Fine, they filled that in too, not necessarily honestly, but rather aggressively. Keyword stuffing was born.

And so, inevitably, faced with the abuses of some webmasters, a new method had to be found to manage all of this…

History of SEO – The Birth of PageRank

In 1996, Larry Page and Sergey Brin created a search engine called BackRub. This event marked the history of the search engines we know today. The principle is quite simple: if a website is relevant, it will receive many links from other sites. And if it is relevant, it deserves to be ranked higher than the others. The PageRank algorithm assigns a score to each page, and this score is used to rank them for a given query.
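To illustrate the intuition, here is a minimal Python sketch of a simplified PageRank computation (a basic power iteration, not Google's actual implementation). The link graph, damping factor, and iteration count are made up for illustration.

```python
# Simplified PageRank: a page's score is fed by the scores of the
# pages that link to it, split across each linking page's outlinks.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            incoming = sum(
                rank[q] / len(links[q]) for q in pages if p in links[q]
            )
            new_rank[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new_rank
    return rank

# Hypothetical link graph: each page maps to the pages it links to.
graph = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],   # D links to C, boosting C's score
}
for page, score in sorted(pagerank(graph).items(), key=lambda x: -x[1]):
    print(page, round(score, 3))
```

Even this toy version shows why the model was so easy to game: adding page "D" with a single link raises "C"'s score, which is exactly the loophole the next paragraphs describe.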

It is important to understand that this behavior is the foundation of Google. If what matters most is the number of backlinks, then to push a page higher in the rankings, it is enough to create another page (or better, many other pages) with a link pointing to the page you want to promote.

And just like that, web spam was born. Everything that follows is a long war between Google and SEOs!


Refining algorithms

By using the number of links as the basis for site rankings, Google shot itself in the foot. The web market was growing, economic interests were growing, and as a result, many webmasters adopted aggressive strategies to make money through their sites. This was the birth of link farms (poor-quality pages containing only links), of “free” site templates with links stashed everywhere, and so on.

The consequence: search results degraded, and Google needed to react.

Understanding web pages

Google responded by creating new algorithms and filters. Primarily, Google focused on creating intelligent robots that would “understand” the meaning of a page by analyzing the words found there and weighting them by their HTML markers (title tag, H1 tag, link anchors, etc.). This approach would later lead to the birth of algorithm updates such as Panda and Hummingbird.
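As a rough illustration of the idea (not Google's actual formula), the sketch below weights term occurrences by the HTML element they appear in. The weights and the sample page are invented for the example.

```python
# Toy relevance scoring: words found in more prominent HTML elements
# count for more than words buried in the body text.
WEIGHTS = {"title": 5.0, "h1": 3.0, "a": 2.0, "body": 1.0}

def score(query, text_by_tag):
    """text_by_tag maps an HTML element name to the text it contains."""
    total = 0.0
    for tag, text in text_by_tag.items():
        total += WEIGHTS.get(tag, 1.0) * text.lower().split().count(query.lower())
    return total

page = {
    "title": "Emergency plumber in New York",
    "h1": "Plumber available 24/7",
    "body": "Our team of plumbing experts covers the whole city.",
}
print(score("plumber", page))  # 5.0 (title) + 3.0 (h1) = 8.0
```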

Inevitably, the response to this was keyword stuffing: webmasters filled their pages and sites with keywords (plumber NY, plumber LA, plumber Miami, etc.). It’s ugly and a bad experience for the reader, but it worked…

Creating profiles on many websites

Faced with the proliferation of link farms, and with the semantic degeneration of web pages, Google refined its strategy, trying to detect footprints on the sites it crawls. A footprint, as the name suggests, is a marker that highlights recurring patterns and helps remove spam.

Among the new algorithms that were implemented, Dominic was one of the most important: it changed the way links are counted. Cassandra, another update, identified domain owners and detected hidden text (white text on a white background, for example) and cloaking based on user-agent identification.

The 2000s and Spam

In the 2000s, the war against spam was raging (identifying keyword stuffing and meta-tag stuffing, taking link anchor text into account, identifying site networks). Google multiplied and refined its algorithms to remove spam pages from its index. The work of the webspam team is long and complex, and in addition to algorithmic checks, manual reviews are also carried out.

Why this fight against spam? Simply because Google protects its market. This point is very important for understanding the search market today: Google is a capitalist company. Like all companies, Google is looking for profit. Its main product is organic search, and on top of it sits AdWords, which is now a gigantic financial empire.

If the search results are poor, users will turn to another engine, and therefore will no longer be there to click on AdWords ads. So the quality and relevance of search results are key for Google. Protecting search results, for Google, is simply a way to protect the $46 billion in revenue that search generates each year…

Optimizing indexing and results

Subsequently, Google worked on the speed of indexing pages and on personalizing search results, with user loyalty in mind. A series of algorithms and new practices emerged: massive indexing, Latent Semantic Indexing, the sitemap.xml file, personalized search. The goal of the game is to offer the most relevant results possible, indexed as quickly as possible.

The efforts paid off, and Google quickly confirmed its position as market leader. And then, in 2009, a new problem surfaced…

2009 brings new problems

In 2009, Google’s indexing model was running out of steam. Robots struggled to crawl all of the pages they found. This happened mostly because of the many spam sites, whose only purpose was to improve the ranking of “money sites” (showcase sites, sites monetized through advertising, e-commerce sites, etc.).

It must be said that spammers had adapted particularly well to Google’s new filters, significantly increasing the quality of their work: Western companies turned heavily to Eastern Europe, South-East Asia, and China for these optimization services, as can be clearly seen in our SEO history.

As a result, the number of spam websites exploded, and Google started to get overwhelmed. Indexing and classifying the Web, at a time when China was churning out copy-sites, costs money in electricity and puts a heavy load on the servers! The new linking methods of the time were very aggressive, giving search engines a lot more work.

The answer is an infrastructure overhaul called Caffeine (Google never chooses the names of its algorithms by chance).

The pre-release of Caffeine in 2009 heralded a new era. Spam risked bringing Google’s indexing capabilities to its knees, and Google decided to go the extra mile to eradicate the practice: Panda and Penguin were in the pipeline…

The fundamental problem of SEO in 2015

Let’s recap our SEO history up to this point: to rank a page, Google needs to understand what it is about. Otherwise, that page cannot compete with other optimized pages on the same query. That is the role of the algorithm’s semantic comprehension work.

But then, how does it establish rankings?

Backlinks.

This is oversimplifying, but this is nonetheless the essence of SEO.

Search engine robots roam sites through links. So the more links you have (mind you, good links), the more likely it is that Google’s bots will crawl your website. The main problem for search engines today is to sort “real” links from “spam” links… The fight continues with the release of Panda, Penguin, and the other algorithm updates we’ve seen since 2010.

2020’s Google Zoo (conclusion)

Since 2010, the giant of Mountain View has released a real algorithm zoo: Panda, Penguin, Hummingbird, Pigeon, etc.

Panda and Penguin were only set up to counter poorly edited spam sites. If Google were to issue electronic certificates identifying site publishers, we would be done with spam. But until then, SEO remains the most important tactic for ranking purposes. SEO is not dead!

The history of SEO is ultimately a fight between Google and SEOs.

Here is a nice video showing the history of SEO:

Future of SEO and looking back in history
