How do search engines work?

Written on 22 January, 2014 by WebCentral
Categories: Search Engine Optimisation | Tags: google, search engines

A common misconception among many internet users is that search engines search and display results from every single website that exists. This would mean that, for every search performed, the search engine would need to gather, analyse and display results from over 25 billion pages – if you think of the time it takes to load a single web page, that simply isn't feasible.

A recent study revealed that 11.5 billion web pages are in the publicly available index – meaning these pages can be found by performing a search on Google, Yahoo, Bing or another search engine. So how does a website get into the publicly available index, and how do the search engines pick and choose results from so many indexed pages in just a few seconds?

All search engines, from small to large, use some sort of web crawling or spidering technology to index and sort content on the web. This technology works by releasing spiders, also known as search engine robots (bots), onto the internet. The bots visit each web page by following hyperlinks and index its content. The URL of the web page, along with its content, is then stored in a database by the search engine – this database is what is actually searched when you perform a query on Google or another search engine.
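To make that crawl-and-index loop concrete, here is a minimal sketch in Python using only the standard library. It is purely illustrative – a real search engine bot would also respect robots.txt, rate limits and canonical URLs, and would run at enormous scale – but the basic follow-links-and-store cycle is the same.

```python
# A minimal sketch of the crawl-and-index loop described above.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=10):
    """Follow hyperlinks from a seed URL, storing URL -> page content."""
    index = {}                      # the "database" of indexed pages
    queue = [seed_url]
    while queue and len(index) < max_pages:
        url = queue.pop(0)
        if url in index:
            continue                # already visited this page
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue                # skip pages that fail to load
        index[url] = html           # store the URL and its content
        parser = LinkExtractor()
        parser.feed(html)
        # Resolve relative links and queue them for crawling
        queue.extend(urljoin(url, link) for link in parser.links)
    return index


if __name__ == "__main__":
    # Hypothetical seed URL, used here only as an example.
    pages = crawl("https://example.com")
    print(f"Indexed {len(pages)} pages")
```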

The database also stores many other details about a website, including how many links point to it, how many links out it has, how it is coded and so on. All these details help the search engine categorise the content and display the most relevant results for a search much more quickly.
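As an illustration, a record in such a database might look something like the sketch below. The field names and the scoring formula are invented for this example; real engines weigh far more signals, but the idea of combining on-page relevance with link data is the same.

```python
# A hedged sketch of the kind of record a search engine might keep
# per page. These fields are illustrative, not any engine's actual schema.
from dataclasses import dataclass, field


@dataclass
class PageRecord:
    url: str
    content: str
    inbound_links: int = 0                            # links pointing to the page
    outbound_links: list = field(default_factory=list)  # links out of the page


def search(records, query):
    """Rank stored records by a toy relevance score:
    keyword frequency weighted by inbound link count."""
    terms = query.lower().split()

    def score(record):
        text = record.content.lower()
        matches = sum(text.count(term) for term in terms)
        return matches * (1 + record.inbound_links)

    hits = [r for r in records if score(r) > 0]
    return sorted(hits, key=score, reverse=True)
```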

Tweaking your website to change the details stored in the search engine's database can help you achieve better rankings in the search results – this is known as search engine optimisation (SEO).
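Continuing the toy model above (this snippet reuses the PageRecord and search definitions from the previous sketch, with hypothetical URLs and content), you can see why on-page changes matter: a page that never mentions the query terms scores zero here, no matter how many inbound links it has.

```python
# Reuses PageRecord and search() from the sketch above.
records = [
    PageRecord("https://a.example", "We sell quality running shoes",
               inbound_links=3),
    PageRecord("https://b.example", "Footwear and accessories",
               inbound_links=10),
]
for result in search(records, "running shoes"):
    print(result.url)  # only https://a.example mentions the query terms
```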