A search engine is a program designed to help one find files stored on a computer, for example on a public server on the World Wide Web. The search engine allows one to ask for content meeting specific criteria (typically content containing a given word or phrase) and retrieves a list of files that match those criteria. Unlike an index document that organizes files in a predetermined way, a search engine looks for files only after the user has entered search criteria.
In the context of the Internet, search engines usually refer to the World Wide Web and not other protocols or areas. Some search engines also mine data available in newsgroups, large databases, or open directories like DMOZ.org. Because their data collection is automated, they are distinguished from Web directories, which are maintained by people.
The vast majority of search engines are run by private companies using proprietary algorithms and closed databases, the most popular currently being Google (with MSN Search and Yahoo! close behind). There have been several attempts to create open-source search engines, among which are Htdig, Nutch, Egothor and OpenFTS.
When a user comes to the search engine and makes a query, typically by giving key words, the engine looks up the index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text.
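The lookup described above is usually built on an inverted index, which maps each word to the documents containing it. The following toy sketch shows the idea; the document names and contents are made-up examples, not any real engine's data.

```python
# Build a toy inverted index: map each word to the set of documents
# containing it, then answer a keyword query by intersecting those sets.
docs = {
    "page1": "search engines index the web",
    "page2": "the web grows every day",
    "page3": "engines of growth",
}

index = {}
for name, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(name)

def search(query):
    """Return documents containing every word in the query."""
    results = None
    for word in query.split():
        matches = index.get(word, set())
        results = matches if results is None else results & matches
    return sorted(results or [])

print(search("web engines"))  # ['page1']
```

A real engine would also store word positions and ranking signals alongside each posting, but the intersection step is the same in spirit.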
The usefulness of a search engine depends on the relevance of the results it gives back. While there may be millions of Web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve.
Most Web search engines are commercial ventures supported by advertising revenue and, as a result, some employ the controversial practice of allowing advertisers to pay money to have their listings ranked higher in search results.
Soon after, many search engines appeared and vied for popularity. These included WebCrawler, HotBot, Excite, Infoseek, Inktomi, and AltaVista. In some ways they competed with popular directories such as Yahoo!. Later, the directories integrated or added on search engine technology for greater functionality.
In 2002, Yahoo! acquired Inktomi, and in 2003 it acquired Overture, which owned AlltheWeb and AltaVista. In 2004, Yahoo! launched its own search engine based on the combined technologies of its acquisitions, providing a service that gave pre-eminence to the Web search engine over the directory.
Search engines were also known as some of the brightest stars in the Internet investing frenzy that occurred in the late 1990s. Several companies entered the market spectacularly, recording record gains during their initial public offerings.
In around 2001, the Google search engine rose to prominence. Its success was based in part on the concept of link popularity and PageRank. Each page is ranked by how many pages link to it, on the premise that good or desirable pages are linked to more than others. The PageRank of linking pages and the number of links on these pages contribute to the PageRank of the linked page. This makes it possible for Google to order its results by how many web sites link to each found page.
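The iterative rank computation sketched above can be illustrated with a short power-iteration loop. The link graph, damping factor, and iteration count below are illustrative assumptions, not Google's actual parameters.

```python
# Minimal PageRank power iteration over a tiny hypothetical link graph.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}       # start with a uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            # each page divides its rank evenly among the pages it links to
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

ranks = pagerank(links)
# "c" receives the most incoming links, so it ends up ranked highest.
print(max(ranks, key=ranks.get))  # c
```

Note how the rank of a page depends not just on how many pages link to it, but on the rank of those linking pages, exactly as described above.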
Another factor in Google's success, one that many later engines have imitated, is the simplicity of its user interface.
Researchers at NEC Research Institute claim to have improved upon Google's patented PageRank technology by using web crawlers to find "communities" of websites. Instead of ranking pages, this technology uses an algorithm that follows links on a webpage to find other pages that link back to the first one and so on from page to page. The algorithm "remembers" where it has been and indexes the number of cross-links and relates these into groupings. In this way virtual communities of webpages are found.
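The cross-linking idea can be sketched roughly as follows: starting from one page, follow links and keep only pages that link back into the group collected so far. The link graph and the exact grouping rule are illustrative assumptions, not NEC's actual algorithm.

```python
# Rough sketch of grouping mutually linked pages into a "community".
links = {
    "a": {"b", "c"},
    "b": {"a"},
    "c": {"a", "d"},
    "d": {"e"},
    "e": {"d"},
}

def community(start):
    """Collect pages reachable from `start` that link back into the group."""
    group = {start}
    frontier = [start]
    while frontier:
        page = frontier.pop()
        for target in links.get(page, set()):
            # keep the target only if it links back to a current member
            if target not in group and group & links.get(target, set()):
                group.add(target)
                frontier.append(target)
    return group

print(sorted(community("a")))  # ['a', 'b', 'c']
```

Page "d" is linked from "c" but never links back into the group, so it is left out, while "d" and "e" would form a separate community of their own.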
Search engines face several recurring challenges:
* Many web pages are updated frequently, which forces the search engine to revisit them periodically.
* The queries one can make are currently limited to searching for key words, which may result in many false positives.
* Dynamically generated sites may be slow or difficult to index, or may produce excessive results from a single site.
* Many dynamically generated sites are not indexable by search engines; this phenomenon is known as the invisible web.
* Some search engines do not order the results by relevance, but rather according to how much money the sites have paid them.
* Some sites use tricks to manipulate the search engine into displaying them as the first result returned for some keywords. This can pollute search results, with more relevant links being pushed down the result list.