Google Blogoscoped

Forum

LMai Artificial Intelligence Search Engine

Suresh S [PersonRank 10]

Thursday, May 24, 2007
3,190 views

Have you ever been flummoxed about exactly which keywords to type when running a search on Google? Then you may want to take a look at LMai, a new algorithm developed by a researcher at Bangalore-based Sobha Renaissance Information Technology (SRIT). What is unique about LMai (Latent Metonymical Analysis and Indexing) is that it uses mathematical techniques to produce context-based search results.
  
In simple language, this means that once LMai has been installed as a plug-in on a search engine, whatever words you type in, the algorithm will derive the possible connection or relationship between them and then return results. Traditionally, search engines like Google, Yahoo or MSN depend on specific keywords to initiate their crawlers, which then scan the billions of gigabytes of data on their servers before returning results.

But where these engines fail is with multiple words, which the crawler treats as individual entities, returning disconnected results. Say you type in mental health: a normal search engine will return results for both mental and health. LMai scores over them in that it recognises mental health as a related term and returns results specific to mental health, while also offering a choice of related results on matters such as schizophrenia or delusion, which are directly connected to mental health.
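The article does not say how LMai decides that two words belong together. One standard, purely illustrative way to detect that a bigram like "mental health" behaves as a unit is pointwise mutual information over a corpus; the corpus below is invented for the sketch and has nothing to do with SRIT's actual method:

```python
from math import log2

# Tiny made-up corpus of tokenized documents.
corpus = [
    ["mental", "health", "services", "improve", "community", "health"],
    ["mental", "health", "research", "on", "schizophrenia"],
    ["public", "health", "policy", "and", "mental", "health", "care"],
    ["physical", "health", "and", "exercise"],
]

def pmi(corpus, w1, w2):
    """Pointwise mutual information of the bigram (w1, w2): a high
    score suggests the pair behaves as a single unit."""
    total = n1 = n2 = joint = 0
    for doc in corpus:
        for a, b in zip(doc, doc[1:]):
            total += 1
            n1 += (a == w1)               # w1 seen in first position
            n2 += (b == w2)               # w2 seen in second position
            joint += ((a, b) == (w1, w2)) # the exact bigram
    return log2((joint / total) / ((n1 / total) * (n2 / total)))
```

On this toy corpus, pmi("mental", "health") comes out well above pmi("health", "and"), which is the kind of signal a phrase detector would threshold on.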
The algorithm is also intelligent enough to scan the entire document, pull out the words most relevant to the search string, and offer a brief two-line description of the document beneath the URL, making it easier for the end user to choose.
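That two-line description resembles what is now called a query-biased snippet. LMai's actual extraction method is not described in the article; a minimal sketch, assuming a simple sentence-scoring approach with made-up page text, might look like this:

```python
def snippet(document, query, max_sentences=2):
    """Query-biased snippet: score each sentence by how many distinct
    query terms it contains, then keep the best ones in document order."""
    terms = set(query.lower().split())
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    scored = [(len(terms & set(s.lower().split())), i, s)
              for i, s in enumerate(sentences)]
    # Highest term overlap first; earlier sentences break ties.
    best = sorted(scored, key=lambda t: (-t[0], t[1]))[:max_sentences]
    return ". ".join(s for _, _, s in sorted(best, key=lambda t: t[1])) + "."

page = ("Java is an island in Indonesia. Coffee has long been grown there. "
        "Java is also a popular programming language. Many tourists visit.")
preview = snippet(page, "java programming")
```

Here the sentence mentioning both query terms wins a slot, while the coffee sentence, which matches neither term, is dropped.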
Syed Yasin visualised this product back in 2003, after designing an algorithm that worked with speech engines to produce a personal computer that would talk to humans and carry out specific orders. In 2005 he had a draft ready of an algorithm that learns by itself. By April 2006, under the aegis of SRIT, Syed had begun work on a prototype of LMai that could change the face of search engines as we know them today.
Explains Syed, “LMai works on a novel decomposition technique that enables the algorithm to extract contextually related terms without any external guidance. Say you type vegetables into the search string: LMai will return information about everything related to vegetables, such as farming, vitamins, manure, irrigation, crop cycles and so on. It even solves problems users face when searching for a word with multiple meanings, like Java, which is not only an island but also a programming language. Normal search engines need to be told which one in particular you are looking for in the first place.”
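SRIT has not published the decomposition itself (the name echoes latent semantic analysis, which factorises a term-document matrix with an SVD). As a rough stdlib-only stand-in, contextually related terms can be pulled from document co-occurrence vectors; the mini-collection below is invented, and a real system would add the decomposition step this sketch omits:

```python
from collections import Counter
from math import sqrt

# Invented mini-collection; each string is one "document".
docs = [
    "vegetables farming irrigation manure",
    "vegetables vitamins diet health",
    "farming crop cycles irrigation",
    "java island indonesia travel",
    "java programming language code",
]

# Represent each term by the counts of terms it shares documents with.
vectors = {}
for doc in docs:
    words = doc.split()
    for w in words:
        vectors.setdefault(w, Counter()).update(x for x in words if x != w)

def cosine(a, b):
    num = sum(a[k] * b[k] for k in a)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def related(term, k=3):
    """Terms whose co-occurrence profile is closest to `term`'s."""
    scores = {t: cosine(vectors[term], v) for t, v in vectors.items() if t != term}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

On this toy data a query for java surfaces neighbours from both senses (island, programming), which is the disambiguation behaviour Syed describes, with no external guidance about which meaning was intended.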
And since it is a plug-in, it operates by integrating the results returned by the base engine and overriding those that are not relevant. This also makes it far more scalable, since it can automatically tap into the base search engine’s data; one does not have to set up a separate search engine to enjoy its results.
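Mechanically, a plug-in like this amounts to a meta-search pass over the base engine's results. The details are SRIT's own; a hypothetical filter-and-reorder step, scoring each result against the inferred context terms, could look like:

```python
def rerank(base_results, context_terms):
    """Score each base-engine result by overlap with the context terms,
    drop zero-overlap hits, and return the rest best-first."""
    ctx = {t.lower() for t in context_terms}
    scored = [(len(ctx & set(r.lower().split())), r) for r in base_results]
    return [r for score, r in sorted(scored, key=lambda t: -t[0]) if score > 0]

# Result titles and context terms are made up for illustration.
base = [
    "mental health clinic directory",
    "health food recipes",
    "schizophrenia treatment and mental health",
    "mortgage rates today",
]
filtered = rerank(base, ["mental", "health", "schizophrenia"])
```

The off-topic mortgage result is suppressed entirely, while the result matching the most context terms moves to the top, which is the "overriding those that are not relevant" behaviour the article describes.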
The concept of search is undergoing a paradigm shift. Big boy Microsoft is looking to have its own search engine soon and has allocated close to $1.1 billion for research in this specific area. Yahoo and Google are working on refining their searches while also attempting to maximise revenues from them. In this context, such a finding by an Indian company could make the bigwigs sit up and take notice.
And it is already making waves, claims Madhu Nambiar, CEO & MD of SRIT: “There is wide interest in this space and we have already had external experts come and test the feasibility and viability of this product. We have tested LMai on more than 90,000 files and more than 70 million words. It has now been proven that LMai is capable of powering next-generation search engines.” SRIT has a patent for LMai pending in the US.

Philipp Lenssen [PersonRank 10]

12 years ago #

> Traditionally search engines like Google, Yahoo or MSN depend
> on keywords ...
> But where these engines fail is when you type in multiple
> words, which the crawler treats as individual entities and
> then throws up disconnected results. Say you type in mental
> health, a normal search engine will throw up results for both
> mental and health.

That Google doesn't just treat words as separate entities should be proven by the fact that a search for [mental health] yields different results than a search for [health mental]. Besides, Google Co-op's medical results do just that – offer related medical links for "disconnected word searches" like [heart problems]. This seems like yet another meta search engine with a hyped "related queries" feature thrown in... and this is typical for those search killer announcements; they erect a strawman ("Google is useless because of spam in results" – I'm overemphasizing) to burn it down with a seemingly smart pseudo-solution ("at our engine, users can vote to have specific spam results removed" – yeah, scale that so it can't be abused by spammers!).

