Google Blogoscoped


Google and Spam

* Miss Universe [PersonRank 7]

Monday, May 12, 2008
12 years ago · 3,637 views ...

* Miss Universe [PersonRank 7]

12 years ago # ...

Review by PCMag

Juha-Matti Laurio [PersonRank 10]

12 years ago #

It appears Google doesn't know enough: ...

and ...

The above 3 comments were made in the forum before this was blogged.

Matt Cutts [PersonRank 10]

12 years ago #

Juha-Matti Laurio, I was talking about webspam, not email spam. But from what I understand, the Ars Technica story isn't accurate in that Gmail wasn't turned into an "open relay." It makes for a good headline though.

Ionut Alex. Chitu [PersonRank 10]

12 years ago #

<< A Google spokesperson tells The Inquisitr, though, that the issue is actually just an abuse of Gmail’s forwarding feature as opposed to any kind of open relay exploit. It does not, he says, present any security vulnerability.

“We are aware of the potential for this kind of abuse and we have controls in place to prevent large attacks,” the spokesperson said.

Part of the concern over the report stemmed from the fact that email sent through Google is considered so secure and trustworthy by spam filters. The implication was that this flaw could give spammers an easy way to bypass those filters and get right into people’s inboxes. Google pointed out to The Inquisitr, however, that its system attaches SPF and DKIM authentications only to regular outgoing emails, not forwarded ones. Because of this, any forwards sent via Gmail will not have the “stamp” of Google’s approval, so to speak, and would be picked up by spam filters just like mail sent from any other source. >> ...
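To make the quoted explanation concrete, here is a minimal sketch of the distinction a receiving spam filter draws: a message Google signed carries a DKIM-Signature header naming Google's domain, while a forwarded message does not. This is illustrative only; real filters cryptographically verify the DKIM signature and check SPF against the sending IP (header presence alone proves nothing), and the trust check below is a hypothetical simplification.

```python
# Illustrative sketch: distinguishing a Gmail-signed message from an
# unsigned forward by looking for a DKIM-Signature header that names
# Google's signing domain. Real filters verify the signature itself;
# this only demonstrates why forwards lack Google's "stamp".
from email import message_from_string

def looks_google_signed(raw_message):
    """Return True if the message carries a DKIM-Signature header
    with a Google signing domain (d=gmail.com or d=google.com)."""
    msg = message_from_string(raw_message)
    dkim = msg.get("DKIM-Signature", "")
    return "d=gmail.com" in dkim or "d=google.com" in dkim

# A directly sent Gmail message carries the signature header...
signed = ("DKIM-Signature: v=1; a=rsa-sha256; d=gmail.com; s=default\n"
          "From: someone@gmail.com\n\nhello")
# ...while a message forwarded through Gmail does not get re-signed.
forwarded = "From: someone@gmail.com\n\nhello"
```

A filter seeing the second message would score it like mail from any other unauthenticated source, which is the point the Google spokesperson was making.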

* Miss Universe [PersonRank 7]

12 years ago # ...

Introduction to Google Search Quality
5/20/2008 06:20:00 PM
Posted by Udi Manber, VP Engineering, Search Quality

My name is Udi Manber, and I am a VP of engineering at Google in charge of Search Quality. I have been at Google for over two years, and I have been working on search technologies for almost 20 years.

The heart of the group is the team that works on core ranking. Ranking is hard, much harder than most people realize. One reason for this is that languages are inherently ambiguous, and documents do not follow any set of rules. There are really no standards for how to convey information, so we need to be able to understand all web pages, written by anyone, for any reason. And that's just half of the problem. We also need to understand the queries people pose, which are on average fewer than three words, and map them to our understanding of all documents. Not to mention that different people have different needs. And we have to do all of that in a few milliseconds.

The most famous part of our ranking algorithm is PageRank, an algorithm developed by Larry Page and Sergey Brin, who founded Google. PageRank is still in use today, but it is now a part of a much larger system. Other parts include language models (the ability to handle phrases, synonyms, diacritics, spelling mistakes, and so on), query models (it's not just the language, it's how people use it today), time models (some queries are best answered with a 30-minutes old page, and some are better answered with a page that stood the test of time), and personalized models (not all people want the same thing).
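The original PageRank idea can be sketched as a power iteration: each page repeatedly shares its rank across its outgoing links, damped by a factor so that rank also flows uniformly to all pages. This is a minimal textbook sketch of the published 1998 algorithm, not Google's production system, which (as the post says) has changed significantly since.

```python
# Minimal power-iteration sketch of PageRank (Page & Brin, 1998).
# links maps each page to the list of pages it links to.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}       # start with uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                 # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                            # share rank across outlinks
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Tiny three-page web: A and C both link to B, so B ends up ranked highest.
ranks = pagerank({"A": ["B"], "B": ["A", "C"], "C": ["B"]})
```

The total rank stays constant (it sums to 1), so a page's score is interpretable as the long-run probability that a "random surfer" lands on it.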

Another team in our group is responsible for evaluating how well we're doing. This is done in many different ways, but the goal is always the same: improve the user experience. This is not the main goal, it is the only goal. There are automated evaluations every minute (to make sure nothing goes wrong), periodic evaluations of our overall quality, and, most importantly, evaluations of specific algorithmic improvements. When an engineer gets a new idea and develops a new algorithm, we test their ideas thoroughly. We have a team of statisticians who look at all the data and determine the value of the new idea. We meet weekly (sometimes twice a week) to go over those new ideas and approve new launches. In 2007, we launched more than 450 new improvements, about 9 per week on the average. Some of these improvements are simple and obvious – for example, we fixed the way Hebrew acronym queries are handled (in Hebrew an acronym is denoted by a (") next to the last character, so IBM will be IB"M), and some are very complicated – for example, we made significant changes to the PageRank algorithm in January. Most of the time we look for improvements in relevancy, but we also work on projects where the sole purpose is to simplify the algorithms. Simple is good.
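The Hebrew acronym fix Manber mentions can be illustrated with a small normalization step: the gershayim mark (or a plain double quote standing in for it) appears just before an acronym's last character, so IB"M should match IBM. How Google actually implements this is not public; the function below is a hypothetical sketch of the idea.

```python
# Hypothetical sketch of Hebrew-style acronym normalization: drop a
# gershayim (U+05F4) or plain double quote that sits immediately
# before a token's final character, so IB"M normalizes to IBM.
import re

GERSHAYIM = "\u05f4"  # Hebrew punctuation gershayim

def normalize_acronym(token):
    """Remove a quote/gershayim positioned before the last character."""
    return re.sub('["%s](?=.$)' % GERSHAYIM, "", token)
```

A query tokenizer applying this would let the quoted and unquoted forms of an acronym retrieve the same documents.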

beussery [PersonRank 10]

12 years ago #

Yep, interesting they finally mention language, time and personalized results.

* Miss Universe [PersonRank 7]

12 years ago # ...

How many Google Webmaster penalties are there – rumor vs. fact

This article from a top SEO blog appears to focus on the so-called
   1. -30 penalty
   2. -950 penalty
   3. Index Exclusion

   But in a recent blog post, the Google spam chief indicated that no manual SERP adjustments were ever made – so, aside from index exclusion, which is right?
