Google Blogoscoped

Forum

Google in 1997?

Timo Heuer [PersonRank 2]

Friday, February 9, 2007
17 years ago · 6,017 views

There are more copies of early Google pages: http://jason.it.googlepages.com/

Maybe this one is interesting: http://jason.it.googlepages.com/googleindex-imagesearchbeta

I think the source is absolutely trustworthy. Maybe it's a compilation of the "backups" that different people posted on Usenet or somewhere similar...

Tony Ruscoe [PersonRank 10]

17 years ago

Seems to be a copy of this:

http://backrub.tjtech.org/1997/

Which is a copy of this:

http://web.archive.org/web/19971210065417/http://backrub.stanford.edu/

(Although I don't know where they got the logo from as that seems to be missing.)

Oliver Herold [PersonRank 1]

17 years ago

Quite simple, yes – I've seen it ;D

Bob Morton [PersonRank 1]

17 years ago

The earliest that the Wayback Machine has for it is this link from 1998:

http://web.archive.org/web/19981111184551/http://google.com/

There is another one from 1998 that looks similar to the one you linked to but with a much cleaner graphic.

Oren Goldschmidt [PersonRank 3]

17 years ago

I've seen it before elsewhere; I think it might be genuine. Timo is also correct: if we want to find out for sure, we should go through Usenet (not that I'm volunteering) and try to find a backup. Back in those days, people would keep local copies of anything they used a lot to reduce lag, and a lot of that made it to Usenet.

Amanuel [PersonRank 0]

17 years ago

Yes it used to look like that.

Oren Goldschmidt [PersonRank 3]

17 years ago

Wow. Amanuel wins on merit of brevity!

:)

Roger Browne [PersonRank 10]

17 years ago

Tony Ruscoe wrote:
> Although I don't know where they got the logo from as
> that seems to be missing

Maybe from here:
http://web.archive.org/web/19990508132024/www.google.com/stickers.html

Tony Ruscoe [PersonRank 10]

17 years ago

Sorry, I probably should have said this:

<< Although I don't know *how they know that was the logo* as that seems to be missing... >>

Mike Empuria [PersonRank 1]

17 years ago

I was having a look at the Stanford University BackRub site and found this: http://infolab.stanford.edu/~backrub/google.html – the paper in which Brin and Page presented Google as "a prototype of a large-scale search engine."

I liked this bit: "Our immediate goals are to improve search efficiency and to scale to approximately 100 million web pages." They were thinking big even then.

There's also a nice formula for calculating PageRank, and you've got to see the photos at the end.
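
For reference, the formula given in the paper is PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)), where d is a damping factor (the paper suggests 0.85) and C(T) is the number of links going out of page T. Here's a minimal C sketch of that iteration; the three-page link graph and the fixed iteration count are just made-up assumptions for illustration, not anything from the paper:

#include <stdio.h>

#define N     3      /* number of pages (hypothetical graph) */
#define D     0.85   /* damping factor, as suggested in the paper */
#define ITERS 50     /* fixed iteration count; the paper iterates to convergence */

int main(void)
{
    /* link[i][j] = 1 if page i links to page j (made-up example graph) */
    int link[N][N] = { {0,1,1}, {1,0,0}, {1,1,0} };
    double pr[N], next[N];
    int outdeg[N] = {0};
    int i, j, it;

    for (i = 0; i < N; i++) {
        pr[i] = 1.0;                     /* start with uniform ranks */
        for (j = 0; j < N; j++)
            outdeg[i] += link[i][j];     /* C(Ti): outgoing link count */
    }

    for (it = 0; it < ITERS; it++) {
        for (j = 0; j < N; j++) {
            double sum = 0.0;
            for (i = 0; i < N; i++)
                if (link[i][j])
                    sum += pr[i] / outdeg[i];
            next[j] = (1.0 - D) + D * sum;   /* the PR formula from the paper */
        }
        for (j = 0; j < N; j++)
            pr[j] = next[j];
    }

    for (j = 0; j < N; j++)
        printf("PR(page %d) = %f\n", j, pr[j]);
    return 0;
}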

nascent [PersonRank 0]

17 years ago

[Moved. -Philipp]

Umm, was it around then? Wikipedia says: "Founded: Menlo Park, California (September 27, 1998)"

And the Wayback Machine: http://web.archive.org/web/*/http://www.google.com

Google in 1998: http://web.archive.org/web/19981111183552/google.stanford.edu/

Suresh S [PersonRank 10]

17 years ago

They are talking about the docid. I have a program that Larry gave to one of his colleagues in November 1998 showing how to convert a URL to its docid.

Here is the code:

#include <stdio.h>
#include <string.h>   /* strncmp, strcspn */
#include <unistd.h>   /* chdir */
#include "search.h"

FILE *checksfp = NULL;
int numurls;

unsigned long url2docid(char *purl)
{
    unsigned int hi, lo;
    struct UrlChecksum cs;
    char *url;
    int i, j, k;   /* signed, so j = k-1 can go below zero */
    int r;

    /* On first call, open the checksum->docid table and count its
       fixed-size records. */
    if (!checksfp) {
        checksfp = fopen(CHECKSTOIDSFN, "r");
        LOG(("Checksfp %d", checksfp));
        fseek(checksfp, 0, SEEK_END);
        numurls = ftell(checksfp) / sizeof(struct UrlChecksum);
        LOG(("numurls %d", numurls));
    }

    /* Ignore a leading http:// if present. */
    if (!strncmp(purl, "http://", 7)) {
        url = purl + 7;
    } else {
        url = purl;
    }

    hi = checksum(url);
    lo = checksum2(url);

    /* Binary search over records sorted by (hi, lo). */
    i = 0;
    j = numurls;

    while (i <= j) {
        k = (i + j) / 2;
        /* printf_stderr("trying i=%d j=%d m=%d\n", i, j, k); */
        r = fseek(checksfp, k * sizeof(struct UrlChecksum), SEEK_SET);
        if (r != 0) {
            LOG(("Couldn't seek checks"));
            return 0;
        }
        r = fread(&cs, 1, sizeof(struct UrlChecksum), checksfp);
        if (r != sizeof(struct UrlChecksum)) {
            LOG(("Couldn't read checks"));
            return 0;
        }
        /* LOG(("%u %u %u %u (%u %u %u)\n", hi, lo, cs.hi, cs.lo, i, j, k)); */

        if (cs.hi == hi) {
            if (cs.lo == lo) return cs.docid;
            else if (cs.lo < lo) i = k + 1;
            else j = k - 1;
        } else {
            if (cs.hi > hi) j = k - 1;
            else i = k + 1;
        }
    }
    return 0;   /* not found */
}

int main()
{
    char url[1024];

    chdir("data");

    while (1) {
        printf("url> ");
        fflush(stdout);
        if (!fgets(url, sizeof(url), stdin))   /* was gets(url), which is unsafe */
            break;
        url[strcspn(url, "\n")] = '\0';        /* strip trailing newline */
        printf("docid: %lu\n", url2docid(url));
    }
}

Some Perl code by them – it scans STDIN for a 4-byte record marker, reads a little-endian length, and zlib-inflates each record:
#! perl -w
use Compress::Zlib;

$count = 0;
while (1) {
    $count++;
    if ($count % 1000 == 0) { print "$count\n"; }

    # Scan forward byte by byte until the 4-byte record marker appears.
    read STDIN, $s, 4;
    while (!($s eq "\xb8\xd9\x01\x00")) {
        die "$count" unless read STDIN, $f, 1;
        $s = substr($s, -3) . $f;
        print STDERR ".";
    }

    # The next 4 bytes are the record length, little-endian ("V").
    read STDIN, $s, 4;
    $len = unpack("V", $s);
    # print "$len\n";

    # Read the compressed record and inflate it with zlib.
    read STDIN, $build, $len;
    $x = inflateInit();
    ($output, $status) = $x->inflate($build);

    # print $output if $status == Z_OK or $status == Z_STREAM_END;
    # last if $status != Z_OK;
}
