
You have 10 billion URLs. How do you detect the duplicate documents? In this case, assume that "duplicate" means that the URLs are identical.

  1. If all URLs fit in memory, we can simply insert them into a hash table (or hash set) and flag any URL that is already present.
  2. Two-pass method. First, allocate 4000 buckets on disk and hash each URL into [0, 4000). Then check each bucket independently: identical URLs always hash to the same bucket, so duplicates can only occur within a single bucket, and each bucket is small enough to dedupe in memory.
  3. In a distributed system, the same bucketing scheme parallelizes naturally: each of the 4000 buckets can be processed on a different machine.
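The two-pass method above can be sketched as follows. This is a minimal illustration, not a production implementation: the bucket count, file naming, and helper names (`bucket_of`, `find_duplicates`) are all choices made here for the example, and a real system would stream the input and watch OS limits on simultaneously open files.

```python
import hashlib
import os

NUM_BUCKETS = 4000  # bucket count taken from the text above

def bucket_of(url: str) -> int:
    # Use a stable hash (not Python's randomized hash()) so the
    # same URL always lands in the same bucket across runs.
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_BUCKETS

def find_duplicates(urls, bucket_dir):
    # Pass 1: scatter URLs into bucket files on disk.
    # (Opening 4000 files at once can exceed OS file-descriptor
    # limits; a real implementation would batch or append.)
    files = [open(os.path.join(bucket_dir, f"bucket_{i}.txt"), "w")
             for i in range(NUM_BUCKETS)]
    for url in urls:
        files[bucket_of(url)].write(url + "\n")
    for f in files:
        f.close()

    # Pass 2: each bucket now fits in memory, so dedupe it
    # with an in-memory hash set (approach 1, per bucket).
    duplicates = set()
    for i in range(NUM_BUCKETS):
        seen = set()
        with open(os.path.join(bucket_dir, f"bucket_{i}.txt")) as f:
            for line in f:
                url = line.rstrip("\n")
                if url in seen:
                    duplicates.add(url)
                seen.add(url)
    return duplicates
```

Because duplicates can only collide inside one bucket, pass 2 never needs more memory than the largest bucket, which is roughly 1/4000 of the total data if the hash distributes evenly.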