Just a thought: free distributed search?

Every once in a while, I just get a harebrained notion. Today's was: why do we use a central website for doing internet searches at all? Why Google?

Consider the success of SETI@home, and the distributed computing architecture that grew out of it. Consider the success of swarming download technology like BitTorrent. Consider how simple a basic web spider could be. Consider the efficiency of spidering networks locally. Consider the architecture of DNS.

See a pattern?

What if we replaced the concept of a search engine site with a search engine protocol? What if we ran small spidering operations on thousands of sites around the world instead of putting a massively parallel supercomputer in one room somewhere to do it? The individual spiders would be intelligent applications that learned their immediate environment, and then shared that data with others. Each person using the software could send queries into the network, and they would propagate out through a series of spiders to find the best sources of information on the subject.
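
Just to make that concrete, here's a toy sketch in Python. Everything in it is invented for illustration: the Peer class, the hop-limited flooding of queries to neighbors, the term-to-URL index. It's a napkin drawing of the idea, not a protocol.

    class Peer:
        """One small spidering operation: indexes its local site, knows a few neighbors."""

        def __init__(self, name):
            self.name = name
            self.index = {}       # term -> set of local URLs this peer has spidered
            self.neighbors = []   # other Peer instances this peer knows about

        def spider(self, url, text):
            """Index a local document: map each term to the URL it appears in."""
            for term in text.lower().split():
                self.index.setdefault(term, set()).add(url)

        def query(self, term, ttl=3, seen=None):
            """Answer from the local index, then pass the query on to neighbors
            until the hop limit runs out."""
            seen = seen if seen is not None else set()
            if self.name in seen or ttl == 0:
                return set()
            seen.add(self.name)
            results = set(self.index.get(term.lower(), set()))
            for peer in self.neighbors:
                results |= peer.query(term, ttl - 1, seen)
            return results


    # Two peers, each spidering its own neighborhood, answering each other's queries.
    a, b = Peer("a"), Peer("b")
    a.neighbors.append(b)
    a.spider("http://a.example/intro", "a distributed search protocol")
    b.spider("http://b.example/notes", "peer to peer search networks")
    print(a.query("search"))  # hits from both peers

A real version would need relevance ranking, result limits, and something smarter than flooding every neighbor, but the basic shape of it fits on a page.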

Probably, you'd still need central indexes somewhere. But what if the index servers were run by lots of people, and not just one company?
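
One hypothetical way to split that load: hash each term to a home server, the way DNS delegates zones, so no single operator has to hold the whole index. The server names and the hashing scheme here are made up for the example.

    import hashlib

    # Invented example servers; in practice the list itself would be
    # discovered and maintained peer-to-peer.
    INDEX_SERVERS = ["index1.example.org", "index2.example.org", "index3.example.org"]

    def server_for(term):
        """Pick the index server responsible for a term by hashing it."""
        digest = hashlib.sha256(term.lower().encode()).digest()
        return INDEX_SERVERS[digest[0] % len(INDEX_SERVERS)]

    print(server_for("search"))  # every peer computes the same home for "search"

Since every peer hashes "search" to the same server, postings for a term collect in one place without anyone coordinating it.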

It would be a whole new architecture, of course, and there are probably some weaknesses to it. But the idea of a peer-to-peer search network, with peer applications sharing both the indexing and querying load with each other, does seem feasible. After all, distributed computing can capture more computing power, more cost-effectively, than just about any supercomputer architecture, so the power to do it is probably there.

Makes me wonder if someone is already building it.

Just a thought...

License

Verbatim copying and distribution of this entire article are permitted worldwide, without royalty, in any medium, provided this notice is preserved.