Considering that Google already has both an XML-RPC and a SOAP bridge, this kind of service seems like a no-brainer. Here’s how it would work: a message would be sent in XML to either of these services, and would include the title of the page, its URL, a status code (new, updated, or deleted), and the level of crawling you want (page or site). The Google service would then take this information and crawl the specified page or site as required.
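To make this concrete, here is a minimal sketch of what such a ping might look like as an XML-RPC call, built with Python’s standard library. Note that this is purely hypothetical: Google has published no such API, so the method name (`google.ping`) and the field names are my own invention based on the fields described above.

```python
import xmlrpc.client

# Hypothetical ping payload -- the method name and field names are
# illustrative only; Google has not published such an interface.
payload = xmlrpc.client.dumps(
    ({
        "title": "Example Page",               # page title
        "url": "http://example.com/page.html", # page location
        "status": "updated",                   # one of: new, updated, deleted
        "level": "page",                       # one of: page, site
    },),
    methodname="google.ping",
)
print(payload)
```

Running this prints the serialized `<methodCall>` XML that an XML-RPC client would POST to the service endpoint; the SOAP variant would carry the same four fields in a SOAP envelope instead.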
The advantage for Google is clear: sites using this service would no longer need to be part of the regular crawl, which could free Google’s spiders to focus on other (new) sites that are not on the list. For sites running Google ads, this would also allow Google to produce better-targeted results.
The advantage for web site developers is also clear: this system would let them get into the Google index more quickly, and the index is a pretty good source of traffic.
In the end, it’s a win-win scenario for all involved. So when will we see this new feature from Google?