Should Search Engines Break or Fix the Web?

Here’s an interesting thought. So many web pages are only ever accessed as the result of a search engine query that it is worth asking: what would happen if search engines stopped indexing pages that contained broken code?

On the one hand, there would be an enormous scream from content providers. On the other, there would be a massive push to fix pages with broken code and bring much of the web into line with established public standards. In fact, some information might then be indexed that would otherwise have been unintelligible to the spiders.
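Just to make the idea concrete, here is a rough sketch in Python of what a crawler-side "is this markup even usable?" gate could look like. This is emphatically not how any real search engine works; it treats anything that is not well-formed XML as broken, which is far stricter than HTML5 actually requires, so take it purely as an illustration of the kind of check I mean.

    # Illustrative only: flag pages whose markup does not even parse as
    # well-formed XML. A real standards check would use a proper validator
    # (e.g. the W3C service); this only catches gross structural breakage.
    import urllib.request
    from xml.etree import ElementTree

    def markup_is_well_formed(url):
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
        try:
            ElementTree.fromstring(body)  # strict parse: tag soup raises
            return True
        except ElementTree.ParseError:
            return False

    if __name__ == "__main__":
        for url in ["https://example.com/"]:  # placeholder URL
            verdict = "index" if markup_is_well_formed(url) else "skip (broken markup)"
            print(url, "->", verdict)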

And think of the knock-on effects. Browser developers would no longer need to place such a high emphasis on maintaining compatibility with broken markup. Browser development might actually accelerate, with releases shipping fewer bugs in smaller downloads. The software might even become faster.

Google, to take the largest market-share holder, even has a site where webmasters can receive feedback on their site’s availability for indexing. It really should point out which pages could not be indexed because they contained invalid code.

But I cannot see it happening. If anything, Google is doing the reverse and going backwards to support older browsers. Oh well, nice dream.


Second Gmail Outage in a week

Well, I just managed to log in using the Gmail HTTPS web interface, but for the past couple of hours both of my accounts have been rejecting me with login failures over IMAP. The HTTP interface was too slow and timed out only minutes ago.
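For what it’s worth, the kind of check I was doing by hand can be scripted. A minimal sketch, assuming Gmail’s published IMAPS endpoint at imap.gmail.com on port 993; the account name and password below are placeholders, not real values.

    # Try an IMAPS login against Gmail and report whether the server accepts it.
    import imaplib

    def gmail_imap_login_ok(user, password):
        try:
            conn = imaplib.IMAP4_SSL("imap.gmail.com", 993)
            conn.login(user, password)  # raises imaplib.IMAP4.error on rejection
            conn.logout()
            return True
        except (imaplib.IMAP4.error, OSError):
            return False

    if __name__ == "__main__":
        print(gmail_imap_login_ok("user@gmail.com", "password-placeholder"))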

This is the second major problem in a week, which for Google is a surprising downturn in reliability. Twitter is currently full of complaints about it.

Still, it is essentially free.