Monday, May 3, 2004 at 22:24

The Tower of Babel

I spent some time this weekend trying to make a small Python script that transmits data to a PHP script running on a web server using XML-RPC (XML-RPC is a protocol similar to the more well-known SOAP).
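For the curious, the client side of such an exchange looks roughly like this. It is only a minimal sketch using Python 3's xmlrpc.client (in 2004 the script would have used the older xmlrpclib module), and the server URL and method name are made-up placeholders, not the ones from my actual script.

    import xmlrpc.client

    # Point the proxy at the PHP endpoint; every attribute access on the
    # proxy becomes a remote method call encoded as XML-RPC.
    server = xmlrpc.client.ServerProxy("http://example.com/endpoint.php")

    # Call a (hypothetical) method exposed by the PHP script and print
    # whatever the server sends back.
    result = server.store_entry("title", "some data")
    print(result)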

It turned out not to be so simple, due to incompatibilities between the XML-RPC client and server libraries. So instead of just using the libraries as described in their documentation, I spent my time filing bugs, reading RFCs and making patches.

The XML-RPC specification isn't very precise or elaborate (not surprisingly, since it was written by the same author as six of the nine incompatible RSS specifications). This is unfortunate, because implementors are forced to obey the letter of the specification rather than what is assumed to be its spirit. Also, the standard doesn't mention other standards relevant to the implementation, in this case RFC 3023, so implementors may fail to address an important issue in their code.

Standards are vital for interoperability in a connected world. While humans can communicate without much standardisation of vocabulary or language (consider travelling in a country where you don't speak the local language and the locals don't speak any language you know - somehow you still manage to get food every day), computers need a fairly strict set of rules for communicating. Without good standards, different implementations become incompatible or at least unnecessarily complicated, because they need to guess what the other party in a communication means. And that is a threat to accessibility, diversity and competition.

A good standard is unambiguous and easy to understand for people who wish to implement it. If you are not used to reading standards, you may not get much benefit from reading an RFC or a W3C standard; in that case, a good tutorial or a description in informal terms may be more helpful. But ideally, an implementor should be able to get the relevant points just by reading the standard itself. Also, a standard should not have degrees of freedom that do not add any flexibility. For instance, a lot of standards specify that some text is case-insensitive or that whitespace may be added in a lot of places. These liberties do not add any flexibility but only complicate the implementation.
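To illustrate the point about needless degrees of freedom, here is a small hypothetical sketch. The token and the rules are invented, but they show how a case-insensitive, whitespace-tolerant rule forces every implementation to remember a normalisation step that an exact-match rule simply doesn't need.

    def matches_strict(token: str) -> bool:
        # Exact-match rule: one comparison, nothing to normalise.
        return token == "Content-Type"

    def matches_liberal(token: str) -> bool:
        # Case-insensitive rule with optional surrounding whitespace:
        # every implementation must remember to strip and lower-case,
        # and forgetting either step is an easy source of incompatibility.
        return token.strip().lower() == "content-type"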

Just as vital for interoperability as standards are good implementations. One of the most important general design principles is to be strict when sending and tolerant when receiving. All software has bugs (some [1], [2] more than others), and people often don't upgrade just because a new version with fewer (or at least different) bugs is available. So good software should be forgiving about faulty input.
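To make the principle concrete, here is a small sketch using an invented timestamp format as the example: the sender emits exactly one canonical form, while the receiver also accepts a couple of variants that other implementations might plausibly produce.

    from datetime import datetime

    def emit_timestamp(dt: datetime) -> str:
        # Strict when sending: one unambiguous, canonical format only.
        return dt.strftime("%Y-%m-%dT%H:%M:%S")

    def parse_timestamp(text: str) -> datetime:
        # Tolerant when receiving: try the canonical form first,
        # then a couple of variants seen in the wild.
        for fmt in ("%Y-%m-%dT%H:%M:%S", "%Y-%m-%d %H:%M:%S", "%Y%m%dT%H%M%S"):
            try:
                return datetime.strptime(text.strip(), fmt)
            except ValueError:
                continue
        raise ValueError("unrecognised timestamp: %r" % text)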

Being forgiving creates another problem, though, especially if one implementation has a large market share (like almost anything from Redmond). Consider the number of web sites optimized for Internet Explorer. In this context, "optimized for" often really means "happens to work in". Internet Explorer is very tolerant of errors in the HTML and will render anything that vaguely resembles HTML. So the fact that a page is displayed as intended in Internet Explorer is not a good indicator that it will look the same in other browsers. Other browsers are tolerant of errors as well, but as long as there are no standards specifying exactly how they should treat a tag soup, the result is not well-defined, and their interpretations of the code may differ.

So in order not to be part of the problem but rather part of the solution, a tolerant implementation should, if possible, provide an indication that it has detected something that violates a standard. A good example is the iCab browser for the Mac, which has an indicator of HTML quality in the browser window's status bar. Not only does this inform the author that something is wrong, it also makes errors more visible to everybody else, which might encourage HTML authors to fix their code so that they don't appear ignorant.
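Here is a sketch of what "tolerant but not silent" can look like in code, using an invented key=value configuration format: malformed lines are tolerated where possible, but every deviation is recorded so the author can see what should be fixed.

    def parse_config(lines):
        settings, warnings = {}, []
        for number, line in enumerate(lines, start=1):
            if not line.strip() or line.lstrip().startswith("#"):
                continue  # blank lines and comments are fine
            if "=" not in line:
                # Tolerant: skip the line instead of aborting, but say so.
                warnings.append("line %d: no '=' found, line ignored" % number)
                continue
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
        return settings, warnings

    settings, warnings = parse_config(["colour = blue", "broken line", "size=42"])
    print(settings)   # {'colour': 'blue', 'size': '42'}
    print(warnings)   # ["line 2: no '=' found, line ignored"]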

A random implementation may be able to detect some errors, but usually there are a lot of edge cases that it does not encounter under normal operation. Instead, a proper test suite or validator is needed to perform an extensive automated test. The W3C HTML validator is a well-known such tool. It has one problem, though: it does not rank errors. That is not a problem if the goal is to eliminate all errors, but this may not always be an option, depending on the project timeframe or the skills of the HTML author (remember that writing HTML is not only a job for professionals, but for almost anybody who knows how to set the clock on their VCR). A web page can be displayed as expected in almost all browsers and still yield a lot of errors in the W3C validator. So the validator should rank errors by importance to encourage users to at least fix those that are most likely to cause problems. Today, even some professionals dismiss the W3C validator as too pedantic. Also, a validator should provide hints on how to fix each problem.
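As a sketch of what such ranking could look like, here is a toy severity table and a sort. The rule names and weights are invented purely for illustration; a real validator would need a far more complete classification.

    # Invented severity levels: higher numbers are more likely to cause
    # visible problems in real browsers.
    SEVERITY = {
        "unclosed-element": 3,  # may break layout in some browsers
        "missing-alt": 2,       # hurts accessibility
        "uppercase-tag": 1,     # cosmetic, harmless in practice
    }

    def rank_messages(messages):
        # Each message is a (rule, description) pair; unknown rules sink
        # to the bottom of the list.
        return sorted(messages, key=lambda m: SEVERITY.get(m[0], 0), reverse=True)

    for rule, description in rank_messages([
        ("uppercase-tag", "<P> should be written as <p>"),
        ("unclosed-element", "<div> on line 12 is never closed"),
    ]):
        print(rule, "-", description)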

So what we need to get as good interoperability as possible is precise, unambiguous standards; implementations that are strict in what they send, tolerant in what they receive and open about the errors they detect; and validators that rank errors by importance and suggest how to fix them.

There is still much to do.

Comments

  1. by Max Kanat-Alexander - Thursday February 14, 2008 @ 01:21 am

    This is a great post, in general. :-) I particularly like your point about not adding needless degrees of freedom into standards.

    -Max

No more comments can be added for this post.