Tag Archives: XML

W3C link roundup

There are a lot of format-related announcements from W3C, and I’m not always sure what to do with them. For the moment, I’ll collect a batch of recent links in this post, and perhaps I’ll do the same occasionally to keep up to date.

First Draft of Efficient XML Interchange (EXI) Profile Published
Scalable Vector Graphics (SVG) 1.1 (Second Edition) is a W3C Recommendation
Last Call: CSS Speech Module
Three CSS Drafts Published; First Draft of Conditional Rules Module Level 3
CSS Values and Units Module Level 3 Draft Updated
CSS Image Values and Replaced Content Module Level 3 Draft Updated

XML Schema’s designed-in denial of service attack

Recently there was a discussion on the Library of Congress’s MODS mailing list pointing out that the MODS schema uses non-canonical URIs for the xml.xsd and xlink.xsd schemas. The URI for xml.xsd simply points to a copy of the standard schema, but the URI for xlink.xsd points to a modified version.

A person at LoC explained that the change to the XML URI was needed because the W3C server was being hammered by so many accesses by way of the MODS schema. Every time a MODS document was validated, unless the validating application used a local or cached copy, there would be an access to the W3C server. We’re told that “W3C was complaining (loudly) about excessive accesses and threatening to block certain clients.” The XLink issue is more complicated and not fully explained in the list discussion, but one part of the problem was the same issue.

The identification of XML namespaces with URIs creates a denial-of-service attack against servers that host popular schemas, as an unintended consequence of the design. Since you can’t always know which schemas will become popular, this can create a huge burden on servers that aren’t prepared for it, and the URI can never move without breaking the namespace for existing documents. I’ve written here before about this problem, but I hadn’t known it was severe enough to force important schemas to clone namespaces. That cloning causes obvious conflicts when a MODS element is embedded in a document that uses the standard XML namespaces.

The only solution available is for applications either to keep a permanent local copy of heavily used schemas or to cache them. Unfortunately, not all applications are going to be fixed, and not all users will upgrade to the fixed versions. So we’ll continue to see cases where schema hosts are hammered with requests and performance somewhere else suffers for reasons the users can’t guess.
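
As a rough illustration, here is what the local-copy approach can look like in Java, using the standard JAXP validation API together with an XML catalog. This is only a sketch, not anyone’s official recipe: it assumes Java 9 or later, and the file names (catalog.xml, schemas/mods.xsd, record.xml) are made up for the example. The catalog file is what maps the remote xml.xsd and xlink.xsd URLs to copies on disk.

    // Sketch: validate against a locally stored schema, resolving its imports
    // through an XML catalog instead of fetching them from w3.org.
    // File names (catalog.xml, schemas/mods.xsd, record.xml) are hypothetical.
    import java.io.File;
    import java.nio.file.Paths;
    import javax.xml.XMLConstants;
    import javax.xml.catalog.CatalogFeatures;
    import javax.xml.catalog.CatalogManager;
    import javax.xml.catalog.CatalogResolver;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;

    public class LocalSchemaValidation {
        public static void main(String[] args) throws Exception {
            // catalog.xml maps the remote schema URIs (xml.xsd, xlink.xsd, etc.)
            // to local files, so no lookup ever leaves the machine.
            CatalogResolver resolver = CatalogManager.catalogResolver(
                    CatalogFeatures.defaults(), Paths.get("catalog.xml").toUri());

            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            factory.setResourceResolver(resolver);      // imports resolve locally

            Schema schema = factory.newSchema(new StreamSource(new File("schemas/mods.xsd")));
            Validator validator = schema.newValidator();
            validator.setResourceResolver(resolver);    // runtime lookups resolve locally too

            validator.validate(new StreamSource(new File("record.xml")));
            System.out.println("record.xml is valid");
        }
    }

With the resolver set on both the factory and the validator, neither schema compilation nor validation has any reason to call out to the schema host.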

EXI is a W3C Recommendation

Efficient XML Interchange, or EXI, the controversial binary representation of XML, is now a W3C standard. Unlike approaches that apply standard compression schemes to XML (e.g., OpenOffice’s XML plus ZIP), EXI represents the structure of an XML document in a binary form. For some, this adds unnecessary obscurity to a format based on (somewhat) human-readable text. Others consider it a necessary step to reduce the bloat and slow processing of textual XML.

The press release says: “EXI is a very compact representation of XML information, making it ideal for use in smart phones, devices with memory or bandwidth constraints, in performance sensitive applications such as sensor networks, in consumer electronics such as cameras, in automobiles, in real-time trading systems, and in many other scenarios.”

There are some things that can be done in XML but not in EXI. The W3C document says: “EXI is designed to be compatible with the XML Information Set. While this approach is both legitimate and practical for designing a succinct format interoperable with XML family of specifications and technologies, it entails that some lexical constructs of XML not recognized by the XML Information Set are not represented by EXI, either. Examples of such unrepresented lexical constructs of XML include white space outside the document element, white space within tags, the kind of quotation marks (single or double) used to quote attribute values, and the boundaries of CDATA marked sections.” Whether this is important will doubtless continue to be the subject of heated debate.

Misadventures in XML

Around 6 PM yesterday, our SMIL file delivery broke. At first I figured it for a database connection problem, but the log entries were atypical. I soon determined that retrieval of the SMIL DTD was regularly failing. Most requests would get an error, and those that did succeed took over a minute.

There’s a basic flaw in XML DTDs and schemas (collectively called grammars). They’re identified by a URL, and by default any parser that validates a document against its grammar retrieves the grammar from that URL. For popular grammars, that means a lot of traffic. We’ve run into that problem with the JHOVE configuration schema, and that’s nowhere near the traffic a really popular schema must generate.

Knowing this, and also knowing that depending on an outside website’s staying up is a bad idea, we’ve made our own local copy of the SMIL DTD to reference. So I was extremely puzzled about why access to it had become so terrible. After much headscratching, I discovered a bug in the code that kept the redirection to the local DTD from working; we had been going to the official URL, which lives on w3.org, all along.
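
For anyone wrestling with the same thing, the usual way to make that redirection stick is a SAX EntityResolver that intercepts the DTD’s system ID and serves a local file instead. Here’s a sketch, not our actual code: the paths and the check on SMIL20.dtd are made up for illustration.

    // Sketch: redirect the SMIL DTD reference to a local copy with a SAX
    // EntityResolver, so parsing never contacts w3.org. Paths are hypothetical.
    import java.io.FileReader;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.InputSource;
    import org.xml.sax.XMLReader;
    import org.xml.sax.helpers.DefaultHandler;

    public class LocalDtdParse {
        public static void main(String[] args) throws Exception {
            XMLReader reader = SAXParserFactory.newInstance().newSAXParser().getXMLReader();

            reader.setEntityResolver((publicId, systemId) -> {
                if (systemId != null && systemId.endsWith("SMIL20.dtd")) {
                    // Serve our own copy instead of the official w3.org URL.
                    return new InputSource(new FileReader("dtds/SMIL20.dtd"));
                }
                return null; // anything else falls back to the default behavior
            });

            reader.setContentHandler(new DefaultHandler());
            reader.parse(new InputSource(new FileReader("presentation.smil")));
        }
    }

Returning null for everything else keeps the parser’s normal resolution intact, so only the one troublesome grammar is handled specially.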

Presumably W3C is constantly hammered by requests for grammars which it originates, and presumably it’s fighting back by greatly lowering the priority of the worst offenders. Its server wasn’t blocking the requests altogether; that would have been easier to diagnose. The priority just got so low that most requests timed out.

Once I figured that out, I put in the fix to access the local DTD URL, and things are looking nicer now. Moving the fix to production will take a couple of days but should be routine.

The problem is inherent in XML: the definition of grammars is tied to a specific Web location. Aside from the problem of heavy traffic to that location, it means the longevity of the grammar is tied to the longevity of the URL. It takes extra effort to make a local copy, and anyone starting out isn’t likely to encounter throttling right away, so the law of least effort says most people won’t bother.

This got me wondering, as I started writing this post: why don’t parsers like Xerces cache grammars? It turns out that Xerces can cache grammars, though by default it doesn’t. As far as I can tell, this isn’t a well-known feature, and again the law of least effort implies that a lot of developers won’t take advantage of it. But it looks like a very useful thing. It should really be enabled by default, though I can understand why its implementers took the more cautious approach.
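
For the record, here is roughly what using it looks like. This is a sketch assuming Xerces-J 2.x is on the classpath and uses its native API rather than JAXP; the file names are made up, and the caching configuration has some restrictions that the Xerces documentation spells out.

    // Sketch: a parser built on Xerces' XMLGrammarCachingConfiguration keeps the
    // DTDs and schemas it has already loaded in an internal pool, so repeated
    // parses don't refetch or recompile them. File names are hypothetical.
    import org.apache.xerces.parsers.SAXParser;
    import org.apache.xerces.parsers.XMLGrammarCachingConfiguration;
    import org.xml.sax.helpers.DefaultHandler;

    public class CachedGrammarParse {
        public static void main(String[] args) throws Exception {
            // The caching configuration stores every grammar it loads in a shared pool.
            SAXParser parser = new SAXParser(new XMLGrammarCachingConfiguration());
            parser.setFeature("http://xml.org/sax/features/validation", true);
            parser.setContentHandler(new DefaultHandler());

            parser.parse("first.xml");   // grammar fetched and compiled here
            parser.parse("second.xml");  // same grammar served from the cache
        }
    }

Reusing one parser (or at least one caching configuration) across parses is what makes the cache pay off; a fresh parser per document starts with an empty pool every time.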