Category Archives: commentary

How to approach the file format validation problem

For years I wrote most of the code for JHOVE. With each format, I wrote tests for whether a file is “well-formed” and “valid.” With most formats, I never knew exactly what these terms meant. They come from XML, where they have clear meanings. A well-formed XML file has correct syntax. Angle brackets and quote marks match. Closing tags match opening tags. A valid file is well-formed and follows its schema. A file can be well-formed but not valid, but it can’t be valid without being well-formed.

With most other formats, there’s no definition of these terms. JHOVE applies them anyway. (I wrote the code, but I didn’t design JHOVE’s architecture. Not my fault.) I approached them by treating “well-formed” as meaning syntactically correct, and “valid” as meaning semantically correct. Drawing the line wasn’t always easy. If a required date field is missing, is the file not well-formed or just not valid? What if the date is supposed to be in ISO 8601 format but isn’t? How much does it matter?
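
To make the distinction concrete, here’s a minimal sketch in Python (JHOVE itself is Java, and the “record” format below is made up for illustration). The parser catches well-formedness errors; a separate check stands in for validity. In real XML, validity means conformance to a DTD or schema, so the hand-rolled date check is only a stand-in for that idea.

```python
import xml.etree.ElementTree as ET
from datetime import date

# A hypothetical record format: a <record> must contain a <date>
# element holding an ISO 8601 date.
GOOD = "<record><date>2018-07-30</date></record>"
NOT_WELL_FORMED = "<record><date>2018-07-30</record>"      # mismatched tags
NOT_VALID = "<record><date>July 30, 2018</date></record>"  # wrong date format

def check(doc: str) -> str:
    # Well-formedness: does the document parse at all?
    try:
        root = ET.fromstring(doc)
    except ET.ParseError as err:
        return f"not well-formed: {err}"
    # Validity: is the required date present and in ISO 8601 form?
    elem = root.find("date")
    if elem is None:
        return "well-formed but not valid: missing <date>"
    try:
        date.fromisoformat(elem.text)
    except (TypeError, ValueError):
        return "well-formed but not valid: <date> is not ISO 8601"
    return "well-formed and valid"

for doc in (GOOD, NOT_WELL_FORMED, NOT_VALID):
    print(check(doc))
```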

Emoji interoperability (or its lack)

Unicode characters ought to have a specific denotation, even if their exact appearance depends on the font. A letter, a punctuation mark, or a Chinese ideograph should have the same meaning to everyone who reads it. There are problems, of course. There’s no systematic difference in appearance between A, the first letter of the Roman alphabet, and Α, Alpha, the first letter of the Greek alphabet. (However, when I had my computer read this article aloud to me for proofreading, it pronounced the latter as “Greek capital letter alpha”! Nice! It also pronounced the names of the emoji in this article, except the new ones in Unicode 11.0.) In some fonts, you can’t even tell the lower case letter l from the number 1 without looking carefully. This problem allows homograph attacks and “typosquatting.”
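
The confusion is easy to demonstrate, because the look-alikes are entirely different code points. A quick sketch using Python’s standard unicodedata module:

```python
import unicodedata

# Visually confusable pairs: Latin A vs. Greek Alpha,
# lowercase letter l vs. digit 1.
for ch in ("A", "\u0391", "l", "1"):
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
```

A homograph attack exploits exactly this gap between what the eye sees and what the software compares.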

But the worst problem is with the Unicode Consortium’s great headache, emoji. These picture characters have only brief verbal descriptions in the Unicode standard, and font designers at different companies produce renderings with vastly different connotations. Motherboard offers a sampling of the varied renderings. Here’s the “grimacing face” from Apple, Google, Samsung, and LG respectively.
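
All four of those pictures render a single code point, and the standard itself supplies only a terse name for it. You can see how little the standard pins down with Python’s unicodedata module:

```python
import unicodedata

# The vendor images all render this one code point; the Unicode
# standard itself supplies only the short name printed below.
ch = "\U0001F62C"
print(f"U+{ord(ch):04X}", unicodedata.name(ch))   # U+1F62C GRIMACING FACE
```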

Data Transfer Project: New models for interoperability

In spite of improved file standardization, interoperability of data is often a challenge. Say you’ve got a collection of pictures on Photobucket and you want to move them to a different site. You’ve got a lot of manual work ahead. It would be great if there were a tool to do it all for you. The Data Transfer Project aims at making that possible. Some big names are behind it: Facebook, Google, Microsoft, and Twitter. The basic approach is straightforward:

The DTP is powered by an ecosystem of adapters (Adapters) that convert a range of proprietary formats into a small number of canonical formats (Data Models) useful for transferring data. This allows data transfer between any two providers using the provider’s existing authorization mechanism, and allows each provider to maintain control over the security of their service.
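
To picture the shape of that, here’s an illustrative Python sketch. These are not the project’s actual classes, and the names are invented; the point is only that each provider talks to a shared canonical model, never directly to another provider:

```python
from dataclasses import dataclass
from typing import Iterable, Protocol

@dataclass
class PhotoModel:
    """A canonical "Data Model" for one photo (invented for illustration)."""
    title: str
    media_type: str
    data: bytes

class Exporter(Protocol):
    """An adapter that reads a provider's proprietary format."""
    def export(self) -> Iterable[PhotoModel]: ...

class Importer(Protocol):
    """An adapter that writes photos into another provider."""
    def import_photo(self, photo: PhotoModel) -> None: ...

def transfer(source: Exporter, destination: Importer) -> None:
    # Neither provider knows anything about the other; they only
    # agree on the canonical PhotoModel.
    for photo in source.export():
        destination.import_photo(photo)
```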


DNA as data storage

What’s the oldest data format in the world? It’s not any of the ones that computer engineers developed in the 20th century, or even ones that telegraph engineers created in the 19th. Far older than those — by billions of years — is the DNA nucleotide sequence. We can think of it as a simple base-4 encoding of arbitrary length.

According to the usual, somewhat simplified, description, a DNA molecule is a double helix, with its backbone made of phosphates and sugars, and four types of nucleotides forming the sequence. They are adenine, guanine, thymine, and cytosine, or A, G, T, and C for short. They’re always found in pairs connecting the two strands of the helix. Adenine and thymine connect together, as do guanine and cytosine.

DNA for data encoding

That’s as deep as I care to go, since biochemistry is far outside my areas of expertise. What DNA does is fantastically complicated; change a few bits and you get either a human or a chimpanzee. But as a data model, it’s fantastically simple.
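
How simple? A complete codec fits in a few lines. Here’s a minimal Python sketch using one arbitrary (but reversible) assignment of two bits per nucleotide; real DNA storage schemes add error correction and synthesis constraints that this ignores:

```python
# Two bits per nucleotide: an arbitrary but reversible assignment.
NUCLEOTIDES = "ACGT"   # indexes 0..3 encode the bit pairs 00, 01, 10, 11

def encode(data: bytes) -> str:
    # Split each byte into four 2-bit values, most significant pair first.
    return "".join(
        NUCLEOTIDES[(byte >> shift) & 0b11]
        for byte in data
        for shift in (6, 4, 2, 0)
    )

def decode(strand: str) -> bytes:
    values = [NUCLEOTIDES.index(ch) for ch in strand]
    return bytes(
        (values[i] << 6) | (values[i + 1] << 4)
        | (values[i + 2] << 2) | values[i + 3]
        for i in range(0, len(values), 4)
    )

assert decode(encode(b"file formats")) == b"file formats"
print(encode(b"Hi"))   # CAGACGGC
```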

When sloppy redaction fails, resort to censorship

The Broward County School Board used an idiot’s form of “redaction” on a PDF before sending it to the media. The Sun Sentinel removed the blackout layer from the file and found newsworthy information on shooter Nikolas Cruz. They published it. Judge Elizabeth Scherer flew into a rage. She decreed, “From now on if I have to specifically write word for word exactly what you are and are not permitted to print… then I’ll do that.”

That’s called prior restraint, or censorship.
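
For anyone wondering how trivial “removing the blackout layer” is: a drawn black rectangle only covers the text visually, while the text objects stay in the page’s content stream. A sketch, assuming the Python pypdf library and a hypothetical file name:

```python
from pypdf import PdfReader   # assumes the pypdf library is installed

# A black rectangle drawn over text hides it only visually; the text
# objects are still in the page's content stream, so any ordinary
# text extractor reads straight through the "blackout" layer.
reader = PdfReader("redacted.pdf")    # hypothetical file name
for page in reader.pages:
    print(page.extract_text())        # includes the "redacted" text
```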

PDF or HTML for public documents?

Should official online documents be PDF files? Many institutions say they obviously should, but the format has some clear disadvantages. An article on the UK’s Government Digital Service site argues that HTML, not PDF, is the right format for UK government documents. Its arguments, to the extent that they’re valid, apply to lots of other documents.

It makes a plausible case against PDF. The trouble is that the case against HTML is even stronger in some ways.

The “Zip Slip” vulnerability

Sometimes my reaction to a story is “Wait, are they saying someone was that dumb? … No one could be that dumb! … Oh, gods, they were that dumb!” Naked Security’s account of the Zip Slip vulnerability is just such a story.

The article starts with a fair warning that the vulnerability is “so simple you’ll need to put a cushion on your desk before you read any further (in case of involuntary headdesk injury).” It explains that because of the coding mistake called “Zip Slip,” “attackers can create Zip archives that use path traversal to overwrite important files on affected systems, either destroying them or replacing them with malicious alternatives.” This is where I started to get suspicious.

The vulnerability isn’t in the Zip format as such, but in bad coding found in some of the zillion ad hoc pieces of software written to unpack Zip files. Have you figured it out yet? I’ll put the cut here to give you a chance to think…
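
…and for the impatient, here’s the mistake in a minimal Python sketch, with the one-check fix alongside. (Python’s own ZipFile.extractall sanitizes entry names, so the unsafe loop below reproduces what careless hand-rolled unpackers do.)

```python
import os
import zipfile

def unsafe_extract(archive: str, dest: str) -> None:
    # The Zip Slip mistake: trusting entry names verbatim. An entry
    # named "../../home/victim/.bashrc" walks right out of dest.
    with zipfile.ZipFile(archive) as zf:
        for name in zf.namelist():
            if name.endswith("/"):
                continue                          # skip directory entries
            target = os.path.join(dest, name)     # no sanity check!
            os.makedirs(os.path.dirname(target), exist_ok=True)
            with zf.open(name) as src, open(target, "wb") as out:
                out.write(src.read())

def safe_extract(archive: str, dest: str) -> None:
    base = os.path.realpath(dest)
    with zipfile.ZipFile(archive) as zf:
        for name in zf.namelist():
            # Refuse any entry that resolves outside the destination.
            target = os.path.realpath(os.path.join(dest, name))
            if target != base and not target.startswith(base + os.sep):
                raise ValueError(f"blocked path traversal in entry: {name}")
        zf.extractall(dest)
```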

Flash in the Library of Congress’s online archives

Everybody recognizes that Adobe Flash is on the way out. It takes effort to convert existing websites, though, and some sites aren’t maintained, so Flash won’t disappear from the Web in the next few decades.

When it’s minor or abandoned sites, it doesn’t matter so much, but even the Library of Congress has this problem. Its National Jukebox currently requires a browser with Flash enabled to be useful. Turning on Flash for reliable sites such as the Library of Congress should be safe, at least as long as those sites don’t include third-party ads from dubious sources. Not everyone has that option, though. If you’re using iOS, you’re stuck.

I came across the National Jukebox while doing research for my book project Yesterday’s Songs Transformed, and it’s frustrating that I can’t currently use it without taking steps which I’d rather avoid. The good news is that this is a temporary situation and work is already underway to eliminate the Flash dependency. David Sager of the National Jukebox Team replied to my email inquiry:

“Mad” file formats

There really are some file formats that have “mad” in their name or extension. Some people may find this blog when trying to learn about them, so I suppose I should provide some information about them.