Tag Archives: preservation

JHOVE 1.8 beta

A beta version of JHOVE 1.8 is now available for testing. Please report any problems. New stuff:

  • If JHOVE doesn’t find a configuration file, it creates a default one.
  • Generics widely added to clean up the code.
  • Several errors in checking for PDF/A compliance were corrected. Aside from
    fixing some outright bugs, the Contents key for non-text Annotations is
    no longer checked, as its presence is only recommended and not required.
  • Improved code by Hökan Svenson is now used for finding the trailer.
  • TIFF tag 700 (XMP) now accepts field type 7 (UNDEFINED) as well as 1
    (BYTE), on the basis of Adobe’s XMP spec, part 3 (see the sketch after
    this list).
  • If compression scheme 6 is used in a file, an InfoMessage will report
    that the file uses deprecated compression.
  • In WAVE files the Originator Reference property, found in the Broadcast Wave Extension
    (BEXT) chunk, is now reported.
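
For the TIFF tag 700 change, here’s a minimal sketch of the relaxed type check, with made-up names; it illustrates the rule, not JHOVE’s actual code:

    /** Illustrative only: the kind of check the change above implies. */
    public class XmpTagCheck {
        static final int TAG_XMP = 700;       // TIFF tag holding the XMP packet
        static final int TYPE_BYTE = 1;       // TIFF field type BYTE
        static final int TYPE_UNDEFINED = 7;  // TIFF field type UNDEFINED

        /** Tag 700 may be typed as BYTE or UNDEFINED (XMP spec, part 3). */
        static boolean acceptableXmpType(int fieldType) {
            return fieldType == TYPE_BYTE || fieldType == TYPE_UNDEFINED;
        }

        public static void main(String[] args) {
            System.out.println(acceptableXmpType(1));  // true
            System.out.println(acceptableXmpType(7));  // true
            System.out.println(acceptableXmpType(2));  // false: ASCII isn't a valid type here
        }
    }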

PDF/A-3

The latest version of PDF/A, a subset of PDF suitable for long-term archiving, is now available as ISO standard 19005-3:2012. According to the PDF/A Association Newsletter, “there is only one new feature with PDF/A-3, namely that any source format can be embedded in a PDF/A file.”

This strikes me as a really bad idea. The whole point of PDF/A is to restrict content to a known, self-contained set of options. The new version provides a back door that allows literally anything. The intent, according to the article, is to let archivists save documents in their original format as well as their PDF representation. Certainly saving the originals is a good archiving practice, but it should be done in an archival package, not in a PDF format designed for archiving.

Mission creep afflicts projects of all kinds, and this is a case in point.

A field guide to “plain text”

In some ways, plain text is the best preservation format. It’s simple and easily identified. It’s resilient when damaged; if a file is half corrupted, the other half is still readable. There’s just one little problem: What exactly is plain text?

ASCII is OK for English, if you don’t have any accented letters, typographic quotes, or fancy punctuation. It doesn’t work very well for any other language. It even has problems outside the US, such as the lack of a pound sterling symbol; there’s a reason some people prefer the name US-ASCII. You’ll often find that supposed “ASCII” text has characters outside the 7-bit range, just enough of them to throw you off. Once this happens, it can be very hard to tell what encoding you’ve got.
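
A quick way to catch those stray bytes is to scan for anything with the high bit set. A minimal sketch in Java (the class name is just for illustration):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    /** Reports the first byte that falls outside the 7-bit ASCII range. Sketch only. */
    public class AsciiCheck {
        public static void main(String[] args) throws IOException {
            byte[] data = Files.readAllBytes(Paths.get(args[0]));
            for (int i = 0; i < data.length; i++) {
                if ((data[i] & 0x80) != 0) {   // high bit set: not 7-bit ASCII
                    System.out.printf("Non-ASCII byte 0x%02X at offset %d%n",
                                      data[i] & 0xFF, i);
                    return;
                }
            }
            System.out.println("Every byte is 7-bit ASCII.");
        }
    }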

Even if text looks like ASCII and doesn’t have any high bits set, it could be one of the other encodings of the ISO 646 family. These haven’t been used much since ISO 8859 came out in the late eighties, but you can still run into old text documents that use them. Since all the members of the family are seven-bit codes and differ from ASCII in just a few characters, it’s easy to mistake, say, a French ISO 646 file for ASCII and turn all the accented e’s into curly braces. (I won’t get into prehistoric codes like EBCDIC, which at least can’t be mistaken for anything else.)

The ISO 8859 encodings have the same problem, pushed to the 8-bit level. If you’re in the US or western Europe and come upon 8-bit text which doesn’t work as UTF-8, you’re likely to assume it’s ISO 8859-1, aka Latin-1. There are, however, over a dozen variants of 8859. Some are very different in the codes above 127, but some have only a few differences; ISO 8859-9 (Latin-5 or “Turkish Latin-1”) and ISO 8859-15 (Latin-9), for instance, are very close to Latin-1. Microsoft added to the confusion with the Windows-1252 encoding, which assigns printing characters to code points that are control codes in Latin-1. It used to be common to claim that 1252 was an ANSI standard, even though it never was.
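
The Latin-1/Windows-1252 confusion is easy to demonstrate: the bytes Windows-1252 uses for curly quotes land in Latin-1’s control range. A small sketch using the standard JDK charsets:

    import java.nio.charset.Charset;

    /** Decodes the same bytes as Windows-1252 and as Latin-1. Sketch only. */
    public class CodePageDemo {
        public static void main(String[] args) {
            // 0x93 and 0x94 are curly double quotes in Windows-1252,
            // but C1 control codes in ISO 8859-1.
            byte[] bytes = { (byte) 0x93, (byte) 'H', (byte) 'i', (byte) 0x94 };
            System.out.println(new String(bytes, Charset.forName("windows-1252"))); // prints curly-quoted "Hi"
            System.out.println(new String(bytes, Charset.forName("ISO-8859-1")));   // control codes, probably invisible
        }
    }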

UTF-8, even without a byte order mark (BOM), has a good chance of being recognized without a lot of false positives; if a text file has characters with the high bit set and an attempt to decode it as UTF-8 doesn’t result in errors, it most likely is UTF-8. (I’m not discussing UTF-16 and 32 here because they don’t look at all ASCII-like.) Even so, some ISO 8859 files can look like good UTF-8 and vice versa.
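
That heuristic (decode as UTF-8 and check for errors) translates directly into code. A minimal sketch:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.charset.CharacterCodingException;
    import java.nio.charset.CharsetDecoder;
    import java.nio.charset.CodingErrorAction;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    /** Heuristic check: does a file decode cleanly as UTF-8? Sketch only. */
    public class Utf8Check {
        public static void main(String[] args) throws IOException {
            byte[] data = Files.readAllBytes(Paths.get(args[0]));
            CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                    .onMalformedInput(CodingErrorAction.REPORT)
                    .onUnmappableCharacter(CodingErrorAction.REPORT);
            try {
                decoder.decode(ByteBuffer.wrap(data));
                System.out.println("Decodes cleanly as UTF-8 (probably UTF-8).");
            } catch (CharacterCodingException e) {
                System.out.println("Not valid UTF-8; consider ISO 8859-x or Windows-1252.");
            }
        }
    }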

So plain text is really simple — or maybe not.

Unicode

Words: Gary McGath, Copyright 2003
Music: Shel Silverstein, “The Unicorn”

A long time ago, on the old machines,
There were more kinds of characters than you’ve ever seen.
Nobody could tell just which set they had to load,
They wished that somehow they could have one kind of code.

   There was US-ASCII, simplified Chinese,
   Arabic and Hebrew and Vietnamese,
   And Latin-1 and Latin-2, but don’t feel snowed;
   We’ll put them all together into Unicode.

The users saw this Babel and it made them blue,
So a big consortium said, “This is what we’ll do:
We will take this pile of sets and give each one its place,
Using sixteen bits or thirty-two, we’ve lots of space

   For the US-ASCII, simplified Chinese,
   Arabic and Hebrew and Vietnamese,
   And Latin-1 and Latin-2, we’ll let them load
   In a big set of characters called Unicode.

The Klingons arrived when they heard the call,
And they saw the sets of characters, both big and small.
They said to the consortium, “Here’s what we want:
Just a little bit of space for the Klingon font.”

   “You’ve got US-ASCII, simplified Chinese,
   Arabic and Hebrew and Vietnamese,
   And Latin-1 and Latin-2, but we’ll explode
   You if you don’t put Klingon characters in Unicode.”

The Unicode Consortium just shook their heads,
Though the looks that they were getting caused a sense of dread.
“The set that we’ve assembled is for use on Earth,
And a foreign planet is the Klingons’ place of birth.”

   We’ve got US-ASCII, simplified Chinese,
   Arabic and Hebrew and Vietnamese,
   And Latin-1 and Latin-2, but you can’t goad
   Us into putting Klingon characters in Unicode.

The Klingons grew as angry as a minotaur;
They went back to their spaceship and declared a war.
Three hundred years ago this happened, but they say
That’s why the Klingons still despise the Earth today.

   We’ve got US-ASCII, simplified Chinese,
   Tellarite and Vulcan and Vietnamese,
   And Latin-1 and Latin-2, but we’ll be blowed
   If we’ll put the Klingon language into Unicode.

Preservation in the geek mainstream

Digital preservation issues are gaining notice in the geek mainstream, the large body of people who are computer-savvy but don’t live in the library-archive niche. Today we have an article in The Register, “British library tracks rise and fall of file formats.” It cites the British Library’s Andy Jackson, supporting the view that file formats remain usable for many years, even if they’re no longer the latest thing.

The Register article is short but nicely done. It naturally skips over issues which Andy’s original article deals with, like just how you reliably determine the formats of files. What’s significant is that it shows that concern about the long-term usability of files isn’t limited to a few specialists.

The URI namespace problem

Tying XML schemas to URIs was the worst mistake in the history of XML. Once you publish a schema URI and people start using it, you can’t change it without major disruption.

URIs aren’t permanent. Domains can disappear or change hands. Even subdomains can vanish with organizational changes. When I was at Harvard, I repeatedly reminded people that hul.harvard.edu couldn’t go away just because the name “Harvard University Library/Libraries” had been deprecated, since it houses schemas for JHOVE and other applications. Time will tell whether it stays.

Strictly speaking, a URI is a Uniform Resource Identifier and has no obligation to correspond to a web page; the W3C says a URI used as a schema identifier is only a name. In practice, though, treating it as a URL may be the only way to locate the XSD. When a URI uses the http scheme, it’s an invitation to use it as a URL.

Even if a domain doesn’t go away, it can be burdened with schema requests beyond its hosting capacity. The Harvard Library has been trying to get people to upgrade to the current version of JHOVE, which uses an entity resolver, but its server was, the last I checked, still heavily hit by three sites that hadn’t upgraded. Those requests don’t bring in any money, so there’s nothing to put into more server capacity.

The best solution available is for software to resolve schema names to local copies (e.g. with Java’s EntityResolver). This solution often doesn’t occur to people until there’s a problem, though, and by then there may be lots of copies of the old software out in the field.
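
A minimal sketch of that approach with Java’s SAX EntityResolver; the schema URI and the bundled resource path here are placeholders, not any real schema’s location:

    import java.io.InputStream;
    import org.xml.sax.EntityResolver;
    import org.xml.sax.InputSource;

    /** Resolves a known schema URI to a copy bundled with the application
        instead of fetching it over the network. Placeholder names throughout. */
    public class LocalSchemaResolver implements EntityResolver {
        private static final String SCHEMA_URI =
                "http://example.org/schemas/myschema.xsd";   // placeholder

        @Override
        public InputSource resolveEntity(String publicId, String systemId) {
            if (SCHEMA_URI.equals(systemId)) {
                InputStream local =
                        getClass().getResourceAsStream("/schemas/myschema.xsd");
                if (local != null) {
                    return new InputSource(local);   // serve the local copy
                }
            }
            return null;   // fall back to normal (network) resolution
        }
    }

You’d install it with setEntityResolver on the SAX XMLReader (or the parser’s equivalent hook) before parsing, and requests for that schema never leave the machine.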

For archival storage, keeping a copy of any needed schema files should be a requirement. Resources inevitably disappear from the Web, and schemas are no exception. My impression is that a lot of digital archives don’t have such a rule and blithely assume that the resources will be available on the Web forever. That’s a risk which could be eliminated at virtually zero cost, and it should be.

It’s legitimate to stop making a URI usable as a URL, though it may be rude. W3C’s Namespaces in XML 1.0 says: “The namespace name, to serve its intended purpose, SHOULD have the characteristics of uniqueness and persistence. It is not a goal that it be directly usable for retrieval of a schema (if any exists).” (Emphasis added) That implies that any correct application really should do its own URI resolution.

One thing that isn’t legitimate, but I’ve occasionally seen, is replacing a schema with a new and incompatible version under the same URI. That can cause serious trouble for files that use the old schema. A new version of a schema needs to have a new URI.

The schema situation creates problems for hosting sites, applications, and archives. It’s vital to remember that you can’t count on the URI’s being a valid URL in the long term.

If you’ve got one of those old versions of JHOVE (1.5 and older, I think), please upgrade. The new versions are a lot less buggy anyway.

Spruce Awards: signal boost and self-promotion

Applications for SPRUCE Awards are now open.

SPRUCE will make awards of up to £5k available for further developing the practical digital preservation outcomes and/or development of digital preservation business cases, that were begun in SPRUCE events. Applications from others may also be considered, but in this case, please discuss your proposal with SPRUCE before submission. A total fund of £60k is available for making these awards, which will be allocated in a series of funding calls throughout the life of the SPRUCE Project.

The current (open) call is primarily for attendees of the SPRUCE Mashup London.

Applications must be submitted by 5 PM (GMT, I suppose) on October 10, 2012.

The self-promotion part: Awards are made to teams affiliated with institutions, but they are permitted to use outside help, since in-house developers may already be fully committed. As an independent developer with expertise in file formats and digital preservation, I’d like it known that I’m available to contract for carrying out a SPRUCE project. My business home page describes my background and skills. Paul Wheatley has told me this is a possibility, so I’m not just coming out of the blue with this offer.

My schedule may change, of course, but if you contact me on a project I’ll keep you updated on my status, and I’ll follow through in full on any commitment I make.

Format conformity

By design JHOVE measures strict conformity to file format specifications. I’ve never been convinced this is the best way to measure a file’s viability or even correctness, but it’s what JHOVE does, and I’d just create confusion if I changed it now.

In general, the published specification is the best measure of a file’s correctness, but there are clearly exceptions, and correctness isn’t the same as viability for preservation. Let’s look at the rather extreme case of TIFF.

The current official specification of TIFF is Revision 6.0, dated June 3, 1992. The format hasn’t changed a byte in over 20 years — except that it has.

The specification says about value offsets in IFDs: “The Value is expected to begin on a word boundary; the corresponding Value Offset will thus be an even number.” This is a dead letter today. Much TIFF generation software freely writes values on any byte boundary, and just about all currently used readers accept them. JHOVE initially didn’t accept files with odd byte alignment as well-formed, but after numerous complaints it added a configuration option to allow them.
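
For a sense of what’s involved, here’s a minimal sketch that walks a TIFF’s first IFD and flags value offsets that land on odd bytes; it’s illustrative only, not JHOVE’s code, and it skips error handling and further IFDs:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    /** Flags odd value offsets in a TIFF's first IFD. Illustrative sketch only. */
    public class TiffOffsetCheck {
        // Byte sizes of TIFF field types 1..12 (BYTE..DOUBLE); index 0 unused.
        private static final int[] TYPE_SIZE = {0, 1, 1, 2, 4, 8, 1, 1, 2, 4, 8, 4, 8};

        public static void main(String[] args) throws IOException {
            ByteBuffer buf = ByteBuffer.wrap(Files.readAllBytes(Paths.get(args[0])));
            buf.order(buf.get(0) == 'I' ? ByteOrder.LITTLE_ENDIAN
                                        : ByteOrder.BIG_ENDIAN);   // "II" or "MM"
            long ifdOffset = buf.getInt(4) & 0xFFFFFFFFL;          // offset of first IFD
            int entryCount = buf.getShort((int) ifdOffset) & 0xFFFF;
            for (int i = 0; i < entryCount; i++) {
                int entry = (int) ifdOffset + 2 + 12 * i;          // 12-byte IFD entry
                int tag = buf.getShort(entry) & 0xFFFF;
                int type = buf.getShort(entry + 2) & 0xFFFF;
                long count = buf.getInt(entry + 4) & 0xFFFFFFFFL;
                if (type < 1 || type > 12) {
                    continue;                       // unknown field type, skip
                }
                long byteLength = TYPE_SIZE[type] * count;
                if (byteLength > 4) {               // value stored outside the entry
                    long valueOffset = buf.getInt(entry + 8) & 0xFFFFFFFFL;
                    if (valueOffset % 2 != 0) {
                        System.out.printf("Tag %d: odd value offset %d%n",
                                          tag, valueOffset);
                    }
                }
            }
        }
    }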

Over the years a body of apocrypha has grown up around TIFF. Some of it comes from Adobe, some doesn’t. The Adobe documents’ titles don’t clearly mark them as revisions to TIFF, but that’s what they are. The “Adobe PageMaker® 6.0 TIFF Technical Notes,” September 14, 1995, define the important concept of the SubIFD, among other changes. The “Adobe Photoshop® TIFF Technical Notes,” March 22, 2002, define new tags and forms of compression. The “Adobe Photoshop® TIFF Technical Note 3,” April 8, 2005, adds new floating point types. That last one isn’t available, as far as I can tell, on Adobe’s own website, but it’s canonical.

Then there’s material that circulated without official Adobe approval. The JPEG compression defined in the 2002 tech notes, for instance, is an official acceptance of a 1995 draft note that had already gained wide adoption.

What’s the best measure of a TIFF file? That it corresponds strictly to TIFF 6.0? To 6.0 plus a scattered set of tech notes? Or that it’s processed correctly by LibTiff, a freely available and very widely used C library? To answer the question, we have to specify: Best for what? If we’re talking about the best chance of preservation, what scenarios are we envisioning?

One scenario amounts to a desert-island situation in which you have a specification, some files that you need to render, and a computer. You don’t have any software to go by. In this case, conformity to the spec is what you need, but it’s a rather unlikely scenario. If all existing TIFF readers disappear, things have probably gone so far that no one will be motivated to write a new one.

It’s more likely that people a few decades in the future will scramble to find software or entire old computers that will read obsolete formats. This doesn’t necessarily mean today’s software, but what we can read today can be a pretty good guide to what will be readable in the future. Insisting on conformity to the spec may be erring on the safe side, but if it excludes a large body of valuable files, it’s not a good choice.

Rather than insisting solely on conformity to a published standard, judging whether a file is preservation-worthy means balancing two risks: accepting files that will cause reading problems down the road, and rejecting files that won’t. Multiple factors come into consideration, of which the spec is just one.

Defining the file format registry problem

My previous post on format registries, which started out as a lament on the incomplete state of UDFR, resulted in an excellent discussion. Along the way I came upon Chris Rusbridge’s post pointing out that finding a solution doesn’t do much good if you don’t know what problem you’re trying to solve. This links to a post by Paul Wheatley on the same subject. Paul links back to this blog, nicely closing the circle.

So what are we trying to do? A really complete digital format registry sounds like a great idea, but what practical problem is it trying to solve? We know it’s got something to do with digital preservation. If we have a file, we need to know what format it’s in and what we can do about it. If it’s in a well-known format such as PDF or TIFF, there’s no real problem; it’s easy enough to find out all you need to know. It’s the obscure formats that need one-stop documentation. If you find a file called “importantdata.zxcv” and a simple dump doesn’t make sense of it, you need to know where to look. You need answers to questions like: “What format is it in?” “What category of information does it contain?” “How do I extract information from this file?” “How do I convert it with as little loss as possible into a better supported format?”

I have a 1994 O’Reilly book called Encyclopedia of Graphics File Formats. If old formats are a concern of yours, I seriously suggest searching for a copy. (Update: It turns out the book is available on fileformat.info!) It covers about a hundred different formats, generally in enough detail to give you a good start at implementing a reader. There are names which are still familiar: TIFF, GIF, JPEG. Many others aren’t even memories except to a few people. DKB? FaceSaver?

With some formats the authors just admit defeat in getting information. The case of Harvard Graphics (apparently no connection to Harvard University) is particularly telling. The book tells us:

Software Publishing, the originator of the Harvard Graphics format, considers this format to be proprietary. Although we wish this were not the case, we can hardly use our standard argument — that documenting and publicizing file formats make sales by seeding the aftermarket. Harvard Graphics has been the top, or one of the top, sellers in the crowded and cutthroat MS-DOS business graphics market, and has remained so despite the lack of cooperation of Software Publishing with external developers.

While we would be happy to provide information about the format if it were available, we have failed to find any during our research for this book, so it appears that Software Publishing has so far been successful in their efforts to restrict information flow from their organization.

This was once a widely used format, so if you’re handed an archive to turn into a useful form, you might get a Harvard Graphics file. How do you recognize it as one? That isn’t obvious. A little searching reveals you can still get a free viewer for older versions of Windows, but nothing is mentioned about converting it to other formats. Even knowing there’s software available isn’t helpful till you can determine that a file is Harvard Graphics.

If you have a file — it’s Harvard Graphics, but you don’t know that — what do you want from a registry? First, you want a clue about how to recognize it. An extension or a signature, perhaps. When you get that, you want to know what kind of data the file might hold: In this case, it’s presentation graphics. Then you want to know how to rescue the data. Knowing that the viewer exists would be a start. Knowing that technical information isn’t available (if that’s still true) would save fruitless searching.

Information like this is scattered and dynamic. If the Harvard Graphics spec isn’t publicly available now, it’s still possible for its proprietors to relent and publish it. The notion of one central source of wisdom on formats is an impossibility. What’s needed is a way to find the expertise, not to compile it all in one place.

We need to concentrate not on a centralized stockpile of information but on a common language for talking about formats. PRONOM uses one ontology. UDFR uses another. DBPedia doesn’t have an applicable standard. What I envision is any number of local repositories of formats, all capable of delivering information in the same way. The ones from the big institutions would carry the most trust, and they’d often share each other’s information. Specialists would fill in the gaps by telling us about obscure formats like uRay and Inset PIX, or they’d provide updates about JPEG2000 and EPub more regularly than the big generalists can. The job of the big institutions would be to standardize the language so we aren’t overwhelmed by heterogeneous data.

Let’s look again at those questions I mentioned, as they could apply to this scenario.

What format is it in? The common language needs a way to ask this question. Given a file extension, or the hex representation of the first four bytes of the file, you’d like a candidate format, and there might be more than one. You’d like to be able to search across a set of repositories for possible answers.
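
As a sketch of what a client-side query might start from (the lookup table here is a tiny hypothetical stand-in for an actual registry):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Map;

    /** Builds a hex signature from a file's first four bytes and looks up
        candidate formats in a tiny, hypothetical table. Sketch only. */
    public class SignatureLookup {
        private static final Map<String, String> CANDIDATES = Map.of(
                "25504446", "PDF",
                "49492A00", "TIFF (little-endian)",
                "4D4D002A", "TIFF (big-endian)",
                "47494638", "GIF");

        public static void main(String[] args) throws IOException {
            byte[] data = Files.readAllBytes(Paths.get(args[0]));
            StringBuilder hex = new StringBuilder();
            for (int i = 0; i < 4 && i < data.length; i++) {
                hex.append(String.format("%02X", data[i] & 0xFF));
            }
            String ext = args[0].contains(".")
                    ? args[0].substring(args[0].lastIndexOf('.') + 1) : "";
            System.out.println("Extension: " + ext + ", signature: " + hex);
            System.out.println("Candidate: "
                    + CANDIDATES.getOrDefault(hex.toString(), "unknown; query a registry"));
        }
    }

From there, the extension and signature would go out as a query to whatever repositories speak the common language, rather than to a single central registry.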

What category of information does it contain? When you get an answer about the format, it should tell you briefly what it’s for. If you got multiple answers in your first query, this might help to narrow it down.

How do I extract information? Now you want to get some amount of information, maybe just enough to tell you whether it’s worth pursuing the task or not. The registry will hopefully give you information on the technical spec or on available tools.

How do I convert it? When you decide that the file has valuable information but isn’t sufficiently accessible as it stands, you need to look for conversion software. A central registry has to be cautious about what it recommends. A plurality of voices can offer more options (and, to be sure, more risk).

This vision is what I’d like to call ODFR — the Open Digital Format Registry — even though it wouldn’t be a single registry at all.

OAIS reference model

The OAIS reference model is a central piece of digital preservation. A new version (PDF), identified as CCSDS 650.0-M-2, has been released. It’s dated June 2012 but seems to have been publicly available for only a short time. Most people who know about OAIS know about SIPs, AIPs, and DIPs and not much more, and I’m pretty much among the unlearned masses here, so I’ll just refer you to Barbara Sierman’s article, OAIS 2012 update, which has a summary of the important changes.

The state of file format registries

Looking through UDFR is like walking through a ghost town that still shows many signs of its former promise. The UDFR Final Report (PDF) helps to explain this; it’s a very sad story of a brilliant idea that encountered tons of problems with deadlines and staffing. What’s there is hard to use and, as far as I can tell, isn’t getting used much. I don’t see any signs of recent updates.

The website is challenging for the inexperienced user, but this wouldn’t matter so much if it exposed its raw information so developers could write front ends for specific needs. Chris Prom wrote that “it is a great day for practical approaches to electronic records because all kinds of useful tools and services can and will be developed from the UDFR knowledge base.” But I just can’t see how. I wrote to Stephen Abrams a while back about problems I was encountering (including my inability to log in with Firefox — I’ve since found I can log in with Safari), and his reply gave the sense that the project team had exhausted its resources and funding just in putting the repository up on the Web.

The source code is supposed to be on GitHub, but all I see there is four projects: three are forks of third-party code, and the fourth is just some OWL ontology files.

If it were possible to access the raw data by RESTful URLs, even that would be something. So far I haven’t found a way to do that.

In fairness, I have to admit I was part of the failure of UDFR’s predecessor GDFR. The scope of the project was too ambitious, and communication between the Harvard and OCLC developers was a problem.

The most successful format registry out there is PRONOM. Programmatic access to its data is provided through DROID. GDFR and UDFR, with “global” and “unified” in their names, both grew from a desire for a registry that everyone could participate in. PRONOM accepts contributions, but it’s owned by the UK National Archives, which bothers some people; still, it’s the most useful registry there is. The PRONOM site itself expresses the hope that UDFR “will support the requirements of a larger digital preservation community,” and it would still be great if that could happen.

Occasionally people have discussed the idea of an open wiki for file format information. This would allow more free-form updates than the registries, and if it were combined with the concept of a semantic wiki, it could also be a source of formalized data. I’m inclined to believe that’s the best way to implement an open repository.