Monthly Archives: June 2014

OOXML: The good and the bad

An article by Markus Feilner presents a very critical view of Microsoft’s Office Open XML (OOXML) as it currently stands. There are three versions of OOXML: ECMA, Transitional, and Strict. All of them use the same file extensions, and there’s no easy way for the casual user to tell which variant a document is in. If a Word document is created on one computer in the Strict format, then edited on another machine with an older version of Word, it may be silently downgraded to Transitional, with a resulting loss of metadata or other features.
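
For the curious, here’s a rough Java sketch (my own illustration, not anything from the article) of one way to peek inside a .docx and guess which variant it is. It assumes, as I read ISO 29500, that Strict WordprocessingML uses the purl.oclc.org namespace while Transitional and the original ECMA-376 edition keep the 2006 schemas.openxmlformats.org namespace, and that the document root may carry a conformance attribute; treat it as a rough check, not a validator.

    import java.io.InputStream;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    import javax.xml.parsers.DocumentBuilderFactory;

    import org.w3c.dom.Element;

    /** Rough, hypothetical check of whether a .docx is Strict or Transitional OOXML. */
    public class OoxmlVariantCheck {

        // Namespace I understand ISO 29500 Strict WordprocessingML to use;
        // Transitional and ECMA-376 keep the schemas.openxmlformats.org namespace.
        private static final String STRICT_NS =
                "http://purl.oclc.org/ooxml/wordprocessingml/main";

        public static String variantOf(String docxPath) throws Exception {
            try (ZipFile zip = new ZipFile(docxPath)) {
                ZipEntry entry = zip.getEntry("word/document.xml");
                if (entry == null) {
                    return "not a WordprocessingML package";
                }
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                dbf.setNamespaceAware(true);
                try (InputStream in = zip.getInputStream(entry)) {
                    Element root = dbf.newDocumentBuilder().parse(in).getDocumentElement();
                    if (STRICT_NS.equals(root.getNamespaceURI())) {
                        return "Strict";
                    }
                    // A conformance attribute on the root is another hint; its
                    // default value, when absent, is "transitional".
                    String conf = root.getAttributeNS(root.getNamespaceURI(), "conformance");
                    if (conf == null || conf.isEmpty()) {
                        conf = root.getAttribute("conformance");
                    }
                    return conf.isEmpty() ? "Transitional (or ECMA-376)" : conf;
                }
            }
        }

        public static void main(String[] args) throws Exception {
            System.out.println(variantOf(args[0]));
        }
    }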

On the positive side, Microsoft has released the Open XML SDK as open source on GitHub. This is at least a partial answer to Feilner’s complaint that “there are no free and open source solutions that fully support OOXML.”

Incidentally, I continue to hate Microsoft’s use of the deliberately confusing term “Open XML” for OOXML.

Thanks to @willpdp for tweeting the links referenced here.

Library of Congress format recommendations

The Library of Congress has issued a set of recommendations for formats for both physical and digital documents. The LoC’s digital preservation blog has an interview with Ted Westervelt of the LoC on their development. They’re not just for the library’s own staff, he explains, but for “all stakeholders in the creative process.”

The guidelines repeatedly state: “Files must contain no measures that control access to or use of the digital work (such as digital rights management or encryption).” That’s pushback that can’t be ignored. In some cases, though, the message is mixed. For theatrically released films, standard or recordable Blu-ray is accepted, yet the boilerplate against DRM is still included. I don’t know where they expect to get DRM-free Blu-ray discs; when it comes to big-name movies, such options are scarce.

It’s also interesting that software, specifically games and learning materials, is included. This has been a growing area of interest in recent years. Rather than relying on emulation, the recommendations call for source code, documentation, and a specification of the exact compiler used to build the application.

There’s material here to fuel constructive debate and expansion for years.

Update on JHOVE

I’ve updated the UTF-8 module in the JHOVE source on GitHub to include the new code blocks for Unicode 7.0.0. I’ve also recently fixed the pom.xml file so it puts both the command-line and GUI JAR files into the local repository.

I need more input before I’m comfortable creating a 1.12 release of JHOVE. I don’t have any prior experience with creating a public, open-source project that’s built with Maven, and I don’t know how much of the baggage of the SourceForge project really needs to be kept. There are some specialty JARs in the old project, but I don’t know if anyone uses them. Most importantly, there still needs to be a distribution in Zip and Tar formats. New features would be interesting, but the first thing is to make a JHOVE that is as useful as it was before.

Comments, suggestions, and code contributions are welcome, as always.

New blocks in Unicode 7

Unicode 7.0.0 has been released, with 2,834 new character codes. It’s been fascinating looking into some of the blocks that have been added; here’s a sampling.

Bassa Vah is a really obscure script from what is now Liberia, possibly predating the country. Old Permic is supposed to be a close relative of Cyrillic, but any visual resemblance is lost on me.

Some of the writing systems came from a religious impulse. Mende Kikakui was devised by an Islamic scholar and was once widely used for the Mende language in Africa; it’s since been mostly displaced by the Latin alphabet. Shong Lue Yang introduced the Pahawh Hmong writing system for the Hmong language in Southeast Asia, claiming to have received it from God. Pau Cin Hau, named after its creator, was a 20th-century system used for religious writings in Burma. Its original version had over a thousand characters, but the Unicode block is based on the 57-character alphabetic system. The Manichaean alphabet is fascinating just because of its name, recalling the conflicts in early Christianity. According to tradition, Mani, the founder of Manichaeism, created the alphabet.

Finally, one of the oldest writing systems in the world, Linear A, is new in Unicode 7. It’s from ancient Crete, and no one knows how to read its texts. Now you can create computer documents in it, if you’re a scholar of old languages or just like confusing people.

Still no Klingon, though.

Now the JHOVE UTF-8 module needs to be updated for all these new blocks.
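
For anyone wondering what that update amounts to, here’s a minimal sketch of the kind of range table involved. The class is hypothetical, not JHOVE’s actual code, and the ranges are my reading of the Unicode 7.0.0 charts; check them against the official Blocks.txt before relying on them.

    /** Hypothetical sketch of a Unicode 7.0.0 block lookup; not JHOVE's actual code. */
    public class NewUnicode7Blocks {

        // A few of the blocks new in Unicode 7.0.0 (ranges per the 7.0.0 charts;
        // verify against the official Blocks.txt).
        private static final int[][] RANGES = {
            {0x10600, 0x1077F}, // Linear A
            {0x16AD0, 0x16AFF}, // Bassa Vah
            {0x16B00, 0x16B8F}, // Pahawh Hmong
            {0x1E800, 0x1E8DF}, // Mende Kikakui
        };
        private static final String[] NAMES = {
            "Linear A", "Bassa Vah", "Pahawh Hmong", "Mende Kikakui"
        };

        /** Returns the block name for a code point, or null if it isn't in the table. */
        public static String blockOf(int codePoint) {
            for (int i = 0; i < RANGES.length; i++) {
                if (codePoint >= RANGES[i][0] && codePoint <= RANGES[i][1]) {
                    return NAMES[i];
                }
            }
            return null;
        }

        public static void main(String[] args) {
            System.out.println(blockOf(0x10600)); // Linear A
        }
    }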

The uses and abuses of PDF

PDF is a versatile format, but that doesn’t mean it should be used for everything. It’s a visual presentation format above all else. It lets you define a document with a specific appearance, with capabilities such as form filling and text searching. It’s not very good if you want a document that adapts to different device capabilities. If you need an editable format or a way to deliver structured data, there are much better alternatives.

When the Malaysian government released satellite data from the communications of the missing Flight 370, it delivered a PDF file. It looks very nice, but anyone who wants to extract and analyze the data has to do a lot of extra work. A spreadsheet or a structured text format such as CSV would have been the right choice.

PDF can be used for e-books, but it’s not ideal. If you create normal-sized pages, they’ll either look tiny on a phone or require a lot of scrolling. Formats such as EPUB adapt better to a range of screen sizes.

Delivering a text document as a PDF loses much of its value when the PDF is a scanned image rather than actual text. It can’t be searched, and people with visual disabilities can’t use text-to-speech on it. My condo association delivers its newsletters as scanned-image PDFs. When I pointed out these problems at an owners’ meeting, I was told that the owners weren’t sophisticated enough to take advantage of those benefits. Our complex is a big one, and I’d be surprised if there weren’t at least a few residents who use text-to-speech when they can. It’s not particularly hard to generate PDF files directly; scanning a finished document into a PDF seems like the hard way.
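
If you want to check programmatically whether a PDF falls into that category, one rough test is whether it yields any extractable text at all. Here’s a sketch using Apache PDFBox; the heuristic and the threshold are mine, not any standard test, and a scan with an OCR text layer added will pass it.

    import java.io.File;
    import java.io.IOException;

    import org.apache.pdfbox.pdmodel.PDDocument;
    import org.apache.pdfbox.text.PDFTextStripper;

    /** Crude check for scanned-image PDFs: does the file yield any extractable text? */
    public class ScannedPdfCheck {

        public static boolean looksScanned(File pdf) throws IOException {
            try (PDDocument doc = PDDocument.load(pdf)) {
                String text = new PDFTextStripper().getText(doc);
                // If stripping yields (almost) nothing, the pages are probably
                // just images, so the document can't be searched or read aloud.
                return text.trim().length() < 20; // arbitrary threshold
            }
        }

        public static void main(String[] args) throws IOException {
            File f = new File(args[0]);
            System.out.println(f + (looksScanned(f)
                    ? " looks like a scanned image" : " contains text"));
        }
    }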

To maximize the usefulness of assistive technologies, create PDF/A if possible, ideally with level A conformance (e.g., PDF/A-1a), which requires a tagged logical structure. It produces a slightly larger file, but it’s organized in a way that makes extraction of content easier and eliminates dependencies you might not have thought of.

Redacting PDFs is another tricky issue. If you simply black out an area, that’s the equivalent of gluing a piece of paper over it, and no harder to defeat. For advice on properly redacting documents, who better to turn to than the NSA? They may be a gang of criminals within the government, but they certainly know how to redact. Their guidance is from 2006, though, so some of its advice may be out of date.

There are lots of things you can do with PDF, but use it intelligently and where it’s appropriate.