Category Archives: Tutorial

File identification tools, part 3: DROID and PRONOM

The last installment in this series looked at file, a simple command line tool available with Linux and Unix systems for determining file types. This one looks at DROID (Digital Record Object IDentification), a Java-based tool from the UK National Archives, focused on identifying and verifying files for the digital repositories of libraries and archives. It’s available as open source software under the New BSD License. Java 7 or 8 is needed for the current release (6.1.5). It relies on PRONOM, the National Archives’ registry of file format information.

Like file, DROID depends on signature files that describe distinctive data values for each format. It’s designed to process large batches of files, and it compiles reports in a much more useful way than file’s output. Reports can include total file counts and sizes broken down by various criteria.

To install DROID, you have to download and expand the ZIP file for the latest version. On Windows, you run droid.bat; on sensible operating systems, run droid.sh. You may first have to make it executable:

chmod +x droid.sh
./droid.sh

Running droid.sh with no arguments launches the GUI application. If there are any command line arguments, it runs as a command line tool. You can type

./droid.sh --help

to see all the options.
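For example, here’s what a simple batch run might look like. The flags below are the ones documented in DROID 6’s command line help, but check --help for your version; the folder and file names are placeholders:

# Scan a folder recursively into a profile, then export
# the results to a CSV report, one row per file.
./droid.sh -R -a /data/ingest -p ingest.droid
./droid.sh -p ingest.droid -e ingest-report.csv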

The first time you run it as a GUI application, it may ask if you want to download some signature file updates from PRONOM. Let it do that.

It’s also possible to use DROID as a Java library in another application. FITS, for example, does this. There isn’t much documentation to help you, but if you’re really determined to try, look at the FITS source code for an example.

DROID will report file types by extension if it can’t find a matching signature. This isn’t a very reliable way to identify a file, so you should examine any files matched only by extension to see what they really are and whether they’re broken. DROID may also report more than one matching signature; this is very common when a file matches the signatures of more than one version of a format.
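If you’ve exported your results to CSV as in the example above, a crude but quick way to pull out the extension-only matches for review is to search for the identification method (the exact column layout varies between DROID versions, so treat this as a sketch):

# List files whose identification method was "Extension".
grep '"Extension"' ingest-report.csv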

It isn’t possible to cover DROID in any depth in a blog post. The document DROID: How to use it and how to interpret your results is a useful guide to the software. It’s dated 2011, so some details may have changed.

Next: Exiftool. To read this series from the beginning, start here.

File identification tools, part 2: file

A widely available file identification tool is simply called file. It comes with nearly all Linux and Unix systems, including Macintosh computers running OS X. Detailed “man page” documentation is available. It requires using the command line shell, but its basic usage is simple:

file [filename]

file works through a series of tests. First it checks for some special cases, such as directories, empty files, and “special files” that aren’t really files but ways of referring to devices. Next it checks for “magic numbers,” identifiers near the beginning of the file that are (hopefully) unique to the format. If it doesn’t find a magic match, it checks whether the file looks like a text file, trying a variety of character encodings, including the ancient and obscure EBCDIC. Finally, if the file does look like text, file attempts to determine whether it’s in a known computer language (such as Java) or natural language (such as English). The identification of file types is generally good, but the language identification is very erratic.
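A few illustrative runs, one for each kind of test (the exact wording of the output varies with the version of file and its magic files; the file names are placeholders):

file /dev/null   # /dev/null: character special
file empty.dat   # empty.dat: empty
file photo.jpg   # photo.jpg: JPEG image data ...
file hello.c     # hello.c: C source, ASCII text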

The identification of magic numbers uses a set of magic files, and these vary among installations, so running the same version of file on different computers may produce different results. You can specify a custom set of magic files with the -m flag. If you want a file’s MIME type, you can specify --mime, --mime-type, or --mime-encoding. For example:

file --mime xyz.pdf

will tell you the MIME type of xyz.pdf. If it really is a PDF file, the output will be something like

xyz.pdf: application/pdf; charset=binary

If instead you enter

file --mime-type xyz.pdf

you’ll get

xyz.pdf: application/pdf
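The -m flag is used the same way; here’s a hypothetical invocation with a custom magic file (local.magic and mystery.dat are placeholder names):

file -m local.magic mystery.dat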

If some tests aren’t working reliably on your files, you can use the -e option to suppress them. If you don’t trust the magic files, you can enter

file -e soft xyz.pdf

But then you’ll get the uninformative

xyz.pdf: data

The -k option tells file not to stop with the first match but to apply additional tests. I haven’t found any cases where this is useful, but it might help to identify some weird files. It can slow down processing if you’re running it on a large number of files.
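If you want to try it anyway, the invocation is simply (the extra matches get run together in the output):

file -k xyz.pdf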

As with many other shell commands, you can type file --help to see all the options.

file can easily be fooled and won’t tell you if a file is defective, but it’s a quick and convenient way to query the type of a file.

Windows has a command line tool called FTYPE, but it works from extension associations rather than file contents, and its syntax is completely different.

Next: DROID and PRONOM. To read this series from the beginning, start here.

File identification tools, part 1

This is the start of a series on software for file identification. I’ll be exploring as broad a range as I reasonably can within the blog format, covering a variety of uses. I’m most familiar with the tools for preservation and archiving, but I’ll also look at tools for the end user and at digital forensics (in the proper sense of the term, the resolution of controversies).

We have to start with what constitutes “identifying” a file. For our purposes here, it means at least identifying its type. It can also include determining its subtype and telling you whether it’s a valid instance of the type. You can choose from many approaches. The simplest is to look at the file’s extension and hope it isn’t a lie. A little better is to use software that looks for a “magic number.” This gives a better clue but doesn’t tell you whether the file is actually usable. Many tools are available that examine the file more rigorously. Generally, the more thorough a tool is, the narrower the range of files it can identify.
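You can look at a magic number yourself by dumping the first few bytes of a file. A PNG file, for instance, always begins with the eight-byte signature 89 50 4E 47 0D 0A 1A 0A (picture.png is a placeholder name):

# Show the first eight bytes in hex and ASCII.
xxd -l 8 picture.png
# 00000000: 8950 4e47 0d0a 1a0a    .PNG....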

Identification software can be too lax or too strict. If it’s too lax, it can give broken files, perhaps even malicious ones, its stamp of approval. If it’s too strict, it can reject files that deviate from the spec in harmless and commonly accepted ways. Some specifications are ambiguous, and an excessively strict checker might rely on an interpretation which others don’t follow. A format can have “dialects” which aren’t part of the official definition but are widely used. TIFF, to name one example, is open to all of these problems.

Some files can be ambiguous, corresponding to more than one format. Here’s a video with some head-exploding examples. It’s long but worth watching if you’re a format junkie.

The examples in the video may seem far-fetched, but there’s at least one commonly used format that has a dual identity: Adobe Illustrator files. Illustrator knows how to open a .ai file and get the application-specific data, but most non-Adobe applications will see it as a PDF file. Ambiguity can be a real problem when file readers are intentionally lax and try to “repair” a file. Different applications may read entirely different file types and content from the same file, or the same file may have different content on the screen and when printed. So even if an identification tool tells you correctly what the format is, that may not be the whole story. I don’t know of any tool that tries to identify multiple formats for the same file.
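Most non-Adobe tools, including the file command, will see the PDF side of that dual identity. For a typical .ai file saved with PDF compatibility (drawing.ai is a placeholder name), the output will be something like:

file drawing.ai
# drawing.ai: PDF document, version 1.5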

Knowing the version and subtype of a file can be important. When an application reads a file in a newer version than it was written for, it may fail unpredictably, and it’s likely to lose some information. Some applications limit their backward compatibility and may be unable to read old versions of a format. Subtypes can indicate a file’s suitability for purposes such as archiving and prepress.

I’ll use the tag “fident” for all posts in this series, to make it easy to grab them together.

Next: The shell file command line tool.

A field guide to “plain text”

In some ways, plain text is the best preservation format. It’s simple and easily identified. It’s resilient when damaged; if a file is half corrupted, the other half is still readable. There’s just one little problem: What exactly is plain text?

ASCII is OK for English, if you don’t have any accented characters, typographic quotes, or fancy punctuation. It doesn’t work very well for any other language. It even has problems outside the US, such as the lack of a pound sterling symbol; there’s a reason some people prefer the name US-ASCII. You’ll often find that supposed “ASCII” text has characters outside the 7-bit range, just enough of them to throw you off. Once this happens, it can be very hard to tell what encoding you’ve got.
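One way to hunt for those stray characters is to search for bytes with the high bit set. This sketch uses GNU grep’s -P option, which isn’t available on every system (supposed-ascii.txt is a placeholder name):

# Print, with line numbers, every line containing a non-ASCII byte.
LC_ALL=C grep -nP '[^\x00-\x7F]' supposed-ascii.txt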

Even if text looks like ASCII and doesn’t have any high bits set, it could be one of the other encodings of the ISO 646 family. These haven’t been used much since ISO 8859 came out in the late eighties, but you can still run into old text documents that use them. Since all the members of the family are seven-bit codes and differ from ASCII in just a few characters, it’s easy to mistake, say, a French ISO 646 file for ASCII and turn all the accented e’s into curly braces. (I won’t get into prehistoric codes like EBCDIC, which at least can’t be mistaken for anything else.)

The ISO 8859 encodings have the same problem, pushed to the 8-bit level. If you’re in the US or western Europe and come upon 8-bit text which doesn’t work as UTF-8, you’re likely to assume it’s ISO 8859-1, aka Latin-1. There are, however, over a dozen variants of 8859. Some are very different in the codes above 127, but some have only a few differences; ISO 8859-9 (Latin-5, or “Turkish Latin-1”) and ISO 8859-15 (Latin-9) are both very similar to Latin-1. Microsoft added to the confusion with the Windows 1252 encoding, which turns some control codes in Latin-1 into printing characters. It used to be common to claim 1252 was an ANSI standard, even though it never was.
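If you do know, or can guess, the source encoding, iconv can re-encode the text into something less ambiguous. Here Latin-1 is assumed as the source; the file names are placeholders:

# Convert Latin-1 text to UTF-8.
iconv -f ISO-8859-1 -t UTF-8 old-latin1.txt > new-utf8.txt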

UTF-8, even without a byte order mark (BOM), has a good chance of being recognized without a lot of false positives; if a text file has characters with the high bit set and an attempt to decode it as UTF-8 doesn’t result in errors, it most likely is UTF-8. (I’m not discussing UTF-16 and UTF-32 here because they don’t look at all ASCII-like.) Even so, some ISO 8859 files can look like good UTF-8 and vice versa.
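That decoding test is easy to run yourself. iconv exits with a nonzero status if the input isn’t valid UTF-8, and file’s --mime-encoding option reports its best guess (suspect.txt is a placeholder name):

# Succeeds quietly if the file decodes as UTF-8, fails otherwise.
iconv -f UTF-8 -t UTF-8 suspect.txt > /dev/null && echo "decodes as UTF-8"
file --mime-encoding suspect.txt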

So plain text is really simple — or maybe not.

Unicode

Words: Gary McGath, Copyright 2003
Music: Shel Silverstein, “The Unicorn”

A long time ago, on the old machines,
There were more kinds of characters than you’ve ever seen.
Nobody could tell just which set they had to load,
They wished that somehow they could have one kind of code.

   There was US-ASCII, simplified Chinese,
   Arabic and Hebrew and Vietnamese,
   And Latin-1 and Latin-2, but don’t feel snowed;
   We’ll put them all together into Unicode.

The users saw this Babel and it made them blue,
So a big consortium said, “This is what we’ll do:
We will take this pile of sets and give each one its place,
Using sixteen bits or thirty-two, we’ve lots of space

   For the US-ASCII, simplified Chinese,
   Arabic and Hebrew and Vietnamese,
   And Latin-1 and Latin-2, we’ll let them load
   In a big set of characters called Unicode.

The Klingons arrived when they heard the call,
And they saw the sets of characters, both big and small.
They said to the consortium, “Here’s what we want:
Just a little bit of space for the Klingon font.”

   “You’ve got US-ASCII, simplified Chinese,
   Arabic and Hebrew and Vietnamese,
   And Latin-1 and Latin-2, but we’ll explode
   You if you don’t put Klingon characters in Unicode.”

The Unicode Consortium just shook their heads,
Though the looks that they were getting caused a sense of dread.
“The set that we’ve assembled is for use on Earth,
And a foreign planet is the Klingons’ place of birth.”

   We’ve got US-ASCII, simplified Chinese,
   Arabic and Hebrew and Vietnamese,
   And Latin-1 and Latin-2, but you can’t goad
   Us into putting Klingon characters in Unicode.

The Klingons grew as angry as a minotaur;
They went back to their spaceship and declared a war.
Three hundred years ago this happened, but they say
That’s why the Klingons still despise the Earth today.

   We’ve got US-ASCII, simplified Chinese,
   Tellarite and Vulcan and Vietnamese,
   And Latin-1 and Latin-2, but we’ll be blowed
   If we’ll put the Klingon language into Unicode.