For the month of April, I’ll be working full time under a SPRUCE grant on making improvements to FITS, the Harvard Library’s File Information Tool Set. For this purpose I’ve created a fork on GitHub. This is a fresh fork from Harvard’s FITS repository on GitHub, so I’ve marked my older fork, OpenFITS, as deprecated. Harvard’s official version of FITS is still the Google Code repository.
My GitHub repository includes a wiki where I’ll describe progress in detail. The issues area is open for input. I don’t have any plans to address existing bugs in FITS, so please use it only for input on my work, including suggestions.
One area where I really want input is what FITS should produce for video metadata. There isn’t much consensus yet on what product-independent video metadata should look like. FITS has six different categories of files, each with its own metadata set: text, document, image, audio, video, and unknown. The metadata produced is a composite of the output from the various tools (including Tika, which I’m adding). The point isn’t to use an existing schema, but to put together a list of elements that characterize a video document. “Significant properties,” as people don’t like to say. XMP and MPEG-7 provide ideas, and most if not all audio metadata elements are also applicable to video. I’ve started a wiki page on video metadata within the GitHub project.
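To make the question concrete, here’s a rough sketch of what a video metadata section in FITS output might look like. The element names here are my illustrative guesses, not anything decided: the video-specific ones (frame rate, frame dimensions, codec) are borrowed from the kinds of properties XMP and MPEG-7 describe, and the rest carry over from the existing audio metadata set, as suggested above.

```xml
<!-- Hypothetical sketch only: element names and structure are not final. -->
<metadata>
  <video>
    <!-- Elements carried over from the audio metadata set -->
    <duration>00:02:35</duration>
    <bitRate>1200000</bitRate>

    <!-- Video-specific candidates, inspired by XMP / MPEG-7 -->
    <frameRate>29.97</frameRate>
    <frameWidth>1280</frameWidth>
    <frameHeight>720</frameHeight>
    <videoCompressor>H.264</videoCompressor>
  </video>
</metadata>
```

If elements like these seem right, wrong, or incomplete to you, that’s exactly the kind of feedback I’m looking for.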
If you have an interest in shaping the output of FITS for video files, please provide input by commenting here, opening an issue on GitHub, emailing me, or whatever works best for you.