Not every report of malware in an image file is spurious. A report of malware smuggled through a PNG file looks plausible to me. It claims that for two years, criminals got malware undetected onto respectable sites with this technique. Ironically, the ads included ones for a so-called “Browser Defence.”
Unlike Check Point’s “Imagegate,” this report doesn’t claim the image alone can do anything, and it describes the technique in considerable detail. Check Point said it would give specifics about “Imagegate,” like what format is affected, “only after the remediation of the vulnerability in the major affected websites.” It’s still waiting, apparently.
The PNG exploit is impressively sneaky. A script that doesn’t trigger any alarms checks the host browser’s defenses. If it finds a vulnerable target, it loads a PNG file whose alpha channel encodes the malware script, then decodes and runs that script. The actual malware takes advantage of — wouldn’t you know it? — Flash vulnerabilities. The user doesn’t have to do anything except view the page to be victimized.
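To illustrate the general idea (this is not the attackers’ actual code, and the helper names are my own invention), here’s a minimal Python sketch of alpha-channel steganography: one payload byte is stored per pixel in the alpha value, and the payload is read back out later. A real attack would do the equivalent in JavaScript against a canvas.

```python
# Toy alpha-channel steganography: pixels are modeled as (R, G, B, A)
# tuples; a real exploit would read them from a decoded PNG instead.

def embed(pixels, payload):
    """Store one payload byte per pixel in the alpha channel."""
    out = []
    for i, (r, g, b, a) in enumerate(pixels):
        new_a = payload[i] if i < len(payload) else a
        out.append((r, g, b, new_a))
    return out

def extract(pixels, length):
    """Read the payload back out of the alpha channel."""
    return bytes(p[3] for p in pixels[:length])

pixels = [(255, 0, 0, 255)] * 32      # a dummy all-red "image"
payload = b"alert(1)"                 # harmless stand-in for a script
stego = embed(pixels, payload)
print(extract(stego, len(payload)))   # b'alert(1)'
```

The point is that the color channels — and thus the visible image — are untouched, so the file looks like an ordinary ad banner.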
Attacks like this are why ad blockers have become so popular.
HTML 5.1 is now a W3C proposed recommendation, and the comment period has closed. If no major issues have turned up, it may become a recommendation soon, superseding HTML 5.0.
Browsers already support a large part of what it includes, so a discussion of its “new” features will cover ones that people already thought were part of HTML5. Implementations of HTML usually run ahead of the official documents, relying heavily on working drafts in spite of all the disclaimers. Things like the picture element are already familiar, even though they aren’t in the 5.0 specification.
I’ve just learned that the Open Preservation Foundation is hosting a JHOVE Online Hack Day on October 11. I’m flattered people are still interested in the work I started doing over a decade ago, though getting some paying work would be far more satisfying.
The Libtiff library, which has been a reference implementation of TIFF for many years, has disappeared from the Internet. It was located at remotesensing.org, a domain whose owner apparently was willing to host it without having any close connection to the project. The domain fell into someone else’s hands, and the content changed completely, breaking all links to Libtiff material. Malice doesn’t seem to be involved; the original owner of remotesensing.org just walked away from the domain or forgot to renew it. Who owns it now is unknown, since it’s registered under a privacy shield.
Originally Libtiff was hosted on libtiff.org, but that fell into the hands of a domain owner with no interest in the project. I don’t know why. It still holds Libtiff code, but it’s many years out of date.
As I’m writing this, people on the Libtiff list are trying to figure out exactly what happened. There’s talk of trying to get libtiff.org back, though that may or may not be possible.
For the moment, there’s no primary source for Libtiff on the Web. I hope to be able to post more information later.
Posted in News
Tagged software, TIFF
The work on the TI/A project, to define an archive-friendly version of TIFF analogous to PDF/A, is still going, even though hardly any of it is publicly visible. It was unfortunate that Marisa Pfister left the project along with her position at the University of Basel, but others are continuing a detailed analysis of the TIFF files used at various archives. This will help them learn what features and tags are in actual use.
The target of March 1, 2016, for a submission to ISO has been crossed out, and nothing has replaced it, but we can still hope it will happen.
Posted in News
Tagged ISO, standards, TIFF
A lot of applications claim they can display PDF files, but not all of them fully support the format. They won’t necessarily display all valid files correctly. The PDF Association has an article discussing this problem, with the main focus on the Microsoft Edge browser.
Edge offers only partial support for the JBIG2Decode and JPXDecode filters, which means some objects might not display. It doesn’t support certain types of shadings, so other objects could render incorrectly.
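As a rough way to check whether a given PDF even declares these filters, you can scan the raw bytes for their names. This is a sketch under an obvious caveat: filters declared inside compressed object streams won’t be visible this way, so “not found” is inconclusive. The function name is my own.

```python
# Filters the PDF Association article says Edge supports only partially.
PARTIAL = (b"/JBIG2Decode", b"/JPXDecode")

def partially_supported_filters(pdf_bytes):
    """Return the names of partially supported filters a PDF declares."""
    return [f.decode() for f in PARTIAL if f in pdf_bytes]

sample = (b"%PDF-1.7\n"
          b"1 0 obj\n<< /Filter /JPXDecode /Length 0 >>\n"
          b"stream\nendstream\nendobj\n")
print(partially_supported_filters(sample))  # ['/JPXDecode']
```

A positive hit tells you the file uses JPEG 2000 or JBIG2 compression and might not display fully in Edge; a proper check would need a real PDF parser.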
The strength of PDF is supposed to be that it will render the same way everywhere. You can blame Microsoft for not putting enough work into it, or Adobe for making the format too complex. I have enough experience with it to know it’s a seriously difficult format just to analyze, to say nothing of rendering. Is a format which presents such difficulties really the ideal for a universal document rendering format that people will count on far into the future?
Update: It gets worse. Take a look at this discussion of what’s in PDF.
Posted in News
Tagged Microsoft, PDF
The Unicode Consortium has announced the release of Unicode 9.0. It adds characters for some little-known languages, including Osage, Nepal Bhasa, Fulani, the Bravanese dialect of Swahili, the Warsh orthography for Arabic, and Tangut. It updates the collation specification and security recommendations.
Most Unicode implementations will require just font upgrades, but full support of some of the more unusual scripts will require attention to the migration notes.
“Asymmetric case mapping” sounds interesting. I believe this means that the conversion between upper case and lower case isn’t one-to-one and reversible. The notes give the example of “the asymmetric case mapping of Greek final sigma to capital sigma.” Lowercase sigma has two forms; it’s σ except at the end of a word, where it’s ς. Both turn into Σ in uppercase.
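Python’s string methods follow the Unicode case-mapping rules, so the asymmetry is easy to demonstrate: both lowercase sigmas map to the same capital, and lowercasing picks the form from context, which means the round trip isn’t one-to-one.

```python
# Both lowercase sigmas uppercase to the same capital sigma.
print("σ".upper(), "ς".upper())   # Σ Σ

# Lowercasing is context-sensitive: a capital sigma at the end of a
# word becomes the final form ς, elsewhere the medial form σ.
print("ΑΣ".lower())               # ας
print("ΟΔΥΣΣΕΥΣ".lower())         # οδυσσευς (note the final ς)
```

So “σ”.upper().lower() gets you back where you started only by luck of position; the mapping loses information.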
What really has people excited about Unicode 9, if a Startpage search is any indication, isn’t any of these things, but that about 1% of the new characters are emoji and that Apple and Microsoft lobbied against one candidate emoji. I wonder if the Unicode Consortium regrets having gotten involved in that mess in the first place. There are no possible criteria except whims for what the set should include. There’s no limit on how many could be added. OK, having a universal set of encodings promotes information interchange, but the tail is wagging the 🐕.
By the way, what’s the plural of “emoji”? I use “emoji” as both singular and plural, but I’m seeing “emojis” with increasing frequency. It just looks wrong to me. Does anyone say “kanjis” or “romajis” for the other Japanese character sets? I had to argue with the editor to keep the title of my article “The War on Emoji” that way.
Posted in News
Apple is introducing a new file system to replace the twentieth-century HFS+. The new one is called APFS, which simply stands for “Apple File System.” When Apple released HFS+, disk sizes were measured in megabytes, not terabytes.
New features include 64-bit inode numbers, nanosecond timestamp granularity, and native support for encryption. Ars Technica offers a discussion of the system, which is still in an experimental state.
The next big jump in PDF may finally happen this year. The PDF Association tells us that the spec for PDF 2.0 is “feature-complete” and will be available to the ISO PDF committee and members of the PDF Association in July. When this will turn into a public release still isn’t clear. A year ago the target was “mid-2016”; that seems unlikely now.
The specification will be ISO 32000-2. The current version of PDF, 1.7, is ISO 32000-1. More precisely, Adobe has published several extension levels to PDF 1.7. They’re a way of getting around having a version 1.8, which would be an admission that the ISO standard is outdated. Version 2.0 will get Adobe and ISO back in sync. Hopefully Adobe will publish the PDF spec for free, as it has in the past, so that it won’t be available just to people who pay for the ISO version. Currently an electronic copy of ISO 32000-1 costs 198 Swiss francs, or a bit more than $200.
Posted in News
Tagged PDF, standards
Lunar Mission One, a private nonprofit organization, is trying to recreate Arthur C. Clarke’s “The Sentinel” (the inspiration for the movie 2001) in real life. They hope to send a digital archive to the moon in 2024 and bury it there. As long as whatever is stored there can withstand intense cold, it should last a very long time.
The plan calls for two archives. One would contain items privately provided by people paying to have their data stored on the moon; the other would be a history of humanity. CEO David Iron (no relation to Tony Stark) raises the question of how living beings of the future will find it and says, “We need a permanent sign that will last for a billion years. … We need to invert the normal logic of searching for extra-terrestrial intelligence by transmitting; they can come to us.”
Posted in News