Tag Archives: archive

patool: Multilingual

I ran into some time-consuming real-world issues yesterday, so I have to apologize for missing a post. I’ll make up for it today.

As today’s tool, or perhaps as yesterday’s tool with another to come, here’s patool.


I don’t use multiarchive tools much. Part of that is just that I rely on tar most of the time, unless I get a different format from another source. But usually, the things I compress are simply tar‘ed up. That might make me one of the few people on the planet who knows the proper command sequence to un-tar something.

Regardless, patool has a few points that are worth discussion.

Most of patool seems to work as command-action-target format, so extracting a file — just about any compressed file, I might add — is as simple as patool extract file. The extension of the file appears to be irrelevant to patool — if I rename a file to show a different extension, it manages to extract it anyway.

Of course that might be the flexibility of the underlying compression tools in working with other formats. It’s hard to tell.

patool does a couple of things that you might like. patool can directly repack an archive to switch formats, which could save you a few steps if you’re converting all your old 7zip files into something more modern.

And patool seems smart enough not to overwrite a file that exists already, and will instead create a folder and drop the target in it. Very convenient.
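Putting those behaviors together in a quick sketch — the patool lines assume you have patool (and the helper tools for the formats involved) installed, and the file names are just examples:

```shell
# patool's command-action-target style, demonstrated on a sample archive
printf 'hello\n' > note.txt
tar czf notes.tar.gz note.txt            # build a sample archive with plain tar
if command -v patool >/dev/null 2>&1; then
    patool extract notes.tar.gz          # format detected from content, not extension
    patool repack notes.tar.gz notes.zip # convert straight to another format
fi
```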

Like a lot of multiarchive tools, patool seems only as multilingual, in terms of archive formats, as what you have installed on your machine. So I’m guessing if you want the ability to decompress .ace files, you’ll need to install unace first. So from a technical standpoint, patool doesn’t really save you any disk space.

patool is python-based, and in both AUR and Debian. If you’re interested in how it compares to multiarchive standbys like atool, unp or dtrx … give it a try and report back to us. 😀

z and z: A tale of two z’s

Remember my little rant from a few weeks ago, the one about single-character application names? If you don’t it’s just as well. I usually regret my rants. That one was no exception.

The point comes through though, since I have two z’s to report — this one and this one.

[screenshots: z, the compression tool, and z, the directory jumper]

The z on the left is an intuitive compression-decompression tool. By all rights it should sense whether a file should be compressed or decompressed, and come up with the right results. If you remember atool or dtrx or unp, think of it as one of those, with enough smarts to do the opposite, if need be.

I did run into a few problems with z — the z on the left, that is. Compression seemed to sputter in the Arch version: it looked for something called compress at /usr/bin/, wasn’t finding it, and so half of what it could do, it couldn’t.

The z on the right is another fast directory switching tool. It’s by the same author as j and j2, and seems to follow the same pattern.

If you source the z.sh script from your shell startup file (placing it in your $PATH isn’t enough, since it has to run inside your shell to change directories) and set the $_Z_CMD variable to just “z”, then you should start building a database of recently visited directories. From there you can jump straight to a particular one by prefixing it — or part of it — with just “z”.

In theory, of course, and there is more to it than just that. It works acceptably well, although personally I’m not much of a fan of fast-directory-switcher-gizmos, and so a lot of it is probably lost on me.
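For reference, the setup amounts to a couple of lines in your shell startup file. This is a hypothetical snippet that assumes you saved the script as ~/bin/z.sh:

```shell
# Hypothetical ~/.bashrc snippet for the directory-jumping z
export _Z_CMD=z                               # the command name (z is the default anyway)
[ -r "$HOME/bin/z.sh" ] && . "$HOME/bin/z.sh" # must be sourced, not executed

# After a few cd's, jump by partial name:
#   z proj      # cd to your most "frecent" directory matching "proj"
```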

So there are the two z’s, and I’d like to just take one last second to remind everyone out there who is working on building The Next Great Killer App, The Program That Will Change Life As We Know It, The Application That Consumed The Entire Universe In One Slobbery Gulp, to please — please — think rationally for just the briefest moment, and give your program a name that’s longer than just one stupid letter. 😡

vbackup: A little archive wizardry, for Debian fans

As best I can tell, vbackup is not available in Arch, Fedora or OpenSuse. I looked through each of those and found no traces of it. That’s a little surprising.

[screenshots: the vbackup setup wizard]

vbackup calls itself a “modular” backup system, but I only find it packaged in Debian and its derivatives. The home page explains that it can duplicate a Debian package list, so maybe that makes sense; but the very next line adds RPM support as well.

Perhaps the word just hasn’t gotten out yet.

vbackup claims it can support customizable backup scripts to work alongside its own defaults. It apparently can back up mdadm and lvm data, and archive MBRs. It can also handle networked backup solutions, relying on nfs or scp for remote access. That’s pretty impressive.

My favorite part, of the few parts that I tried, was the setup wizard. For as many other archive tools as I’ve seen in this year-and-a-half adventure, it’s nice to find one that will at least set up a configuration for you, rather than dropping a cryptic configuration file in your lap and tapping its foot while it waits for you to set it up correctly.

In all honesty I didn’t run vbackup completely, and I never got close to restoring anything I did with vbackup. So it may be that in spite of a long list of features, it doesn’t really perform as well as imagined.

If that’s the case, I leave it to you to resolve. 😉

unp: Boring is not always a bad thing

The Debian package pages say unp is a perl script that can unpack almost anything, provided you have the supporting formats installed in your system. In that way, it appears to be a lot like atool or dtrx, both of which we perused last year.


And this is where I run dry on things to say about unp. It has no color. It doesn’t do much other than rip files out and drop them in a folder.

It has a small set of flags that fine-tune your decompression experience. And it seems proficient enough to extract almost any type of archive; enter unp -s and it will tell you what it supports at any given moment.
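A minimal session, hedged with a guard since unp (and the unpackers behind it) may not be on your machine; the archive name is just an example:

```shell
# unp in practice: no flags needed for the common case
printf 'demo\n' > file.txt
tar czf demo.tar.gz file.txt    # build a sample archive to feed it
if command -v unp >/dev/null 2>&1; then
    unp -s                      # list the formats unp can currently handle
    unp demo.tar.gz             # unpack it, no further thought required
fi
```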

But that’s all there is. Even the AUR version points at the Debian package page, which is rather boring. 😐

Ah well. It’s not always a bad thing to be boring. 😉

tarman: A fullscreen archive navigator

For some reason, tar has a reputation for being cryptic or difficult to handle.

That’s mystifying to me, probably because I use it on a weekly basis as a bland, uncompressed file bundler. For me, tar cvvf package.tar file1 file2 file3 is certainly no challenge to remember. I can think of far more complex and unintuitive software in the Linux landscape.

Those who can’t handle the challenge of remembering c and v and f and tarname and filename may want to look into tarman.


It’s been a while since our last fullscreen archive manager — 2a, if I remember right. tarman pulls the same stunt as 2a, but does it in a cleaner fashion, I believe.

tarman works a lot like Xarchiver or File Roller, in that you can navigate your directory tree and the archived files within it. Select a file or several files, press a to archive them. Ta-da!

Or alternatively, enter an archive (tarman seems to be able to handle bzip2-compressed tar archives; it may know others too), select a file, press e and it will be extracted. Ta-da! Again!

Press F1 or ? for in-your-face help cues. Press q to quit.

As for faults, I can only mention a flickering effect on each keypress. I think tarman is trying to refresh the display at every keystroke, and the redraw is flashing as a result. A little irritating, but minor, and probably something that can be corrected.

That’s about it. Not a lot to it, and it does the job well.

And you avoid all the stress of trying to remember three letters and a couple of names. 🙄 👿

rdup: Still more backup options

Today seems to be backup day. I suppose given that it’s April Fools Day, I should probably take that as a hint.

rdup is next, and as I understand it, rdup and its brethren hope to keep the backup chore as close as possible to a simple, Unixy way of doing things.

rdup-simple is the one-shot script to perform a backup. A folder tree is probably the easiest way to show how it behaves.


rdup-simple pushes archives into a nested format, organized by date. The “simple” in the name is no exaggeration.

I am a little foggy on the footwork involved, but if I understand it right, rdup-simple incorporates rdup itself, which is capable of generating and tracking lists of file-by-file changes, and tackling only those.

There are a couple of other tools involved, and when you whip them all together and give them a source directory, you get rdup-simple and the above results.
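As a rough sketch of the one-shot script — guarded, since rdup may not be installed, and with the destination layout hedged (check the man page for the exact host-and-date nesting):

```shell
# Hypothetical rdup-simple run; src and backup are example paths
mkdir -p src backup
printf 'data\n' > src/notes.txt
if command -v rdup-simple >/dev/null 2>&1; then
    rdup-simple "$PWD/src" "$PWD/backup"  # results land in a tree nested by date
fi
```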

It’s apparently possible to use any of the incorporated tools by itself, and that’s where the details get a little fuzzy for me. I leave it to you to figure out.

I like rdup-simple for being, well, simple 🙄 but for offering the opportunity to get my hands really dirty. It’s a shame I don’t have more intricate backup needs; I have a feeling I would like to get into the details. 😐

rdup is in Debian and AUR; the AUR script points to the wrong location for the source package, but will build if provided with the source tarball. Just so you know. 😉

rdiff-backup: Mirrored, with increments

I charged into rdiff-backup thinking it would be only a little more complex than rdiffdir was yesterday. Luckily I wasn’t too far off the mark.

rdiff-backup can make backups while conserving bandwidth, which is probably a great idea on the whole. It also makes incremental backups, and the home page promises file recovery over previous backups too.

I didn’t delve that far into it, but I do have a little to show for my effort:


My hope there was to show that rdiff-backup’s product is not only a mirror image of the source, but also includes data on what changed between runs. It might be a little difficult to follow; trust me if it’s not obvious.

Compared to a straight rsync, I can see where this would be preferable, if it conserves bandwidth and can offer access to past backups as well. I usually just refresh my archives with a simple rsync -ah --progress --delete, and there have been times I wished I could step backward once or twice in history.
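A sketch of the difference, assuming rdiff-backup is installed and using invented paths; the restore flag is the part a plain rsync can't match:

```shell
# Mirror plus history with rdiff-backup
mkdir -p src
printf 'v1\n' > src/file.txt
if command -v rdiff-backup >/dev/null 2>&1; then
    rdiff-backup src mirror                      # mirror, plus an rdiff-backup-data/ dir
    printf 'v2\n' > src/file.txt
    rdiff-backup src mirror                      # second run stores only the delta
    rdiff-backup -r 10m mirror/file.txt old.txt  # recover the file as of 10 minutes ago
fi
# The rsync equivalent mirrors, but keeps no history:
#   rsync -ah --progress --delete src/ mirror/
```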

On the other hand, this is very clean and straightforward, without a lot of the wrangling that I’ve seen in some other console-based backup tools. Given the need — such as a large-scale networked system — I’d definitely think this over as an option. 😉

pigz: Equality among parallelized compression tools

Miguel called me out the other day, for including pbzip2 when I mentioned repeatedly that I wouldn’t include esoteric compression tools in this little adventure.

He’s right on the one hand, since pbzip2 — and now pigz — are specific to one particular algorithm. But they both do such cool things:


I don’t think I can add much more to the 1000 words that image is worth. Same flags and arrangement as pbzip2, only this time I used a 256MB file of random characters, because I am impatient. 😈

I should offer the same caveat this time as I did last time: You may not see much improvement on a single-core machine.
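The shape of a run, for the record — guarded since pigz may not be installed, and using a smaller random file than the one in the screenshot:

```shell
# pigz: gzip-compatible flags, plus -p for the thread count
dd if=/dev/urandom of=random.bin bs=1M count=8 2>/dev/null
if command -v pigz >/dev/null 2>&1; then
    pigz -f -k -9 random.bin        # uses all cores by default; -k keeps the original
    pigz -p 2 -f -k -9 random.bin   # or cap it at two threads
fi
```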

And now for the daring feat of the day, jamming this, pbzip2 and parallel all into the same command …

ls random-{1,2}.txt | parallel pbzip2 -f -k -9 | parallel pigz -f -k -9

Let me just press enter and we’ll see if I spawn a singularity aga

pbzip2: The luxury of multiprocessing

This is one of those times when a screenshot will tell you a lot more than I can, with words:


pbzip2, the parallificated bzip2, chopping a good 20 seconds off the compression time on a 256MB clump of random text.

In that situation, nothing else is running and this laptop has an SSD in it, so it’s fairly quick to start with. But pbzip2 still manages to slash the time it takes to smush it down a bit.

The fun part of pbzip2 is watching htop while it’s running. In the case of vanilla bzip2, the system load meter on one processor spikes to 100 percent, while the other sits near idle.

But pbzip2 kicks both of them up to max on this Core2 Duo, and the fan suddenly starts to whine a little louder. 😉

That does, of course, suggest that on a single-core machine, you might not see any improvement at all. Logic says that without extra cores, there’s no work to spread around.

Give it a try and see what happens; you never know, there might be a tiny bump.

In closing, I’m a little surprised pbzip2 isn’t more famous. Perhaps there’s something sketchy in its history that I don’t know about.

For now, I’m going to tempt fate and try

ls random-{1,2}.txt | parallel pbzip2 -f -k -9

and see what happens. Yes, combining parallel and pbzip2 might just trigger a black hole in the center of my computer. But just let me press Enter now and see wha

dtrx: An extractor with an impressive repertoire

I talked about atool a long time ago, and dtrx is the other universal extraction tool that I know of.


dtrx’s best features aren’t visible in that screenshot. For one, the list of archives it supports is quite long.

On top of that, by default dtrx sends the output into its own directory. I, for one, get a little tired of force-feeding unzip the flag to dump into a dedicated directory.

Little things like that are what I prefer.
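To illustrate, with a guard in case dtrx isn't installed and an invented archive name:

```shell
# dtrx's default behavior: output goes into its own directory
printf 'hi\n' > file.txt
tar czf bundle.tar.gz file.txt
if command -v dtrx >/dev/null 2>&1; then
    dtrx bundle.tar.gz    # lands in its own directory rather than flooding $PWD
fi
```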

I suppose it’s worth mentioning that you can’t just automagically extract anything from any archive, without installing the supporting packages for that compression.

So don’t complain to me when dtrx can’t open a rar file, and you don’t have unrar installed.

Happy extracting. 😉