Tag Archives: backup

dotbot: A (dot)files (bo)o(t)strapper

Something else you might have noticed: I’m not following alphabetical order any more. 😉

I started doing that about a year ago, because I was conveniently skipping titles I didn’t know anything about or sensed were over my head.

Scraping through the alphabet was a way of making sure I at least mentioned each title before moving to the next one. The way things were going, I would have ended up with a list of software I had no interest in, and no desire to examine.

So at the moment, it’s a simple ls vimwiki/ | shuf -n1 that determines the next title. This will also keep you guessing, so you never know how close we are to The Real End of The List. 😉

However I do it, I still see titles that I’m unlikely to adopt, sometimes for personal reasons.

Next is Anish Athalye’s dotbot, a tool specifically designed to handle dotfiles symlinked against a GitHub repository. I’ve seen people do this in the past, or use GitHub as a repository for their dotfiles.

I’m a little timid about that though. Call me old-fashioned, but I still find The Cloud to be a little sketchy. I always have. Plus, I have used some real maverick software in the past that stores passwords in plain text in dotfiles. 😯

Of course, my sense of dread stems from the downward turn of online privacy in recent years. But it also plays havoc with my less-than-stellar link to the Internet. When you realize you have a fragile connection, you adjust your lifestyle accordingly.

Anish promises that dotbot is easy to set up, and from what I saw, it was. I don’t have a screenshot this time, and I apologize for that.

At the time of this writing, dotbot had seen attention within the last month. Like I mentioned earlier, a lot of the upcoming programs are worth mentioning because they’re more active; dotbot is a good example of that.
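Just so the idea is concrete, here’s a minimal install.conf.yaml sketch using dotbot’s link and shell directives. The repo layout and file names here are my own invention, not anything Anish prescribes:

```yaml
# Hypothetical layout: this file sits at the root of a dotfiles repo,
# alongside the files vimrc and bashrc and dotbot's bundled install script.
- defaults:
    link:
      relink: true          # replace stale symlinks left by earlier runs

- link:
    ~/.vimrc: vimrc         # symlink ~/.vimrc -> <repo>/vimrc
    ~/.bashrc: bashrc       # symlink ~/.bashrc -> <repo>/bashrc

- shell:
  - [git submodule update --init --recursive, Installing submodules]
```

Run the repo’s bundled ./install script and dotbot reads this file and lays down the symlinks for you.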

If you give dotbot a whirl, and it suits your fancy without triggering alarm bells, please use it with my blessing. Send along a screenshot if you like. 😉 Just because I steer away from a program doesn’t mean everyone else should. You are your own person. Stand your ground. Hold your head up.

Go forth, and preach the gospel of dotbot. 😀

vbackup: A little archive wizardry, for Debian fans

As best I can tell, vbackup is not available in Arch, Fedora or OpenSuse. I looked through each of those and found no traces of it. That’s a little surprising.


vbackup calls itself a “modular” backup system, but I only find it packaged in Debian and its derivatives. The home page explains that it can duplicate a Debian package list, so maybe that makes sense, but the very next line adds RPM support as well.

Perhaps the word just hasn’t gotten out yet.

vbackup claims it can support customizable backup scripts that work alongside its own defaults. It apparently can back up mdadm and LVM data, and archive MBRs. It can handle networked backups too, relying on NFS or scp for remote access. That’s pretty impressive.

My favorite part, of the few parts that I tried, was the setup wizard. For as many other archive tools as I’ve seen in this year-and-a-half adventure, it’s nice to find one that will at least set up a configuration for you, rather than dropping a cryptic configuration file on your lap and tapping its foot while it waits for you to set it up correctly.

In all honesty I didn’t run vbackup completely, and I never got close to restoring anything I did with vbackup. So it may be that in spite of a long list of features, it doesn’t really perform as well as imagined.

If that’s the case, I leave it to you to resolve. 😉

rsync: Needs no introduction

I don’t think there’s much I can say about rsync that isn’t already common knowledge or preaching to the choir.

kmandla@6m47421: ~/downloads$ rsync -ah --progress source/ destination/
sending incremental file list
            925 100%    0.00kB/s    0:00:00 (xfr#1, to-chk=9/11)
            835 100%  815.43kB/s    0:00:00 (xfr#2, to-chk=8/11)
            892 100%  871.09kB/s    0:00:00 (xfr#3, to-chk=7/11)
            901 100%  879.88kB/s    0:00:00 (xfr#4, to-chk=6/11)
            893 100%  872.07kB/s    0:00:00 (xfr#5, to-chk=5/11)
            900 100%  878.91kB/s    0:00:00 (xfr#6, to-chk=4/11)
            886 100%  865.23kB/s    0:00:00 (xfr#7, to-chk=3/11)
            832 100%  812.50kB/s    0:00:00 (xfr#8, to-chk=2/11)
            883 100%  862.30kB/s    0:00:00 (xfr#9, to-chk=1/11)
            888 100%  433.59kB/s    0:00:00 (xfr#10, to-chk=0/11)

kmandla@6m47421: ~/downloads$ 

rsync is, was, and has been one of my favorite tools for a very long time, and short of single-file, one-target copies, it’s the one thing I use to copy, back up, synchronize or just plain double-check.

rsync works across networks, across directories and within file trees. It gives clean progress indicators, can run completely silently, can delete files that aren’t in the source folder, and with --existing can skip creating files that aren’t already in the destination. Just tell it what you want.

I think that will do for now. Like I said at the start, if you know it, there’s no point in me going on about it. And if you don’t … waste no time in trying it out. 😉

rdup: Still more backup options

Today seems to be backup day. I suppose given that it’s April Fools’ Day, I should probably take that as a hint.

rdup is next, and as I understand it, rdup and its brethren hope to keep the backup chore as close as possible to a simple, Unixy way of doing things.

rdup-simple is the one-shot script to perform a backup. A folder tree is probably the easiest way to show how it behaves.


rdup-simple pushes archives into a nested format, following the date. “rdup-simple” is right.

I am a little foggy on the footwork involved, but if I understand it right, rdup-simple incorporates rdup itself, which is capable of generating and tracking lists of file-by-file changes, and tackling only those.

There are a couple of other tools involved, and when you whip them all together and give them a source directory, you get rdup-simple and the above results.

It’s apparently possible to use any of the incorporated tools by itself, and that’s where the details get a little fuzzy for me. I leave it to you to figure out.

I like rdup-simple for being, well, simple 🙄 but also for offering the opportunity to get my hands really dirty. It’s a shame I don’t have more intricate backup needs; I have a feeling I would like to get into the details. 😐

rdup is in Debian and AUR; the AUR script points to the wrong location for the source package, but will build if provided with the source tarball. Just so you know. 😉

rdiff-backup: Mirrored, with increments

I charged into rdiff-backup thinking it would be only a little more complex than rdiffdir was yesterday. Luckily I wasn’t too far off the mark.

rdiff-backup can make backups while conserving bandwidth, which is probably a great idea on the whole. It also makes incremental backups, and the home page promises file recovery over previous backups too.

I didn’t delve that far into it, but I do have a little to show for my effort:


My hope there was to show that rdiff-backup’s product is not only a mirror image of the source, but also includes data on what changed between runs. It might be a little difficult to follow; trust me if it’s not obvious.

Compared to a straight rsync, I can see where this would be preferable, if it conserves bandwidth and can offer access to past backups as well. I usually just refresh my archives with a simple rsync -ah --progress --delete, and there have been times I wished I could step backward once or twice in history.

On the other hand, this is very clean and straightforward, without a lot of the wrangling that I’ve seen in some other console-based backup tools. Given the need — such as a large-scale networked system — I’d definitely think this over as an option. 😉

rdiffdir: A succinct sync

Forgive me if I jump slightly out of order. I wanted to work with rdiffdir today, and I promise to touch on rdiff-backup tomorrow.

Also please forgive me if I don’t have screenshots this time. I think I can adequately explain what’s happening, and rdiffdir isn’t particularly wordy.

I have practical experience with it, albeit a few years out of date. At a time when I quit lugging an ancient laptop back and forth to work to listen to music, rdiffdir made it easy to synchronize my main music archive at home with the remote one at work … without a network connection.

“What witchcraft is this?!” you might howl. I’ll give you the command sequence, and you work out what’s happening. Office machine first:

rdiffdir signature music/ music.signature

Then at home:

rdiffdir delta music.signature music/ music.delta

Carry that back to the office, and …

rdiffdir patch music/ music.delta

And that’s it (or at least what I remember of it). The signature command creates a distinct impression of what’s available on the office machine. The delta command creates a file packed with the changed material from the home machine, and the patch command merges it into the destination at the office again.

It’s very clever, really. What you avoid is rsyncing entire folders to USB drives, then USB drives to destination folders — hopefully saving time, and space on your intermediary drive.

I could see where this would also be useful for completely offline backups, where you want to preserve file arrangements and integrity on one machine with another that is completely disconnected. Which, in this day and age, isn’t a bad idea. 😯

rdiffdir is part of duplicity, which is available in Debian and Arch. Tomorrow, rdiffdir’s ugly kid brother. 😉

By the way, I should mention that everything I know about rdiffdir I learned years ago from this page. Credit where credit is due. 😀