
greg: You’re so vain

Everyone named “Greg” out there in the world can now sit up straight and imagine this little program is named in their honor.

[screenshot: greg]

I was introduced to greg after yesterday’s note about podcastxdl, and in spite of its lack of color and command-action-target input style, I think I like it better than the latter.

Of course, that screenshot isn’t very interesting, but what you see there is a lot of the way greg works. It maintains a list of podcasts and addresses, and you can wrangle them with fairly straightforward actions.

greg add adds to that list. greg remove drops one off, after you confirm it. greg check sees if anything has been updated, and greg sync synchronizes your local folder with what’s available online. Like I said, it’s fairly straightforward.
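For the record, a first session might look something like the lines below. The feed name and URL are placeholders of mine, and the exact arguments each subcommand expects are from memory, so trust greg’s own help output over this sketch.

    # placeholders throughout; greg keeps the list of feeds for you
    greg add somecast http://example.com/feed.xml    # register a feed under a name
    greg check -f somecast                           # ask whether anything new is waiting
    greg sync                                        # pull down whatever is pending
    greg remove somecast                             # drop it again, after a confirmation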

I don’t see anything offhand that disappoints me about greg. I ran into no errors except when I fed it an invalid link, and it warned me that it wasn’t going to work. And aside from the lack of color and lack of an “interface,” it seems to work perfectly without my empty-headed suggestions.

So there’s greg, which we can add to the meager list of podcast aggregators for the console. Now do you see it? “greg”? “aggregator”? Aha. … 😉

podcastxdl: One-shot downloads for your ears

There are not many podcast tools I can mention from the years I’ve spent spinning through console-based software. In fact, I can think of only about four. But here’s one you can add to your list, if you’re keeping one: PodcastXDL.

[screenshots: PodcastXDL]

PodcastXDL works in a similar fashion to podget, which you might remember from a looong time ago. Give PodcastXDL a URL and a file type, and it should parse through the stream and pull down everything that matches.

It can also spit out links, meaning you can use PodcastXDL to supply links to files, rather than download them. There are also command-line options to start or stop at specific points in a feed, which might be helpful for cropping out older files.

I’ll be honest and say I had a few difficulties working with PodcastXDL, most notably that it didn’t accept my target download directory. If you run into issues with PodcastXDL and nothing seems to be arriving, I would suggest leaving off any -d argument.
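With that caveat in mind, a run boils down to something like the line below. The executable name and argument layout here are assumptions on my part; only -d appears in the notes above, so check the script’s own help before leaning on this.

    # assumption: the feed URL and file type are given as plain arguments;
    # per the note above, skip the -d download-directory option if nothing seems to arrive
    podcastxdl http://example.com/feed.rss mp3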

Other than that small hiccup, PodcastXDL did what it promised, and I ran into no major issues. It has good color, plenty of options and has seen updates within the past month or so, if you shy away from dated software.

If you need something quick and one-shot for podcast downloads, this could work for you and is better looking than podget was. If you’re looking for something more comprehensive and with more of an interface, stick with podbeuter.

groove-dl: Jumping the shark

I was tempted to skip over groove-dl because my list of stream ripper tools is starting to devolve into a tool-per-service array, and when things become discrete and overly precise, I start to fall back on the same rules that say, “no esoteric codec playback tools.”

[screenshot: groove-dl]

I can’t complain too loudly though, because things like gplayer and soma are past titles that were more or less constrained to one site or service, and suddenly chopping off a portion of The List wouldn’t be fair.

But it wouldn’t be a terrible disservice, since most of what I was able to discover about groove-dl is encapsulated in that screenshot. Follow the command with a search string, and groove-dl will return a list of matches and the option to download a song.
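The entire workflow is roughly the two steps below; the search terms are arbitrary, and the numbered prompt that follows belongs to groove-dl, not the shell.

    # search, then answer the prompt with the number(s) of the tracks you want
    groove-dl some band name
    # groove-dl prints up to ten numbered matches and waits for your pick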

Very straightforward, but also very rudimentary. Beyond the first 10 results, there’s no apparent way to page through the rest of the search. groove-dl itself doesn’t have any command flags that I could find; in fact, using -h or --help just pushed those strings through as search terms. Entering a blank line just brings groove-dl to a halt. Entering an invalid character causes a python error message. And yet entering a number beyond the list (like 12 or something) starts a download of some unidentified tune that matched your search, but wasn’t shown on screen. Go figure. :\

groove-dl will allow you to pick multiple targets though, and does use a generic but informative download progress bar to follow your selections. I can’t complain about that. And I see that there is a graphical interface, and it may be that there are more functions available to you from that rendition than in the text-only interface.

But overall, with such a narrow focus, a narrow field of options and a wide array of ways to confound it, I think there might be other, better utilities around for pulling tracks from Grooveshark.

groove-dl is in AUR but not Debian. If you try to install it, you’ll also need python-httplib2, which wasn’t included in the PKGBUILD. Happy grooving. 😉

driftnet: Dutifully duplicating

I’ve been tinkering with driftnet over the past day or so, in a little experiment born out of a suspicion that a web site was preloading images before a link was clicked. It’s completely out of context for this site, but it did introduce me to another console tool.

[screenshot: driftnet]

Mostly I want to keep a note of driftnet here, because I have a feeling I will want to use it again in the future.

And to be honest, as far as driftnet’s console output goes, there isn’t much to see. In its “default” form, driftnet sends its findings to a viewer window, which suggests it is more intended for a graphical audience anyway.

But it does have an “adjunct” mode that omits that. Instead it keeps a running list of images it senses, and otherwise follows its standard operating procedures. Armed with that much function, you could make a case that it has a nongraphical format as well. (It supposedly can also sense audio files that are transferred, but I didn’t test that.)
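For my own future reference, that adjunct mode amounts to something like the line below; the interface name and destination directory are placeholders, and sniffing an interface will need root (or the equivalent capability).

    # -a skips the viewer window and just saves what drifts past;
    # -i names the interface to watch, -d says where to stash the images
    sudo driftnet -a -i eth0 -d /tmp/driftnet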

And as you can see in that wide and spacious screenshot above, it does a good job grappling with images that pass through an interface, and stashing them for your later perusal.

Of course, the obvious uses for driftnet would be threefold: (1) to keep a local copy of images that your machine retrieves, (2) to access images that are otherwise unsave-able from a browser, or (3) to later accuse some miscreant of abusing their Internet access privileges by requesting images that are inappropriate. 😡

There may be other applications; however you use it, in its console-only format it should be lightweight enough to run in a spare tty, and duly make duplicates of what activity transpires.

driftnet is in Debian-based distros as you can see above. It’s also in AUR, but neither the GTK nor the Debian-patched version would build for me. I didn’t work too hard to get an Arch copy though; it may be acceptable just to hijack the binary from Debian and run from there. 😉

gmail-attachment-downloader: You don’t want to know

Yesterday was the last work day of the month, and in my job that’s both a blessing and a curse, since it’s extraordinarily hectic, but it’s also payday. So my apologies for missing a day, but that job pays, and this one doesn’t. 😉

To complicate things I got two tips via e-mail, one from Rashid and one from Lewis, mentioning gmail-attachment-downloader. I like to check things before I add them to the list, and at first glance it looked like a simple python script that scrapes through attachments in your account, and gives you a local copy.

That’s true, but I should be clear: It downloads attachments. All attachments. Every last one. From the beginning. Of all time. 😯

So while I don’t have much to show for gmail-attachment-downloader, I do have about 10 years of junk to sort through as a result.

[screenshot: gmail-attachment-downloader]

Aside from that warning, there are some other notes I should offer.

I ran into errors when I tried gmail-attachment-downloader with straight python in Arch; python2 appears to be the interpreter it expects. Aside from that, I needed no peculiar dependencies. It takes no flags or options.

I gave my account with the @gmail.com suffix; the first time, I tried with only the prefix, and it wasn’t as successful. That I blame on GMail though, since I know it tends to want full addresses as “user names.”
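Put together, the whole invocation on my Arch machine was about one line; the script filename below is just whatever you saved it as, since the tool prompts for the address and password itself.

    # no flags, no options: run it under python2 and answer the prompts,
    # giving the full user@gmail.com address when it asks
    python2 gmail-attachment-downloader.py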

As you can see, gmail-attachment-downloader is clever enough to avoid name collisions, and will skip over files that are identical and rename files that are similar. I don’t know if that means it is performing some sort of hash check or if it is just looking at file size. Either way is fine with me, but if you have a better idea, talk with the author.

My only suggestion for an improvement would be some sort of date-stamping addition. Pulling down years of stashed .config files is fine, but without preserving the original date of the message, or perhaps prefixing the name with the original date, everything is just swirled together.

And I suppose I should mention — again — that this is an all-or-nothing adventure. There’s no way (yet) to be prompted before each download, to screen messages and pull down attachments by filter, or to otherwise control the process. Start it up, set it spinning, and come back a few hours later.

And then spend the next day or two wondering what the context was for the half-dozen Anpanman wallpaper images buried somewhere in your account. Did I really e-mail those to myself … ? 😕

dosage: Get yours daily

It seems like a very long time since dailystrips, the comic downloader that had too many years between it and the current generation of comic hosting sites. dailystrips tried hard, but as best I could tell, it was unlikely to ever recover its 2003-era glory.

dosage, on the other hand, seems to have a firm grasp of The Way Things Are Now.

[screenshot: dosage]

dosage takes the name of the comic as a target, and dutifully downloads the image at your command. It also archives those targets in a folder tree, meaning after you start collecting images, dosage only needs one abbreviated command to update your entire collection.

It’s a good system and lends itself to the process. To add to that, you can attach target dates to dosage commands, and retrieve specific issues. Or add the -a flag, and pull down everything from a date to present. And retrieve “adult” comics, with a specific flag.

Supposedly dosage can retrieve around 2000 comics from their respective hosts, and I can vouch for two or three I really didn’t think it would know, but it grabbed quite willingly. If you want to test its ability, you can feed it the --list flag, and see a giant list of what it knows, sent straight to your $PAGER.
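Strung together, a first session might look like the lines below. The comic name is just an example, and the shorthand for updating everything at once is from memory, so double-check it against dosage’s own help before relying on it.

    dosage --list        # the giant list of known comics, sent to $PAGER
    dosage xkcd          # grab the latest strip and note it in the archive
    dosage -a xkcd       # reach back and pull everything available
    dosage @             # afterward, one short command refreshes the whole collection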

[screenshot: dosage]

I see where dosage flags multi-language comics with their translations, and so if you’re looking for something in another tongue, dosage may be able to help you.

Compared with dailystrips, dosage seems to have better access and better retrieval skills. Of course, that’s not really fair since dailystrips hasn’t seen much activity over the last decade.

dailystrips did have the option to build primitive HTML pages and plant your comics in them though, and while I do see something similar in dosage, it took me a few tries to build it correctly, and it seemed rather finicky if it had already built a file.

dosage is quite useful and if you’re a fan of comics — printed or electronic — it’s a must-have tool. And the beauty of dosage may be that it doesn’t require you to live in a graphical environment, since it’s primarily the downloader and organizer, and not the viewer.

And what should you do for a viewer? Well, that’s something we could review. … 😉

wiki-stream: Less than six degrees of separation

I didn’t intend for there to be two Wikipedia-ish tools on the same day, but one good wiki-related utility deserves another. Or in this case, deserves a gimmick.

Josh Hartigan’s wiki-stream (executable as wikistream) tells you what you probably already know about Wikipedia: that the longer you spend daydreaming on the site, the more likely you are to find yourself traveling to oddball locations.

[screenshot: wiki-stream]

You might not think it possible to travel from “Linux” to “physiology” in such a brief adventure, but apparently there are some tangential relationships that will lead you there.

I don’t think Josh would mind if I said out loud that wiki-stream has no real function other than to show the links that link between links, and how they spread out over the web of knowledge. Best I can tell, it takes no flags, doesn’t have much in the way of error trapping, and can blunder into logical circles at times.

But it’s kind of fun to watch.

wiki-stream is in neither Arch nor AUR nor Debian, most likely because it’s only about a month old. You can install it with npm, which might be slightly bewildering, since on my Arch machine the npm install placed a symlink to the executable at ~/node_modules/.bin. I’m sure you can correct that if you know much about nodejs.
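If that doesn’t scare you off, the npm route is short; the package name and the idea of passing a starting article as an argument are assumptions on my part, so adjust as needed.

    npm install wiki-stream                  # package name is an assumption; add -g for a system-wide install
    ~/node_modules/.bin/wikistream Linux     # starting article is just an example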

Now the trick is to somehow jam wiki-stream into wikicurses, and create the ultimate text-based toy for time-wasting. … :\