Tag Archives: download

greg: You’re so vain

Everyone named “Greg” out there in the world can now sit up straight and imagine this little program is named in their honor.

[Screenshot: greg]

I was introduced to greg after yesterday’s note about podcastxdl, and in spite of its lack of color and command-action-target input style, I think I like it better than the latter.

Of course, that screenshot isn’t very interesting, but what you see there is a lot of the way greg works. It maintains a list of podcasts and addresses, and you can wrangle them with fairly straightforward actions.

greg add adds a podcast to that list. greg remove drops one off, after you confirm it. greg check sees if anything has been updated, and greg sync synchronizes your local folder with what’s available online. Like I said, it’s fairly straightforward.
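A sample session, based on the actions above, might look like this; the feed name and URL are just placeholders for whatever podcast you follow.

```shell
# Register a feed under a name, then keep it in sync.
greg add somecast http://example.com/feed.xml   # add to the list of podcasts
greg check                                      # see if anything has been updated
greg sync                                       # pull new episodes into your local folder
greg remove somecast                            # drop it off the list, after confirmation
```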

I don’t see anything offhand that disappoints me about greg. I ran into no errors except when I fed it an invalid link, and it warned me that it wasn’t going to work. And aside from the lack of color and lack of an “interface,” it seems to work perfectly without my empty-headed suggestions.

So there’s greg, which we can add to the meager list of podcast aggregators for the console. Now do you see it? “greg”? “aggregator”? Aha. … 😉

podcastxdl: One-shot downloads for your ears

In all the years I’ve spent spinning through console-based software, I haven’t run across many podcast tools. In fact, I can think of only about four. But here’s one you can add to your list, if you’re keeping one: PodcastXDL.

[Screenshots: PodcastXDL]

PodcastXDL works in a similar fashion to podget, which you might remember from a looong time ago. Give PodcastXDL a URL and a file type, and it should parse through the stream and pull down everything that matches.

It can also spit out links, meaning you can use PodcastXDL to supply links to files, rather than download them. There are also command-line options to start or stop at specific points in a feed, which might be helpful for cropping out older files.

I’ll be honest and say I had a few difficulties working with PodcastXDL, most notably that it didn’t accept my target download directory. If you run into issues with PodcastXDL and nothing seems to be arriving, I would suggest leaving off any -d argument.
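A quick sketch of how an invocation might look, going by the description above; the exact argument names and order are assumptions on my part, so check the program’s own help output for the real spelling.

```shell
# Hypothetical: feed URL plus a file type to match.
podcastxdl http://example.com/feed.rss mp3

# If nothing seems to be arriving, try again without a -d download
# directory argument, as noted above.
```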

Other than that small hiccup, PodcastXDL did what it promised, and I ran into no major issues. It has good color, plenty of options and has seen updates within the past month or so, if you shy away from dated software.

If you need something quick and one-shot for podcast downloads, this could work for you and is better looking than podget was. If you’re looking for something more comprehensive and with more of an interface, stick with podbeuter.

groove-dl: Jumping the shark

I was tempted to skip over groove-dl because my list of stream ripper tools is starting to devolve into a tool-per-service array, and when things become discrete and overly precise, I start to fall toward the same rules that say, “no esoteric codec playback tools.”

[Screenshot: groove-dl]

I can’t complain too loudly though, because things like gplayer and soma are past titles that were more or less constrained to one site or service, and suddenly chopping off a portion of The List wouldn’t be fair.

But it wouldn’t be a terrible disservice, since most of what I was able to discover about groove-dl is encapsulated in that screenshot. Follow the command with a search string, and groove-dl will return a list of matches and the option to download a song.

Very straightforward, but also very rudimentary. Beyond the first 10 results, there’s no apparent way to page through the rest of the search. groove-dl itself doesn’t have any command flags that I could find; in fact, using -h or --help just pushed those strings through as search terms. Entering a blank line brings groove-dl to a halt. Entering an invalid character causes a python error message. And yet entering a number beyond the list (like 12 or something) starts a download of some unidentified tune that matched your search but wasn’t shown on screen. Go figure. :\
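The whole workflow, such as it is, fits in two steps; the search string here is only an example.

```shell
# Search string follows the command; results come back as a numbered list.
groove-dl "some search term"

# At the prompt, pick by number. Stick to the numbers actually shown on
# screen: as noted above, anything beyond the list may start a download
# of an unlisted match, and -h / --help are treated as search terms.
```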

groove-dl will allow you to pick multiple targets though, and does use a generic but informative download progress bar to follow your selections. I can’t complain about that. And I see that there is a graphical interface, and it may be that there are more functions available to you from that rendition than in the text-only interface.

But overall, with such a narrow focus, a narrow field of options and a wide array of ways to confound it, I think there might be other, better utilities around for pulling tracks from Grooveshark.

groove-dl is in AUR but not Debian. If you try to install it, you’ll also need python-httplib2, which wasn’t included in the PKGBUILD. Happy grooving. 😉

driftnet: Dutifully duplicating

I’ve been tinkering with driftnet over the past day or so, in a little experiment born out of a suspicion that a web site was preloading images before a link was clicked. It’s completely out of context for this site, but it did introduce me to another console tool.

[Screenshot: driftnet]

Mostly I want to keep a note of driftnet here, because I have a feeling I will want to use it again in the future.

And to be honest, as far as driftnet’s console output, there isn’t much to see. In its “default” form, driftnet sends its findings to a viewer window, which suggests it is more intended for a graphical audience anyway.

But it does have an “adjunct” mode that omits that. Instead it keeps a running list of images it senses, and otherwise follows its standard operating procedures. Armed with that much function, you could make a case that it has a nongraphical format as well. (It supposedly can also sense audio files that are transferred, but I didn’t test that.)
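The two modes described above map onto driftnet’s flags roughly like this; the interface name is whatever yours happens to be, and the audio flag is the part I didn’t test.

```shell
# Default form: captured images go to a viewer window.
driftnet -i wlan0

# "Adjunct" mode: no window, just stash files in a directory and keep a
# running list of what it senses.
driftnet -i wlan0 -a -d /tmp/drift

# Supposedly it can also extract streamed audio (untested here).
driftnet -i wlan0 -s
```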

And as you can see in that wide and spacious screenshot above, it does a good job grappling with images that pass through an interface, and stashing them for your later perusal.

Of course, the obvious uses for driftnet would be threefold: (1) to keep a local copy of images that your machine retrieves, (2) to access images that are otherwise unsave-able from a browser, or (3) to later accuse some miscreant of abusing their Internet access privileges by requesting images that are inappropriate. 😡

There may be other applications; however you use it, in its console-only format it should be lightweight enough to run in a spare tty, and duly make duplicates of what activity transpires.

driftnet is in Debian-based distros as you can see above. It’s also in AUR, but neither the GTK nor the Debian patch version would build for me. I didn’t work too hard to get an Arch copy though; it may be acceptable just to hijack the binary from Debian and run from there. 😉

gmail-attachment-downloader: You don’t want to know

Yesterday was the last work day of the month, and in my job that’s both a blessing and a curse, since it’s extraordinarily hectic, but it’s also payday. So my apologies for missing a day, but that job pays, and this one doesn’t. 😉

To complicate things I got two tips via e-mail, one from Rashid and one from Lewis, mentioning gmail-attachment-downloader. I like to check things before I add them to the list, and at first glance it looked like a simple python script that scrapes through attachments in your account, and gives you a local copy.

That’s true, but I should be clear: It downloads attachments. All attachments. Every last one. From the beginning. Of all time. 😯

So while I don’t have much to show for gmail-attachment-downloader, I do have about 10 years of junk to sort through as a result.

[Screenshot: gmail-attachment-downloader]

Aside from that warning, there are some other notes I should offer.

I ran into errors when I tried gmail-attachment-downloader with stock python in Arch; python2 appears to be the preferred framework here. Aside from that, I needed no peculiar dependencies. It takes no flags or options.

I gave my account name with the @gmail.com suffix; the first time through I tried with only the prefix, and it wasn’t as successful. That I blame on GMail though, since I know it tends to want full addresses as “user names.”
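Since it takes no flags or options, running it is about as simple as it gets; I’m assuming here that the script carries the project’s name, so adjust for whatever the file is actually called.

```shell
# Run under python2, not python 3, and answer the prompts.
# Give the full address, @gmail.com suffix and all.
python2 gmail-attachment-downloader.py

# Then walk away: it pulls every attachment, from the beginning of time.
```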

As you can see, gmail-attachment-downloader is clever enough to avoid name collisions, and will skip over files that are identical and rename files that are similar. I don’t know if that means it is performing some sort of hash check or if it is just looking at file size. Either way is fine with me, but if you have a better idea, talk with the author.

My only suggestion for an improvement would be some sort of date stamping addition. Pulling down years of stashed .config files is fine, but without preserving the original date of the message, or perhaps prefixing the name with the original date, everything is just swirled together.

And I suppose I should mention — again — that this is an all-or-nothing adventure. There’s no way (yet) to prompt to download a file, screen messages and pull down attachments by filter, or otherwise control the product. Start it up, set it spinning, and come back a few hours later.

And then spend the next day or two wondering what the context was for the half-dozen Anpanman wallpaper images buried somewhere in your account. Did I really e-mail those to myself … ? 😕

dosage: Get yours daily

It seems like a very long time since dailystrips, the comic downloader that had too many years between it and the current generation of comic hosting sites. dailystrips tried hard but as best I could tell, was unlikely to ever recover its 2003-era glory.

dosage, on the other hand, seems to have a firm grasp of The Way Things Are Now.

[Screenshot: dosage]

dosage takes the name of the comic as a target, and dutifully downloads the image at your command. It also archives those targets in a folder tree, meaning after you start collecting images, dosage only needs one abbreviated command to update your entire collection.

It’s a good system and lends itself to the process. To add to that, you can attach target dates to dosage commands, and retrieve specific issues. Or add the -a flag, and pull down everything from a date to present. And retrieve “adult” comics, with a specific flag.

Supposedly dosage can retrieve around 2000 comics from their respective hosts, and I can vouch for two or three I really didn’t think it would know, but it grabbed quite willingly. If you want to test its ability, you can feed it the --list flag, and see a giant list of what it knows, sent straight to your $PAGER.
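The workflow described above comes down to a handful of commands; “xkcd” is just a stand-in for whichever comic you actually follow, and the update shorthand is my reading of its help, so verify it against dosage’s own documentation.

```shell
dosage xkcd          # fetch the latest strip into the folder tree
dosage -a xkcd       # pull down everything up to the present
dosage @             # abbreviated command: update your entire collection
dosage --list        # giant list of known comics, sent to your $PAGER
```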

[Screenshot: dosage --list]

I see where dosage flags multi-language comics with their translations, and so if you’re looking for something in another tongue, dosage may be able to help you.

Compared with dailystrips, dosage seems to have better access and better retrieval skills. Of course, that’s not really fair since dailystrips hasn’t seen much activity over the last decade.

dailystrips did have the option to build primitive HTML pages and plant your comics in them though, and while I do see something similar in dosage, it took me a few tries to build it correctly, and it seemed rather finicky if it had already built a file.

dosage is quite useful and if you’re a fan of comics — printed or electronic — it’s a must-have tool. And the beauty of dosage may be that it doesn’t require you to live in a graphical environment, since it’s primarily the downloader and organizer, and not the viewer.

And what should you do for a viewer? Well, that’s something we could review. … 😉

wiki-stream: Less than six degrees of separation

I didn’t intend for there to be two Wikipedia-ish tools on the same day, but one good wiki-related utility deserves another. Or in this case, deserves a gimmick.

Josh Hartigan’s wiki-stream (executable as wikistream) tells you what you probably already know about Wikipedia: that the longer you spend daydreaming on the site, the more likely you are to find yourself traveling to oddball locations.

[Screenshot: wiki-stream]

You might not think it possible to travel from “Linux” to “physiology” in such a brief adventure, but apparently there are some tangential relationships that will lead you there.

I don’t think Josh would mind if I said out loud that wiki-stream has no real function other than to show the links that link between links, and how they spread out over the web of knowledge. Best I can tell, it takes no flags, doesn’t have much in the way of error trapping, and can blunder into logical circles at times.

But it’s kind of fun to watch.

wiki-stream is in neither Arch nor AUR nor Debian, most likely because it’s only about a month old. You can install it with npm, which might be slightly bewildering since the Arch version placed a symlink to the executable at ~/node_modules/.bin. I’m sure you can correct that if you know much about nodejs.
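Installation via npm and a test spin might look like this; the PATH adjustment covers the symlink oddity mentioned above, and I’m assuming it takes a starting article as its argument, since the trip shown above set out from “Linux.”

```shell
npm install wiki-stream

# If the executable lands in ~/node_modules/.bin, put that on your PATH.
export PATH="$HOME/node_modules/.bin:$PATH"

wikistream Linux     # watch the chain of links wander off from "Linux"
```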

Now the trick is to somehow jam wiki-stream into wikicurses, and create the ultimate text-based toy for time-wasting. … :\

wikicurses: Information, in brief

If you remember back to wikipedia2text from a couple of months ago, you might have seen where ids1024 left a note about wikicurses, which intends to do something similar.

[Screenshot: wikicurses showing the “Linux” page]

Ordinarily I use most as a $PAGER and it might look like most is working there, but it’s not. That’s the “bundled” pager, with the title of the wikipedia page at the top, and the body text formatted down the space of the terminal.

wikicurses has a few features that I like in particular. Color, of course, and the screen layout are good. I like that the title of the page is placed at the topmost point, and in a fixed position. Score points for all that.

Further, wikicurses can access (to the best of my knowledge) just about any MediaWiki site, and has hotkeys to show a table of contents, or to bookmark pages. Most navigation is vi-style, but you can use arrow keys and page up/down rather than the HJKL-etc. keys.

Pressing “o” gives you a popup search box, and pressing tab while in that search box will complete a term — which is a very nice touch. There are a few other commands, accessible mostly through :+term formats, much like you’d see in vi. Press “q” to exit.

From the command line you can feed wikicurses a search term or a link. You can also jump straight to a particular feed — like Picture of the Day or whatever the site offers. If you hit a disambiguation page, you have the option to select a target and move to that page, sort of like you see here.
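Putting the startup options and hotkeys above together, a session looks something like this; the flags for feeds and for other MediaWiki sites exist per the description, but their exact names are ones I’d confirm against the help output.

```shell
# A search term or a link, straight from the command line.
wikicurses "Linux"

# Once inside: vi-style keys or arrows and page up/down to move,
# "o" for the popup search box (tab completes a term),
# ":"-prefixed commands much like in vi, and "q" to exit.
```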

[Screenshot: wikicurses on a disambiguation page]

That’s a very nice way to solve the issue.

There are a couple of things that wikicurses might seem to lack. First, short of re-searching a term, there’s no real way to navigate forward or back through pages. Perhaps that is by design, since adding that might make wikicurses more of an Internet browser than just a data-access tool.

It does make things a little clumsy, particularly if you’ve “navigated” to the wrong page and just want to work back to correct your mistake.

In the same way, pulling a page from Wikipedia and displaying it in wikicurses removes any links that were otherwise available. So if you’re tracking family histories or tracing the relationships between evil corporate entities, you’ll have to search, read, then search again, then read again, then search again, then. …

But again, if you’re after a tool to navigate the site, you should probably look into something different. As best I can tell, wikicurses is intended as a one-shot page reader, and not a full-fledged browser, so limiting its scope might be the best idea.

There is one other minor point I would suggest: wikicurses might offer the option to use your $PAGER, rather than its built-in format. I say that mostly because there are minor fillips that a pager might offer, like page counts or text searching, that wikicurses doesn’t approach.

But wikicurses is a definite step up from wikipedia2text. And since wikicurses seems to know its focus and wisely doesn’t step too far beyond it, it’s worth keeping around for one-shot searches or for specialized wikis that don’t warrant full-scale browser searches. Or for times like nowadays, when half of Wikipedia’s display is commandeered by a plea for contributions. … 🙄 😡

mps: Not unlike its brother

I was sorely tempted to gloss over mps, because I mentioned mps-youtube way back in September. But I’ve spent a short time with it and I think it’s worthy of mention in its own right.

[Screenshot: mps]

mps sticks very close to mps-youtube in terms of operation and playback; enter a search term at the startup and mps will show a series of results. Cue the number of the track and mps feeds it into mplayer (or mpv), and the standard keys and controls are available to you.

The home page for mps suggests it can also create playlists, search for single tracks or through album lists, download tracks as well as stream, and a few other nifty tricks.

The home page also says the program works with python 2.7 and 3.3, but does not require any python dependencies. I’m a little fuzzy on that, but as a general rule of life, I subscribe to the principle that fewer dependencies are better.

I’ll keep this short since much of what mps does is similar to mps-youtube, and rehashing the features of one isn’t necessarily an endorsement of the other. If you liked the way mps-youtube worked — and I did, quite a lot as a matter of fact — mps is going to be familiar and enjoyable. Try one, try the other. 😉

nzbget: Heretofore unbeknownst to me

I had honestly never heard of .nzb files until this morning, when nzbget popped out of my vimwiki folder as the choice of the Fates for today.

[Screenshot: nzbget]

I make my apology for not knowing about .nzb files as a corollary to my relative ignorance about Usenet in general. I think until I had configured slrn to work properly, that entire portion of the Internet was really just a gray area for me.

So .nzb files didn’t appear on my radar until the end of 2014, which I suppose should embarrass me. I do like how nzbget handles them though.

Good use of color. Has an arrangement like most or mutt or some other console applications. Expands or contracts to fit your terminal size. The major key commands are on the screen and will update to show your choices at any given moment.

Configuration was a little tetchy, only because nzbget wants a .nzbget file (with some alternatives) as a configuration before it will start. I just gave it a blank file, a la touch .nzbget, and it started well enough after that.
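For the record, the workaround above amounts to this; the server/frontend split in the second half is my reading of nzbget’s help text, so verify the switches with nzbget -h before trusting them.

```shell
# nzbget refused to start without a configuration file; a blank one
# was enough to get going here.
touch ~/.nzbget

nzbget -s &        # start the download engine in server mode
nzbget -C          # attach the curses frontend shown above
```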

As a bonus, if you’re one of those people who abhors stale software, I see by the home page that the stable version is dated November 27, and a testing version is only three days newer than that. So you rest easy in the knowledge that your software is only days, if not weeks, old. 🙄

Debian versions for Wheezy are quite a bit older, but the Arch version in community seems to be the newest.

I am curious to see what is available through .nzb files that I can’t necessarily get through .torrents or traditional downloads. Owing to my meager experiences with Usenet readers, I must admit I have my doubts. … 😕