If you need a slick way to reach out and take from the grand repository of data we call The Internet, curl can do it.
Between this and wget, I’ll admit I don’t know the advantage. I notice that some programs prefer one to the other, but I don’t know if that’s just the programmer’s preference, or if there’s a specific reason for it.
curl’s web site says it can handle a lot — and I mean a lot — of different protocols, which might be a clue.
curl is usually a critical tool in almost any distro. I know for example if I try to yank it out of my lean-mean IceWM installation on Arch, I get a HoldPkg warning … meaning it’s hooked into some low-level, primordial packages that are keeping this delicate collection of software afloat.
I don’t know what else to say about curl. I have used it in the past, rarely, to pull something from the Internet on systems that I didn’t want to tax with a full-scale browser.
It worked fine then, and I have no reason to doubt it would work for you. 😉
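If you want to try that browser-free approach yourself, the incantation is short. Here’s a minimal sketch — the URL is only a placeholder, swap in whatever you actually want to fetch:

```shell
# Download a page to a file: -s silences the progress meter,
# -L follows redirects, -o names the local output file.
# https://example.com/ is just a stand-in URL for illustration.
curl -sL -o page.html https://example.com/

# And if you're curious about that long list of protocols the
# curl site brags about, --version prints what your build supports.
curl --version
```

A couple of handy variations: `-O` (capital o) keeps the remote filename instead of you naming one, and `-C -` resumes an interrupted download.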
Curl is what pacman uses to download from the repos. You can configure it to use something else, but curl is included in pacman’s depends array.
Curl does gopher which imho is a damn good reason to use it 🙂
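For anyone who wants to see that in action: a rough sketch, assuming your distro’s curl was built with gopher support (gopher.floodgap.com is a long-running public gopher server, used here just as an example target):

```shell
# curl speaks gopher natively -- this pulls a gopher menu over port 70.
# The "1" in the path is the gopher item type for a directory listing.
curl gopher://gopher.floodgap.com/1/
```

You can check whether your build includes gopher at all by looking for it in the `Protocols:` line of `curl --version`.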