Category Archives: Linux

All my thoughts on Linux

Place shifting action 2

I’ve been working on a command-line oriented TiVo2Go downloader so I can automate getting items off my TiVo, or at least do it when I’m on the road via ssh (and then kick off that other script…)  I’ve written a quick and dirty TiVo library, and a sample script called TiVo2Disk which uses the library.

On its own, the TiVo library stuff I whipped up will just download the content from the TiVo, still locked up in TiVo’s DRM.  If you pair it with tivodecode (like the sample script does), you can get the video as an MPEG-2 stream.  Beware, HD content is HUGE.
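If you only want the DRM-stripping half, tivodecode itself is basically a one-liner; something along these lines should do it (the MAK is your TiVo’s Media Access Key, and the file names here are made up):

tivodecode --mak 1234567890 --out show.mpg show.TiVo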

I’ve only tested with my Series 3, and I’ve run it under both OS X and Linux, but it should work elsewhere too.  The UI on the sample script is pretty bad, but it’s an early version.  I’m interested in any feedback, patches, etc. anyone might have.  I still need to do some rdoc as well.

This works for me, but I’d like it to be more useful for more people.  In any case, feel free to check out tivo-ruby-0.1.tar.gz.

CMake so far

I’ve been investigating cmake at work as a better build system for our cross-platform C-based projects. I’m thinking about starting up a third one, so now is the perfect time to really go after this: on one project we have a build system per platform, and on the other we have two build systems. When you mix in wanting to make universal binaries on OS X, it’s yet another wrinkle. cmake was recently chosen by KDE to be the build system for KDE4, since KDE4 will fully support Windows and OS X, as well as the other unices via X. I used a small convenience library as the test piece since it was only two files big, but it requires at least two external libraries.

Some pros for cmake that I’ve found so far (compared to what we’ve been doing):

  • Supports a large number of build environments on the different platforms. On Windows it sports 11 different build environments, OS X 3, and Linux 2. For OS X and Linux you only really need those two or three, but on Windows it supports 4 different versions of Visual Studio as well as Borland, Watcom, and gcc.
  • Takes care of the flags needed to build executables and libraries on those supported platforms.
  • Does out-of-source builds on Windows.
  • Tracks dependencies on all platforms without an external application.
  • Does search-and-replace on things like .in files without having to call out to external applications (a minimal sketch follows this list).
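To give a feel for it, here’s roughly what a CMakeLists.txt for a small two-file library like my test piece looks like (the names are made up for the example, not my actual project):

project(mylib)
cmake_minimum_required(VERSION 2.4)
# substitute values into config.h the way a .in file would be processed
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/config.h.in ${CMAKE_CURRENT_BINARY_DIR}/config.h)
include_directories(${CMAKE_CURRENT_BINARY_DIR})
# build the two files into a static library; out-of-source builds come for free
add_library(mylib STATIC file1.c file2.c)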

Some cons I’ve found so far:

  • The documentation on the web page is pretty horrid. The book is pretty bad too, especially when compared to other technical books I’ve read recently, but it’s much better than the website. The FAQ is helpful when combined with the book and some experimentation.
  • Doesn’t really have the concept of convenience libraries. This will result in common files being built multiple times. I don’t like this, but it’s not fatal.
  • The CMakeCache is getting in my way more than being a help, but that might be a side effect of my learning process right now.
  • I haven’t yet figured out how to make it query the person compiling the app if it can’t find something. This may not be possible. At the very least I want to make it bitch and bomb out if a required dependency isn’t there. I’m thinking I just haven’t found it yet.

This isn’t an exhaustive review yet, but I wanted to get down what was in my mind before I forgot. I had just found the convenience library thing and that’s what inspired the post. My next step is to move a full existing project over to being built with cmake. This is a library that depends on expat, boost, curl, antlr, and (optionally) swig. Should be a good challenge.

[Update 11:58: I found an answer to my “bomb out if the dependencies are missing” question. Thanks, devchannel!]

[Update 2:51: No, this isn’t here just for g0ff. Turns out the latest cmake has modules to find Java, Doxygen, Boost, Curl, Expat, and Swig already. It looks like only custom items for antlr and cppunit will be needed. Also, it only ever wants to link against dynamic libraries, not static ones. That’s a PITA.]
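For what it’s worth, the “bomb out” behavior appears to just be the REQUIRED keyword on find_package; a sketch of what I expect the real project’s CMakeLists.txt to contain:

# fail the configure step immediately if a hard dependency is missing
find_package(EXPAT REQUIRED)
find_package(CURL REQUIRED)
find_package(Boost REQUIRED)
# optional pieces can be probed without killing the build
find_package(SWIG)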

[Update 5:52: Okay, the convenience library thing is upsetting.  The output of what I was working on is a static library, and there are some example command line tools that link against it.  From my reading of the cmake stuff, I should just add the library source files to the targets for the executables being created.  The problem with this is that for n example programs, I’m compiling librets n times.  This doesn’t seem very optimal.]

OS X moving unix forward

For every stupid hard-coded Steve Jobism in OS X1, there are some really awesome unix extensions I’d like to see elsewhere.  The big one for me today has to do with DNS handling.

I’ve been playing with OpenVPN to get access to my network at home.  Since I have a MacBook Pro from work, that’s been my endpoint client.  I’ve been using Tunnelblick as my OpenVPN client to connect to the OpenVPN server on my Linux box (installed via DAG’s RPM repository).  One thing that bugged me was how to get DNS working so I can see my internal home DNS without breaking access to work’s internal DNS.  If I were using a Linux laptop, I think my solution would have involved running a local instance of named with some whacked-out caching-only config that refers to different DNS servers.  Hardly dynamic, and a giant PITA to get going.

I was curious about how to make this go, though, and what general solutions people had, when I came across a post by Mike Erdely titled OpenVPN + DNS + OS X.  That is exactly what I wanted to do!  As a bonus, he’s even using Tunnelblick.

Mike shows how OS X’s DNS resolver uses an /etc/resolver directory to get additional per-domain configuration information, as opposed to the blanket /etc/resolv.conf that unix users have come to know.  To get the Mac to resolve kgarner.com using my domain’s internal DNS server, I just need to create /etc/resolver/kgarner.com and put nameserver 192.168.1.10 inside of it.  This directs OS X’s resolver to ask 192.168.1.10 for any kgarner.com query.  He also shows how to flush OS X’s DNS cache via lookupd, so if I had already hit any of my public kgarner.com IPs, the resolver will send me to the private IP instead of the cached public one.  There are also two simple scripts that you can integrate with OpenVPN to add and remove the /etc/resolver entry as needed.
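Concretely, the static version of the setup is just this (run as root or with sudo; 192.168.1.10 is my internal DNS server):

# create the per-domain resolver entry
mkdir -p /etc/resolver
echo "nameserver 192.168.1.10" > /etc/resolver/kgarner.com
# flush the cache so already-resolved kgarner.com names get re-queried
lookupd -flushcache
# to undo it, just remove the file again
rm /etc/resolver/kgarner.com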

The fact that OS X’s resolver will check for entries in /etc/resolver first is the type of smart unix extension I’d like to see more of.  There’s no reason Linux’s resolver can’t do something like this.  It would make VPNs easier to implement, and it doesn’t seem that hard to add to the resolver code.

Another example of OS X moving stuff forward is launchd, the all-in-one replacement for init, cron, and at.  I’m slowly starting to agree that init, cron, and at are all sides of the same coin.  Don’t get me wrong, launchd has some issues, but the idea is a step in the right direction, especially for machines that will sleep.  A lot of what OS X has done to make unix better is aimed at mobile, sleep-capable devices like laptops.

1 Ask MARK for a laundry list of them…  🙂 

color grep

Every once in a while I come across a feature of a piece of software, generally a small utility, that I hadn’t known about and that shows immediate value.  Today Jon showed me the --color flag for GNU grep.  It uses color to highlight the term you were searching for in each line returned.  For example:

# grep --color=auto -i metadata todo.txt
Metadata Functions to Move:
   MetadataView…Make sure only our indexed items are passed up.

It’s a very simple thing, but one of those I’m surprised I haven’t been using.  I now have a shell alias for that.  See the grep documentation for more information.
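The alias is about as simple as they come (bash syntax; adjust for your shell of choice):

alias grep='grep --color=auto'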

[Update 6/2: linux.com has a great article on GNU grep’s new features which talks about the color.  One that I’m particularly excited about is the ability to use Perl-style regular expressions.]
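As a quick sketch of the Perl-style matching, something like this should work as long as your grep was built with PCRE support (the option is still considered experimental):

# grep -P -i 'meta\w+' todo.txt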

Concatenate PDFs

I often like to print out many web pages to read on the train.  To not waste paper, I like to print them 2-up and double-sided.  If the printer supports it, I also like to staple the pages.  On Linux, I use Firefox to print to PostScript, then use a2ps to combine the PS files, 2-up them, and set short-edge duplexing.  I’d then manually staple the result, as there was no good way to tell the print center at work to staple it.  I’d use a command line similar to this:

a2ps -Eps -Afill -stumble 1.ps 2.ps 3.ps 4.ps

I tried this approach under OS X, but the problem is that the PostScript generated on OS X is so detailed that it takes forever to process for printing, on the order of 2 minutes of processing per article.  Since PDF is the spooling format for printing in OS X (coming soon to Linux), I thought I’d look to see if there was an easy way to concatenate PDF files so I could then have the regular printing interface (via Preview) handle the 2-up, double-sided, stapling goodness.

After much searching around I found this article and later this web page.  Combining a bit from both, I came up with the following, which works really well in my few days of testing.

texexec --pdf --paper=letter --pdfarrange --result all.pdf 1.pdf 2.pdf 3.pdf 4.pdf

It runs really quickly (especially in comparison to the a2ps method), and then I just open all.pdf and print from there.  It requires that you have teTeX installed.  On both Linux and OS X I had this installed as part of the prerequisites for docbook and doxygen.

Things we have relearned today

Regular backups are your friend. Also, if you can’t normally back something up (e.g. data in OpenLDAP’s backend), do a regular dump and back that up.

rsync/unison/scp data off your co-loc to a local machine, and back it up again, just for good measure.
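For the LDAP case, a sketch of what I mean (hostnames and paths here are made up):

# dump the LDAP database to a flat file that can be backed up normally
slapcat -l /var/backups/ldap-backup.ldif
# pull the backups off the co-loc box to a machine at home, and back those up too
rsync -az colo.example.com:/var/backups/ /srv/backups/colo/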

The new drive is the drive most likely to die first.

Losing /var really fucking sucks.

Recreating most of the information in ldap out of mail logs is cool, though.

Drinking doesn’t solve sysadmin nightmares, but it makes you feel good while you’re having the nightmare.

Really Slick Screensavers for FC4

For those who haven’t seen them, the Really Slick Screensavers really are some nice eye candy. A number of years ago, Tugrul Galatali ported them to Linux, primarily for use with XScreenSaver.

I’ve been pretty busy the past few months, so I had been living without the RSS on my desktop since my move to FC4. I finally had some time to kill today, so I set about getting them up and running, in RPM form. Since the last time I installed them (version 0.7.4), Tugrul has updated them to version 0.8.0. The spec file I used needed to be updated to match 0.8.0, to fix a small bug in the 0.8.0 build system, and to account for differences in FC4. After a few moments of screwing around, I’ve got a spec file that works, a patch that works, and a built rpm.
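If you want to rebuild it yourself, the spec-file dance is roughly this (file names are from my setup and may differ for you):

# put the 0.8.0 tarball and the patch in your rpm SOURCES directory,
# drop the spec file in SPECS, then build the binary rpm
rpmbuild -bb rss-glx.spec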

This spec file is by no means perfect, as I’ve been learning how to build spec files on the fly, but it’ll work in most cases. I haven’t loaded it up with BuildRequires, but if you’ve got a modern desktop like GNOME or KDE installed and their devel packages, you should be in good shape.

I have to give credit to whoever originally put the spec file together, but I forget who that was. I’m fairly certain I got it from Tugrul’s page, but I can’t seem to find it again.

self-made HAL and iPod problem on FC4

This was originally going to be a post bitching about how much of a pain in the ass FC4 had been for me when compared to its older brother, FC3. However, it turns out the problem I was having that almost sent me back to FC3 was of my own doing.

When FC3 first came out, it didn’t have support for Firewire built into its kernel, so I’ve been hand-compiling a kernel ever since so I could use my iPod. When I installed FC4, I just used my .config and built a kernel for FC4. This was the source of my undoing and cost me a few hours of debugging.

The showstopper that almost sent me back to FC3 was that I couldn’t get my Firewire-based iPod working with FC4. I couldn’t find any reports of this problem, so I figured it had to be something unique to me. I finally found this series of posts from the HAL mailing list, which pointed out that it was a kernel problem.

I rebooted and dropped back to the latest Fedora-supplied kernel and, lo and behold, it worked. So it turns out that the way Firewire reports the device type in the kernel changed for the iPod. It’s become more specific, which is not a problem in itself, but it introduced a new type that the version of HAL that comes with FC4 was unaware of.

The kernel used to report the iPod as (emphasis mine):

Vendor: Apple     Model: iPod              Rev: 1.53
Type:   Direct-Access                      ANSI SCSI revision: 02

It now reports it as:

Vendor: Apple     Model: iPod              Rev: 1.53
Type:   Direct-Access-RBC                  ANSI SCSI revision: 02

To fix this, I took the patch from the first post I linked to above and the hal src rpm from FC4, merged them together, and rebuilt. Boom, it works as it should. I’ve put a diff of my changes to hal.spec up, in case anyone else wants to redo this.
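If you want to redo it, the rough sequence looks like this (the exact package and patch file names will differ; treat these as placeholders):

# install the FC4 hal source rpm, patch the spec, and rebuild the binary rpm
rpm -ivh hal-*.src.rpm
cd /usr/src/redhat/SPECS
patch -p0 < hal.spec.diff
rpmbuild -bb hal.spec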