Move along, nothing to see here.

October 5th, 2013

Not that I post much anyway, but future content will end up on my github/jekyll site instead.  It’s available here: http://sagotsky.github.io/  I may set up a redirect or something when I’m not lazy.

Long story short, I don’t see the point in running wordpress for this site.  I can write what I want in markdown, push with github, and not have to worry about running out-of-date PHP on a shared host.  That alone is reason enough to switch, but I wouldn’t mind closing my dreamhost account either.

I finally used a bash array in a way that made sense and required no googling

December 15th, 2011

I’m not sure what’s wrong with me. In the past, every time I attempted to use an array in bash I reached the conclusion that arrays were the breaking point where I should give up and write a perl script instead. I finally found a case where the bash array was simple and reasonably elegant.

One of the constructs I find myself using on a daily basis is piping find into while read line, and then doing stuff with each line. In this case I was using sed to separate the file name and path to the file and then passing those off to some other functions. Seems like something find should be doing for me, don’t ya think?

Well, find does. Take a look at -printf in man find. Instead of printing the full path, you can tell find to give you whatever output you want. In my case I went with ‘%h %f\n’ which is the path, then the file.

Why does this make a difference? Because read has the -a option, which tells read to use an array instead of a scalar variable. Conveniently enough, array entries are delimited by whitespace.

The end result was that I could run the following:

find "$SEARCH_PATH" -printf '%h %f\n' | while read -a foo ; do
    DIR=${foo[0]}
    FILE=${foo[1]}
    # ...do stuff with $DIR and $FILE...
done

(Don’t literally use $PATH as the variable there – that’s the shell’s executable search path.) It didn’t dramatically change the logic of what I was doing, but it cut out the sed cruft and removed any chance of me screwing up the regexes. And this will work for anything that find’s -printf supports.

Alternatively, if remembering the indices of an array pisses you off, setting DIR=0 and FILE=1 would let you pretend to use an associative array by calling ${array[$DIR]} or something like that. I think I’d try this in a longer script, but it wasn’t necessary today so I haven’t actually tried it.
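That named-index trick would look something like this quick sketch (the printf line fakes find’s '%h %f\n' output so it runs anywhere; the variable names are mine):

```shell
#!/bin/bash
# Named constants standing in for array indices.
DIR=0
FILE=1

# Fake a line of find's '%h %f\n' output: path, space, filename.
printf '%s\n' '/home/sagotsky/Music song.mp3' | while read -a entry ; do
    echo "dir=${entry[$DIR]} file=${entry[$FILE]}"
done
# prints: dir=/home/sagotsky/Music file=song.mp3
```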

Bash alias args

March 15th, 2011

Long ago I learned that bash’s aliases don’t process args and you should just use functions instead. Turns out you can do args. This isn’t the most useful thing in the world because, well, functions. But it’s still cool.

Basically you write an alias that evaluates a subshell via sh. Pass that $1 and $2, but do it in single quotes so they aren’t expanded right away. Your args get tacked onto the end and then it all evaluates. You’re essentially treating sh as a lambda.

Here’s the alias:
alias arg="sh -c 'echo \$3 \$2 \$1' --"

It looks a little different if you check it in a shell, but I assure you it works:

<03:02:50> sagotsky@calculon:~$ alias arg
alias arg='sh -c '\''echo $3 $2 $1'\'' --'
<03:02:54> sagotsky@calculon:~$ arg a b c
c b a
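For comparison, here’s the boring function version (which is what I’d actually use in a script; the function name is just for illustration):

```shell
#!/bin/bash
# Same trick as a function -- args just work, no sh lambda required.
arg_fn() {
    echo "$3" "$2" "$1"
}

arg_fn a b c   # prints: c b a
```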

php’s multi_curl

January 28th, 2011

PHP’s multi_curl has the saddest documentation I’ve ever encountered. What should have been an afternoon project in total (or 10 minutes in bash with xargs -P) ended up taking 2 days to figure out. Now that I think I know how multi_curl works, I can start making it actually do something.
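For reference, the bash-with-xargs version would be roughly this sketch (the url list and output filenames are made up for illustration, and it’s untested against real load):

```shell
#!/bin/bash
# Fetch urls in parallel, at most 3 connections at once.
# Each response lands in a file named after the url (non-alphanumerics become _).
fetch() {
    local url=$1
    curl -s --connect-timeout 10 -o "${url//[^A-Za-z0-9]/_}" "$url"
}
export -f fetch

printf '%s\n' 'http://google.com' 'http://whatismyip.org' \
    | xargs -n 1 -P 3 bash -c 'fetch "$1"' _
```

The -P 3 is doing all the work there: xargs caps the number of concurrent fetches, which is the same connection-limiting the PHP below does by hand.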

Anyway, I’m posting my code because there aren’t enough examples of this kind of code on the web IMO. What this does is grab content from a bunch of hosts. It does this in parallel so that the whole execution time is that of the slowest host. But it also lets you limit how many connections to use at once (meaning that it will be the speed of several of the slowest hosts). This is to prevent PHP from crapping the bed if you open more connections than you have file handles. No, this is not heavily tested, but it’s better than anything I’ve found on google for similar purposes.

I apologize if wordpress drops whitespace and/or messes up funky characters. You can download the actual file here: http://files.sagotsky.com/multicurl.php.txt


<?php

// Report simple running errors
error_reporting(E_ERROR | E_WARNING | E_PARSE);

// urls to grab
$hosts = array(
  'http://google.com',
  'http://whatismyip.org',
  'http://fakedomainthatprobablyisntevenregistered.mil',
  'https://google.com',
  'https://whatismyip.org',
  'https://fakedomainthatprobablyisntevenregistered.mil',
);

$curling = array();  // active curl handles
$done = array();     // store finished host => content
$mh = curl_multi_init();
$is_running = null;
$handle_limit = 3;

do {
  // fill up the curl queue at start of reqs and if any urls have finished
  while (sizeof($curling) < $handle_limit && sizeof($hosts) > 0) {
    $host = array_pop($hosts);
    $curling[$host] = curl_init($host);
    curl_setopt($curling[$host], CURLOPT_CONNECTTIMEOUT, 10);
    curl_setopt($curling[$host], CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $curling[$host]);
  }

  // run curl
  curl_multi_exec($mh, $is_running);

  // pop finished hosts.
  // curl_multi_remove_handle also removes the downloaded data, so we stash it for later.
  curl_multi_select($mh);
  $info = curl_multi_info_read($mh);
  if ($info && $info['msg'] == CURLMSG_DONE) {
    // copy response
    $host = curl_getinfo($info['handle'], CURLINFO_EFFECTIVE_URL);
    $http_code = curl_getinfo($info['handle'], CURLINFO_HTTP_CODE);

    /* only keep urls that actually answered */
    if ($http_code) {
      $done[$host] = curl_multi_getcontent($info['handle']);
    }
    #print_r(curl_getinfo($info['handle']));

    // remove from queue and close handles.
    curl_multi_remove_handle($mh, $curling[$host]);
    curl_close($curling[$host]);
    unset($curling[$host]);
    #echo "finished: $host\n";
  }

// keep looping while anything is still transferring or queued --
// checking only $is_running can bail early with urls left in the queue
} while ($is_running > 0 || sizeof($curling) > 0 || sizeof($hosts) > 0);

foreach ($done as $host => $content) {
  echo "\n$host\n$content\n";
}

curl_multi_close($mh);

echo "\ndone\n";

?>

dmenu and rhythmbox

August 16th, 2010

dmenu is cool. For a while I’ve been wanting to use it to jump to new tracks in rhythmbox. Had I realized how easy it was, I’d have done this sooner.

#!/bin/sh

# selects song from dmenu, plays in rhythmbox

# must be same as path in rhythmbox, or else rb will add to library
MUSIC="$HOME/Music/"
DMENU_OPTS="-l 12 -b -i -fn -misc-fixed-*-*-*-*-20-*-*-*-*-*-*-* -sb #112 -sf #afc -nf #579 -nb #112"

SONGFILE=$( find "$MUSIC" -iname "*.mp3" | sed -e "s/\/home\/sagotsky\/Music//g" | dmenu $DMENU_OPTS )

if [ $? -eq 0 ] ; then
rhythmbox-client --play-uri="$MUSIC/$SONGFILE"
fi

#rhythmbox-client --play-uri="`find $HOME/Music/ -name "*.mp3" | dmenu -l 20 -i`"

dmenu takes newline-separated text. You select one line from that text and dmenu returns it. Beautifully simple. So all this script does is find your mp3s and tell rhythmbox to play the one you select.

One wart is that it displays the whole path to each file. I’ve been thinking of printing just genre/artist/album/song and then doing a second find to recover the full path. That seems like overkill though.

I added some sed to filter out the path to my music folder. Unfortunately this is hard coded into $SONGFILE. I’d love for it to be derived from $MUSIC, but the forward slashes don’t play well with sed. I’m sure there’s a way around this, but I don’t care all that much.
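For the record, sed accepts alternate delimiters, so the pattern could be derived from $MUSIC after all. A sketch (not wired into the script above):

```shell
#!/bin/bash
MUSIC="$HOME/Music/"
# Using '|' as the sed delimiter means the slashes in $MUSIC aren't special.
echo "$HOME/Music/album/song.mp3" | sed -e "s|$MUSIC||"
# prints: album/song.mp3
```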

Also, if you point to a new location for an old file, rhythmbox will add it as a second copy. Just make sure you set $MUSIC to the same location that RB uses for your music and this shouldn’t be a problem.

Some light rtorrent magic

April 26th, 2010

This shouldn’t be difficult, but documentation is poor and scripting is unintuitive. Hopefully someone else will find this post useful.

I set up rtorrent to allow for better seeding. My previous config moved completed files to a new dir. rtorrent kept seeding from those (I guess because it still held the same filehandle), but if rtorrent ever got closed it lost the files. Since I’ve been shutting down the computer at night (a filthy practice, I know), I haven’t been a good little seeder.

This fixes that. Instead of moving completed files, we symlink them.
on_finished = move_complete,"execute=ln,-s,$d.get_base_path=,~/torrents/finished/"

Simple enough, right?

But I also didn’t want to have to clean up my symlinks and move files later. That should be done automatically. Thankfully rtorrent also gives the on_erase event. When you remove a file from rtorrent (which also cleans up the .torrent) it runs your on_erase event. I use this to move the files to where the symlink is.

Only it’s not that easy. mv won’t clobber a symlink. cp won’t copy either, realizing that the symlink points to the file you’re copying. rm followed by mv seemed reasonable, but rtorrent gave me no way to concatenate “~/torrents/finished/” and $d.get_base_filename. (rtorrent also didn’t tell me $d.get_base_filename existed – I had to grab and grep their code to find that one.)

Anyway, the only way I found to remove a file where the path and filename are separate arguments is to use find. I also could have made a script and called that from rtorrent, but I’m too stubborn for that.

on_erase = move_complete,"execute=find,~/torrents/finished,-name,$d.get_base_filename=,-maxdepth,1,-delete;execute=mv,-u,$d.get_base_path=,~/torrents/finished"

A second execute= statement moves your downloaded files to where the symlink used to be. I’m still not sure why move_complete is used. There are other options for that, but I can’t tell if they make a difference and documentation on this is non-existent.
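If you’re less stubborn than I am, the helper-script route would look roughly like this (the script name and argument order are my own invention; rtorrent would have to be configured to pass the base path and base filename to it):

```shell
#!/bin/sh
# Hypothetical on_erase helper: clobber the symlink, then move the data in.
# Usage: rtorrent-erase.sh <base_path> <base_filename>
BASE_PATH=$1
BASE_FILENAME=$2
FINISHED="$HOME/torrents/finished"

rm -f "$FINISHED/$BASE_FILENAME"   # remove the symlink left by on_finished
mv -u "$BASE_PATH" "$FINISHED/"    # move the real files into its place
```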

Volume notifications

April 6th, 2010

Wrote a script to update my xmobar whenever the volume changes. Previously this was based on writing to a FIFO whenever I changed the volume. That worked, but only within my account. Other users could change the volume and I wouldn’t know until I made noise. Prior incarnations simply polled amixer.

Anyway, this script is post worthy because it does 3 things that are new for me.

#!/bin/bash

# Prints volume changes to stdout.
# Depends on inotify-tools package

SOUND_DEV="/dev/snd/controlC1"

volume() {
amixer -D default sget Master,0 \
| grep dB \
| head -n 1 \
| cut -f 7 -d " " \
| sed -e's/\[//g' -e 's/\]//g'
}

# loop only runs when inotify didn't fail (not present on all systems)
# and when parent process is xmobar. this _should_ ensure script quits after xmonad resets
while [ $? -eq 0 ] && [[ $(ps p $PPID | grep xmobar) ]];
do
volume
inotifywait "$SOUND_DEV" -e access -e close_write >/dev/null 2>&1
done

# also check PID of parent. if that's gone or not xmobar, bail.

[ $? -eq 0 ] – Okay, checking the exit status isn’t that new to me. But doing it as the condition in a loop is. Maybe I learned something from that perl book after all. Anyway, I was worried that inotify-tools was non standard and I didn’t want the script looping forever if the package was missing.

ps p $PPID | grep xmobar – This is the other condition of the loop. This demands more explanation. I’m calling this script from xmobar, which is a panel for displaying output from my window manager. It’s not interactive, but shows a ton of information. I find it a lot easier to get this to do my bidding than something like gnome-panel which depends on applets. Anyway, xmobar has a CommandReader module that runs a script or program and shows its stdout. The problem is that whenever xmobar gets restarted another instance of my script gets run. Realistically this isn’t a big problem, but I wanted to try to fix it the right way.

My first idea was to store my script’s pid and kill any other instances of the script. But I’ve done that before and wanted to try something new. Instead, we use ps to check for $PPID. What is $PPID? It’s the pid of the parent of this process. In this case it’s the xmobar that spawned my script. We use grep to check that the process is still xmobar. If the original xmobar is killed, grep finds nothing and the loop ends. There is a chance that a second xmobar will spawn with the original pid, but that’s a low enough chance that I can’t be bothered to care.

Finally, inotifywait is why I wrote the script. inotifywait lets you monitor your filesystem. It will sit there blocking the script until certain events take place. When they do it tells me what those events were (although I’m ignoring them in this case and using inotifywait’s completion to mean its time for the rest of the script to run). It’s pretty straightforward, but I had no idea this package existed until earlier today.

Cygwin in dropbox

February 17th, 2010

I occasionally have to use windows for work.  I often end up using it for gaming.  If I ever try to get anything done in windows, unless I’m ssh’ing into a computer with a proper OS, I have to do it in cygwin.  But cygwin is slow and obnoxious to install.  And because I don’t use it often I don’t bother configuring a good environment for it.

Enter dropbox.  Useful for syncing files across machines.  Why not sync a cygwin installation?

The obvious way to do this is in a portable app.  Google tells me that Symbiosoft published a version.  Google does not tell me how to make that version work.  I mucked around with their scripts but didn’t get too far.  The version of cygwin they used is two years old and its packages can’t be updated anymore.  Time to try making a portable version myself.

This was easier than expected.  A default installation will work.  All you have to change is Cygwin.bat to reflect the changing location of Cygwin.

The version I came up with is this:

cd > tmpFile
set /p cygwin=<tmpFile
del tmpFile

chdir "%cygwin%\bin"
bash --login -i

The nonsense at the top is there because I couldn’t figure out how to run a command inside another for shell expansion.  Backticks or $(cmd) would work in a proper shell.  Unlike in linux, calling cd with no args tells you the current path, a la pwd.  chdir is wrapped in quotes because Dropbox installs to “Documents and Settings” by default, and the spaces are bad for windows.  Finally we run bash.  Feel free to replace this with “rxvt -e bash” to run a nicer terminal.

Happily this worked.  I’ve only tested on two windows installations, but that’s all I have access to so it’s good enough for me.  Dropbox installations had slightly different paths, which showed that the cd line was doing its thing correctly.

The only problem at this point was that Dropbox synced slowly.  Apparently there’s a lot of overhead between files, and cygwin has a lot of files.  The transfer (I went with defaults plus cvs, ssh, vim, and zip) was 148mb, but took several hours.  The dropbox daemon said it was going at 2.6k/s.  If I ever need a terminal fast it would be more efficient to install a new cygwin, but for machines I use regularly this was fine.

There was one other problem that I consider my own damn fault.  I had a different username at home and at work.  Cygwin reflected this, so my profile did not mirror itself.  I ended up making a symlink in /home.  I’m not sure how that will work out, but if it’s problematic I’ll just change usernames.

– edit –

Ran into more username problems.  This time it was the same name being problematic.  /home/username was fine, but the /etc/passwd and /etc/group entries weren’t.

What I ended up doing is creating /etc/passwd.d and group.d and putting a file in there for each host I use.  Then I wrote a script to copy the current host’s settings over to /etc/passwd and group (originally I used a symlink, but that apparently doesn’t work for /etc/passwd).  The script also makes a settings file for the current host if none exists.  I didn’t bother figuring out what to do if there are machines that share a hostname.  I suppose that instead of basing it on the host you could use mac address or something.

Finally I edited /etc/profile (which is where the annoying mkpasswd.exe errors come from) and called my script in place of all the echoed errors at the bottom.  Now instead of griping about group errors, cygwin silently fixes them.  Note that this script WILL bludgeon any /etc/passwd and /etc/group files you already have.  I would never run this on a real linux machine, but for a terminal inside of a toy OS I don’t think it’ll do any harm.

Here’s the script. Consider this a work in progress.:

#!/bin/sh

# this script attempts to handle /etc/passwd and /etc/group in a portable
# cygwin installation.  it does so by making /etc/{passwd,group}.d and filling
# those with configs for each host, then copying the current host's file into
# place (a symlink would be nicer, but doesn't work for /etc/passwd).
# this is ugly but serviceable.

# call this script from /etc/profile.  the case near the bottom
# handles mkpasswd errors.  just comment out the echoes and call
# this script instead.

echo "Fixing /etc/{passwd,group} for this cygwin host."

HOST=$(hostname.exe)

for FILE in passwd group ; do
        rm -f /etc/${FILE}

        # check for .d directories - create if missing
        if [ ! -d "/etc/${FILE}.d" ] ; then
                mkdir /etc/${FILE}.d
        fi

        # check for local files - create if missing
        if [ ! -f "/etc/${FILE}.d/${HOST}" ] ; then
                mk${FILE}.exe -l > /etc/${FILE}.d/${HOST}
        fi

        cp /etc/${FILE}.d/${HOST} /etc/${FILE}

done

CLI user switch

April 30th, 2009

11/18/09 edit

The new gdm in Karmic Koala is deprecated and featureless.  It does not support this script.  I don’t know if other gdm’s will be so useless, but if you’re on Karmic, don’t waste your time with this one.

—-

I recently switched to xmonad but stuck with a gnome-panel due to my dependence on gnome’s user switcher applet and my girlfriend’s refusal to learn ctrl-alt-f8.  Google told me that several other people were stuck in this situation.

Anyway, I found a way to do ctrl-alt-f8 in the command line, provided you’re using GDM.

gdmflexiserver -a -l -c "SET_VT #"

Where # is the number of the virtual console you’d like to switch to.

I also wrote a script to get a list of running xservers, grep for a username, and switch to the first console owned by that user. It’s not robust or complete, but gdmflexiserver seems arcane so I’m publishing this anyway and hopefully someone will find it useful.

#!/bin/bash

# Takes an arg, switches to that user's gdm session or starts a new one
# if none is open

# needs to be -u user, -?h help
# needs option for locking

function usage() { sed -e 's/^  //' <<EndUsage
  gdmSwitch.sh usage:
  gdmSwitch.sh takes exactly one argument, which is a username.  If that username
  has an open xsession, switches to it.  This is kinda like a console version of
  gnome-switcher-applet except that (for now) it doesn't let you query for users
  and pick one.

EndUsage
  exit 1
}

if [ $# -ne 1 ] ; then
    usage
else
    NAME=$1
    VT=$(gdmflexiserver -c CONSOLE_SERVERS | sed -e "s/^.*$NAME,\([0-9]*\).*$/\1/")

    #check if we found a term or if the whole string didn't match by doing a character count
    N=$(echo "$VT" | wc -c)

    if [ $N -gt 2 ] ; then
	# no open session
	gdmflexiserver -a -l -c "FLEXI_XSERVER"
    else
	# switch to open session
	gdmflexiserver -a -l -c "SET_VT $VT"
    fi
fi

note to self…

December 23rd, 2008

Note to self …

The downside of trees and tabs is that it adds a form to the page.  AJAXy text inputs may end up inside the form.  If they do that, they’ll do things on submit.

To cancel submitting, the event handler should be attached to the form, not to the input.  onsubmit='foo(); return false;' will cancel the submit event and call foo().

That is all.