Tuesday, August 25, 2009

Quicky

Palm Pre/webOS is an interesting new platform. I've gotten a look at the emulator recently and I must say I like most of the tools out of the box more than Android's. Configs are in JSON and JavaScript/HTML is the main development environment (with CSS for theming). I particularly like JSON because it avoids the XML hell involved with many Java projects that choose to abuse it. Anywho, I now know how webOS got its name. Kudos to Palm; this should also be one of the fastest/easiest platforms (the graphics/responsiveness alone seem to say that), and they can take advantage of the JavaScript engine wars (Dalvik can often be terribly slow; it was a big mistake IMHO on Android, they should've stuck with Mono).

One thing that weirded me out about the webOS SDK, though, was that on Linux only Ubuntu i386 is officially supported (alongside Mac OS and Windows). Thankfully it worked without a hitch on openSUSE x86_64 (they should really let people know it works great even when you're not using Ubuntu, or even i386 for that matter! Also VirtualBox 3.0 as opposed to 2.2), but when I tried it inside a qemu VM running 32-bit Ubuntu 9.04, the VirtualBox emulator it uses seems to have problems and dies right after GRUB loads. It turns out qemu (with KVM acceleration) and VirtualBox (which uses something along those lines) are mutually exclusive. But you won't get the nice helpful error message if you try running one inside the other... you just get a black screen right after GRUB. I hope others don't make the same mistake; this cost me quite a few hours the other day to track down such a stupid little thing. Why should emulators using KVM or whatever be mutually exclusive or refuse to run within each other...? Whatever. Up next: pqaeq, the PulseAudio equalizer front-end.

Thursday, August 20, 2009

another new old project posted

exploder: uniform simple and sane unarchiving
It even handles passwords, and it makes sure to always put the resulting files in a subdirectory so your home directory never gets polluted again.

Tuesday, August 18, 2009

audiomap

New project! audiomap: bullcrap-free audio conversion. I say new, but I actually finished it over a year ago. Cheers.

Wednesday, August 12, 2009

Composition over Inheritance: The functional approach to GUI Programming

Ever program with a GUI toolkit? Far too many classes and too much inheritance for some of the small tasks, right? This is especially true of event handlers, which means this isn't just about GUIs; it applies to any type of FSM (Finite State Machine) setup. The problem is that the event callback has to do some arbitrarily scoped work that the actual FSM code can't be designed to handle easily.

So there are two approaches taken:

  • Event handlers are plain functions that take a pointer plus a userdata struct (this approach is often taken in C/C++).
  • Event handlers are objects that implement an interface (adapter/wrapper classes).

The second approach is the one Java often takes. It's painful, and you will usually get buried under many trivial classes (anonymous inner classes, or classes implemented in their own separate files) and syntax. This is a large barrier and makes a simple problem more complicated. I prefer a solution closer to the first, but the way it is typically implemented is still wrong! Almost as much syntax gets wasted creating the function somewhere else in the code and declaring, packing, and unpacking each of these userdata structs. It's just ugly. It doesn't need to be so bad though.
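
Here's roughly what that userdata-packing style looks like, sketched in Python (the add_handler registration and the widget classes are made up purely to illustrate the pattern):

class Event:
    def __init__(self, value):
        self.value = value

class Label:
    def __init__(self):
        self.text = ""

# the classic C-style shape: a callback plus an opaque userdata blob
def on_click(event, userdata):
    label, factor = userdata                 # unpack the "struct" by hand
    label.text = str(factor * event.value)

handlers = []
def add_handler(callback, userdata):         # stand-in for a toolkit's registration call
    handlers.append((callback, userdata))

status = Label()
add_handler(on_click, (status, 5))           # pack up the state somewhere else entirely

# later, the toolkit fires the event:
for callback, userdata in handlers:
    callback(Event(7), userdata)
print(status.text)   # -> 35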

Essentially, one needs some insight into what information the callback actually needs access to. These would be the fields in your userdata struct, or whatever the adapter class accesses from the instantiating class. This tends to be the entire class (if applicable), one or two parameters, or nothing at all. An FSM model really just wants to do the following:
If condition x, perform callback y
That's pretty much the conceptual model for every given state/node, over all the (x, y) condition/callback pairs each respective node contains. Simple, right? Well, we just need to get back to our conceptual roots. Enter partials (or currying) from the functional languages to make that possible.
You might have heard of lambdas or anonymous functions before. Often one must use these to get partial function application if you're not in a purely functional language. Languages like Python, Ruby (blocks too), Lisp, Haskell, and even Matlab all have these in one form or another. Some, like Python, make partials a little trickier to use than they should be, but a fairly natural way exists. Python (and, to some degree, Ruby) will be the focus of the rest of this entry because these languages have bindings to every GUI toolkit known to man. They are also dubbed RAD (Rapid Application Development) or prototyping languages because of how fast one can implement a fairly complex program in them.

Ruby implements this a little more correctly, so not much needs to be said there beyond an example like this:
def multiply(x, y); x * y; end
z=5
multiply_by_z = lambda { |x| multiply(z,x) }
In Python, we can pull off the same thing like the following:
from functools import partial
def multiply(x,y): return x*y
z=5
multiply_by_z=partial(multiply,z)
Now here's how to write less code when hooking things up in FSMs, and thus GUIs. Remember how we were really just going through a roundabout way of saving/packing parameters so that we could use them later in a callback for an event? Well, now you just write your function, or perform your adaptive behavior, in a simple lambda (if applicable) or a nested function. (This part isn't that new, except that in languages such as Ruby/Python, nested functions let you put the callback right where it makes sense, and you often don't have to name mundane, obvious things like the multiply above.) Ruby's blocks/lambdas capture the arguments you've specified automatically, and in Python you wrap them up with a partial object.

So now, when you hand the callback to your FSM model's addEventOnCondition(callback) or addEventOn(condition, callback) methods, you will notice that on the FSM side you only ever need to take zero-argument functions! The callbacks have already had their arguments set; they just haven't been called yet. This simplifies the FSM code and gets rid of all those userdata structs. There is also no need for any adapter classes. The FSM classes implement just what they conceptually need. Functions/event callbacks are handed just what they need before their actual application, right where you have the most natural access to them. The syntax barrier is low. The code is short and as close as currently possible to the ideal. This all translates into less effort spent and fewer bugs to deal with.
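
Here's a minimal sketch of what I mean in Python (the FSM class and the widget stand-ins are made up purely for illustration, not taken from any particular toolkit):

from functools import partial

class FSM:
    """Each state/node just holds (condition, callback) pairs."""
    def __init__(self):
        self.handlers = []

    def add_event_on(self, condition, callback):
        # both are zero-argument callables; their state was bound beforehand
        self.handlers.append((condition, callback))

    def step(self):
        for condition, callback in self.handlers:
            if condition():
                callback()

# trivial widget stand-ins so the example actually runs
class Label:
    def __init__(self):
        self.text = ""

class Button:
    def __init__(self):
        self.clicked = False

def set_label(label, value):
    label.text = str(value)

status, button = Label(), Button()

fsm = FSM()
# bind the arguments right where we have the most natural access to them
fsm.add_event_on(lambda: button.clicked, partial(set_label, status, "clicked!"))

button.clicked = True
fsm.step()
print(status.text)   # -> clicked!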

Using partials/currying makes sense. I encourage every programmer to try these constructs; it's staggering how much they can simplify your designs. Bonus points if you start using map/each instead of writing for-loops all the time.
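
A trivial Python illustration of that last bit:

nums = [1, 2, 3, 4]

squares = []
for n in nums:                               # the usual for-loop way
    squares.append(n * n)

squares = list(map(lambda n: n * n, nums))   # the map way, in one line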

Wiki for the equalizer

Here it is. If you can run make install at the moment you're fine, unless you're on openSUSE Factory. I'll try to whip up those Ubuntu packages soon.

Equalization Abound!

Alright, so I promised a post on equalization, but first I'm going to discuss a bit of theory. Many people have heard of equalization, but most probably don't have a very good idea of what it really does. You see, most speakers don't respond to all frequencies of music/sound ideally, so an equalizer provides a way to amplify/attenuate the frequencies that don't do well so that they then play to our ears ideally. As one might imagine, there's more than one way to do this, and the method implemented is all about tradeoffs. Audiophiles, listen up:
  • Analog
  • Digital - IIR (Infinite Impulse Response)
  • Digital - FIR (Finite Impulse Response)
Now, I'm not an analog electronics expert, or even a novice (yet!), but I do know analog is very tricky and nothing works exactly the way you want it to; you will always have non-linear side effects. Not to mention you need to buy some expensive equipment to get analog filtering gear, or be an expert yourself. If you want cheap and practical (and, experts agree, even the best performing!), go with digital. But here you have a choice. IIR, much like analog, has some unwanted side effects.
IIR pros:
  • Low latency
  • Little computing power necessary (for where it will run at least)
IIR cons:
  • Needs very high precision (floats/single precision barely cut it) or things get unstable
  • Tough to design = hard to reconfigure
  • Limitations on how close your filter is to its ideal response, often pretty big limitations.
  • Possibly a PITA to code in C (which is what most system software is written in) if you use filter banks to offset the precision problem.
  • Needs a decent amount of CPU up front to design the filter/solve for the coefficients, if you can't do it by hand
Lastly we have digital FIR filters.
FIR pros:
  • Fixed (though possibly high) latency; it depends on the filter type, and for equalization the latency is on the high side.
  • Very high accuracy
  • If you can imagine a frequency response, you can do it with this method.
  • The filter works exactly as you designed it to.
FIR cons:
  • Moderate to high computing power necessary, depending on the filter size and the FIR computation algorithm.
  • You need a good FFT implementation to work with reasonably sized filters, and that isn't trivial.
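
To make the FIR option concrete, here's a rough sketch of the idea (Python with numpy/scipy, which is just my choice for illustration here, not the actual plugin code): "draw" the magnitude response you want, turn it into an impulse response, and filter by FFT-based convolution.

import numpy as np
from scipy.signal import fftconvolve

rate = 44100
n_fft = 2 ** 16                              # FFT loves 2^n sizes

# 1. "draw" the desired magnitude response, e.g. knock down everything below 60 Hz
freqs = np.fft.rfftfreq(n_fft, 1.0 / rate)
desired = np.ones_like(freqs)
desired[freqs < 60] = 0.1                    # roughly -20 dB on the low end

# 2. turn it into a linear-phase impulse response
impulse = np.fft.irfft(desired, n_fft)
impulse = np.roll(impulse, n_fft // 2)       # center it (a pure delay = the fixed latency)
impulse *= np.hanning(n_fft)                 # window it to tame ripple

# 3. filter by convolution; a real-time implementation would do this block-wise
#    with overlap-add rather than in one shot
audio = np.random.randn(rate)                # stand-in for one second of sound
filtered = fftconvolve(audio, impulse, mode="same")

The nice part is that step 1 is just an array you can edit however you like, which is exactly why an FIR equalizer is so easy to reconfigure on the fly.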

So for my little venture into the audio DSP world (previously I just did a lot of image work), I chose the last one, as it's the easiest to design/reconfigure on the fly and leads to the best results. Additionally, anything newer than a Pentium 2 shouldn't have problems playing music with it. It's also the kind of filtering I have the most experience with. But as a rather neat detail, let me make this clear:
You've maybe heard of low/high-pass filters, shelf filters, etc.? Well, this one "equalizer" can do anything short of adding delay or echo to the signal (it could actually do that sort of thing too with only a few small modifications, but I see no value in that ATM). It's really more of a full-on general DSP filter offering pretty much all of that functionality in one. The only real limitation that would require some restructuring is filter size, but most people would probably be happy with a filter of size 2^16 (don't panic, FFT loves 2^n sizes and blazes through it in real time).

So yes, I'm happy about the theoretical aspects of the "equalizer" and have gotten some kicks out of it. But now let's discuss some prior work and motivation. If you're reading this, you might've heard about LADSPA and Steve Harris' plugins. He offers a bunch of plugins, most of which I wasn't interested in except for mbeq (multi-band equalizer). It was useful to me and I thank the man for his work, but it just doesn't work properly in many ways. It's a PITA to configure through the LADSPA sink setup in PulseAudio (there's an example of what this looks like right after the list below). One must put in numbers which only gain meaning if you read the source (though you kinda get a feel for what positive and negative numbers do, for the most part). There are a few problems with Harris' approach to the equalizer:
  • The frequency bands (Hz) used/defined there != the frequencies we speak of naturally. They are proportional, however.
  • Each band then represents ~46 underlying frequency bins (this varies with the sink's sample rate), which, while not too bad, is not the finest level of control.
  • It shouldn't be such a hassle to change the frequency band coefficients; instead I'll be opting for a GUI so users can see frequency modifications in real time, which is the only real way things like this are tunable. This bit is actually more a complaint about PulseAudio's current LADSPA setup than about Harris.
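
For reference, this is roughly what wiring mbeq up through PulseAudio's LADSPA sink looks like (module-ladspa-sink and its parameters are PulseAudio's; the gain values are just placeholders, and <your hardware sink> is whatever pactl list short sinks reports for your card):

pactl load-module module-ladspa-sink sink_name=eq_out master=<your hardware sink> \
    plugin=mbeq_1197 label=mbeq control=-10,-10,-5,0,0,0,0,0,0,0,0,0,0,0,0
pacmd set-default-sink eq_out

A long row of unlabeled, comma-separated gain values is exactly the kind of thing I mean by having to read the source to know what you're changing.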

Now, why am I doing my own equalizer again? Well, the start of it was my Logitech X-540 5.1 speaker system, which I bought a few months ago. Terrific speakers, but OMG, way too much bass. They have one bass dial which lets you trade off between treble and bass, but the sound is so crappy if it's set less than halfway to maximum; it feels empty. Don't get me wrong though, I don't really like too much bass the way most people seem to (I don't understand why people feel it's so necessary to hear those LFE channels, which shouldn't usually be there anyway). It took some time to configure and some tradeoffs, but with mbeq and the LADSPA sink I got pretty good audio. You see, frequencies below 60 Hz from any speaker are also sent to the subwoofer on the X-540 set, which equals way too much bass for most things. I also live in an apartment, and I'm sure my neighbors wouldn't appreciate my subwoofer blasting them at 3 in the morning or some such. The LADSPA sink has had a problematic past with PA as of late, though; it broke quite a few times, which left me hanging out to dry and gave me a difficult time for a few days when I needed to listen to music/watch stuff. And finally, I just wanted to process some audio. I had filtered some other songs in the past with Audacity and was amazed at how difficult it was; after enough time/effort, I was impressed with the results on some songs that had static in the background. It cleaned them up and got rid of the bad parts with no loss to the quality of the song; in fact, this is one of those cases where the quality of the song actually improves. Audacity goes out of its way to use some of the techniques I've mentioned above, but guards you from specifying the details of the actual filter used, which I find interesting/weird.

Anywho, I knew right where to implement my equalizer so it'd have maximum effect: PulseAudio, a low-latency audio framework with, I believe, a very promising future. On top of making plugins like mine possible (most OSes' sound stacks don't like floats, which are a requirement!), it does some interesting stuff that makes forwarding sound streams between sound cards or across networks trivial. Oh yes, it's also cross-platform. And this isn't your run-of-the-mill Winamp equalizer; it's system level, baby. Every application can easily take advantage of it (whether it wants to or not, Flash) if you forward the stream using paman or some such, or set it as your default sink (as I do). Up next I'll explain how to set up your own equalizer until my packages make it into the Ubuntu repos. openSUSE Factory users are lucky enough to already have packages available. But you'll still have to wait for the next post to find out where to download them :-)

Monday, August 10, 2009

First Post!

This is pretty much my first blog. I will be trying to show off some of my more useful software/personal projects.

Who am I? Well, I think I'm an up-and-coming Linux/kernel/GUI/AI/DSP and all-around software renaissance man (soon to get into the EE portion of things when I get money for supplies/screwing around). I graduated from UCI in 2009, majoring in Computer Science at the Donald Bren School of Information and Computer Sciences (I nearly had a 3.65 GPA; some class screwed me over by .003 or something! Also, OUCH to the like three C-ish grades I got in my entire UCI stay, those things hurt!). I studied a lot of image processing, AI, computer vision (face recognition), automatic 3D model creation from commodity hardware (specifically two Canon digital cameras, a tripod, and Matlab), bioinformatics, computer hardware/architecture (especially MIPS, plus some VHDL), operating systems design, and of course algorithms/graphs. The rest is pretty much GE or major requirements that aren't all that interesting.

I've been using Linux since 2002, starting with Slackware Linux (hitting the end of version 8 and just in time for version 9). I moved on to Gentoo at some point between 2004 and 2005 and then switched to Ubuntu around the time the clamour started about what a BS-free desktop it was. I didn't quite agree at the time, but it was better than compiling all your own packages or dealing with third parties just to get an up-to-date GNOME (specifically Dropline GNOME on Slackware). Eventually I realized that pretty much all distros are the same and that the only things that vary are their choices of software versions, package format (tgz, deb, rpm), and how often they provide real updates to their system software. Ubuntu was too slow at updating for me when I had a new laptop, and it broke a few times too many, so I tried out openSUSE in 2008 and have stuck with it on my laptop/desktop since. But I don't have anything against Ubuntu. I actually put it on my mom's and sister's computers. It works well enough for them, it seems (it doesn't make much difference whether it's Linux or Windows if "things" work out of the box and you have no previous computer experience). And by doing this I never have to worry much about them getting spyware/viruses, just about them falling for phishing (it's happened a few times already, alright!). Ubuntu does a good job of making sure those things which are difficult, or of questionable legality from the distro's POV, are properly dealt with and handed to the user for a BS-free distro. Not many distros try to do this, and I commend them for it. If only they allowed more things to be a bit more up to date! Oh well, there are always PPAs in the future. openSUSE just has more "official" stabilizing (aka stable) software branches as well as bleeding-edge branches, and it also offers something like the PPA system. I like this approach better for my home machines. I also think writing RPM specs is a bit easier than deb control files, though Ubuntu has documented this pretty well in the last year, so I may have to re-evaluate.

Anywho, in my time I've had to mess around with/fix quite a few computers, including my friends' and relatives'. I've also been trapped in borked systems where the only two things working were the sole running instance of bash and sln (statically linked ln), fixing bad glibc upgrades (it's a lot of fun when tab completion becomes your only method of ls'ing). I'd like to think there's no faulty software system I can't fix given some time. I've made a ton of small-time software projects and a few big ones (with respect to me, at least) for JPL and an RCC/WSTF NASA project, in various languages (mostly C/C++/Java/Python), covering many subject areas.

I also can't wait till I can afford to get into robotics again; I hope I can create some interesting pet projects, if not help out with making the first real autonomous cars (human drivers suck!). Things have been quiet since the last DARPA Grand Challenge, but hopefully I won't miss the boat (fun fact: I believe one of the machines (Stanford's?) ran Ubuntu Dapper Drake LTS).

Anyway, I'll soon post a follow-up about my new work with PulseAudio and equalization, including why the LADSPA solution sucks and doesn't work the way you think it does. I'll try to post some of my cool/useful scripts sometime after that.