Recently the bank that financed my car switched their phone payment systems over from their more traditional "press a number to do stuff" to a "talk to a computer and try to do stuff" interface, and my reluctance to pay my car payments has increased tenfold.
Before they switched the phone interface, I had the correct sequence of numbers almost entirely memorized, to where I could press 3-5 numbers in sequence and be done with my "payment session" in under two minutes. In a matter of two minutes I could initiate a transaction from my checking account and send almost $300 to Chase. I hated losing the money, but I loved the efficiency.
Recently however, they've "pulled a Vista" and replaced a wonderfully functional system that "got the job done" with a bloated, slow and buggy system that infuriates me every time I need to talk to the computerized woman at the other end of the line. A rapid mashing of touch-tone keys on my phone has been replaced with:
PaymentBot: Welcome to Chase Auto Finance!
*pause*
PaymentBot: If you would like to make a payment, say "make a payment." If you would like to check your payoff balance, say "payoff balance." If you would like to blow a goat, say "baaaaaaaa."
Tyler-Unit: make a payment
PaymentBot: It looks like you want to make a payment, if this is correct say "yes."
Tyler-Unit: yes (at this point I'm usually irritated that I've passed the two-minute mark)
PaymentBot: First I need to find your account, please say your account number or your social security number, or enter it into the phone
Tyler-Unit: *mashes on keys*
PaymentBot: The number you entered was 1-2-3--4-5--5-6-5-4, if this is correct, say "yes."
Tyler-Unit: YES
PaymentBot: I'm sorry, I didn't catch that, if the number you entered: 1-2-3--4-5--5-6-5-4 is correct, say "yes."
Tyler-Unit: YES
PaymentBot: Okay, if you would like to make a payment over the phone, say "phone." If you would like to make a payment via mail, say "mail."
I could continue, but I won't.
Just getting to the point where I finally need to enter my payment details (because Chase couldn't update their system to, god forbid, remember the same information I've been mashing into a keypad for the past two years) takes five to eight minutes.
Between the various financial institutions I need to deal with every month, I get to fight with terrible websites, miserable phone interfaces, and idiotic customer service representatives; it almost makes me regret being fiscally responsible (our government isn't, why should I be?). I'm hoping there's a special portion of hell reserved for whichever numbnuts in middle management at Chase decided "OMG! Voice interfaces are TOTALLY kewl!!!!!"
Are there means of consolidating smaller (think below $10,000) loans from one bank to another? While it's pretty obvious that Chase can effectively handle its finances, it certainly can't handle "user experience." If your customers' only interaction with you as a company is over the phone or via snail mail, it's usually in your best interest to make sure those "interfaces" to your customers are top-notch.
I hate voice interfaces.
Five and Seven Zeroes is Big.
It was recently announced that Slide (this little start-up that I work for) raised some more money. Neato.
Since Max isn't the aeron chair kind of CEO, it looks like we're going to spend that money on things like "engineers, hardware, etc." Bummer, I've always wondered how an $800 chair can exist that doesn't rub your feet and write your code for you.
Regardless, should be a fun year.
(p.s. we need more engineers)
Just Curious
At what point, as a man, do you give up hopes of being in a rock band?
Perforce Backups, Revisited.
A very long time ago I wrote about my backup script for archiving my entire Perforce repository. I can finally write the obvious follow-up to the post, as I've finally had to use my backups.
In my scenario, the last backup I took was in February of 2007, almost an entire year ago (my development slowed around that time). During my transit from San Antonio to San Francisco, the "server" my Perforce repository ran on, also known as orange (seen on the bottom here), a "headless laptop," had its disk completely fail. Until recently I didn't have a replacement for "orange," but now that I have pineapple sitting in a colocation facility, I have a new candidate for a Perforce server.
Luckily I had made a habit of burning my backups to DVDs every two weeks, since two weeks of nightly backups would fill up an entire 4.7GB DVD (I still have no idea how my own source repository grew to 120MB or so). After rsync'ing the latest backup tarballs, it was completely up to Perforce to reliably restore them.
Perforce's documentation is very good, so I suggest going over the backup and recovery procedures if you find yourself needing to recover from backups.
Within about 15 minutes I had restored the Perforce database files as well as the actual source code itself and begun to sync a new Perforce client up with the new server (thanks to my p4tunnel script).
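One habit worth bolting onto a restore workflow like this is sanity-checking a backup tarball before trusting it, since a year-old DVD burn can easily be truncated or corrupt. A minimal sketch in Python (the filename in the usage note is just an example, not my actual backup naming):

```python
import tarfile

def verify_backup(path):
    """Return True if every file in the tarball can be read back fully."""
    try:
        with tarfile.open(path, "r:*") as tar:
            for member in tar:
                if member.isfile():
                    handle = tar.extractfile(member)
                    # Read each member to EOF to catch mid-archive truncation.
                    while handle.read(65536):
                        pass
        return True
    except (tarfile.TarError, EOFError, IOError):
        return False
```

Something like `verify_backup("p4backup.tar.gz")` before running the Perforce restore would have told me up front whether the rsync'd tarballs were worth anything.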
I can't say enough about how much I like Perforce as a version control system, and I'm nothing short of elated to finally have my repository back online. It only goes to show how crucial backups are for anything you might ever want later; in my case, backups, albeit old backups, were still better than no backups.
SXSWi and Me
I spoke with Tammy (our PR mastermind) about whether or not Slide was going to let me out of my cage to go to South by Southwest Interactive this coming March and it seems like they might actually let me! (I'm just as surprised as you are)
Unfortunately things with Facebook were moving at such a ridiculous pace when SXSWi was accepting panel submissions that I never got a chance to submit my panel idea: "Slide is awesome, now let's talk about how great Slide is." This leaves me in a slightly awkward position: I cannot remember the last conference or event I went to where I wasn't speaking or talking or dancing with a baboon in front of a live studio audience. Even at the last SXSWi I was there for about 36 hours, and most of that time was spent setting up and then helping run BarCamp Austin2. Ideally I'd like to get on stage with some of the guys from Twitter, Facebook, Bebo, Google and maybe even Myspace to discuss the more open social web we seem to be moving towards, and a bit about how awesome Slide is. It's probably nothing more than a pipe-dream however, since the panels seem to be quite locked down at the moment.
Of course, if nobody will have me, then I might be relegated to slumming up and down 6th Street in Austin, hanging out with the usual drunkards I know there (you know who you are) and getting into trouble. Mmm, trouble.
Regardless, if you're going to be in Austin for SXSWi let me know, I've got a stack of swanky new business cards I want to get rid of :)

What a heaping pile of FAIL.
I had mentioned previously that iChatAgent in Leopard leaks, I'm going to take that statement back. iChatAgent in Leopard hemorrhages memory, and I think I know why now.
While I was napping, a network hiccup caused iChat to get disconnected. When network connectivity returned, iChat first tried to sign on a couple of Jabber accounts, both of which use self-signed SSL certificates. Being the lovely old chap that it is, iChat prompted the user (napping at the time) to accept the self-signed certificate. While the dialog box was up, iChat sat waiting around before it signed on the other accounts, and spun and spun and spun.
iChat spun and spun and spun until all the available disk space for virtual memory was used up by every process that had to swap out to make space for iChatAgent's demands on real memory, and then by iChatAgent itself swapping out.

God fucking damnit.
Comparing IronPython and CPython
First, a little background to help explain some of the terms. "Python" is a language, similar to how "Java" is a language; but unlike Java, where the language is relatively synonymous with the actual implementation of that language, Python has multiple implementations. If you've run python(1) from the command line, you're most likely running the CPython implementation of the Python language: in effect, Python implemented in C. Other implementations of Python exist, like Jython (implemented on top of the Java virtual machine), PyPy (Python implemented in Python), and IronPython (Python implemented on top of the .NET CLR).
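A quick way to see which implementation you're running: newer Pythons (2.6 and later, so not the 2.4-era interpreters benchmarked here) expose it directly:

```python
import platform
import sys

# Ask the running interpreter which implementation of the Python language
# it is: "CPython", "IronPython", "Jython", or "PyPy".
print(platform.python_implementation())

# The language version the implementation targets, independent of which
# implementation happens to be running.
print(sys.version_info[:3])
```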
I was talking with some of the guys from the #mono channel on GIMPNet about IronPython versus CPython as far as performance is concerned, and I decided I would refine my testing (using pybench) with more similar versions of the respective implementations, in as controlled an environment as possible.
I ran pybench.py on a "quiet" (i.e. not busy) machine sitting in a remote datacenter not too far from Novell; the machine is a Pentium III (i386) running openSUSE 10.3. Since IronPython reports its "implementation version" as Python 2.4.0, I decided to build and run CPython 2.4 against it. IronPython is running on top of the recently released Mono 1.2.6, which I also built from source (I got IronPython from the IPCE package in YaST, however). pybench reported the implementation details for both as follows:
CPython
Implementation: 2.4.4
Executable: /home/tyler/basket/bin/python
Version: 2.4.4
Compiler: GCC 4.2.1 (SUSE Linux)
Bits: 32bit
Build: Dec 18 2007 23:00:48 (#1)
Unicode: UCS2
IronPython
Implementation: 2.4.0
Executable: /usr/lib/IPCE/ipy.exe
Version: 2.4.0
Compiler: .NET 2.0.50727.42
Bits: 32bit
Build: (#)
Unicode: UCS2
IronPython did alright, but it got pretty thrashed on a lot of the benchmarks. Unfortunately it's hard to tell whether it's Mono getting beaten up or IronPython itself losing the battle here; running similar tests on the .NET 2.0 CLR would be beneficial, but not something I'm curious enough to boot a Windows virtual machine for. Regardless, here are the results; I've highlighted the rows where IronPython performs better than CPython.
| Test | CPython (min) | IronPython (min) | Diff | CPython (avg) | IronPython (avg) | Diff |
| BuiltinFunctionCalls: | 448ms | 357ms | +25.4% | 450ms | 405ms | +11.0% |
| BuiltinMethodLookup: | 530ms | 1329ms | -60.1% | 536ms | 1390ms | -61.4% |
| CompareFloats: | 380ms | 129ms | +194.3% | 381ms | 132ms | +187.7% |
| CompareFloatsIntegers: | 377ms | 93ms | +306.1% | 378ms | 97ms | +291.2% |
| CompareIntegers: | 436ms | 160ms | +172.5% | 437ms | 161ms | +170.6% |
| CompareInternedStrings: | 425ms | 443ms | -4.1% | 426ms | 445ms | -4.3% |
| CompareLongs: | 360ms | 292ms | +23.3% | 361ms | 293ms | +23.0% |
| CompareStrings: | 423ms | 330ms | +28.0% | 423ms | 337ms | +25.6% |
| CompareUnicode: | 377ms | 243ms | +54.7% | 377ms | 245ms | +54.2% |
| ConcatStrings: | 726ms | 9452ms | -92.3% | 823ms | 10071ms | -91.8% |
| ConcatUnicode: | 711ms | 5687ms | -87.5% | 756ms | 6039ms | -87.5% |
| CreateInstances: | 508ms | 761ms | -33.2% | 518ms | 815ms | -36.4% |
| CreateNewInstances: | 451ms | 3475ms | -87.0% | 458ms | 3581ms | -87.2% |
| CreateStringsWithConcat: | 473ms | 2650ms | -82.1% | 475ms | 2833ms | -83.2% |
| CreateUnicodeWithConcat: | 482ms | 1008ms | -52.1% | 508ms | 1092ms | -53.4% |
| DictCreation: | 405ms | 2944ms | -86.2% | 407ms | 3057ms | -86.7% |
| DictWithFloatKeys: | 552ms | 934ms | -40.9% | 553ms | 944ms | -41.5% |
| DictWithIntegerKeys: | 423ms | 1118ms | -62.2% | 426ms | 1137ms | -62.5% |
| DictWithStringKeys: | 413ms | 1186ms | -65.1% | 414ms | 1317ms | -68.6% |
| ForLoops: | 412ms | 189ms | +118.5% | 413ms | 217ms | +90.7% |
| IfThenElse: | 372ms | 128ms | +191.8% | 374ms | 141ms | +165.8% |
| ListSlicing: | 311ms | 4033ms | -92.3% | 315ms | 4230ms | -92.6% |
| NestedForLoops: | 488ms | 349ms | +39.7% | 489ms | 382ms | +28.1% |
| NormalClassAttribute: | 430ms | 1080ms | -60.2% | 432ms | 1104ms | -60.9% |
| NormalInstanceAttribute: | 401ms | 427ms | -6.1% | 404ms | 442ms | -8.7% |
| PythonFunctionCalls: | 393ms | 302ms | +30.1% | 402ms | 352ms | +14.3% |
| PythonMethodCalls: | 478ms | 643ms | -25.7% | 536ms | 673ms | -20.3% |
| Recursion: | 547ms | 158ms | +245.9% | 659ms | 159ms | +313.6% |
| SecondImport: | 476ms | 1383ms | -65.6% | 481ms | 1432ms | -66.4% |
| SecondPackageImport: | 501ms | 1425ms | -64.8% | 503ms | 1482ms | -66.1% |
| SecondSubmoduleImport: | 589ms | 1916ms | -69.3% | 592ms | 1990ms | -70.2% |
| SimpleComplexArithmetic: | 475ms | 729ms | -34.9% | 476ms | 758ms | -37.3% |
| SimpleDictManipulation: | 424ms | 1009ms | -58.0% | 427ms | 1020ms | -58.2% |
| SimpleFloatArithmetic: | 416ms | 455ms | -8.7% | 422ms | 480ms | -12.0% |
| SimpleIntFloatArithmetic: | 345ms | 161ms | +113.8% | 346ms | 162ms | +112.9% |
| SimpleIntegerArithmetic: | 345ms | 161ms | +114.7% | 345ms | 161ms | +113.9% |
| SimpleListManipulation: | 346ms | 497ms | -30.4% | 350ms | 501ms | -30.1% |
| SimpleLongArithmetic: | 402ms | 1120ms | -64.1% | 403ms | 1130ms | -64.3% |
| SmallLists: | 417ms | 1693ms | -75.4% | 421ms | 1717ms | -75.5% |
| SmallTuples: | 450ms | 3839ms | -88.3% | 453ms | 3915ms | -88.4% |
| SpecialClassAttribute: | 431ms | 1104ms | -60.9% | 432ms | 1133ms | -61.8% |
| SpecialInstanceAttribute: | 608ms | 423ms | +43.8% | 610ms | 437ms | +39.5% |
| StringMappings: | 443ms | 2255ms | -80.3% | 448ms | 2311ms | -80.6% |
| StringPredicates: | 503ms | 1058ms | -52.5% | 504ms | 1066ms | -52.7% |
| StringSlicing: | 527ms | 2880ms | -81.7% | 562ms | 3008ms | -81.3% |
| TryExcept: | 418ms | 21ms | +1905.2% | 418ms | 39ms | +985.6% |
| TryRaiseExcept: | 587ms | 6670ms | -91.2% | 591ms | 6733ms | -91.2% |
| TupleSlicing: | 390ms | 1817ms | -78.5% | 397ms | 1863ms | -78.7% |
| UnicodeMappings: | 362ms | 1323ms | -72.7% | 365ms | 1347ms | -72.9% |
| UnicodePredicates: | 438ms | 860ms | -49.0% | 439ms | 912ms | -51.8% |
| UnicodeProperties: | 400ms | 0ms | n/a | 401ms | 0ms | n/a |
| UnicodeSlicing: | 624ms | 2491ms | -75.0% | 666ms | 2638ms | -74.7% |
The results are disappointing but not all that surprising, especially with regard to string manipulation. I attempted to run the same pybench.py tool on top of Jython, but Jython doesn't appear to support the "platform" module, so I don't have a really good baseline for "managed/virtual machine-based Python implementations" right now. However, given the lack of evidence otherwise, I'll just go ahead and assume IronPython blew the doors off of Jython :). In general this isn't the be-all, end-all benchmark for IronPython, especially on Mono, but it does give a nice hint of where some improvements could be made both in the Mono runtime and in IronPython. I'll have to run the benchmarks again with newer versions of both implementations to see where they're improving or degrading, but by all means don't let this deter you from checking out IronPython! I'll be writing up a few code samples over the next couple of weeks that I hope will be helpful to the "unenlightened" among us; dynamic languages on the CLR, what has the world come to.
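As an aside, the percentage columns in the table appear to express CPython's time relative to IronPython's, so a positive number means IronPython was faster. A sketch of that arithmetic (the function name is mine, not pybench's):

```python
def pybench_diff(base_ms, other_ms):
    """Percent difference in the style pybench prints: positive means
    `other` (here IronPython) ran faster than `base` (here CPython)."""
    return (float(base_ms) / other_ms - 1.0) * 100.0

# BuiltinMethodLookup minimum run-times from the table above:
# CPython took 530ms, IronPython took 1329ms.
print("%+.1f%%" % pybench_diff(530, 1329))  # prints "-60.1%"
```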
My new startup
I was talking to Dennis about quitting Palantir and coming to work for my startup which has no funding, and no time, but lots of brilliant ideas, when I realized I don't have a name for the startup yet.
So, effective immediately, I'm naming my startup TY-Combinator, and wouldn't you know it! We're currently accepting angel funding, demon funding, and picnic baskets filled with those little sandwiches cut into triangles.
Also effective immediately, I'm still going to work at Slide.
Mono and FastCGI. An awkward relationship.
I've spent the week tweaking and adjusting my lighttpd configuration so that it cooperates better with Mono's FastCGI server, and I finally feel confident enough in the configuration to share it.
Around Thursday morning or so (maybe it was Wednesday) the site was spewing so many 500 errors that somebody, who I'm not even sure where I know from, emailed me saying "dude, site's broke." After checking the error logs, I found a lot of errors that all looked like this:
fcgi-server re-enabled: 0 /tmp/fastcgi-mono-server
backend is overloaded; we'll disable it for 2 seconds and
send the request to another backend instead: reconnects: 0 load: 130
After diagnosing the problem and kicking the server again, I decided that a couple of tips on the wiki page for Mono's FastCGI & Lighttpd had done me in, the first being about the FastCGI handler's max-procs configuration variable:
"max-procs" specifies the maximum number of servers to spawn. Because ASP.NET stores session specific objects, I am unsure of how applications would react if switching from one server to another, or if Lighttpd bonds a single server to a client. As such, I highly recommend keeping this value as "1" to avoid any conflicts.
Fortunately Urlenco.de doesn't really need any session information, so I did what Emeril and Apache admins are both familiar with doing: I kicked it up a notch (to about 10). After kicking the server one more time, this time with "max-procs" set to 10, I watched the load on my little 1U server spike up to 30. While every terminal I had became so sluggish I could barely interact with the machine, I managed to open top(1) and see what processes were royally screwing my machine. It turned out to be 10 instances of Mono, all trying to digest an ASP.NET site at once, all competing for the meager resources available. It seems the Mono FastCGI server will process and compile your entire ASP.NET web application as soon as the FastCGI server is bootstrapped and accepting requests. Fortunately, code pushed to the site gets picked up on the next HTTP request, so the number of times you'll have to kick (i.e. restart) the Lighttpd server should be minimal, and you won't have to incur that huge performance penalty very often (I've since changed max-procs to 4).
I also went against some of the other advice on the wiki page
To overcome these problems, the recommended method for processing files is to send all requests directly to the FastCGI Mono Server.
By passing every single request off to the Mono FastCGI server you can avoid exposing some internal ASP.NET resources that should be interpreted and not sent over the wire, but this seems to be poor practice as far as Lighttpd and FastCGI are concerned. Lighttpd is a very good, high-performance HTTP server and should be allowed to do its job, whereas FastCGI servers merely serve as a means of executing server-side pages, returning markup, etc. To avoid passing every single request off to the FastCGI server, I merely set up the FastCGI handler for .aspx pages and then mapped the other ASP.NET extensions to that handler as appropriate:
fastcgi.map-extensions = (
".asmx" => ".aspx",
".ashx" => ".aspx",
".asax" => ".aspx",
".ascx" => ".aspx",
".soap" => ".aspx",
".rem" => ".aspx",
".axd" => ".aspx",
".cs" => ".aspx",
".config" => ".aspx",
".dll" => ".aspx"
)
The base configuration for one of my virtual hosts (Urlenco.de) turned out something like this:
$HTTP["host"] == "urlenco.de" {
fastcgi.server = (
".aspx" => ((
"socket" => "/tmp/fastcgi-mono-server",
"bin-path" => "/usr/local/bin/fastcgi-mono-server2",
"bin-environment" => (
"MONO_FCGI_APPLICATIONS" => "/:/serv/www/domains/urlenco.de/htdocs",
"MONO_FCGI_LOGLEVELS" => "Standard", #All", #Debug",
"MONO_FCGI_LOGFILE" => "/var/log/lighttpd/mono.log",
),
"max-procs" => 4,
"check-local" => "disable"
))
)
}
Specifying the "application path" is somewhat of a pain, as now I more or less need a separate FastCGI configuration, which means they'll also need separate FastCGI servers, so another virtual host in the configuration (pineapple.monkeypox.org) has the following setup:
$HTTP["host"] == "pineapple.monkeypox.org" {
fastcgi.server = (
".aspx" => ((
"socket" => "/tmp/fastcgi-mono-server-pineapple",
"bin-path" => "/usr/local/bin/fastcgi-mono-server2",
"bin-environment" => (
"MONO_FCGI_APPLICATIONS" => "/:/serv/www/domains/pineapple.monkeypox.org/htdocs",
"MONO_FCGI_LOGLEVELS" => "Standard", #All", #Debug",
"MONO_FCGI_LOGFILE" => "/var/log/lighttpd/mono.log",
),
"max-procs" => 1,
"check-local" => "disable"
))
)
}
Since the virtual host pineapple.monkeypox.org barely runs any ASP.NET code at all, I decided to give it only one Mono FastCGI process. Also of note is that the "socket" is different from the other FastCGI handler's; if you try to use the same socket, the first Mono FastCGI process will take it over, and both FastCGI handlers will return the same code, served by the first handler.
Feel free to bug me with any questions. This is my first foray into using Lighttpd and I'm already pleased as punch with it (compared to Apache), but there are certainly some caveats and bits of black magic involved with Mono, FastCGI, and Lighttpd. That said, it still feels less sticky than running Apache 2 and mod_mono (not that they aren't great and all). Hopefully web traffic will increase and give me a good test bed for figuring out "the right stuff" to scale Mono on Lighttpd.
Scary thought isn't it? :)
"Fun" way to crash Leopard #159
Earlier this week I noticed that the Facebook home page would not stop loading, in the sense that the entire page would load and render, but one resource would continue to load. As I popped open Safari's Activity Monitor I found that the "one resource" was a server-generated image that was effectively streaming to my browser window, since the server would not stop sending data for the file.
Curious as to what the image was, I downloaded it via Safari, which saves to the "Downloads" folder with its convenient "stack" icon in the Dock. If you're not familiar with "stacks" in Leopard, they are essentially a nifty way to navigate to folders straight from the Dock, and they also offer an iconic preview of the most-used/latest item in the folder.
Unbeknownst to me, the image I had downloaded was completely corrupted, and since it had just landed in the "Downloads" folder, the Dock started trying to render a preview of it. Doing so set off a looping chain reaction that was a wonderful sight to see, and it ended in a hard restart of the machine since I couldn't regain control. First the Dock crashed; following the Dock, Finder restarted, and then Spaces crashed entirely. Staring at a Dock that kept restarting and crashing, a Spaces that had completely abandoned five other "spaces" full of windows, and an unresponsive Finder, I made like a Windows ME user and rebooted my machine.
After the machine started up again, I got to the Downloads folder and deleted the image before the cycle could start again, restoring the machine to a usable state.
According to Apple, Mac OS X Version 10.5.1 is a full-fledged release, but it still feels like a release candidate depending on the day of the week, the amount of sunshine outside, or any one of a large number of arbitrary variables.
Urlenco.de: Mono, Lighttpd, and PostgreSQL.
During the nearly 12-hour break I had between regular work over Thanksgiving, I spent about four hours writing a little utility that I wanted to use instead of TinyURL, and found a fantastic domain name for it too: Urlenco.de. I also wanted to use the opportunity to explore Npgsql, the .NET connector for PostgreSQL, which was a very pleasant experience after using the MySQL .NET connector (part of the pleasant experience was using PostgreSQL itself, of course). Another new thing to explore was the FastCGI support for Mono/ASP.NET; I'll be sure to jot down my experiences with it in a later post, since my brain is too fried to talk about it coherently in detail.
The most important part of the entire project was further refining my rapid-development process for Mono and ASP.NET so I can do quick little projects like this and push them to a live webserver in a matter of hours instead of days (of time I don't have). This mostly consists of boilerplate project templates for some basic database code, page templates, and a NAnt build script that facilitates building and testing the site with xsp2 on localhost. Nothing spectacular, but having a toolkit of necessities to take from one project to the next, especially when time is at such a premium, is a minor yet important difference between how I work now and how I used to work (when I had expendable time).
One of my favorite parts of the entire Urlenco.de project was setting up a Urlenco.de API for both encoding (tiny'ing) and decoding (untiny'ing) URLs to and from Urlenco.de, all in under 10 minutes after a suggestion from my friend Dennis at Palantir. After another suggestion, I also wrote a Urlenco.de stats page using the
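Urlenco.de's actual internals aren't shown here, but the usual trick behind a URL shortener's encode/decode pair is base-62 conversion of a database row id. A minimal, purely illustrative Python sketch (the alphabet and function names are my own, not Urlenco.de's):

```python
# Map integer row ids to short base-62 codes and back.
# Illustrative only; Urlenco.de's real scheme may differ.
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode(n):
    """Convert a database row id into a short code (the tiny'ing half)."""
    if n == 0:
        return ALPHABET[0]
    code = []
    while n > 0:
        n, rem = divmod(n, 62)
        code.append(ALPHABET[rem])
    return "".join(reversed(code))

def decode(code):
    """Convert a short code back into the row id (the untiny'ing half)."""
    n = 0
    for ch in code:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

The encoding side runs on insert (new URL gets a row id, the id becomes the short code); the decoding side runs on every redirect, turning the short code back into a primary-key lookup.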
Missed Spain :(
I hope everybody enjoyed their stay this past week in Madrid for the Mono Summit 2007.
Unfortunately, it's been too hectic a month to take the week off and go to Madrid, so I'm incredibly jealous of all of you. Grumble.

iChatAgent leaks in Leopard
I really don't have much that I can say about this: I came into the office after leaving my Mac on (as per usual) for about 12 hours and found that I was out of space on my startup disk, out of all available system memory, and things were crashing left and right.
What the fuck, right?
Well, after I recovered the system enough to pop open "Activity Monitor" I found the exact culprit.
Turning Famousosity Up To 11.
Sergio, one of our talented web monkeys, sent an email out today that started with "OMFGBBQ!"
As it turns out, Sergio is a much more religious reader of Penny Arcade than the rest of us (a public shaming and revocation of some geek cards is in order) since he was the first to notice this:

Click to view the image fullsize
Hell yes.
As a side note, I have Sergio to thank for the sweet drag-and-drop interface on the Top Friends edit page and now for bringing some Gabe and Tycho love to our attention.
Facebook Flyers Make My Eyes Bleed.
As part of my day/night/weekend job developing Facebook applications like Top Friends, I spend a lot of time on Facebook (mostly losing games of Scrabulous to other developers). Since I spend anywhere between 20 and 30 hours a day on Facebook, I see a lot of Facebook's ads, and in particular, Facebook's "Flyers".
The concept, at its most basic level, is a novel one: let people post a flyer, similar to stapling a "Free Couch" flyer to a bulletin board, except on Facebook. In practice, however, they suck. They suck bad. Really bad. I have much higher respect for advertisers that can come up with ads that are either intriguing or, at the very least, not absolutely painful to see.
Over the past week I've been quietly taking screenshots of the absolute worst Flyers I've seen, the ones that have brought me close to sending a flaming bag of poo down to Palo Alto. Think about the lame kind of spam you get in your inbox; that's about the level Facebook's Flyers seem to be at, except I can't fix them with aggressive SpamAssassin rules.
Isn't this supposed to be targeted? These all seem to target single, stupid, bi-curious, poor, gullible, and desperate people, and I'm pretty sure I only fall into, at most, three of those categories.
Seriously, what the fuck.
"Why are you awesome?" meet Mono
When I originally wrote the Facebook demo application "Why are you awesome?"
I wrote it in PHP4 in about 3 hours and hated myself for every one of those miserable 180 minutes. Since then, however, I've been slowly and methodically working on a new, JSON-based Facebook client library (Mono.Facebook.Platform), specifically to bring together the aspects of pyfacebook, the PHP client, and the Facebook Toolkit that I like (implementation progress can be found in the NOTES). After getting some of the key Facebook calls implemented to support "Why are you awesome?", I figured I might as well give it a whirl and see if a "real" application would work on top of the library (it does).
Thus far, the only library calls I needed were:
- feed.publishActionOfUser
- notifications.send
- fql.query
- profile.setFBML
A couple of things I've found thus far: writing a library that you have to use yourself forces you to think a lot harder about what you add and what you remove, and to focus on simplicity and extensibility. Secondly, JSON is much faster, meaning I can do things with the Mono.Facebook.Platform library that I couldn't with the XML-based PHP4/5 library. Operations like fetching the user IDs of all 700 of my friends complete in a timely fashion with the JSON library, whereas they typically time out with the XML-based libraries.
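To see why the JSON path is so much less work per response, here's a toy comparison in Python using hypothetical, miniature payloads shaped like a flat list of friend uids (the real Facebook responses are just bigger):

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical miniature responses; real payloads have the same shape,
# just with hundreds of uids instead of three.
json_body = '[1001, 1002, 1003]'
xml_body = (
    '<friends_get_response>'
    '<uid>1001</uid><uid>1002</uid><uid>1003</uid>'
    '</friends_get_response>'
)

# JSON: one parse call and the result is already a list of ints.
uids_from_json = json.loads(json_body)

# XML: build the tree, then walk it and convert every element by hand.
tree = ET.fromstring(xml_body)
uids_from_xml = [int(node.text) for node in tree]

assert uids_from_json == uids_from_xml == [1001, 1002, 1003]
```

The XML payload is also simply more bytes on the wire for the same data, which adds up quickly when you're pulling hundreds of friends per request.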

The Mono.Facebook.Platform library isn't even alpha; it's in negative Greek letters right now. There's not enough of the API implemented, and it doesn't handle errors very well at all, so don't use it. When it's finished, however, I intend to support over 90% of the Facebook calls and offer it up as a faster, viable option for ASP.NET developers on Windows and on Mono.
Of course if you want to check out "Why are you awesome?", head on over to the application page and install it.
Bug Number Seven
One of my favorite Facebookers, Ari Steinberg, just resolved bug #7 in Facebook's bugzilla.
LIMIT, OFFSET, and ORDER BY are all implemented.
docs at http://developers.facebook.com/documentation.php?v=1.0&doc=fql are updated. enjoy guys, and let me know if there are any problems with it. tyler, don't go too crazy with it...keep in mind that order by in particular can be an expensive operation (but do try it out - when used in the appropriate ways it could lead to a savings)
When used correctly, LIMIT, OFFSET, and ORDER BY can really make writing application-level code much easier, because you're offloading a lot more work onto Facebook. For example, instead of fetching an entire list of people (presumably friends) and then sorting by their names, you can perform a query like:
SELECT uid,name FROM user WHERE uid IN (SELECT uid1 FROM friend WHERE uid2 = $UID) ORDER BY name
This query will fetch an alphabetically sorted list of $UID's friends along with their uids, saving you any sorting you might otherwise need to do.
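For what it's worth, issuing FQL outside of a client library is just a signed HTTP POST. Here's a rough Python sketch of the old REST API's request signing from memory (the api_key/secret are placeholders, and you should double-check the details against the Platform docs before relying on them):

```python
import hashlib

def sign_request(params, secret):
    """Build the 'sig' parameter the old Facebook REST API expects:
    MD5 over the sorted key=value pairs with the app secret appended."""
    pieces = ["%s=%s" % (k, params[k]) for k in sorted(params)]
    return hashlib.md5(("".join(pieces) + secret).encode("utf-8")).hexdigest()

# Placeholder credentials; swap in your own api_key and secret.
params = {
    "method": "facebook.fql.query",
    "api_key": "YOUR_API_KEY",
    "v": "1.0",
    "query": ("SELECT uid,name FROM user WHERE uid IN "
              "(SELECT uid1 FROM friend WHERE uid2 = 1234) ORDER BY name"),
}
params["sig"] = sign_request(params, "YOUR_SECRET")
# POST params to http://api.facebook.com/restserver.php to run the query.
```

Every library mentioned in this post (pyfacebook, the PHP client, Mono.Facebook.Platform) is ultimately doing some variation of this under the hood.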
Make sure you check the FQL documentation to see which "columns" are keyed, so your queries are as efficient as possible. You should already be selecting on keyed "columns" in FQL as often as possible, but when you're offloading large amounts of sorting onto Facebook's API servers, it becomes even more important to form efficient queries, so you can fetch data from Facebook and render your application's pages as quickly as possible.
Another fun query that becomes more fun with ORDER BY is fetching events for a particular user:
SELECT eid, name FROM event WHERE eid IN (SELECT eid FROM event_member WHERE uid = $UID) ORDER BY name
This one, of course, uses ORDER BY on the event.name "column," which is not keyed, so it will theoretically perform slower than the example above; but it's far less likely that a user will have thousands of events than thousands of friends, so the real-world performance hit should be negligible.
As a side note, Ari was on stage with me at Graphing Social a few weeks ago, helping me give the Facebook App Development 101 workshop. You can regularly find him cruising through bugzilla, and every so often in the #facebook channel on Freenode.
Building Mono on Leopard
I figured I'd write up a guide to building Mono from Subversion, in preparation for the upcoming 1.2.6 release, on a site I've neglected since I set it up: mononews.org (I hope to get back to writing tutorials and "newsy" stuff with the 1.2.6 release).
Anyway, if you've got Leopard installed: Geoff Norton did a great job helping me track down the remaining Leopard/i386 bugs earlier today, so you can now build and run Mono from Subversion relatively easily on your fancy-schmancy new OS.
A note to my Graphing Social "students"
I figured I'd inform anybody that attended my Facebook App Development 101 workshop at Graphing Social that I have finally deleted the workshop test accounts that were located at workshop.monkeypox.org.
I have made a backup of the database that we used to play around with the "Why are you awesome?" source code, and I have also backed up the files, so in case you forgot to grab your modified files, drop me a line and I'll fish your data out.
Your order has been completed
Yesterday, while shopping around for a new cell plan, I figured it'd be a good time to get a new phone as well. The cell phone I currently have is the only cell phone I've ever owned; I believe the model is an LG Piezza-shit.
After browsing around Cingular's site, I found a good deal on a Blackberry Pearl and decided I really do want to be tethered to my email more than I am now. Not that I ever had any free time to begin with, but I'm a sucker; I liked the placebo effect. That's all over now: I'm getting a Blackberry.

I'm fucked