Welcome to my blog, where I write about software development, cycling, and other random nonsense. This isn't the only place I write; you can find more words I've typed on the Buoyant Data blog, the Scribd tech blog, and GitHub.
miscellaneous, software development, linux, hudson
I've been using a Gnome-based desktop for the past 8-9 months, and one of the things I've come to really appreciate is that most Gnome applications integrate with "libnotify". Libnotify is a simple, Windows-taskbar-like notification system that presents status messages at the bottom of your screen. Like all great pieces of software, it has a solid Python interface, which makes it easy to incorporate into those little 10-minute scripts I find myself writing every now and again.
One of the things I wanted to script was the notification of the build status of the numerous jobs that we're running in our Hudson instance here at Slide. Using the Universal Feed Parser and pynotify (listed under "notify-python"), I had a good little Gnome Hudson Notifier running in less than 10 minutes.
Source code after the jump.
import feedparser
import pynotify
import time

BASE_TITLE = 'Hudson Update!'
ICON = 'file:///usr/share/pixmaps/gnome-suse.png'

def success(job):
    n = pynotify.Notification(BASE_TITLE,
            '"%s" successfully built :)' % job, ICON)
    n.set_urgency(pynotify.URGENCY_LOW)
    return n

def unstable(job):
    n = pynotify.Notification(BASE_TITLE,
            '"%s" is unstable :-/' % job, ICON)
    n.set_urgency(pynotify.URGENCY_NORMAL)
    return n

def failure(job):
    n = pynotify.Notification(BASE_TITLE,
            '"%s" failed!' % job, ICON)
    n.set_urgency(pynotify.URGENCY_CRITICAL)
    return n

def main():
    pynotify.init('Hudson Notify')
    old_items = []
    while True:
        feed = feedparser.parse('http://hudson/rssLatest')
        items = [t['title'] for t in feed['entries']]
        # Only notify for entries we haven't seen on a previous poll
        new_items = list(set(items).difference(old_items))
        for i in new_items:
            # Titles look like: jobname #42 (SUCCESS)
            job, build, status = i.split(' ')[:3]
            status = status.replace('(', '').replace(')', '')
            if status == 'SUCCESS':
                success(job).show()
            elif status == 'UNSTABLE':
                unstable(job).show()
            elif status == 'FAILURE':
                failure(job).show()
        old_items = items
        time.sleep(60)

if __name__ == '__main__':
    main()
It's pretty basic right now, but does everything I really wanted it to do. I may add it into a public Git repository in the near future if I spend any more time on the project. Hope you like it :)
opinion
I've been getting voice-mails from Chase Auto-Finance recently bugging me to pay them some money (turns out they're strapped for cash lately, something silly about irresponsible lending).
All is well and good; I normally call Chase up once a month, navigate through increasingly painful phone menus, and give Chase some of my money. As luck would have it, sometime between my last payment and my current one, Chase decided that you should really talk to a representative to make a payment. In effect, I have to talk to some poor soul working in a shitty 9-5 call center job to make a car payment that I've made for the past two years via an automated system. Hooray, progress.
Back to the voice mails: I usually receive them while I'm at work, because I'm too busy working to answer the phone. Unfortunately, each one contains some poor soul working in a shitty 9-5 call center job asking me to call a Chase representative back to resolve my outstanding payment issue.
Why is my bank making it so damned hard to give them money?
In the future I intend on staying with my other bank for my loans since not only do they have reasonable customer service representatives, but they make it incredibly easy to give them money.
mono, miscellaneous, javascript
I found myself talking to Jason today about the virtues of getattr(), setattr(), and hasattr() in Python and "abusing" the dynamic nature of the language which reminded me of some lazy-loading code I wrote a while back. In February I found the need to have portions of the logic behind one of our web applications fetch data once per-request. The nature of the web applications we're building on top of the MySpace, Hi5 and Facebook platforms require some level of network data-access (traditionally via REST-like APIs). This breaks our data access model into the following tiers:
Working with network-centric data resources is difficult in any scenario (desktop, mobile, web) but the particularly difficult thing about network data access in the mod_python-driven request model is that it will be synchronous (mod_python doesn't support "asynchronous pages" like ASP.NET does). This means every REST call to Facebook, for example, is going to block execution of the request handler until the REST request to Facebook's API tier completes.
def request_handler(self, *args, **kwargs):
    fb_uid = kwargs.get('fb_sig_user')
    print "Fetching the name for %s" % fb_uid
    print time.time()
    name = facebook.users.getInfo(uid=fb_uid)
    ### WAIT-WAIT-WAIT-WAIT-WAIT
    print time.time()
    ### Continue generating the page...
There is also a network hit (albeit minor) for accessing cached data or data stored in databases. The general idea is that we'll need to have some level of data resident in memory throughout a request, data that can differ widely from request to request.
Lazy loading in Python
To help avoid unnecessary database access or network access I wrote a bit of class-sugar to make this a bit easier and more fail-proof:
class LazyProgrammer(object):
    '''
    LazyProgrammer allows for lazily-loaded attributes on the subclasses
    of this object. In order to enable lazily-loaded attributes define
    "_X_attr_init()" for the attribute "obj.X"
    '''
    def __getattr__(self, name):
        rc = object.__getattribute__(self, '_%s_attr_init' % name)()
        setattr(self, name, rc)
        return rc
This makes developing network-centric web applications a bit easier. For example, if I have a "friends" lazily-loading attribute on the base "FacebookRequest" class, all developers writing code that subclasses FacebookRequest can simply refer to self.friends and feel confident they aren't incurring unnecessary bandwidth hits, and the friends-list fetching code lives in one spot. If once-per-request starts to become too resource-intensive as well, it'd be trivial to override the _friends_attr_init() method to hit a caching server instead of the REST servers first, without needing to change any code "downstream."
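To make the pattern concrete, here's a small, self-contained sketch of how a subclass would use it; the class is repeated from above (with the "%s" interpolation spelled out), and FacebookRequest here is a stand-in that fakes the REST call with a static list rather than hitting any real API:

```python
class LazyProgrammer(object):
    '''Lazily resolve obj.X by calling the subclass-defined _X_attr_init().'''
    def __getattr__(self, name):
        rc = object.__getattribute__(self, '_%s_attr_init' % name)()
        setattr(self, name, rc)
        return rc

class FacebookRequest(LazyProgrammer):
    '''Hypothetical subclass: in production this would call the REST API.'''
    def _friends_attr_init(self):
        # Pretend this is an expensive network fetch of the friends list
        return [1001, 1002, 1003]

req = FacebookRequest()
print(req.friends)   # first access runs _friends_attr_init()
print(req.friends)   # second access hits the plain attribute, no re-fetch
```

Because __getattr__ is only invoked when normal attribute lookup fails, the setattr() call means the fetch runs at most once per instance.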
Lazy loading in C#
Since C# is not a dynamically-typed language like Python or JavaScript, you can't implement lazily-loaded attributes in the same fashion (calling something like setattr()) but you can "abuse" properties in a manner similar to the C# singleton pattern, to get the desired effect:
using System;
using System.Collections.Generic;

public class LazySharp
{
    #region "Lazy Members"
    private Dictionary<string, string> _names = null;
    #endregion

    #region "Lazy Properties"
    public Dictionary<string, string> Names
    {
        get {
            if (this._names == null)
                this._names = this.SomeExpensiveCall();
            return this._names;
        }
    }
    #endregion
}

Admittedly, I don't find myself writing Facebook/MySpace/Hi5 applications on top of ASP.NET these days, so I can't say I actually use the class above in production, but conceptually it makes sense.
I find lazily-loaded attributes most useful in the more hodge-podge situations, where code and feature-sets have both grown organically over time. They're not for everybody, but I figured I'd share anyway.
mono, javascript
Two things happened in such close proximity, time-wise, that I can't help but think they're somehow related to a larger shift toward interpreters. Earlier this week Miguel introduced the csharp shell, which forced me to dust off my shoddy Mono 1.9 build and rebuild Mono from Subversion, just because this is too interesting to pass up.
One of my favorite aspects of using IronPython, or Python for that matter, is the interpreter, which allows for prototyping that doesn't involve creating little test apps that I have to build just to prove a point. For example, I can work through fetching a web page in the csharp shell really easily, instead of creating a silly little application, compiling, fixing errors, and recompiling:
tyler@pineapple:~/source/mono-project/mono> csharp
Mono C# Shell, type "help;" for help
Enter statements below.
csharp> using System;
csharp> Console.WriteLine("This changes everything.");
This changes everything.
csharp> String url = "http://tycho.usno.navy.mil/cgi-bin/timer.pl";
csharp> using System.Web;
csharp> using System.Net;
csharp> using System.IO;
csharp> using System.Text;
csharp> HttpWebRequest req = HttpWebRequest.Create(url);
(1,17): error CS0266: Cannot implicitly convert type `System.Net.WebRequest' to `System.Net.HttpWebRequest'. An explicit conversion exists (are you missing a cast?)
csharp> HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest;
csharp> HttpWebResponse response = req.GetResponse() as HttpWebResponse;
csharp> StreamReader reader = new StreamReader(req.GetResponseStream() as Stream, Encoding.UTF8);
(1,45): error CS1061: Type `System.Net.HttpWebRequest' does not contain a definition for `GetResponseStream' and no extension method `GetResponseStream' of type `System.Net.HttpWebRequest' could be found (are you missing a using directive or an assembly reference?)
csharp> StreamReader reader = new StreamReader(response.GetResponseStream() as Stream, Encoding.UTF8);
csharp> String result = reader.ReadToEnd();
csharp> Console.WriteLine(result);
I really think Miguel and Co. have added something infinitely more useful in this Hackweek project than anything I've seen come out of recent hackweeks at Novell. The only feature request I'd add for the csharp shell would be "recording", i.e.:
tyler@pineapple:~/source/mono-project/mono> csharp
Mono C# Shell, type "help;" for help
Enter statements below.
csharp> Shell.record("public void Main(string[] args)");
recording...
csharp> using System;
csharp> Console.WriteLien("I prototyped this in csharp shell!");
(1,10): error CS0117: `System.Console' does not contain a definition for `WriteLien'
/home/tyler/basket/lib/mono/2.0/mscorlib.dll (Location of the symbol related to previous error)
csharp> Console.WriteLine("I prototyped this in csharp shell!");
csharp> Shell.save_record("Hello.cs");
recording saved to "Hello.cs"
Which could conceptually generate the following file:
using System;
public class Hello
{
public void Main(string[] args)
{
Console.WriteLine("I prototyped this in csharp shell!");
}
}
JavaScript Shell
In addition to the C# shell, I've been playing with v8, the JavaScript engine that powers Google Chrome. The V8 engine is capable of being embedded easily, or running standalone, one of the examples they ship with is a JavaScript shell. I've created a little wrapper script to give me the ability to load jQuery into the V8 shell to prototype jQuery code without requiring a browser to be up and running:
tyler@pineapple:~/source/v8> ./shell
V8 version 0.3.0
> load("window-compat.js");
> load("jquery.js");
> $ = window.$
function (selector,context){return new jQuery.fn.init(selector,context);}
> x = [1, 5, 6, 12, 42];
1,5,6,12,42
> $.each(x, function(index) { print("x[" + index + "] = " + this); });
x[0] = 1
x[1] = 5
x[2] = 6
x[3] = 12
x[4] = 42
1,5,6,12,42
>
The contents of "window-compat.js" being:
/*
* Providing stub "window" objects for jQuery
*/
if (typeof(window) == 'undefined') {
    window = new Object();
    document = window;
    self = window;
}
In general I don't really have anything insightful or especially interesting to add, but I wanted to put out my "+1" in support of both of these projects. Making any language or API more easily accessible through these shells/interpreters can really help developers double-check syntax, expected API behavior, and so on. Thanks Novell/Google, interpreters rock!
slide, software development, hudson
I recently wrote about "one-line automated testing" by way of Hudson, a Java-based tool that helps to automate building and test processes (akin to Cruise Control and Buildbot). If you were to read this blog regularly, you'd be well aware that I work primarily with Python these days, at a web company no less! What does a web company need with a continuous integration tool? Especially if they're not using a compiled language like Java or C# (heresy!).
As any engineering organization grows, it's bound to happen that you reach a critical mass of developers and either need to hire an equitable critical mass of QA engineers, or start to approach quality assurance from all sides. That is to say, automated unit testing and automated integration testing become a requirement for growing both as an engineering organization and as a web application provider (users don't like broken web applications). With web products like Top Friends, SuperPoke! and Slide FunSpace we have a large amount of ever-changing code that has been in a constant state of flux for the past 16-18 months. We've accommodated the ever-changing backend code for the past year and a half with PyUnit and development discipline.
How do you deal with months of ever-changing code for the aforementioned products' front-ends? Your options are pretty slim: you can hire a legion of black-box QA engineers to manually go through regression tests and ensure your products are in tip-top shape, or you can hire a few talented black-box QA engineers to conscript a legion of robots to do it for them. Enter Windmill. Windmill is a web browser testing framework not entirely unlike Selenium or Watir, with two major exceptions: Windmill is written in Python, and Windmill has a great recorder (and lots of other features). One of my colleagues at Slide, Adam Christian, has been working tirelessly to push Windmill further and prepare it for enterprise adoption, the first enterprise to use it being Slide.
Adam and I have been working on bringing the two ends of the testing world together with Hudson. About half of the jobs currently running inside our Hudson installation are running PyUnit tests on various Subversion and Git branches. The other half are running Windmill tests and reporting back into Hudson by way of Adam's JUnit-compatible reporting code. Thanks to the innate flexibility of PyUnit and Windmill's reporting infrastructure, we were able to tie all these loose ends together with a tool like Hudson, which will handle Jabber or email notifications when test runs fail and include details in its reports.
We're still working out the kinks in the system, but to date this setup has helped us fix at least one critical issue a week (along with numerous other minor issues) since we launched the Hudson system, more often than not before said issues reach the live site and real users. If you've got questions about Windmill or Hudson you can stop by the #windmill or #hudson channels on Freenode.
Automated testing is like a really good blend of coffee, until you have it, you think "bah! I don't need that!" but after you start with it you can't help but wonder how you could tolerate the swill you used to drink.
Did you know? Slide is hiring! We're looking for talented engineers to write some good Python and/or JavaScript; feel free to contact me at tyler[at]slide
slide, opinion, software development, hudson
For about as long as my development team has been a number larger than one, I've been on a relatively steady "unit test" kick. With the product I've worked on for over a year gaining more than one cook in the kitchen, it became time both to start writing tests to prevent basic regressions (and save our QA team tedious hours of black-box testing) and to automate those tests in order to quickly spot issues.
While I've been on this pretty steadily lately, I'm proud to say that automated testing was one of my first pet projects at Slide. If you ever crack into the Slide corporate network, you can find my workstation under the name "ccnet", which is short for CruiseControl.NET, my first failed attempt at getting automated testing going on our now-defunct Windows desktop client. As our development focus shifted away from desktop applications to social applications, the ability to reliably test those systems plummeted; accordingly, our test suite for these applications became paltry at best. As the organization started to scale, this simply could not stand much longer, or else we might not be able to efficiently push stable releases on a near-nightly schedule. As we've started to back-fill tests (test-after development?), the need to automate those tests has arisen, so I started digging around for something less painful to deal with than Cruise Control. Enter Hudson.
Holy Hudson Batman!
I was absolutely astounded that neither I nor anybody I knew was aware of the Hudson project. Hudson is absolutely amazing as far as continuous integration systems go. The only major caveat is that the entire system is written in Java, meaning I had to beg one of our sysadmins to install Java 1.5 on the unit test machine. Once that was sorted out, starting the Hudson instance up was incredibly simple:
java -jar hudson.war
In our case, the following keeps the JVM within manageable virtual memory limits:
java -Xmx128m -jar hudson.war --httpPort=8888
Once the Hudson instance was up and running, I simply had to browse to http://unittestbox:8888/ and the entire rest of the configuration was done from the web UI. Muy easy. Muy bueno.
Plug-it-in, plug-it-in!
One of the most wonderful aspects of Hudson is its extensible plugin architecture. Adding plugins like "Git", "Trac" and "Jabber" means that our Hudson instance is now properly linking to Trac revisions, sending out Jabber notifications on "build" (read: test run) failures, and monitoring both Subversion and Git branches for changes. From what I've seen of the plugin architecture, it would be trivial to extend Hudson with Slide-specific plugins as the need arises.
With the integration of the PyUnit XMLTestRunner (found here) and an XML output plugin worked into Windmill, we can easily automate testing of both our back-end code and our front-end.
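The linked XMLTestRunner isn't reproduced here, but the core idea, running an ordinary PyUnit suite and emitting JUnit-style XML that Hudson can ingest, can be sketched in a few lines. Everything below (the ExampleTest case, the run_to_junit_xml helper, and the 'testsuite' element it builds) is an illustrative stand-in, not the actual runner:

```python
import unittest
import xml.etree.ElementTree as ET

class ExampleTest(unittest.TestCase):
    '''Stand-in for one of our PyUnit suites.'''
    def test_addition(self):
        self.assertEqual(2 + 2, 4)

def run_to_junit_xml(suite):
    '''Run a PyUnit suite and return minimal JUnit-style XML.'''
    result = unittest.TestResult()
    suite.run(result)
    # Hudson's JUnit report parser keys off counts like these
    root = ET.Element('testsuite', name='example',
                      tests=str(result.testsRun),
                      failures=str(len(result.failures)),
                      errors=str(len(result.errors)))
    return ET.tostring(root, encoding='unicode')

suite = unittest.TestLoader().loadTestsFromTestCase(ExampleTest)
print(run_to_junit_xml(suite))
```

A real runner would also record per-test elements and timings; the point is just that nothing about PyUnit prevents reporting in a format a CI server already understands.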
Hudson in action
And all with one simple java command :)
miscellaneous, media
Since I've started to spend such an enormous amount of my time on work and settling into a new apartment, I've had literally no time to discover new music. Because of this utter lack of time, I've been pondering one idea daily for the past month or two: I want to participate in an iPod Foreign Exchange Program.
I currently own a 30GB Video iPod (black) that has about 28GB of music on it with a few assorted podcasts here and there.
Here's what I'm thinking would constitute a good set of rules for swapping an iPod to "walk a mile in somebody's shoes" (musically).
We can be acquaintances, but not friends. I know what my friends listen to and can steal their iPods myself :)
The period to swap iPods would last one week
Both parties would make sure to un-sync their address book and calendars from the iPod, but not change any of the music (no trying to impress people)
The iPod swap is accompanied with a business card or means to coordinate a swap-back
Both parties must be respectful of the others' tastes, even if it's really weird (you know who you are)
I went ahead and removed my calendars and contacts from my iPod just in case I run into somebody on the train that has read this post and wants to swap right away, but failing that, if you're around San Francisco, let's swap iPods :)
slide, opinion, software development, git
For the past two months I've been experimenting, with varying levels of success, with Git inside of Slide, Inc. Currently Slide makes use of Subversion and relies heavily on branches for everything from project-specific branches to release branches (branches that can live anywhere from under 12 hours to three weeks). There are plenty of other blog posts about the pitfalls of branching in Subversion that I won't go into here; suffice to say, it is... sub-par. Below is a rough diagram of our general current workflow with Subversion. (I've had some other developers ask me "why don't you just work in trunk?", to which I usually wax poetic about the chaos of trunk once any project gets over 5 active developers; Slide engineering is somewhere between 30-50 engineers.)
There are three major problems we've run up against with utilizing Subversion as our version control system at Slide:
Subversion's "branches" make context switching difficult
Depending on the age of a branch cut from trunk/, merges and maintenance range from difficult to impossible
Merging Subversion branches into each other causes a near total loss of revision history
Given that branches are a critical part of Slide's development process, we've historically looked at branch-strong version control systems, such as Perforce, as alternatives. Before I joined Slide in April of 2007, I was a heavy user of Perforce for my own consulting projects as well as for some of my work with the FreeBSD project as part of the Summer of Code program. In fact, my boss sent out a "Perforce Petition" to our engineering list on my third day at Slide... we still haven't switched away from Subversion.
Up until earlier this year I hadn't given it a second thought until the team I was working with grew and grew such that between me and four other engineers we were pushing a release anywhere from once to three times a week. That meant we were creating a Subversion "branch" multiple times a week, and a significant part of my daily routine became merging to our release branch and refreshing project branches from trunk/. All of a sudden Git was looking prettier and prettier, despite some of its warts. At this point in time I was already using Git for some of my personal projects that I never have time for, so I knew at the bare minimum that it was functional. What I didn't know was how to deploy and use it with a large engineering team that works in very high churn short iterations, like Slide's.
Subversion at Slide
Moving our source tree over into a system other than Subversion, from Subversion, was destined to be painful. The tree at Slide is deceptively large: we have a substantial amount of Python running around (as Slide is built, top-to-bottom, in Python), an incredible amount of Adobe Flash assets (.swf files) and Adobe Illustrator assets (.ai files), and plenty of binary files, like images (.png/.gif/.jpeg). Currently a full checkout of trunk/ is roughly 2.5GB including artwork, Flash, server and web application code. We also have roughly 88k revisions in Subversion, the summation of three years of the company's existence. Fortunately somebody along the line wrote a script (in Perl, however) called "git-svn(1)" that is designed to do exactly what I needed: move a giant tree from Subversion to Git, from start to finish (similar to svn2p4 in Perforce parlance).
After raising the issue enough times, I finally caught spearce, who was able to identify the problem and supply a patch that fixed Git's memory allocation issues with a repository of Slide's size. First obstacle overcome; now I could actually test a Git workflow inside of Slide.
If you are looking to deploy Git for a larger audience in a corporate environment, I highly recommend Gitosis. Gitosis allows SSH to be used as the transport protocol for Git and provides authentication by way of limited-shell user accounts and SSH keys; it's not perfect, but it's the closest thing to maintainable for larger installations of Git (in my opinion).
So far the experimenting with Git at Slide is pretty localized to just my team, but with a combination of Gitosis, git-svn(1) and some "best practices" defined for handling the new system we've successfully continued development for over the past month without any major issues.
As this post is already quite lengthy, I'll be discussing the following two parts of our experimenting in subsequent posts:
mono, miscellaneous, software development
Most of my personal projects are built on top of ASP.NET, Mono and Lighttpd. One of the benefits of keeping them all running on the same stack (as opposed to mixing Python, Mono and PHP together) is that I don't need to maintain different infrastructure bits to keep them all up and running. Two key pieces that make it easy to dive back into a side-project whenever I have some (spurious) free time are my NAnt scripts and my push scripts.
NAnt
I use my NAnt script for a bit more than just building my web projects; more often than not I use it to build, deploy and test everything related to the site. My projects are typically laid out like:
bin/ Built DLLs, not in Subversion
configs/ Web.config files per-development machine
libraries/ External libraries, such as Memcached.Client.dll, etc.
schemas/ Files containing the SQL for rebuilding my database
site/ Fully built web project, including Web.config and .aspx files
sources/ Actual code, .aspx.cs and web folder (htdocs/ containing styles, javascript, etc)
Executing "nant run" will build the entire project, construct the full version of the web application in site/, and finally fire up xsp2 on localhost for testing. The following NAnt file is what I've been carrying from project to project.
The Push Script
Since I usually build and deploy on the same machine, I use a simple script called "push.sh" to handle rsyncing data from the development part of my machine into the live directories.
#!/bin/bash
###############################
## Push script variables
export NANT='/usr/bin/nant'
export STAGE=`hostname`
export SOURCE='site/'
export LIVE_TARGET='/serv/www/domains/myproject.com/htdocs/'
export BETA_TARGET='/serv/www/domains/beta.myproject.com/htdocs/'
export TARGET=$BETA_TARGET
###############################
###############################
## Internal functions
function output {
echo "===> $1"
}
function build {
${NANT} && ${NANT} site
}
###############################
###############################
## Build the site first
output "Building the site..."
build
if [ $? -ne 0 ]; then
output "Looks like there was an error building! abort!"
exit 1
fi
###############################
## Start actual pushing
if [ "${1}" = 'live' ]; then
output " ** PUSHING THE LIVE SITE ***"
export TARGET=$LIVE_TARGET
else
output "Pushing the beta site"
fi
output "Using Web.config-${STAGE}"
output "Pushing to: ${TARGET}"
## Sync the freshly built site into the target htdocs
rsync -av --delete ${SOURCE} ${TARGET}
Depending on the complexity of the web application I might change the scripts up on a case-by-case basis, but for the most part I have about 5-6 projects out "in the ether" that are built and deployed with a derivative of the NAnt script and push.sh listed above. In general though, they provide a good starting point for the tedious bits of non-Visual Studio-based web development (especially if you're in an entirely Linux-based environment).
slide, miscellaneous, software development
A while ago I jotted down about seven ideas I thought would make good blog posts; somehow "markup parsers in Python" is next on the list, so I might as well spill the beans on how incredibly easy it is to process (X)HTML with Python and a little built-in class called HTMLParser.
There have been a few occasions when I needed a quick (and dirty) way to perform transforms on some chunk of HTML, or merely to "search and replace" parts of it. While it might be cleaner to do something with XSLT or the like, those approaches don't even begin to match the development speed of an HTMLParser-based class in Python.
Getting Started
One major thing to keep in mind when working with HTMLParser, especially if you're newer to Python, is that it is what's referred to as an "old-style" class, meaning subclassing it is a bit different from "new-style" classes. Since HTMLParser is an old-style class, any time you want to call a method defined on the superclass you need to call it through the class itself, as in HTMLParser.HTMLParser.superMethod(self, arg), instead of super(SubHTMLParser, self).superMethod(arg).
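That calling convention looks like this in a tiny subclass. (TagCounter is a throwaway name for illustration; the import path shown is the modern html.parser one, while on Python 2 it was simply "import HTMLParser", but the explicit parent-class call works the same either way.)

```python
from html.parser import HTMLParser   # Python 2: import HTMLParser

class TagCounter(HTMLParser):
    '''Count opening tags, calling the parent initializer explicitly.'''
    def __init__(self):
        # No super() with old-style classes: go through the class itself
        HTMLParser.__init__(self)
        self.count = 0

    def handle_starttag(self, tag, attrs):
        self.count += 1

parser = TagCounter()
parser.feed('<p><a href="#">Hello</a></p>')
print(parser.count)   # two start tags seen: <p> and <a>
```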
Creating the HTML parser
For the purposes of this example, I want something simple, so we're just going to take a block of markup and "tweak" all the <a> tags within it to be "sad" (whereas "sad" means they'll be bold, blue, and blinky). The actual code to do so is only about 50 lines long and begins as follows:
import HTMLParser
class SadHTML(HTMLParser.HTMLParser):
    '''A simple HTML transform-class based upon HTMLParser. All links shall be bold, blue and blinky :('''
The actual ins-and-outs of the parser are very simple; markup like "<a href="#">Hello</a><br/>" would execute accordingly:
handle_starttag('a', [('href', '#')])
handle_data('Hello')
handle_endtag('a')
handle_startendtag('br', [])
Since HTMLParser just gives you element tag names and their attributes, SadHTML simply builds a list of strings out of the data passed to it via the superclass, and when everything is finished ties the list back together with ''.join(list_of_tags).
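Putting those handlers together, a minimal version of SadHTML might look like the following. This is a sketch in the spirit of the post, not the original 50-line listing: the style string, the chunks list, and the classmethod shape of depreshun are all assumptions, and the modern html.parser import path is used here for convenience.

```python
from html.parser import HTMLParser   # the post uses Python 2's HTMLParser module

class SadHTML(HTMLParser):
    '''Rebuild markup verbatim, restyling every <a> to be bold, blue, blinky.'''
    SAD = 'style="font-weight:bold;color:blue;text-decoration:blink"'

    def __init__(self):
        HTMLParser.__init__(self)
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        parts = ['%s="%s"' % (k, v) for k, v in attrs]
        if tag == 'a':
            parts.append(self.SAD)   # make the link "sad"
        self.chunks.append('<%s>' % ' '.join([tag] + parts))

    def handle_endtag(self, tag):
        self.chunks.append('</%s>' % tag)

    def handle_startendtag(self, tag, attrs):
        self.chunks.append('<%s/>' % tag)

    def handle_data(self, data):
        self.chunks.append(data)

    @classmethod
    def depreshun(cls, markup):
        parser = cls()
        parser.feed(markup)
        return ''.join(parser.chunks)

print(SadHTML.depreshun('<a href="#">Hello</a><br/>'))
```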
Executing the SadHTML.depreshun method on the contents of my last blog post is a good example, part of the post was:
An informal poll at the Slide offices this past week yielded these interesting results: at Slide.com, nearly 100% of white people seem to like "Stuff White People Like".
After running it through "SadHTML", the following markup is generated instead:
An informal poll at the offices this past week yielded these interesting results: at Slide.com, nearly 100% of white people seem to like .
If you're curious as to how much more you can do with HTMLParser, do check out the documentation. It's far more lenient than using eXpat for parsing HTML, and it's still fast enough to be used on longer documents (there's also htmllib available for Python but I've not used it yet).
miscellaneous
An informal poll at the Slide offices this past week yielded these interesting results: at Slide.com, nearly 100% of white people seem to like "Stuff White People Like".
It's so easy to get caught up in the flurry of things going on here in Silicon Valley (not to mention just at Slide), but I figured that Hi5 deserved a mention. I'd like to congratulate Lou, Anil, Paul, Zack and the rest of the Hi5 Platform team on being (from what I can tell) the first social network to turn their OpenSocial-based platform on for 100% of users. As of last Friday they finally ramped up to 100%, meaning every user on Hi5 can add OpenSocial applications that have been approved and added to the Hi5 applications gallery.
The past couple weeks I've been lurking on the #Hi5dev channel on Freenode, where most of the Hi5 team has been as well, dutifully answering questions and getting general developer feedback. I highly recommend following their developer blog where Lou (pictured here) has been posting regular updates and all the important things that you need to do in order to get your application viral, approved and reaching Hi5's users.
Some of the applications we've launched include: Top Friends, Slide TV and SuperPoke. Of course, if all you want to do on Hi5 is be friends with me, you can find me here :).
Overall the OpenSocial/Hi5 platform has been an interesting experience; moving more of the application into the realm of JavaScript, as opposed to what I've become used to on the Facebook platform, has made me think harder about the separation of front-end code from back-end code, and where you actually draw the line when both are written in the same language. One down, only two to go!
miscellaneous
If you're subscribed to my RSS feed you might not have noticed, but otherwise you probably already know by now about the change in the look of unethicalblogger.com. I got tired of the old (boring) red theme and dug around on drupal.org until I found one I liked, then customized it to suit my needs.
Other than that, I'm very proud to announce that this site is the 7th hit on Google for the query "unethical" (check it out). Besides the obvious tactics (kicking puppies), I'm wondering how I can reach the #1 result for "unethical".
slide, miscellaneous, javascript
Since I've come from the land of desktop application development, there are a few concepts that I don't think quite "made the voyage" from desktop/thick-client development to web/thin-client development. The concept of "data binding" is, in my opinion, completely lost in the land of Javascript and HTML (not to mention the concept of "controls" to begin with). A few weeks ago, while exploring a couple other concepts for how to improve our overall frontend development at Slide, I prototyped a means of "databinding" controls, or at the very least DOM elements, to data-providing Javascript functions.
I've posted an example here of some of the data binding code I've written for experimentation purposes. In the example page linked, there is a <ul> tag that is "bound" to a Javascript function; the function creates an array of associative arrays inline (it could very well be powered by some AJAX-oriented Javascript with minor adjustments). Using the results of the "databind" function specified on the bindable element, it creates a set of child nodes to attach to the parent list. In effect, the following code:
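The original example isn't reproduced here, but the idea can be sketched like this. Everything below is an illustrative reconstruction, not the actual prototype: the element id, the "databind" attribute name, and the function names are all assumptions.

```javascript
// Hypothetical reconstruction of the databinding idea -- names are assumptions.
// The markup would declare which function provides its data, e.g.:
//   <ul id="bound_list" databind="list_data"></ul>

// The data-providing function returns an array of associative arrays.
function list_data() {
    return [
        { text: 'List Item #1' },
        { text: 'List Item #2' },
        { text: 'List Item #3' }
    ];
}

// On page load, bind() looks up the function named by the "databind"
// attribute and appends one <li> per row to the bound element.
function bind() {
    var element = document.getElementById('bound_list');
    var rows = window[element.getAttribute('databind')]();
    for (var i = 0; i < rows.length; i++) {
        var item = document.createElement('li');
        item.appendChild(document.createTextNode(rows[i].text));
        element.appendChild(item);
    }
}

// Run after the page has loaded (guarded so this file parses outside a browser).
if (typeof window !== 'undefined') {
    window.onload = bind;
}
```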
Will generate the following DOM tree, after the "bind()" function has been run on page load:
List Item #1 OMG
List Item #2
List Item #3
Since the code is relatively simple (in my opinion) I figured I would throw it out there in all its minimalistic glory and get some general feedback on the concept before I go "all out" and create a full-on jQuery extension based on the linked prototype above. I'm trying to think of ways to make it more powerful as well; built-in support for binding to the results of an asynchronous call to a URL that returns JSON, which would then create the elements, is at the top of my TODO list at this point. Feedback, flames and actual useful critiques are all welcome; I'll be sure to post again once I have the time to create the jQuery extension for binding. This, however, is more experimental quality (i.e. don't use it, I'm not).
What do you think?
Did you know? Slide is hiring! We're looking for talented engineers to write some good Python and/or JavaScript; feel free to contact me at tyler[at]slide
miscellaneous, linux
With the release of Mac OS 10.5 (Leopard) I found myself in a tough spot: I liked certain features added in Leopard, but I couldn't stand some of the stability issues I was having and the other bugs that would interrupt my normal workflow during the day. In an effort to alleviate some of my frustrations with Leopard, I experimented for a week with running Gnome (with Compiz) on my openSUSE workstation. In general, all the important bits were there. By this point I had already switched away from any sort of GUI editor and over to using vim on a shared development server here at the office. Given that Drosera still wasn't fast enough for my normal day-to-day web development debugging, I was also using Firefox and Opera for most of my web browsing. Chat was covered by Adium on the Mac, so using its Linux/Windows counterpart, Pidgin, was no trouble at all.
After switching for a week and falling in love with Compiz and some of the features it offers, I feel like I can accomplish far more now on Linux than I was on Mac OS X. For me Mac OS X became the new Windows: I was fighting the system almost as much as I was doing actual work (between the IMAP code in Mail.app crashing and Safari leaking, I was not a happy camper). The one missing feature, however, was Dashboard. I'm not a religious user of Dashboard, but I always used it to keep little chunks of information stored away, either in post-it notes, or via clocks, tickers, etc. I'd not found a good solution until recently, by way of Opera widgets combined with one of the default Compiz Fusion plugins.
The concept behind Opera Widgets is exactly the same as Dashboard widgets: tiny little web applications, except that by default they run directly on your desktop. This wasn't going to work for me; I like to stash widgets away and access them through the trusty F12 button, as per usual with Dashboard.
Enter the "Widget Layer" plugin for Compiz Fusion, which allows you to write rules for placing regular windows from the window manager into a special widget layer that appears just like Dashboard does on Mac OS X (with the actual desktop faded out in the background). In order to group all Opera Widgets in the Widget Layer, you can set the "Widget Windows" field to:
role=opera-widget
Which will cause all enabled Opera Widgets to be available at a keypress of F12. If you're on Linux, I highly suggest you try it out; it's extremely useful, especially if you grab some of the more developer-focused widgets from the widget directory.
Of course, there's plenty of reasons to use Compiz. One of my favorite plugins is the "Annotate" plugin that allows you to draw on your screen, which comes in handy for going over interfaces with coworkers.
In general I feel the addition of Compiz to the Linux desktop is an important one; it drastically improves the rendering of windows, since it's essentially doing what Quartz Extreme does on Mac OS X: offloading some rendering onto the graphics card's GPU. Having really bitching eye-candy certainly doesn't hurt either. So far with Compiz I have what equates to "Spaces", "Exposé", and "Dashboard" from Leopard, along with a myriad of other goodies like "Wobbly Windows", true transparency, reflections on arbitrary UI elements and, of course, a fish tank inside my desktop. (If you're using openSUSE, the one-click packages for Compiz Fusion can be found here on the wiki.)
When addressing something as big and scary as, say, a platform built on Javascript, it forces you into looking at Javascript differently than I think most developers (myself included) have looked at it. Most Javascript that I've seen has been hideous: gobs and gobs of functions and procedural garbage thrown into a series of files that kinda makes sense, but really doesn't. It would seem that most developers charged with writing Javascript don't understand how to write object-oriented Javascript. In fact, about two or three months ago when considering topics to discuss in a front-end developers meeting here at Slide, I bit the bullet, raised my hand and said "Can you explain how to do object-oriented Javascript? Because I honestly don't have a fucking clue."
In the past, Javascript that I've written has been to complement existing backend web-application code and front-end code, i.e. I wasn't looking at Javascript as one of the building blocks of my application, I was looking at it as a bit of mortar spread between the cracks to smooth out the surface of the application. The difference in how you start to use Javascript in a web application makes an enormous difference six months to a year down the road. How terrible your code is (this isn't actually limited to Javascript) becomes far more apparent when other developers start to work with it as well; it's tremendously embarrassing to have to answer questions like "where's the code that generates that one DOM element?" As a general rule, coding all by your lonesome, especially on a tight schedule, will produce less than clean results (unfortunately Javascript is one of the languages I've found where this is more the norm than the exception).
A lot of what's driven the change from my Javascript being the mortar to being the bricks in my work has been the adoption of jQuery which I highly recommend along with the jQuery.ui library. jQuery makes developing Javascript feel like actual programming, instead of hackish-scripting, which means you'll start to view your Javascript code differently too. Dealing with scoping issues, and prototype-based programming in Javascript isn't all rainbows and butterflies but "doing it right" will help you sleep at night and help reduce the amount of embarrassing questions you'll have to answer to the next poor unfortunate soul that inherits your code.
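For anyone wondering what "object-oriented Javascript" actually looks like in practice, here is a minimal sketch of the prototype-based pattern; the `Notifier` names are purely illustrative, not from any real API.

```javascript
// A minimal sketch of prototype-based "classes" in JavaScript.
// Constructor: own state lives on "this".
function Notifier(name) {
    this.name = name;
    this.messages = [];
}

// Methods go on the prototype so every instance shares one copy.
Notifier.prototype.notify = function(message) {
    this.messages.push(this.name + ': ' + message);
    return this.messages.length;
};

// "Inheritance" by chaining prototypes.
function UrgentNotifier(name) {
    Notifier.call(this, name);  // run the parent constructor on this instance
}
UrgentNotifier.prototype = new Notifier();
UrgentNotifier.prototype.constructor = UrgentNotifier;

// Override a method, delegating back to the parent implementation.
UrgentNotifier.prototype.notify = function(message) {
    return Notifier.prototype.notify.call(this, message.toUpperCase());
};
```

The `Foo.prototype = new Parent()` idiom is the classic way to chain prototypes; it has warts (the parent constructor runs once with no arguments), but it's the technique most of the OO-Javascript resources of the day teach.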
Some of the resources I've found useful in getting over the barrier to object-oriented Javascript have been:
slide, miscellaneous, software development, javascript
I've been doing work with OpenSocial recently and have used the opportunity to bring my tolerance for (and talent in) Javascript up a notch or two. In doing so, I've been slowly but surely running into a myriad of browser-specific quirks, along with a few cross-browser gems that have left me thinking about putting some browser developers on my "To Anonymously Beat Up In Alleyway" list (so far, James Gosling and this man top the list).
After working on a few "classes" tonight (the notion that Javascript is object-oriented still makes me chuckle) I ran into an interesting problem with some of my global-level "constants" defined in the same file that I was working in, which my "class" just so happened to make use of. As I tend to do when I fall into situations like this, where I can't tell if I'm hallucinating or if something with Javascript has gone awry, I called over Sergio (in-house CSS master and Javascript Lvl. 60 Mage).
Some background to how Javascript works
Javascript engines essentially have two "modes" in which they run over your code and can spot errors. The first mode, "parsing", is where you'll find syntax errors spewing into the Javascript console. If you've used a bytecode-compiled language before (Python, Java, C#), this is really just "compilation". Using Python as an example, when you import a module (i.e. import some_module) the Python interpreter actually compiles your code into Python byte-code to be executed at a later date. The second mode, "execution", is where you'll run into your run-time errors: accessing an undefined object property, overrunning an array index, etc. In Python/Java terms, this is where your compiled byte-code is actually being run in the Python/Java virtual machine.
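The two modes are easy to demonstrate; this sketch (the variable names are just for the demo) shows the difference:

```javascript
// Parse-time errors reject the whole script before anything runs.
// If this file contained a line like:
//   var broken = { missing colon };
// you would get a SyntaxError at load and no statement would execute.

// Run-time errors, on the other hand, only surface when the offending
// statement is actually executed:
var caught = false;
try {
    var value = some_undefined_object.property;  // parses fine
} catch (e) {
    caught = true;  // ReferenceError raised during execution, not parsing
}
```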
The gripe
The crux of the problem comes down to two different ways to declare an associative array in Javascript, the following two notations are both correct and both "work":
Notation #1
var mapped_values = {};
mapped_values['key'] = 'value';
Notation #2
var mapped_values = {'key' : 'value'};
Everything looks correct yes? (hint: say yes)
Incorrect, because of the point at which the two are evaluated. The keys in the first example will be evaluated at run-time, whereas the keys in the second example are fixed at parse/compile-time. Who cares, right? The distinction becomes much more apparent when you start to use references to other code for your keys. Keep in mind, with both notations it is actually possible to write the declaration with a variable "foo" that was never defined. For example:
Notation #1
/* the variable "foo" is not defined */
var mapped_values = {};
mapped_values[foo] = 'value';
Notation #2
/* the variable "foo" is not defined */
var mapped_values = { foo : 'value' };
Both of these parse perfectly fine. (At run-time, notation #1 will actually throw a ReferenceError if "foo" was never declared, while notation #2 keeps working because the unquoted key is taken as the literal property name "foo", not as a variable reference.) Since I mentioned I'm working on OpenSocial, chances are I'm going to need to reference some of the OpenSocial code. So for the next example let's say I need to create an associative array with one of the keys defined by the OpenSocial container; using the two different notations I would write something like:
Notation #1
var mapped_values = {};
mapped_values[opensocial.DataRequest.PeopleRequestFields.FILTER] = opensocial.DataRequest.FilterType.ALL;
Notation #2
var mapped_values = {opensocial.DataRequest.PeopleRequestFields.FILTER : opensocial.DataRequest.FilterType.ALL};
Because of the different points in time at which the two notations above will be evaluated, #1 will properly "compile" and then execute correctly when called (regardless of the scope-level at which it is defined). The second one however, will fail to "compile" when the browser's rendering engine is loading the Javascript (also regardless of the scope-level at which it is defined), and will result in the following error at load-time:
"missing : after property id" (verified in both IE6 and Firefox)
The issue here is that, as far as the Javascript parser is concerned, the keys in an object literal have to be plain identifiers, strings or numbers; they are never evaluated as expressions, so the property accesses on "opensocial" cause the error at "compile-time". This will error at *any* level (as far as I've tried) so it's not a scoping issue, just an unfortunate fact of life with how Javascript is parsed and eventually executed in the browser.
Sergio and I tried a few more examples (the scope of which was in side a function, not at the global level):
// Works in IE/FF
var test = { magic : 'thing' };
var test2 = {};
test2[rick.roll] = 'thing';
test2[alert] = 'somethingelse';
var test3 = {};
test3[function() { alert('sux'); }] = 'test';
// Fail in IE/FF
var test4 = { opensocial['DataRequest']['PeopleRequestFields']['FILTER'] : opensocial['DataRequest']['FilterType']['ALL'] };
var test5 = { rick['roll'] : 'thing'};
var test6 = { rick.roll : 'thing'};
var test7 = { function() { alert('sux'); } : 'test'};
Object accessor calls work in "test2" for example because that's going to be evaluated at run-time instead of at compile-time, as is happening in "test5" and "test6". I would love to be proven wrong on our analysis of the issue here (our tests were less than scientific, and there may have been Corona involved) but switching from inline-declarations for associative arrays (var t = {'k' : 'v'};) to the more sequential alternative (var t = {}; t['k'] = 'v';) solved the issue of the Javascript engine's parser spewing errors on the loading of the Javascript.
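The difference can be sketched with a stand-in object; the "settings" object below is an assumption for illustration, not OpenSocial's actual API:

```javascript
// Stand-in for an object of constants like opensocial.DataRequest.*
var settings = { FILTER: 'filter-key' };

// Bracket assignment: the key expression is evaluated at run-time,
// so this creates a property named 'filter-key'.
var evaluated = {};
evaluated[settings.FILTER] = 'all';

// Literal notation: an unquoted key is taken verbatim as the property
// name and is never evaluated, so the property here is literally 'FILTER'.
var literal = { FILTER: 'all' };
```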
slide, facebook
It's been almost a whole week since I left Austin, but better late than never.
So far this month I've been fortunate enough to have been invited to speak at a few events, and one or two I just happened to wedge myself into anyways.
I spoke at Graphing Social Patterns West in San Diego the first week in March on a concept I feel I didn't have the time to really explain sufficiently, "Social Portability" (pdf). Unfortunately most of the audience weren't developers, so I adjusted the presentation to shoot a bit higher level than usual.
Following GSP I spoke at BarCamp Austin 3 on building ASP.NET sites on top of the Mono stack (pdf). The session was relatively small, so it broke down into much more of a round table discussion (we were sitting at a literal round table) about some of our experiences with ASP.NET on Mono through Apache2/mod_mono and Lighttpd/Mono-FastCGI, etc.
Also while at SXSW I spoke at the Facebook Developer Garage Austin on the same concept as before, Social Portability, except this time the audience was far more developer oriented so I could dive into some nitty gritty bits of FBML/FBJS caveats, etc. (pdf). The Developer Garage was especially fun because the Zuckerborg was in attendance, and I met more than my fair share of interesting developer-types who were either Texans themselves, or in Texas for the event.
March is barely half over and I'm already exhausted.
slide, facebook
I'm not yet certain what kind of audience is going to be attending Graphing Social Patterns West, so I'm hoping I can help tip the scales in favor of developers because, to be frank, business people scare me.
I was told about AppNite and it seems like a good excuse to try to get more developers to make the trek down to San Diego to keep me company in a sea of marketers and business folk. Better yet, developers who enter the AppNite contests get 50% off the admission to the conference (enter here). Unfortunately I'm not going to enter my apps in the contest, but I do know a friend of mine Jason Rubenstein, of Just Three Words fame, has entered his app, to give you an idea of the stiff competition you'll be up against.
If that doesn't seal the deal for you, Virgin America flies to San Diego now, and round-trip flights from San Francisco to San Diego are only ~$85. Which means if you're a Silicon Valley Facebook/Bebo/OpenSocial developer you can come hang out at the conference for cheap, and if your application is good enough, get some killer exposure to potential investors, business contacts, and other developers (like me!).
Zach Allia (of Free Gifts fame), Jason Rubenstein (Just Three Words), Ryan Romanchuk (Dipity) as well as the developers on the speakers list will all be there, so it should be a fun meeting of the minds (for developers at least).
cocoa, opinion, software development
Hate is such a strong word, but I think I can verifiably say that I hate Mac OS X (Leopard). In a past life I wrote Mac software on Mac OS X (Tiger) and everything was wonderful, I enjoyed using Mail, iCal, Xcode, Safari and even iTunes sometimes. I liked using my computer, I enjoyed using the tools handed to me by the gods on high in the mountains of Cupertino.
Now, a couple months after upgrading to Leopard certain that everything was going to be even more awesome than before, I type this from my openSUSE 10.3 workstation, with Opera, Thunderbird, Sunbird, Banshee and Gnome Terminals open all over the place. The tipping point was an afternoon at a coffee shop with my lovely MacBook Pro (code named "cherry") when I closed Safari entirely because it was leaking memory, only to open it again for about an hour and notice that it had started leaking again, and in the course of that hour had grown a memory footprint of 1.3GB.
Using Mail.app in Leopard has been nothing but a complete and total nightmare; somehow Mail.app's internal IMAP implementation can lock up the entire machine, causing the Finder, Safari and Terminal all to beachball while Mail.app churns for 15 minutes only to end up crashing. Too many of the stack traces I've watched Mail.app emit have been rooted in its IMAP support. Thunderbird is also a miserable piece of software, and I'm convinced that everybody except the one engineer I know at Mozilla is a complete and utter idiot, but when Thunderbird locks up, I can still use the rest of my system. Somehow Apple has munged the lines between userland and kernel space so much that userland applications can take control of the machine, leaving the user on the sidelines while applications compete for resources and bicker amongst themselves.
Time Machine and Spaces are the only two redeeming features in Leopard for me, but losing multiple modal dialogs in Spaces, and watching iChat or Adium steal focus and rip me from one space to another, became too much to handle. Somehow every window manager on the planet has gracefully supported multiple workspaces for well over a decade, and Apple still managed to do it wrong.
I really like Apple's hardware; I own a 20" iMac, 13" MacBook, 12" PowerBook and have a 15" MacBook Pro at work. But either Apple let go of, or lost, some really good engineers in their famed deathmarches up to releases, especially the release of Mac OS 10.5. I'm sure the iPhone is a great revenue stream, the Apple TV is "cute" and iPods are still my preferred portable media devices, but I never realized that when "Apple Computer, Inc." became "Apple, Inc." it was going to have such an effect on the quality of their products.
Now I have a split partition on my MacBook Pro, and I intend on putting openSUSE on the machine this weekend. My trust in the open source community, as low as it is, is now higher than my trust in Apple to release quality software. Grrrr.