don’t do this


Fatal error: Unknown column 'u.status' in 'where clause' query: SELECT u.*, s.* FROM users u INNER JOIN sessions s ON u.uid = s.uid WHERE s.sid = 'd875bff2aebf8ef0709e517ce1f1d1c1' AND u.status

questions: vim and wordpress interop

I like to edit text using vim. I’d like to use vim to work on my wordpress posts. I like using a textwidth of 78. However, if I leave the newlines in the source, wordpress interprets each one as a hard linebreak. If I don’t use a textwidth, it’s harder to navigate through my source. I’d actually prefer to fix wordpress’ behaviour. Anyone know how?
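In the meantime, one workaround on the vim side (rather than fixing WordPress): filter the post through a little script that joins the wrapped lines back into paragraphs before publishing. A rough sketch in Python — the blank-line-separates-paragraphs convention is an assumption:

```python
import re

def unwrap(text):
    """Join hard-wrapped lines within each paragraph, keeping blank
    lines (paragraph separators) intact."""
    paragraphs = re.split(r'\n\s*\n', text.strip())
    return '\n\n'.join(' '.join(p.split()) for p in paragraphs)

wrapped = "These lines were wrapped\nat 78 columns by vim.\n\nNext paragraph."
print(unwrap(wrapped))
```

From vim you could run something like `:%!python unwrap.py` over a copy of the buffer before pasting into WordPress, so the source keeps its textwidth and the post loses the hard breaks.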

one way to improve amazon

This was actually Tantek’s idea, from a microformats discussion. Support “TITLE by AUTHOR” syntax. Use “by” to split the query.
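A sketch of what that parsing might look like (my own guess at the details, not Tantek’s or Amazon’s): split on the last “ by ” so titles that themselves contain “by” still parse.

```python
def parse_query(q):
    """Split a 'TITLE by AUTHOR' query into (title, author).
    Splits on the *last* ' by ' so titles containing 'by' still work;
    returns (q, None) when no ' by ' is present."""
    head, sep, tail = q.rpartition(' by ')
    if sep:
        return (head.strip(), tail.strip())
    return (q.strip(), None)

print(parse_query('The Old Man and the Sea by Ernest Hemingway'))
print(parse_query('gravity'))
```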

one way to improve firefox 2 tabs

The “close tab” button in firefox 1.5 had a really nice feature: monotony. That button was always in the same place, no matter what. The tabs in firefox 2.0 have individual close buttons. While this helps us immerse ourselves in the interface thanks to the effects of direct manipulation, it makes it very frustrating to close several tabs in serial. Usually I end up scrolling all the way to the right (thank goodness scroll works!!), and then for most tabs, the close button stays in the same area. Until there are too few tabs, and firefox decides not to stretch the tabs all the way across. Then I have to hunt around for the button.

To fix this, two approaches could be taken: always stretch tabs across the whole horizontal interface (thereby ensuring the placement of that close button in the same place every time). Or, I think it would also be creative to use the current sizing algorithm, but simply make the tabs right justified, instead of left justified. Right justified also helps: because of the placement strategy, the newest tabs are always to the right. I assume that when actively interacting with tabs, the more interesting ones are the newer ones, but some user research could verify this.

webforms 2 submission/validation model

I was reading some of the webforms 2 spec when I stumbled across the steps for form submission. The first step is to check the form for validity, presumably using an implementation in the user agent. The 6th and 7th steps deal with encoding and sending the form to the server, and step 8, the last one, deals with handling the response from the server.

I’m a bit wary of the value of spending energy implementing validation on the client. The server-side processing always needs to implement it anyway. As far as I can tell, there’s no mechanism for the application on the server to know that the content has been properly validated by a compliant agent in all instances. Therefore, the value of any validation happening on the client is a shorter feedback loop for content-producers (users). However, for this feedback loop to be of use to users, they will need more information than the type and format of the correct input. The information most needed is why the particular application they are currently interacting with finds their input unacceptable.

I suggest creating a protocol (perhaps based on the Atom Publishing Protocol) for exchanging messages between the client and the application. This would allow users to continuously correct their input until the application responds that the impending submission will be processed faithfully. Then users can commit their changes knowing that the application won’t simply reject them, and developers can focus their implementation energy on a single authoritative implementation. The strong typing characterized by the web forms 2 spec could be implemented as a base class available for application developers to subclass as appropriate for their own applications.
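To make the idea concrete, here’s a minimal sketch (plain Python, not tied to any real protocol; the field names and rules are invented for illustration): the server exposes its one authoritative validator, and the client keeps exchanging drafts until the validator returns no complaints, then commits.

```python
# One authoritative, server-side validator.  The client never duplicates
# these rules; it only relays the server's explanations to the user.
def validate(form):
    errors = {}
    if '@' not in form.get('email', ''):
        errors['email'] = 'email must contain an @'
    if not form.get('age', '').isdigit():
        errors['age'] = 'age must be a whole number'
    return errors  # empty dict: the impending submission would be accepted

# The client loop: keep correcting until the server has no objections.
draft = {'email': 'bewest', 'age': 'thirty'}
while validate(draft):
    print(validate(draft))  # tells the user *why* each field is rejected
    draft = {'email': 'bewest@example.com', 'age': '30'}  # user fixes input
print('server reports no errors; safe to commit')
```

The point of the sketch is that the “why is this unacceptable” messages come from the same code path that will process the real submission, so a compliant user agent buys a short feedback loop without anyone trusting the client.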

len() calls to python strings

I’m pretty confused about the results I’m getting from a little python test involving len() on strings. Since strings are immutable, I was wondering what happens when len() is called, and what performance considerations I may need to keep in mind. I figure if it was immutable, calls to len() might get cached, and there may be very little performance impact. However, what I found was that over 1 million iterations, twice as many len() calls on a string is roughly 3 times slower than an iteration with only one call. When I called the string’s __len__() method instead, the two are much more comparable, and the one with a single call actually took more time than when calling len(). Weird. Can anyone further enlighten me? Perhaps my testing method is bad (I know it’s not a very good test… I’m just doing it because I can’t sleep at the moment.)
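For what it’s worth, here is a sketch of the same comparison using timeit (this is not the original test, and the absolute numbers will vary by machine). In CPython, a string object stores its length in a size field, so len() is constant time regardless of immutability; differences like the one above are more likely loop and call-dispatch overhead than len() recomputing anything.

```python
import timeit

s = 'x' * 1000

def one_call():
    len(s)

def two_calls():
    len(s)
    len(s)

def dunder():
    s.__len__()

# len() reads a size field stored on the string object, so each call is
# O(1); whatever differences show up here are call/dispatch overhead.
for fn in (one_call, two_calls, dunder):
    print('%-10s %.3fs' % (fn.__name__, timeit.timeit(fn, number=1000000)))
```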

idea:schema2class using breve and metaclass

I’m sorry this renders so poorly. WordPress won’t let me use CSS, so I’m stuck, until I get a proper site of my own.

Try either http://deadbeefbabe.org/paste/4031 or http://pastebin.ca/399774 to see it better.

""" An experiment using the breve xml library, and metaclasses.

Author: Ben West
Email: bewest at gmail
blog: https://bewest.wordpress.com/

I hang out on freenode.

Breve is available at .
From the website:
  Breve is a Python template engine that is designed to be clean and elegant
  with minimal syntax.
  Like Stan (and unlike most Python template engines), Breve is neither an XML
  parser nor PSP-style regex engine. Rather, Breve templates are actual Python
  expressions. In popular parlance, Breve is an internal DSL.

This seemed like a really elegant idea to me.  I began to wonder about
generating classes that would allow easy manipulation of XML documents, given
a schema.  The simplest way to create a new xml tag is to call Proto(),
with the name of the tag as the argument.  I don't know if there is a way to
also control the attributes as you are generating the XML.  If there's not,
there should be.

Once I got some simple hello world tests going with breve, I started fooling
around with ways to create new sets of tags.  I kept fooling around, trying to
reduce the number of steps necessary, until I came up with the current
contents of this experiment.

The basic idea is to reduce the amount of work required for an author to start
creating valid templates in breve for a known XML vocabulary that has a
schema.  This experiment goes far enough to show that this feature is
definitely possible.

The customtags class implements a callable.  The reason it is a class is so
that other authors might override certain aspects of it, such as findTags or
getDocString.  When an instance is called, it will construct a type (actually
a metaclass) in which the __dict__ contains all the breve tags found by
findTags.  As a bonus, the docstring is also populated, providing the author
with rich documentation on that class, potentially coming directly from the
schema.

Using this technique, it would be possible to design a package, in which the
__init__.py inspects a directory for the presence of schema files.  At
runtime, this package could create all the classes necessary to start creating
templates for those schemas.  Any time the schema changes, the class also
changes, on the next import.  I believe this makes it preferable to code
generators that create python code containing the tags on disk.
"""

import sys
import breve
from breve import tags
from breve.tags.html import tags as T
import doctest

def hello_world():
  """Hello world example.

  >>> str( hello_world() )
  '<div>hello world</div>'
  """
  # breve takes care of html automatically.
  return T.div[ 'hello world' ]

def simple_customtag():
  """Simple custom tag hello world example.

  >>> str( simple_customtag() )
  '<foo>hello world</foo>'
  """
  newtag = tags.Proto('foo')
  return newtag['hello world']

# First define the callable class I discussed above.  Its instances will be
# responsible for returning a type representing an XML schema when called.
class customtags(object):
  """This is an experiment to dynamically provision some tags.

  Anyway, this is neat because you can load this up in the python shell and
  get detailed help.  This is useful for having a config file that specifies
  the location of some XML schema.  You can change the schema, and your code
  will update automatically.

  XXX: This is probably bad practice because if something goes weird, it
  would be nearly impossible to debug ;-)  However, this would facilitate
  importing schema definitions as a module, and getting rich help from the
  schema's documentation using the normal python methods.  (Eg, imagine
  putting this in __init__.py, and creating a class for each schema in that
  directory.)

  For example:

  >>> class MyTags(customtags()()): pass
  >>> newtags = MyTags() # instantiate it
  >>> str( newtags.feed['hello world'] )
  '<feed>hello world</feed>'

  This example consisted of just plain calls with no parameters, so it may
  seem a bit over the top, without a full implementation of all the features
  previously described.
  """

  def __init__(self):
    self.tags = self.findTags()

  def __call__(self, *args, **kwds):
    """Construct and then return a type representing a schema.
    """
    doc = self.getDocString()

    class metaTagSet(type):
      def __new__(cls, classname, bases, classdict):
        classdict['__doc__'] = doc
        # add each element as a breve tag to the new class's __dict__
        for e in self.tags.keys():
          classdict[e] = tags.Proto(e)
        # TODO: is this preferred, or should I use super()?
        return type.__new__(cls, classname, bases, classdict)

    # create a quick wrapper, so we can use normal inheritance syntax.
    class TagSet(object):
      __metaclass__ = metaTagSet
    return TagSet

  def getDocString(self):
    """Returns the doc string for the class.
    """
    doc = """A bunch of tags for use in breve, an s-expression xml generator.\n"""
    for e in self.tags.keys():
      doc += "%s: %s\n" % (e, self.tags[e])
    return doc

  def findTags(self):
    """In the future, this could simply parse a schema.
    """
    # For now, just a dumb dictionary of element names and a doc string.
    return { 'feed' : """A feed element contains comments, service, and
                         collection.  It is the root element.""",
             'comments' : """comments is for containing a comment from one
                             source.""",
             'service' : """A service describes a set of capabilities.""",
             'collection' : """Useful for containing members, and related
                               things.""" }

# wow... dynamically bind properties to a /class/ at runtime!
# imagine an __init__.py that inspected the contents of a directory for schema
# files, and then exported one of these for each schema. :-)
# You get a nice literate programming environment, because your declarative
# documentation gets completely re-used in your procedural code, without you
# repeating yourself at all.
class CustomTags(customtags()()): pass

def main(args):
  """A slightly more involved test... erm.  Basically, if we get back xml,
  the test works. (and it does)
  """
  template = hello_world()
  #t = breve.Template ( tags.html.tags, doctype = '', xmlns = '' )
  print template
  newtags = CustomTags()
  print newtags.feed[
    [ newtags.comments[ newtags.service, newtags.service ] for i in
      xrange(3) ],
    newtags.collection,
    newtags.collection,
    "hello world" ]

if __name__ == '__main__':
  main(sys.argv)
  doctest.testmod()

reminiscing: pdq bach and serpents

I started whistling the “O Serpent” song today, and felt compelled to inform my roommates of all the information I had on it. In researching, I came across Doug Yeo’s site, and his PDQ Bach web page. I started reading, expecting general information about serpents and PDQ Bach, but it was mostly about a specific concert. Moreover, it was a concert I performed in. In fact, I met Doug, and all of the people in the pictures. It was a really great concert. Imagine my surprise when I scrolled down to the very last picture and saw myself!!

If I can link to it: . This picture has William Walter (left), who is the stage manager for Peter Schikele. On the right is one of the serpentists(?), named Craig Kridel. In the middle of them, in the background, is me! This was taken at the end of December, 2005.

That was a really great concert.

TODO: Join a choir in San Francisco. At least one. Maybe two or three.

ec2 is nifty

I saw a really neat ec2 setup (http://www.zeroflux.org/blog/post/view?id=224) that balances load across an ec2 vpn connected to a real “master”.