Polaris and Epsilon Gruis

One of my job responsibilities is interviewing candidates. Finding talent is a high priority, so I take it seriously, but I also enjoy it, since it gives me a chance to think about fun puzzles and problems (algorithmic or otherwise). However, interesting problems are not by themselves good interview questions. Good interview questions must achieve a delicate and complex balance: they have to be simple enough that a candidate can start forming thoughts within a few seconds, but multifarious enough that you get a good “image” of a candidate’s capabilities and thought process. A question which is good or clever might ultimately be a bad question to ask during an interview. I’m going to write about one such problem, which I rejected as a poor interview question but which I still find very interesting.

The question is this: Consider the star Polaris, the North Star. Its prominent cultural significance notwithstanding, Polaris ranks only 48th in apparent brightness as seen from Earth. Of course, in a universe with billions upon billions of stars, many more must appear less bright to us. Let’s consider one, incidentally named Epsilon Gruis, which just so happens to appear only one-fourth as bright as Polaris. Can you estimate for me, please, where Epsilon Gruis ranks on that list of brightest stars?

Some may note I am using “apparent brightness” rather than “apparent magnitude.” Though the latter term is more common, my choice is deliberate, since I could hardly expect most people to be conversant with astronomical terminology, and apparent magnitude seems so perverse to the uninitiated: a higher magnitude indicates lesser brightness! Further, it is logarithmically scaled, so that if one object has a magnitude one higher than a second object, the second object appears the fifth root of 100 (about 2.512) times brighter than the first. This definition evolved for historical rather than practical reasons. When I “tested” this on my workmates, I found we spent more time clarifying the metric than working on the problem itself; they, as a rule, quite understandably found it absurd. By constructing this story about Polaris and Epsilon Gruis (which do in fact have the properties I describe), I remove that difficulty while retaining the essential feature of the problem. Apparent brightness is very easy to explain: it is the amount of energy from the object reaching a fixed area per unit time.
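
(For the curious, the magnitude convention boils down to one formula. Here is a small Python sketch, nothing specific to this problem, showing that “one-fourth as bright” corresponds to being about 1.5 magnitudes fainter.)

import math

def brightness_ratio(delta_mag):
    # The factor by which a star delta_mag magnitudes higher is fainter;
    # one magnitude corresponds to 100**(1/5), about 2.512.
    return 100 ** (delta_mag / 5)

def magnitude_difference(ratio):
    # Inverse of the above: how many magnitudes fainter is a star that
    # appears 1/ratio times as bright?
    return 2.5 * math.log10(ratio)

print(brightness_ratio(1))      # ~2.512
print(magnitude_difference(4))  # ~1.505 magnitudes for "one-fourth as bright"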

So, let’s see about solving it. The “one-fourth brightness” is itself a hint. Imagine two identical bright objects. If one were twice as far away from us, we would perceive it as roughly one-fourth as bright. This can be readily apprehended by imagining an enclosing sphere. The amount of energy (which I express as “brightness”) passing out of that sphere cannot depend on the radius, by the laws of thermodynamics. However, if we double the radius of the enclosing sphere, we quadruple its surface area, and correspondingly cut to one fourth the expected number of photons which would reach a region of fixed size on either sphere. (In this case the “fixed size” is a pupil, and the amount of energy reaching that pupil can be thought of as brightness.) You can imagine degenerate cases, e.g., light emitted with zero beam divergence along some dimension, but no star behaves that way.
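
(Stated as a formula, in a toy sketch with arbitrary units, the enclosing-sphere argument is just the inverse-square law: the source’s luminosity is spread over the surface of a sphere whose radius is the distance.)

import math

def apparent_brightness(luminosity, distance):
    # Energy flux through a unit area at the given distance: L / (4 * pi * d^2).
    return luminosity / (4 * math.pi * distance ** 2)

near = apparent_brightness(luminosity=1.0, distance=1.0)
far = apparent_brightness(luminosity=1.0, distance=2.0)
print(far / near)  # 0.25 -- doubling the distance quarters the brightness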

Not all stars are identical; they have wildly varying intrinsic brightnesses. The preceding “sphere” argument, however, posited two identical objects. Its applicability to this situation seems questionable, but we can abstract the difficulty away by imagining the stated problem as being composed of an infinite number of sub-problems, where each sub-problem is identical to the full problem except that it considers not all stars, but only the stars with one particular intrinsic brightness. Obviously, with identical intrinsic brightness for the stars in each of these sub-problems, “distance” and “brightness” become easily relatable once again. The original problem is just an integration over those sub-problems. So, this detail can be safely ignored.

We need only believe that, for each intrinsic brightness in these sub-problems, the local region of space is not special. This is of course taking isotropy a bit too far – or I guess I should say a bit too close? – but it is probably true enough for a toy problem. If we lived in the middle of a cluster or something, more care might be required.

So, we merely have to compare the number of stars in one sphere versus a sphere with twice the radius. A radius twice as large implies a volume eight times as large. If we suppose stars are distributed throughout three-dimensional space, then we’d expect there to be eight times as many stars. Therefore we’d expect the rank of Epsilon Gruis to be 48×8=384.

However, when I said I resigned myself to isotropy, I lied. As anyone who has finished grade school knows, our sun is a star in a spiral galaxy named the Milky Way, which is easily visible in the night sky, at least outside of cities. So, the stars are distributed roughly in a plane rather than uniformly filling a three-dimensional space. If all observable stars (including our own) were co-planar, then as we increase the radius the stars would be distributed not through volume, but across area. So, we would expect four times as many stars when we double the radius, and the rank of Epsilon Gruis would be 48×4=192.

Of course neither of these extremes is true; the true fractal dimension is somewhere between two and three. If it were two we would see the stars as one brilliant line across the sky and perfect blackness elsewhere, and if it were three there would be no obvious clumping. So what is a happy middle ground? Because I am human, in the face of uncertainty I have a strong, nearly overwhelming bias in favor of uniform priors, and because I am foolish I think the average is the only summary statistic. :) So, my guess is the average of 192 and 384, which is 48×6=288. The real answer is 285. I am only three off. :)
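
(To put the whole estimate in one place, here is a small sketch of my own framing of it, treating the dimension of the star distribution as a parameter. Splitting the difference geometrically, at a dimension of 2.5, happens to land at about 272, also not far from 285.)

# Rank estimate as a function of the effective dimension D of the star
# distribution: a star 1/4 as bright is (all else being equal) twice as far
# away, and the number of closer stars grows like (distance ratio)**D.
polaris_rank = 48
brightness_ratio = 4                        # Epsilon Gruis appears 1/4 as bright
distance_ratio = brightness_ratio ** 0.5    # inverse-square law: twice as far

for dimension in (2, 2.5, 3):
    rank = polaris_rank * distance_ratio ** dimension
    print(f"D = {dimension}: estimated rank ~ {rank:.0f}")
# D = 2: 192, D = 2.5: ~272, D = 3: 384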

That’s it. I never posed this question to anyone aside from (1) coworkers, (2) a couple of interviewees who had passed the vanilla questions “early” and were keen to hear what I described as an interesting problem, and (3) my father-in-law. All were flummoxed, to the point of being unable even to begin. The first group is especially concerning, since any interview question that would lead you to reject colleagues you value is obviously unacceptable.


The Implications of the Maximize Button

The maximize button in the Microsoft Windows UI is interesting. While I had obviously been aware that Windows had such an interface idiom, it was not until I’d been required to use Windows in my daily work, used Windows programs day in and day out, and seen the developer culture at Microsoft, that I began to appreciate its profound impact on application design and developer psychology. The fact that every window gets a button devoted to letting it take up the entire screen is very far from being an ancillary detail. In many respects its presence (or absence) encapsulates the entire philosophy underlying application design on the platform, as well as shaping the behavior of users.

In the interest of presenting a counterpoint, I will contrast Windows against the Mac OS interface. I choose Mac OS because it differs in this important respect. None of the Linux UIs are considered because this is a discussion of UI philosophy; the design of Gnome and other desktop environments or window managers was not really driven by any cohesive, unifying UI philosophy at all, beyond “imitate Windows.”

So, let’s begin. Humans perform tasks best when free of distractions. A computer interface should let a user focus solely on his workspace. This much is uncontroversial. The next step is where opinions branch: If we suppose “workspace” and “window” are identical, then a maximize button makes sense; all other content is extraneous ex hypothesi, and unused space is wasted space. If we suppose a workspace can consist of multiple windows, possibly from multiple applications, then a maximize button does not make sense; it is harmful, because it hides portions of the workspace.

The presence of such a button in turn informs much of application design. There are practical and psychological consequences for maximizing, or not maximizing.

For maximizing: The mental cost of switching to another application context is high, especially if that context is yet another maximized application. There is nothing visually connecting one world to the other. There is therefore pressure to engineer applications so that all tasks related to a user’s workflow can be accomplished within the context of a single application. The workflow is integrated.

For not maximizing: When workflow is spread out among multiple applications, there is not a pressure to make any individual program do everything, but there is more pressure to make applications play well with each other. Completing complex tasks requires not one application, but an ensemble of applications. (This bears a strong resemblance to the Unix philosophy of small tools with simple interfaces.) The workflow places value in interoperable applications.

Now, Windows certainly has small tools, and the Mac certainly has all-in-one applications (increasingly and especially those made by Apple itself, the recent phenomenon of the so-called Apple ecosystem), but each platform has a definite idiomatic preference for one type of tool or the other.

I’m going to use making a presentation as an example. On Windows the favored tool for making presentations is PowerPoint, and on the Mac it is Keynote. Keynote embodies this “Mac” philosophy of interoperable applications very clearly. Keynote’s native diagramming tools are, to put it kindly, minimal; but no matter, because one can simply insert diagrams made with OmniGraffle. Neither of these two tools tries to do what the other does – at least not in any meaningful way – nor are they especially aware of each other, but Keynote plays well with graphics produced by other programs. The result is that you can easily produce excellent diagrams in an excellent presentation, despite the fact that each of the involved tools is only excellent at one of those tasks, and minimal or nonfunctional with respect to the other.

In contrast, PowerPoint, despite having superior native diagramming utilities, plays very poorly with external content. For example, it insists on rasterizing PDFs and other types of vector graphics. If I do the same exercise – copy and paste an OmniGraffle vector graphic into Mac PowerPoint 2008 – it rasterizes the diagram into a badly pixelated graphic with glaring compression artifacts. Why, PowerPoint, why? Windows PowerPoint plays well with Visio, but only because the two are part of the same “Office” platform and significant special effort was spent making those two programs interoperable with each other, without increasing their general interoperability.

On the other hand, the integrated way often has some really strong benefits. Since my work involves a fair amount of mathematics, my presentations are rather heavy on equations. Keynote has no equation editing capabilities. The preferred solution is to use a third-party utility like LaTeXiT. However, as good as that program is (and it is quite excellent considering the constraints in which it is forced to operate), the overall experience is painfully awkward. Equations don’t even reflow in Keynote, because they’re treated as floating graphics; if you have some equations “inline” with your text, you have to put spaces in the text and put the equation in the gap. If you add text anywhere that moves your existing text, you have to readjust the equations. Further, because they’re simply graphics, they don’t change with, say, themes. (Not that I like themes, but for those who do, it would be nice if resetting the theme were at least possible without having to reset each and every equation, as though these were the old days of the letterpress and we were dealing with movable type or something.) The whole experience is pretty awful. The method can’t really compare in ease of use to the general Office (including PowerPoint’s) method of hitting Win-Equals, typing your equation, and being done with it.

That said, the Keynote-LaTeXiT approach does have one very important advantage. Long hard-won experience has made my ability to type LaTeX equations reflexive. For example, in my numerical analysis course in grad school, I typed notes involving some fairly aggressive and visually complex equations as quickly as the professor was able to write them on the board. A LaTeX based solution allows me to draw upon substantial experience to perform a novel task. Moreover, I could use the same macros and special style I had developed for my academic papers in my presentations. The integrated equation editor in PowerPoint, while it does happen to respect a subset of LaTeX syntax for simple symbols, is otherwise a totally unfamiliar environment.

There is a larger point: the integrated solution has the flaw that identical subtasks within different containing tasks are often accomplished in completely different and unfamiliar ways. So, if your workflow involves various combinations of n subtasks, you might not have to learn n different ways of doing things as in the interoperable world, but rather as many as 2^n or n!, depending on how tolerant the applications are of reordering of subtasks.

The dual of this from the developer’s perspective is that large chunks of development effort are bent towards simply duplicating functionality that exists elsewhere. This is inefficient; that effort could instead be spent making the program’s core, distinctive functionality really efficient and excellently engineered. This pressure on the developer is increased by users of the platform, who are used to integrated solutions and demand them.

To phrase the argument slightly differently, the integrated approach, with its expectation that an application handle not only one task but all tasks ancillary to it, leads to lousy software. A single developer will do different things with varying competence and enthusiasm. For example, there are few languages (programming or formatting) that are content in their Windows distributions to be mere languages; they typically feel the need to bundle a specialized editor. I am thinking particularly of LaTeX distributions (which nearly always bundle an editor and viewer), and some programming languages (Python’s bundling of IDLE, the so-called Python IDE, is far more prominent on Windows than on any other platform). Of course, the effort spent making an entirely new editor is effort that could have been spent improving the support of existing, familiar general-purpose editors for that particular language.

Naturally, the counterargument is that some functionality would have been difficult or impossible to achieve in existing editors, and can be accomplished by starting fresh with an integrated, special-purpose editor. However, the user of these tools gets those specialized advances at the cost of using an editor which is lousy at its primary function. It is rarely worth giving up the ability to actually edit text properly on the strength of one or two clever gimmicks.

In the integrated world, because tasks are expected to take place within a single program or application, there is less pressure to make applications interoperable. This bias against interoperability occurs even in places where it would be a really fantastic idea, and not hard at all to implement.

For example, in OS X when you drag a file into a save dialog in any other application (e.g., I drag a file from the Finder, or from the icon in the title bar of a window), the save dialog moves to the corresponding folder. Very simple, and very useful; often the place you want to save a file is exactly the folder where you are working in some other application. On a related note, if you drag a file to the Terminal, you get the path of that file; this is fantastically useful as well. Also, there is the near non-existence of modal dialog boxes on the Mac, compared to Windows where modal dialog boxes are the norm – preventing a user from moving on to other tasks is far more poisonous and annoying in an interoperable environment than in an integrated environment. These are really tiny, petty things, but a truckload of tiny, petty things adds up.

What is somewhat interesting is that the Mac platform is changing. Apple has in the last decade or so taken to making “integrated” applications. Consider iMovie and iPhoto, which support the entire workflow of viewing, categorizing, editing, exporting, and publishing movies and photos, respectively. Perhaps this is a symptom of their growing success; as a developer grows more popular, the temptation to control every aspect of a task must become overwhelming. Naturally, in these applications, the “zoom” button (the green plus) acts like a maximize button.


The Awful Tychus Findlay

I’ve been enjoying the single-player campaign of Starcraft II. It’s great fun. The tremendous variety of goals and mechanics makes each level a fresh, unique experience. They are not so much “levels” per se as mini-games with rules all their own which just happen to use Starcraft units as gamepieces. The Mass-Effect-like mechanic of collecting resources to spend on troop upgrades between missions is likewise really engaging.

The story of that campaign is another matter entirely. Though the technical mechanics of the storytelling have advanced considerably beyond the disjoint unsynced-talking-heads-in-squares of yore, the actual writing is horrifyingly bad, a Transformers-2-esque grab-bag of cliches. It’s not that I expect high literature, but some small measure of coherence and sense would be appreciated. What is more, the shoddy writing is quite incongruous with the obvious care and love of craftsmanship that went into gameplay.

Things were rough from the start. When we first reacquaint ourselves with Jim Raynor, the game’s hero, he’s drinking hard liquor and watching TV. As it happens, one of the game’s villains is on TV, talking about him! This angers Jim, so we’re treated to a hearty “it ain’t over till it’s over you son of a bitch!” And he shoots the TV with a revolver. Wow.

That was actually one of the more tolerable sequences, if for no better reason than the absence of our second hero, Jim’s old buddy Tychus Findlay. Like all the characters, Findlay has a conversational repertoire mostly limited to trite one-liners, but he is rendered all the more intolerable by his characterization; he’s a bulky, muscle-headed half-wit who talks in a languid southern drawl. It’s somehow much more annoying than I just made it sound.

I list here a few gems from Tychus, each prefixed by its context:

The zerg have attacked the human worlds. The death toll is in the billions (somehow billions of humans arose from forty thousand original settlers in the span of two hundred years; whatever). Tychus has just finished watching a video where Kerrigan herself rips apart a squad of armed soldiers with her bare hands. His appraisal of this threat? “Seems this queen of blades got everybody runnin’ scared! She don’t look so tough!” Right. You can take her Tychus! Please, go for it.

Raynor rescues survivors of a human colony. Their leader, a doctor who becomes your resident scientist, has seen the deaths of her fellow colonists, the destruction of her home, and other miscellaneous horror, doom, destruction, etc. How does Tychus comfort her? “I asked that sweet thang if she’d like to give me a physical.” Yeah… I think most people outgrow that chestnut by the time they reach their teens.

These selfsame colonists then settle on a new world, whereupon they are infested with the “zerg virus” which turns them into shambling zombies. His expert analysis? “Those colonists sure do have some zerg troubles!”

Oh, Tychus… Tychus Tychus. You insufferable jackass.

He’s utterly redundant, and from his introduction he’s clearly there so that he can betray Raynor at some critical point. I’m a little unclear on whether we’re supposed to know this or not; it seems really clear, but they keep foreshadowing in a way which suggests it’s supposed to be a mystery to us? I do wish they’d get it over with and be done with the beef slab. However, owing to the “multipath” mission structure, it would be technically awkward for it to happen anytime soon.


OK, I have now finished Starcraft II. The campaign missions remained delightfully fun and varied. They even managed to make the RPG squad based missions tolerable. Unfortunately, the writing remained execrable and senseless. I’m rather amused by how inept it is, so I think I’ll continue my screed.

Regarding the betrayal, Tychus waited until the absolute end to betray Raynor, so we had to suffer through his mouth-breathing charm the entire game. Throughout the game it seemed clear the writer didn’t expect us to know Tychus was going to betray Raynor, which I found really confusing, given Findlay’s introduction with Mengsk talking about the price he’s going to pay for his freedom. The betrayal was: he had been blackmailed by Mengsk into killing Kerrigan. Yeah. So, I’m trying to comprehend Mengsk’s plan here. Mengsk wanted to kill Kerrigan; fair enough. He somehow felt that the best way to accomplish this was to let this random criminal Tychus go free, and blackmail Tychus so that, on the off chance he encountered Kerrigan, and Kerrigan was vulnerable, Tychus would shoot her. As far as plans go, that’s impossibly stupid; its success relies on a confluence of highly implausible events and happenstances that would have been impossible to anticipate at the time when Tychus was released. What is more, at the time Tychus was released, absolutely no one would have needed to be blackmailed into killing Kerrigan – anyone would have been only too eager to do so. The plan, and hence the plot, doesn’t make one iota of sense.

On the subject of Kerrigan, she was one of the more regrettable casualties of the terrible writing. Starcraft I’s writing and story were uneven, perhaps, but it often succeeded in making Kerrigan delightfully evil. Under the tender mercies of Starcraft II’s writers, though, she has been diminished to petulant, incoherent outbursts. Here are two examples. She gives her first line in the campaign after you’ve stolen an alien artifact from under her nose: “I forgot how resourceful you were, Jim. I won’t make that mistake twice!” Aside from being trite, such a statement renders her impotent. Obviously you’re going to win, and continue being resourceful, so this declaration becomes nothing more than empty words. She clearly makes the mistake not just twice, but again and again through the course of the campaign. So, we lead off by making our lead villain appear ineffective and weak. Nice. The last thing she says in the final mission is “you will pay for this treachery” after you beat back the last of her attacks. I found the comment bewildering. So… um, defense is treachery? This rather makes me suspect the writer doesn’t know the definition of treachery, which seems likely given the volume of other malapropisms.

Then we have our hero’s characterization. Jim Raynor is supposed to be a pathetic drunk. We are treated to lots of scenes of him drinking, and in any one of these scenes he drinks a lot, gulping down in a single instant twice what I’d be hard-pressed to imbibe over the course of a lengthy evening. Curiously, he is never affected in any way by his drinking. Part of having a character be a pathetic drunk is that he is drunk. That’s kind of an important component of the characterization that they somehow missed. I wouldn’t have guessed it was even possible to make that oversight.

Aside from the content, some weaknesses are introduced by the technical structure. The non-linear structure has some disconcerting effects. You go from discrediting Mengsk and fighting his primary military commander (the imaginatively named “General Warfield”) to fighting shoulder to shoulder alongside this same commander. He accepts your help without so much as a sidelong glance, and with Mengsk seemingly as powerful as ever. More broadly, the non-linear progress often gets in the way of telling a cohesive story; the pieces of the story are necessarily modular, but a story itself is almost by definition non-modular; a modular part of a story is a part you can do without. Further, the fact that you can talk to people in any order means that all conversations occur seemingly in a vacuum. The overall effect is a heightened sense of emotional flatness and meaninglessness, almost as if everyone has been lobotomized, or as if I were observing it through a dream. I think this might be a general problem with non-linear structure; I observed a similar uncanny effect with Mass Effect 2 – though strangely, Dragon Age had similar non-linear mechanics and avoided the uncanny effect somehow.

All this said, I’m perfectly delighted with Starcraft II. In the end I bought a game, not a novel, and the game itself is stellar. It’s not like story matters all that much in an RTS compared to, say, an RPG. I’m just rather amused the story is as bad as it is.


New Blog: Defaults and Sidebars

This is my new blog site.

When my homepage was booted from the Cornell CS department webservers and forced to make its own way in the world, I quickly gave it a home with a web hosting company. Their most basic plan offers far more capacity and features than I would probably ever use. My website is not terribly spectacular or sophisticated: it’s a motley collection of PHP and HTML I had thrown together to document the artifacts of my graduate career, an intentionally minimalist work mostly describing myself and my formal projects as briefly as I thought would do.

Nonetheless, with shiny new features and capabilities comes the temptation to use them. I may as well; I’m paying for it. So, I set up a blog using one of my ten allowed databases and one of my ten allowed subdomains. I already had a blog I started in May, but I wanted to maintain it all myself. However, my reasons for wanting to do it myself were rooted in technical interest, totally divorced from any practical considerations. Among these practical considerations:

  • I don’t have time. I work and then come home to help my wife take care of a baby. On a good day I have one hour of strained leisure time.
  • I am not really the blogging type. Many years ago while in undergrad when “blog” was still a novel term, I set one up only to discover I didn’t have anything I particularly wanted to say to the world. This hasn’t changed much.
  • Blog software is largely database driven; this makes backups irritating to the point where, frankly, I probably won’t bother. I’m going to regret this later when inevitably my webhost loses all my data. Also, given that I do not much care for the “dynamic” features of a blog – comments, trackbacks, etc. – why not just have a series of static HTML files?

On the other hand, if one wishes to have the fun of toying with a lightweight database driven web application, a personal blog is a fairly ideal choice: there’s minimal cost to get to something working, and it requires neither a cohesive master plan, nor participating third parties.

Like everyone else and their dog I used WordPress; the timing was fortuitous, as WordPress 3.0 had just come out. I have no idea what that means, and the release notes referenced features that meant nothing to me, but heck, 3.0 just looks prettier than 2.9. I wanted to use WordPress primarily because I read it was the software that integrated most easily with \LaTeX, good for mathematical content. Setup is easy; their claims of a “famous five-minute installation” are pretty much spot on. I spent more time randomly generating passwords than I did actually installing the thing. I had only four posts in my other blog, so the transfer wasn’t difficult, images being the main annoyance.

The real problem, though, and where I spent nearly all my effort, was personalizing the appearance. Sticking with the default theme seemed kind of lame and lazy, but the freely available themes either violated my minimal aesthetic (though some were quite beautiful), were poorly implemented, or were shockingly wasteful of screen real estate. The sad thing is, I really like the TwentyTen default theme. It’s the best theme by far. So, I decided not to replace it, but to adapt it. (I discovered after I had finished my adaptation that best practice requires the use of child themes; I may refactor my changes later in my copious spare time.)

Adaptation involved a pleasing amount of futzing about in PHP files of the theme – I say pleasing because my primary motivation was technical interest and a desire to learn, and as I futzed about I found myself repeatedly pleasantly surprised at WordPress’s elegance and customizability. I didn’t have to touch anything outside of the theme directory.

I started with a number of modest changes: I changed the date/byline information under each title (“Posted on July 23, 2010 by John Doe”) to just contain the date and time. Clarifying that I’m the author on every post is clearly unnecessary. The “leave a comment” was changed to “no comments.” I did not intend to ever allow comments, so inviting them seemed odd. “Proudly powered by WordPress” in the footer seemed a bit much, so I took out the “Proudly,” though I do think it only fair to acknowledge WordPress.

Next I put the title and description inside the image, rather than simply above it – basically, the title and description are now in a DIV with my header image as the background, rather than having the header image in an img tag. I reproduced that leading black divider at the top of the image since I think it looks pretty sharp.

Getting rid of the right sidebar was a priority for me. The right sidebar is a ubiquitous feature of blogs, but I have always strongly disliked it. I always found sidebars distracting. I especially never liked how space continued to be reserved for them below the navigational content, so space was wasted and your content was forever oddly off center, even when the reason for that wastage had long since been scrolled past. Also, simply wrapping around the associated div produced lines which were uncomfortably long.

This took the most work, not because removing it was the problem – though frankly it was far more trouble than it rightly should have been – but because duplicating what functionality and content I wished to present to end users posed some small design challenges. For this I abused the header’s navigational bar. While this element is intended for the sole use of a main menu, with some light modifications one can continue to use the left of the bar for the menu, and the right for some special elements, like the search box and the RSS feed. This also gave me a chance to customize the search box, which I did by putting the search button inside the form element a la Wikipedia or Bing. I also omitted the login and comment RSS feeds; these were totally extraneous for my purposes.

Archives and recent posts are typically in the right sidebar, and these are things I would like. I don’t need or want a monthly archive, though; I don’t actually intend to write anything most months, and yet I need a monthly archive, somehow? My current solution is to just put a link to my posts under “Archive.” I’m ambivalent about this. “Recent posts” and “archive” both provide immediate means for people to easily see what is present beyond that first post. At best, I’ve moved that one click away. I’m unsure how I’ll solve this, but I emphatically do not want a right sidebar.


Circular/Radial Photography

Photographs are rectangular. Historically, this makes sense; when photography was first invented, it was the film (or rather, the photographic plates) that cost the most money versus, say, the lens, and obviously one makes the best use of the raw material of plates by cutting them into rectangles. Later, the lengthy rolls of plastic film were likewise put to best use by partitioning them into rectangles. (Hexagons could potentially work with no wastage, but this would be far more awkward, especially when it came time to put them in a roll.)

Digital cameras don’t use plates or film any more, but reusable sensors. A single sensor is reused for many thousands of photos, and so there is no reason to efficiently partition a physical medium into many subparts as there is with film.

Lenses are circular, and correspondingly image degradation (whether vignetting or other distortion) increases roughly with distance from some center on the photographic plane. If we suppose there’s some maximum distance beyond which one cannot get a good image, the “useful” area of the photographic plane is a circle. To accommodate rectangular film without wasting any of that film, whatever portion of the circle we use is a rectangle inscribed in this useful circle, meaning we are wasting at least (1-2/π) ≈ 36% of the useful circle, and that’s only if the rectangle is a square. With my own camera’s 2:3 aspect ratio, that’s about 41% wastage. (As a practical matter, it is more like 63% or so for an EF lens, I think, since mine is a crop sensor; that’s astonishing waste.) As megapixel sensor density continues to increase, we shall feel the cost of the corresponding lens wastage keenly, since the amount of meaningful detail sensors can gather begins to exceed the capabilities of all but very expensive lenses.
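
(For the curious, the wastage figures come from simple geometry. Here is a throwaway sketch, assuming only that the rectangle is inscribed with its diagonal spanning the useful circle.)

import math

def inscribed_waste(aspect_w, aspect_h):
    # Fraction of the useful image circle wasted by an inscribed rectangle
    # of the given aspect ratio (its diagonal equals the circle's diameter).
    diagonal = math.hypot(aspect_w, aspect_h)
    width = 2 * aspect_w / diagonal   # scaled to a circle of radius 1
    height = 2 * aspect_h / diagonal
    return 1 - (width * height) / math.pi

print(inscribed_waste(1, 1))  # square frame: ~0.363
print(inscribed_waste(3, 2))  # 3:2 frame:    ~0.412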

So, now we come to my point: digital cameras have obviated film wastage, but the rectangular shape inherited from film cameras continues to waste lens resolution. Clearly this is wrong. It is time to revisit the concept of the “rectangular” photo, to literally think outside the box. Rather than having rectangular digital sensors, let us have circular ones, and spread out that sensor resolution to cover all of the useful circle of the photographic plane.

Obviously for many applications one will still want rectangular photos, but if you wish to subset and crop your photos to be rectangular, that will be your choice. In the present rectangular paradigm, one has no choice at all.

This “radial” mode of photography would have many advantages.

The primary advantage is that when you buy a lens, you get to use all of it, not just about half of it. We pay good money for the glass only to ignore a sizable chunk of it.

There are some practical cost reasons why one could not make the photographic plane cover the entire useful field. Even in that case, if your image sensor remains the same size but is re-formed into a circle, you have traded the lower-quality corners for real estate much closer to the lens center, which presumably suffers from less distortion.

Second, a photograph is more or less identical no matter the orientation of the camera. There’s no need to think about whether to photograph a subject landscape or portrait, nor even to be sure you’re keeping the horizon level. These would be details left to post processing, and totally irrelevant considerations at the time the photo is actually taken. This frees up one’s attention for other matters.

Third, lots of subjects benefit stylistically from being photographed in a circle, right up to the edge of vignetting and distortion – typical shots of faces, flowers, birds not in flight, and lengthy roads winding into the distance come to mind right off the bat – which is currently an effect one has to achieve with filters or post-processing.


Zoom vs. Prime: Focal Length Usage

I recently observed an online discussion of camera lenses. People were debating the merits of zoom vs. prime lenses, falling along the predictable lines of flexibility vs. performance. One of those defending the merits of prime lenses posited that if one were to plot focal length usage against the number of photographs taken, photographs would tend to cluster around the extremes of a zoom’s focal range.

Since I was curious about where I would fall in this range, I decided to run the experiment for myself. As a bit of background, I have a Digital Rebel XT with which I use four lenses:

  • EF 28-135mm IS USM is my midrange general-purpose lens. I have this lens mostly because when I got it in 2005, the passable EF-S midrange lenses did not exist yet. I believe it was the right choice for the time, but I can’t pretend that I’d even consider it today. The 28mm wide end is still way too long on a crop sensor.
  • EF 70-300mm DO IS USM is my telephoto zoom. For roughly the same price I could have bought the EF 100-400mm L IS USM, but that thing is huge and conspicuous, and I wanted something I could carry everywhere and use without feeling self-conscious, as I’m kind of a shy guy.
  • EF-S 10-22mm USM is my wide angle. If you have a crop sensor and want a wide angle, it’s this or nothing.
  • EF 50mm f/1.4 USM is my only prime. I got this a relatively short time ago, when my baby girl was born and I found my 28-135 utterly inadequate in low light. However, because it is so new, and because the purpose of this analysis is to see which focal lengths I chose (with a prime, there’s no choice!), it is not covered here.

With the help of a small Python script and EXIF.py, I was able to read and aggregate all of the focal lengths and lens types for the period from a couple of weeks after I got the 10-22 wide-angle lens up until the day my daughter was born. Naturally my photographic habits changed with both events, but between those two events my behavior may have been stable. This covered a period of roughly two years, with 1366 photographs. Naturally, these are just the photographs that I did not delete.
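
The script itself was nothing fancy. A minimal sketch of that kind of aggregation (a reconstruction, not my original script; it assumes EXIF.py’s process_file, a single hypothetical directory of JPEGs, and ignores the lens identification, which lives in maker-specific tags) might look like this:

import collections
import os

import EXIF  # EXIF.py, the module mentioned above

focal_lengths = collections.Counter()
photo_dir = "photos"  # hypothetical directory of JPEGs

for name in os.listdir(photo_dir):
    if not name.lower().endswith(".jpg"):
        continue
    with open(os.path.join(photo_dir, name), "rb") as f:
        tags = EXIF.process_file(f)
    tag = tags.get("EXIF FocalLength")
    if tag is not None:
        ratio = tag.values[0]  # the focal length is stored as a rational number
        focal_lengths[float(ratio.num) / ratio.den] += 1

for mm in sorted(focal_lengths):
    print("%6.1f mm: %d" % (mm, focal_lengths[mm]))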

In the following plots I give focal length on the horizontal axis and the number of photographs on the vertical axis, as a cumulative histogram. So, pay less attention to the value of the vertical axis, and more to the size of the jumps.
We lead off with my general-purpose lens. Unsurprisingly, I do tend to use the entire range of this lens. There is a slight tendency to “top out” at around 80-100mm or so, and a strong one to shoot at the maximum 135mm. The most notable and obvious trend, though, is my heavy usage of the 28mm focal length, perhaps reflecting my frustration with the lens being too long for its own good on my camera body.


Next we have the telephoto and wide angle zooms, in that order. My behavior with these is quite different from my general purpose lens. Indeed, as posited, there is a strong tendency to use the extreme endpoints. I like my longs long and my wides wide, clearly. Nonetheless, I do shoot a substantial amount along the range of both, and I do exhibit a slight tendency to shoot with my telephoto wide (probably because I’m too lazy to swap in the general purpose lens in many circumstances), so clearly I get substantial benefit from the zoom.

I find this quite interesting, not only for the sake of this isolated analysis, but because it seems like there’s a tremendous amount of interesting data we could mine regarding people’s photographic habits if we could just collect everyone’s EXIF tags. What do good photographers do that bad photographers don’t, or vice versa? If we multiplied aperture times ISO times shutter speed, would the resultant value tend to decrease in summer, and increase in winter? What if we could somehow limit the analysis to “good” photographs? It’s unclear whether any of this could be put to any practical worth, but it’s still interesting.


Webpage Exile

About one month ago, I received an unwelcome notification from Cornell’s CS department that they intended to close my account and delete the associated data. This was something I had expected for some time: I had defended and graduated a year and a half prior, and I didn’t expect the department to host my account indefinitely. I’m surprised they kept it around as long as they did. However, it did present a practical problem, as Cornell still hosted my personal webpage. A personal webpage is de rigueur for computer science graduate students, both current and former; it’s a bit weird if someone doesn’t have one. Further, I do publish some software dating from my student days which some people actually use, so a stable website is desirable. Unfortunately the notification came at an astonishingly bad time, as my child had been born three hours before it arrived (!?), so my schedule was not exactly clear. Nonetheless, I felt obligated to move the site.

Making a new website was something I should have done earlier. I had procrastinated partly because of the effort and cost, but more because I could not choose a hosting company. They are a vast legion of largely indistinguishable, anonymous entities. The number of hosts, each with slight variations in their services, is bewildering, and there are so many suspicious-looking review sites that objective information is hard to come by; nearly everyone has something bad to say about everyone. How could one tell the good hosts from the bad? Making an informed choice seemed impossible, to the point where I did not want to decide at all.

My needs are few and my traffic quite light, but I did have some criteria: I wanted databases, Python, Ruby, Ruby on Rails, PHP, and a hoster that did not offer “unlimited” plans. (Seriously, who’s kidding whom with that “unlimited” nonsense?) In the end, I went with a host a friend and colleague of mine used for his own site and recommended as reliable.

Hence, tfinley.net. Owing to the aforementioned birth of my child, my ambitions were limited: it’s basically my Cornell site with the verbs attached to my graduate student career put in the past tense, and the URLs updated. Of course, between the parade of feedings and diaper changes, even these modest changes took me a couple days to finalize.

Subdomain on Localhost

When developing the site, I found it helpful to host a copy on my local web server, to make writing and debugging more efficient. However, I could not work in the root directory of localhost, as I was already using it to host some resources for my home LAN. I initially worked in a subdirectory, but this became really irritating because absolute paths do not work: an <a href="/page.php"> would mean entirely different things locally and on the actual website.

My solution was to add a subdomain to localhost. The address http://localhost would still point to my existing web server, but I added a subdomain so that http://tfinleynet.localhost pointed to the directory where I was developing the website. This allowed absolute paths to work properly. The requisite steps are not difficult, but they are sufficiently non-obvious that others may benefit from my research. The steps are for a Mac OS X machine.

  1. Add a line resembling the following to /etc/hosts, of course replacing foo with your desired subdomain:
    127.0.0.1 foo.localhost
  2. In /private/etc/apache2/httpd.conf, uncomment (remove the preceding # from) the line:
    Include /private/etc/apache2/extra/httpd-vhosts.conf
  3. Edit the file /private/etc/apache2/extra/httpd-vhosts.conf whose inclusion you just uncommented, so that the uncommented lines read as follows, again with foo replaced with your desired subdomain, and the DocumentRoot set to the desired directory.
    NameVirtualHost *:80
    <VirtualHost *:80>
       DocumentRoot "/Library/WebServer/Documents"
       ServerName localhost
    </VirtualHost>
    <VirtualHost *:80>
        DocumentRoot "/path/to/local/development/directory"
        ServerName foo.localhost
        ErrorLog "/private/var/log/apache2/foo-error_log"
        CustomLog "/private/var/log/apache2/foo-access_log" common
    </VirtualHost>
    
  4. Restart Apache, most easily accomplished by unchecking and checking “web sharing” in the Sharing pane of system preferences.
  5. Point your browser to http://foo.localhost, and observe the mighty workings.

Favicons and Robots

Technically, favicon.ico and robots.txt are optional for a website. However, a few days with an error log containing a megabyte or so of lines upon lines of File does not exist: /foo/bar/biz/httpdocs/favicon.ico and the like is a strong motivator to create them, optional or not. I am on a shared server, so I cannot reconfigure the web server to quietly return 404 errors for these files; I just bit the bullet and created them.

The favicon made me nervous. Sure, I was creating it just to get my error log to shut up, but any visitor would assume I did so because I thought what I had done was better than nothing. It’s not like I’m an artist who can create something beautiful in a sparse 256 pixels, nor yet a corporation or university with some universally recognizable brand or logo. Nonetheless, I think I acquitted myself adequately: I fired up the GIMP, put a gradient in a circle, some alpha-heavy black in an ellipse under that, and my initials in white. Boom. Instant favicon.

For robots, I’m not too picky, so I allow everything with the following robots.txt file.

User-agent: *
Disallow:

Orion: The Space Flight Simulator

When I was much younger, still on my first computer, my parents gave me a voluminous 240 MB hard drive for my Mac IIcx. Upon connecting the drive, I found among its other contents a shareware spaceflight simulator, Orion, the work of one Robert Munafo. This serendipitous acquisition quickly became one of my favorite programs. In Orion, one controls a spacecraft by applying thrust, roll, pitch, and yaw, and can travel to any star system within a 32 light-year radius (all of which helpfully had 9 planets, Pluto being considered a planet at the time). By flying at a planet in just the right way and executing some slightly deft maneuvering, one could enter orbit. By executing the maneuver incorrectly, you’d fall into the planet or, more likely, get flung out past escape velocity back into the star system.

Pictured is a screenshot of Orion’s mighty workings. All controls are shown in the lower ribbon. By modern standards the control system is obtuse, but for 1988, a time when people were still figuring out how computer interfaces should work, it was quite excellent. One turned the ship by holding the mouse down in the lower left box, with the direction and magnitude of the turn indicated by the offset of the click from the center dot. The F/S/R buttons controlled the motion of the ship: forward, stop, and reverse. The next box allowed one to perform lateral thrust. The top bar allowed one to roll. Then there was a control for easy mode, with non-easy mode resulting in Newtonian motion (once you applied thrust in a direction, turning did not redirect your velocity) and the application of gravity. The remaining controls toggled display elements.

This program was enormously fun and instructive.

First, it gave me a healthy appreciation of the vast scale of the universe. At first blush it may sound like a 32 light-year leash is not terribly restrictive, but the paucity of “interesting” stars that lie within that sphere soon becomes clear, and the limit quickly seems unbearably oppressive. (For example, none of the interesting stars actually in the constellation Orion can be visited within the program Orion. :) )

It also gave me an intuitive feel for Kepler’s laws, especially as they applied to elliptical orbits and the inverse relationship between orbital distance and velocity as I flew in orbit around a planet, many many years before I learned that they were called Kepler’s laws.  (“Hey, when I’m twice as far away, I’m going half the speed…”)

Quite apart from anything having to do with space, and more appealing to my nascent interest in software engineering, seeing my orbital parameters (major/minor axis) slowly change over time when they realistically should not have changed gave me an appreciation for the role approximate arithmetic plays in software simulations of physical systems, and for the dangers of accumulated numerical error.
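
(To illustrate what I mean, and this is just a generic toy with nothing to do with how Orion was actually implemented: integrate a circular orbit with the simplest possible method and watch the radius drift.)

import math

# Toy demonstration of accumulated numerical error: a body in a circular
# orbit around a central body with GM = 1, integrated with the simple
# explicit Euler method. The orbital radius should stay at 1.0 forever,
# but it slowly drifts outward as the integration error accumulates.
x, y = 1.0, 0.0
vx, vy = 0.0, 1.0   # the speed for a circular orbit of radius 1
dt = 0.001

for step in range(1, 100001):
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -x / r3, -y / r3       # inverse-square gravity
    x, y = x + vx * dt, y + vy * dt
    vx, vy = vx + ax * dt, vy + ay * dt
    if step % 25000 == 0:
        print(f"t = {step * dt:5.0f}  radius = {math.hypot(x, y):.4f}")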

Less intellectually, then as now I was a fan of Star Trek TNG and liked to pretend I was flying in the USS Enterprise. The supposedly impossibly fast Warp 9 amounts to a paltry and pitiful 1,516 times the speed of light (roughly 3 AU per second), meaning that a trip from Earth to the Alpha Centauri system would take almost exactly 24 hours. My preteen brain found this a sobering and disappointing realization.  Nonetheless, I decided to make the trip one day. Pretending my bedroom was a spaceship cabin and my computer the helm (naturally, allowances were made to leave the room for the bathroom and food), I set my sights on Rigel Kent and accelerated to Warp 9. During the daylong journey I read, periodically corrected my course, and overnight I slept.  Roughly a day later I arrived at Rigel Kent, whereupon I “explored” all the planets. I was oddly proud of myself for completing the journey, but the tedium did give the lie to the notion of Jean-Luc Picard and company casually dropping by Earth from the far reaches with the same ease as someone today traveling from New York to Boston.
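
(The arithmetic is easy to check. The 4.37 light-year figure for Alpha Centauri below is the modern value and my own assumption, not necessarily the distance in Orion’s catalog, which is why it comes out to a bit over a day rather than exactly one.)

LIGHT_SECONDS_PER_AU = 499   # one AU is about 499 light-seconds
HOURS_PER_YEAR = 365.25 * 24
warp9 = 1516                 # times the speed of light

print(warp9 / LIGHT_SECONDS_PER_AU)   # ~3.04 AU per second
print(4.37 / warp9 * HOURS_PER_YEAR)  # ~25.3 hours to Alpha Centauri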

With the help of Basilisk II, I was able to use Orion again, for the first time in about two decades.  Aside from the usual shock I encounter using ancient software — e.g., the old Mac design philosophy of never using keyboard controls, the help box with a snail-mail address and no sign of a @ or .com anywhere, the usual hiccups of software when running on a computer thousands of times the speed of the Soviet-era hardware it was designed for — it was quite enjoyable, and wonderfully nostalgic.  So, thank you Robert!  Sorry I never sent you $7, but my ten-year-old brain didn’t quite have a fully developed sense of social responsibility.
