
2005-06-20

Don't assume undefined is undefined

Sarah has a bug where she knows that:

typeof(foo) == 'undefined'

is true, but when she tested:

foo == undefined

it is false. How can this be? Because undefined is not a literal in ECMAScript. Lots of people use undefined expecting it to be the undefined value, but it doesn’t have to be, at least not according to the spec. (null, on the other hand, is a literal, defined to be the sole member of the Null type, just as the literals true and false are the only members of the Boolean type and cannot be redefined, and 1, 2, … are Number literals. Infinity, like undefined, is merely a property of the global object.)
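Sarah’s bug can be reproduced with a shadowed binding. This is a minimal sketch (the function name demo is illustrative); in ES3 the global undefined property was even writable, and modern engines merely make it read-only, so undefined remains an ordinary identifier that can be shadowed:

```javascript
function demo() {
  var undefined = 42;   // legal: a local variable named `undefined`
  var foo;              // foo really has the undefined value
  var byTypeof = (typeof foo == 'undefined');  // true: typeof is reliable
  var byCompare = (foo == undefined);          // false! compares against 42
  return [byTypeof, byCompare];
}
```

Inside demo, the typeof test succeeds while the comparison fails, exactly as described above.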

If you try this in the debugger:

global[undefined] = 42

you will get a warning from the compiler, but now when you type:

undefined

you will find that it is indeed 42!

What’s the right way to test for undefined? It depends. Do you really need to know if a variable is undefined? If so, the typeof test is one valid way. The other valid way would be:

foo === void 0

void will cast any value to undefined — 0 is just a convenient (literal) value to use. Note the use of === to test for identical to undefined, if you really are testing for undefined, because:

null == void 0

is also true, so if you used == you would only know that foo was either null or undefined.
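As a quick sketch of the difference:

```javascript
var foo = null;
var loose = (foo == void 0);    // true: null and undefined are loosely equal
var strict = (foo === void 0);  // false: foo is null, not undefined
```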

If all you need to know is that foo is not undefined, null, false, 0, or "" (an empty string), then you can use:

Boolean(foo)

or:

!!foo

or:

if (foo) { ... }

because all of undefined, null, false, 0, and "" (and NaN, for that matter) coerce to false in a boolean context.
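The coercions can be checked directly; a small sketch:

```javascript
// Every one of these values coerces to false in a boolean context.
var values = [undefined, null, false, 0, NaN, ""];
var allFalsy = values.every(function (v) { return !v; });
var viaBoolean = values.every(function (v) { return Boolean(v) === false; });
```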

The moral of the story is: Don’t assume undefined is undefined. If you really need the undefined value, use void 0, or test for typeof(...) == 'undefined'.

13:50

2005-06-13

Different Thinker

Amidst all of the regular university degrees granted during this season there are always a scattering of honorary awards for special lifetime achievements. One particular honorary doctorate, awarded on May 15 at the University of Illinois Urbana-Champaign, honors a man who more than any other single human being made the Apollo lunar landings possible. And unless today’s space experts learn to emulate his vision, courage, and soft-spoken stubbornness, the grandiose ‘Vision for Space Exploration’ plans for resuming human flight beyond low Earth orbit may fail to be realized.

The Space Review: Academic honors for a spaceflight prophet

12:40

2005-06-07

Dark Matter

Oh, I can hear it now. “So, what do you think about Steve going over to the dark side?”

Well, it’s true. I love to hate Intel (what is it with that lazy e?) almost as much as I love to hate Micro$oft. Yes, there’s a lot to admire. But the x86 architecture is so lame. I guess that is why I am glad there are compilers. I more deeply regret their hand in the death of the DEC Alpha architecture. Apparently some of the goodness that was Alpha has made it into Pentium.

But I digress. O’Grady expresses much of my opinion. We all knew that Darwin was running on x86 platforms. We all know that BSD (Free, Net, and Open), the underlying technology of many large ISPs and the flavor of Unix that Darwin is based upon, runs on most platforms. We all knew that NeXT, the basis for Cocoa, ran on x86. We all know that gcc compiles to practically every instruction set in the universe. So, how hard could it be?

Apparently not hard at all. And they’ve been doing it all along.

Rosetta was a bit of a surprise. But advances in dynamic recompilation have been happening for a while. The x86 is a popular target. The PowerPC has a very regular architecture, much like a typical virtual machine.

So, what do I think? I think that by admitting that they can retarget to the x86, Apple is in a very powerful position to use whatever ISA (instruction set architecture) suits. Right now, Intel has more bang per watt, and that is what we laptop users crave. But if the Power architecture is better for something else (like your server farm, or your car dashboard) they can target that instead. Application developers who buy into the ‘universal binary’ will have the same benefit.

Where it gets interesting is — what happens with Virtual PC? It should become very fast on an x86 Mac. It becomes a way to sandbox your essential Windows apps so you can run them without risking your entire computer. (Gee, Darwin’s Mach micro-kernel was actually meant to run several operating systems simultaneously…)

And what about becoming a software-only company? Like the 47th biggest company in the world? Would that be a good thing? Maybe. I still think there is an advantage in knowing the platform the software is going to run on. The reliability should be better. But for those who prefer an unreliable but cheap platform, should they be denied? Maybe they should.

11:04

2005-06-02

What is the type of a prototype?

I’m trying to beef up the Laszlo debugger to help myself with the SOLO data reimplementation. I have gotten confused a couple of times because the debugger isn’t careful enough.

The goal of the Debug.__String routine (which is used by Debug.write and Debug.inspect to present objects) is to compactly and unambiguously display objects. For primitive types, it displays a representation that, if evaluated, would give you back an === object. (In Lisp this is called print/read consistency.) It can’t do that for objects without giving up its compactness goal. So, for instances of Object, what it displays is:

«type#uid(length)|name»

The double-angle-quotes are just there to distinguish this representation from primitive types. (In Lisp, there is a reserved reader macro, #<, that is used to distinguish ‘unreadable’ objects.)
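Print/read consistency for primitives can be sketched with a round-trip check. This uses eval and JSON.stringify for illustration only; Debug.__String is the Laszlo routine, and it need not be implemented this way:

```javascript
// Evaluating the printed representation of a primitive yields a value
// that is === the original.
var samples = [42, "hi", true, null];
var consistent = samples.every(function (v) {
  return eval(JSON.stringify(v)) === v;
});
```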

type is meant to be the most specific class where object instanceof class is true.

uid is a unique id assigned by the debugger to distinguish objects whose representation is otherwise the same (e.g., two empty objects).

length will be displayed if the object has a property length with a numeric value. This is mostly for arrays, but means that objects that are used as arrays will be displayed usefully too.

name is meant to be some identifying information about the object. Users can define a _dbg_name method on their classes to provide this. For LzNode, name will be either '#' + this.id or '.' + this.name. If an object has a toString method other than the default one, that will be used. Otherwise an abbreviated listing of the object’s properties will be used.
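The precedence just described can be sketched as follows. Only the _dbg_name convention comes from the Laszlo debugger; the function name debugName and the three-key cutoff are illustrative assumptions, not the actual implementation:

```javascript
function debugName(obj) {
  // A user-supplied debug name takes precedence (Laszlo's _dbg_name convention).
  if (typeof obj._dbg_name == 'function') return obj._dbg_name();
  // A toString other than the default Object.prototype.toString is informative.
  if (obj.toString !== Object.prototype.toString) return obj.toString();
  // Otherwise, an abbreviated listing of the object's properties.
  var keys = [];
  for (var k in obj) keys.push(k);
  var shown = keys.slice(0, 3).join(', ');
  return '{' + shown + (keys.length > 3 ? ', …' : '') + '}';
}
```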

Here’s my plan:

  • Verify that type is correct.

    The constructor property of an object should be its type. If that is not the case, the type will be annotated '(' + type + '?)' to indicate that something is fishy about this object. This will address the case where an object showed up as an object in the old debugger, but had a null __proto__ and hence compared === null, mystifying more than one of us.

  • Verify that the __proto__ property of the object is normal.

    Normally the __proto__ property of an object should be === the object’s constructor’s prototype. If this is not the case, the object can have non-standard behavior. If the __proto__ is not as expected, the debugger will add a second uid that is the uid of the non-standard prototype object (which can be inspected by using Debug.inspect with Debug.showInternalProperties = true). This will address the case where the __proto__ chain is being (ab)used to implement defaults in parameter lists, for instance.
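The two checks above can be sketched like this. The function name typeAnnotation and the annotation strings are illustrative, and Object.getPrototypeOf stands in for reading __proto__ directly:

```javascript
function typeAnnotation(obj) {
  var ctor = obj.constructor;
  var type = (ctor && ctor.name) ? ctor.name : 'Object';
  // Check 1: is the object actually an instance of its claimed constructor?
  // (Catches the null-__proto__ case, where constructor is undefined.)
  if (!(ctor && obj instanceof ctor)) return '(' + type + '?)';
  // Check 2: is the prototype chain the standard one for that constructor?
  if (Object.getPrototypeOf(obj) !== ctor.prototype) {
    return type + ' (non-standard __proto__)';
  }
  return type;
}
```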

My question is:

What is the type of a prototype? For a class foo, foo.prototype.constructor === foo, but foo.prototype.__proto__ === Object.prototype (typically; it can be some other class’s prototype if the class extends another class). Under my plan, a prototype will always show up as having a broken type. Technically this is correct, because the prototype of a class is not an instance of the class, but I don’t want the user to think all prototypes are broken. What would be the most useful thing to display for the type of a prototype? The class of its __proto__ is what I am thinking.
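The relationships in question can be verified directly (the class name Foo is just an example):

```javascript
function Foo() {}
// A class's prototype is not an instance of the class itself...
var isInstance = Foo.prototype instanceof Foo;   // false
// ...even though its constructor property points back at the class.
var ctorOk = Foo.prototype.constructor === Foo;  // true
// The class of its __proto__ (here Object, since Foo extends nothing)
// is the candidate display.
var protoOfProto = Object.getPrototypeOf(Foo.prototype) === Object.prototype; // true
```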

11:05