RAII, or why C++ doesn’t have a finally clause

One of the most common idioms I see in a Delphi program looks like the following:

foo := TObject.Create;
try
    // Do something with foo
finally
    FreeAndNil(foo);
end;

That’s primarily because Delphi objects always live on the heap, and every object variable is essentially a pointer. This creates a bit of a memory management chore: you have to remember to destroy objects after you’ve created them, and if something can go wrong, that destruction needs to take place in a finally block. The finally block keeps you safe from exceptions: if an exception is triggered, it always passes through the finally block on its way back up the stack, which gives you the opportunity to clean up temporary objects as needed.

C++ uses the RAII idiom (Resource Acquisition Is Initialization), which means that objects defined at a certain scope are always destroyed once that scope is exited. If you define an object X in a function Y, then X is destructed as soon as Y returns. As an example:

std::stringstream streamer;
// do something with streamer

There’s no awkward streamer.create call, and once you return from the function, streamer is appropriately tidied up.

But wait, you say, they are not the same, what you are doing in Delphi is creating an object on the heap, while in C++ you are creating it on the stack, so of course during the process of unwinding said stack, you will destroy the object. The more equivalent code in C++ would have been:

std::stringstream *streamer = new std::stringstream();
// Do something with streamer
delete streamer;

Hah, you say: no try/finally means that if an exception is triggered in the ‘do something’ piece of code, you leak a streamer object on the heap.

To which I respond: silly rabbit, that’s why you didn’t create a pointer in the first place in the first piece of code. If you really do want heap allocation, use a smart pointer, which takes care of destroying the object once the smart pointer exits scope, like so:

std::unique_ptr<std::stringstream> streamer(new std::stringstream);
// Do something with streamer

But really, if you were just going to create an object for the duration of a function, it’s far easier to create it in place, without such complications.

This leads to a little gotcha that regularly catches non-C++ programmers when they are writing methods. As they typically come from a pointer-based economy (e.g. Delphi, Java), when they create a method:

function doSomething(obj: TObject): integer;

What they’re doing is actually passing in a reference to a TObject (as it’s just a pointer), and because it’s pass-by-value in this case, what they’re really passing in is the value of the pointer. In C++ it’s a little different. When you pass in an object using the form:

int do_something(std::string str)

What actually happens is that a copy is made of the item being passed, and it’s that copy which ends up in the function, not the actual object you’re passing in. (I’m using std::string here because stream types like std::stringstream are non-copyable, so they can’t be passed by value at all.) If you want to pass in a reference to the object, then you need to use the reference-passing syntax:

int do_something(std::string &str)

You can add the const modifier if the method you’re invoking is not going to modify the passed-in reference, which restricts what can be done with it. In this form you don’t need to perform any indirection on the object (e.g. taking a pointer to it) in order to pass it in. This makes for slightly tidier code that isn’t strewn with &’s on the way in and var->’s in the method itself.

And for those Delphi haters out there: the reason I picked Delphi rather than Java is that Delphi is, unless you’re using the .NET variant, a non-garbage-collected language, and as such requires the free, otherwise you get memory leaks.

Objective-C is another kettle of fish. Between the original retain/release model, the GC model that was available on OS X from 10.5, and now the totally shiny ARC mechanism, it makes some people cry.

The Disappointment of New Features

I’m reading articles on new features in the CSS Media Queries Level 4 spec. Items such as luminosity, which lets you adjust your app’s styling across three grades of environmental brightness. This means you could tone down that bright white as it gets darker, so it doesn’t blind someone trying to read it in a darkened room (I had this experience this morning when the auto-brightness setting on my Nexus decided that full-on bright was what I needed while triaging my email at 6am, with the lights off).

It’s a pretty nifty feature, and once people start using it we’ll probably all reap the benefit.

The problem is that, as of now, it’s only in a limited set of web browsers. Even though I have a laptop with an ambient light sensor, I won’t see this work properly anytime in the near future.

The next thing I read about was making non-rectangular clipping areas for text so that it flows around images. Looks pretty awesome, and makes things look more like a desktop publishing environment. Only available in Chrome Canary (which is, at the moment, the most bleeding-edge version of Chrome). Which makes it another feature we have to wait for.

C++11 introduced some nice features such as lambdas, which let you define the work to be done on something in the same place as the request to perform the work. It’s pretty nice, as you can in-line work quite easily, whereas previously you relied on an external function, typically with pointers to a data blob… the whole thing was quite tedious and led to difficult-to-understand code. Again, you need a modern compiler that understands the C++11 syntax, but once you have it, it’s plain sailing. You ever tried to compile gcc… it’s fun times for all 😉

Again, a new feature, but it generally comes with a whole bunch of things that have to change to support it.

This is where the disappointment comes in. All these shiny features are available on the shiniest of newest systems. As developers, we like having the newest stuff – from operating systems to development environments, to programming languages. They all provide us with the ability to do our jobs better, and in a more efficient manner. It also allows us to royally screw things up much more rapidly, and then fix it so you almost don’t notice that it happened.

That’s not where most of the world lies. Most folks are living in the ‘it got installed, I’m not touching it’ world. It makes things difficult for us developers as we have to match up our work to what functions in their environment. That means we can’t use the newest version of X, because that’s not going to be present on the end-user’s system.

There is a sliver of bright light in the form of the automatic update. If you’re using Google Chrome, or any recent version of Firefox, then unless you change something it will silently update to the newest version behind your back. This means that the next time you start it up, you’ve got the latest and greatest available. All the features are present. Unfortunately, this also means that the changes can trigger failures, caused by a lack of testing or a lack of backwards compatibility.

When it happens because of a lack of backwards compatibility, then people get genuinely angry – it used to work and now it simply doesn’t, and for no reason whatsoever. On Internet Explorer we have the ‘do the wrong thing’ switch, which causes the browser to act in the old, bad way, so that a user’s experience does not change when they install the newer browser.

I don’t think this is really going anywhere, so I’ll leave it as-is then.