The end of .NET? I can’t wait.

Ok, I admit, that title is a bit over the top, but that is still how I feel. Developing for .NET is becoming less and less fun, and far too expensive. The only reason to do it is that customers expect products for .NET, but under slowly increasing pressure from developers, that is going to change. It may take a while, but it will happen. There are a number of reasons for this.

.NET development is single platform. Admittedly the largest platform, but a platform that increasingly has to share the market with others. And already, according to some, there’s more sales potential for small developers in the OSX market than in the Windows market, due to a number of factors: customers who are more willing to buy and to pay for software, less competition in each market segment, and so on.

.NET development is also entirely dependent on Microsoft’s development tools, and those are increasingly expensive. For reasonable development, you need an IDE, a good compiler, version control, a bug tracker, coverage analysis, profiling, and a few more things. We used to have most of that in the regular Visual Studio, but recently MS has removed all the goodies and plugged them into the Team System only, which carries an obscene price tag (in Sweden around USD 13,000 + VAT for the first year…). This means that a regular one-man development shop can barely afford the crippled Visual Studio Professional at USD 1,500 for the first year. Sadly, there aren’t even any decent and affordable third party products to complement VS Pro so that it becomes a “real” development suite. And with every version of Visual Studio this only gets worse. More and more features are added to the Team suite and removed from the Pro. This is not the way to breed a happy following.

Meanwhile, OSX comes with XCode, which is almost as good as Visual Studio Pro, and is free. Objective-C is also a much more modern language with more depth than any .NET language, even though it is actually older. But, sadly, it’s not cross-platform either, and I don’t see how you can get the Windows fanboys of the Scandinavian healthcare scene to even consider another platform. The same probably goes for most other industries.

I’m no fan of Java, but on the other hand I’ve never worked much with it, so that opinion doesn’t count. Eclipse, the IDE often used for Java development, is cross-platform, very capable, and open to other languages such as Python, Flex, and many more. Yes, I know, in theory so is Visual Studio, but how many real languages do you have there? You’ve got one: Basic, masquerading as C#, J#, and, um, Basic.

Using Eclipse on any platform, you’ve got a really good chance of covering the whole line of tools you need (profilers, coverage, version control) without much pain and without breaking the bank. And you can write cross-platform, integrated, larger systems.

So, I guess it’s time to bite the bullet. I really like XCode and OSX, I really know C# and .NET, but I really only believe in Java, Flex, Python, Perl, or C++ under Eclipse for enterprise development in vertical markets. And in XCode under OSX for regular shrink-wrapped desktop apps.

Not even Silverlight is very attractive, and that is largely due to the marketing and pricing of the tools for it. A small developer organisation can’t afford it. Flex and AIR look like serious contenders, though.

Windows, the one and only

One of the supermarkets I go to used to have self-scanning handhelds based on Linux, but recently changed to Windows CE-based scanners instead. I have no idea why. It can’t be the resilience, since I saw at least as many disabled scanners as I used to see with the old ones, if not more. One in ten, maybe.

This is how one of the new ones typically looks: Motorola, Win CE, and crashed. But the interesting thing is that Microsoft doesn’t seem to have bothered to actually adapt Win CE much to handhelds. The error messages still assume a user, a PC, and the usual set of input devices. How else do you explain the recommendation to check your network settings and to see if your network card is properly seated? I mean, the average shopper…?

And grandma Jonsson, please do a warm reset. But first click the OK button with, um, with my umbrella? A mouse? (Note: not a touchscreen at all, and no mouse dangling from the handheld scanner either.)

Oh, grandma, please contact the vendor of SPB2_CE.exe. That’s, um, who?

And just for kicks, this is how the holders look when the cover is missing. Several were missing, in fact.

Yes, I know what it looks like, but I promise, it’s made of plastic and not porcelain.

GotoMeeting runs on the Mac!

I love GotoMeeting, but until now I had to run it under Windows. Before, you could only run it as a viewer under OSX, but the latest version has a full-function Mac client. I opened GotoMeeting under OSX (Firefox), then under XP (Firefox) on the same machine in a full screen virtual window, and I got this beautiful effect as the image was recursively drawn out into the distance. It looks a bit like a very deep cinema with the rows populated by Mac icons up front and in the back.

x2c source

I finally got around to putting up the source code for x2c under GPL. No, you haven’t heard of this thing and it may not seem immediately useful, but when it is useful, it’s incredibly useful. The hardest thing is coming up with full samples of what it can do, so I’ll just outline it right here.

x2c stands for “XML to Code”, and it’s an interpreter for a little language I made with built-in commands to handle XML documents and write to plain text output files.

It started life as a tool to create VB and C# source code for data access layer classes, based on XML descriptions of an Oracle database. Another possibility is generating language tables from Excel spreadsheets, and I’ll tell you how:

Imagine an Excel spreadsheet with one sentence per row. In each column, the same sentence is written in a different language: Swedish, English, French, etc. Save the spreadsheet as an XML document. Now you can write a pretty short x2c script that reads these languages, column by column, and then produces a C++ header file with the right strings declared as constants. Great for products you want to recompile for a number of human languages.
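To make the idea concrete, here’s a rough sketch of the same transformation done directly in C# instead of x2c, reading the SpreadsheetML that Excel saves and spitting out one header per language column. All file names, symbol names, and the column layout are made up for the example:

```csharp
// Sketch only: assumes a simple sheet where every row has one cell per
// language and no cells are skipped (real SpreadsheetML can use ss:Index).
using System.IO;
using System.Linq;
using System.Xml.Linq;

class LangTableGen
{
    static void Main()
    {
        XNamespace ss = "urn:schemas-microsoft-com:office:spreadsheet";
        var rows = XDocument.Load("strings.xml")         // Excel "XML Spreadsheet" export
            .Descendants(ss + "Row")
            .Select(r => r.Elements(ss + "Cell")
                          .Select(c => (string)c.Element(ss + "Data"))
                          .ToArray())
            .ToArray();

        string[] languages = { "sv", "en", "fr", "ru" }; // one column per language
        for (int col = 0; col < languages.Length; col++)
        {
            using (var w = new StreamWriter("strings_" + languages[col] + ".h"))
            {
                for (int row = 0; row < rows.Length; row++)
                    w.WriteLine("const char* STR_{0} = \"{1}\";", row, rows[row][col]);
            }
        }
    }
}
```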

Especially for this last use, I recently adapted the text output file command in x2c to allow output to ASCII, Unicode (the default), or any codepage you have installed on the Windows system you’re running this thing on. For the language table script, codepage 1251 was used for Russian. In this case that was necessary since the C++ compiler used (Borland) couldn’t use Unicode header files. The script runs under US or Swedish XP and Vista, as long as codepage 1251 is also installed on the system, and produces the right MBCS file for Borland C++, resulting in binaries that look right to Russians running Russian versions of Windows. The complete script needed to convert the Excel spreadsheet into four different C++ header files is short and can easily be run from a build script.
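For reference, the codepage part boils down to picking the right Encoding when the output file is opened. A minimal sketch of the equivalent .NET call (file name and string constant made up):

```csharp
// Write a C++ header in codepage 1251 (Windows Cyrillic). Assumes the
// codepage is installed; Encoding.GetEncoding throws if it is not.
using System.IO;
using System.Text;

class Cp1251Demo
{
    static void Main()
    {
        using (var w = new StreamWriter("lang_ru.h", false, Encoding.GetEncoding(1251)))
        {
            // The literal is Unicode in the C# source; StreamWriter converts
            // it to single-byte 1251 on the way out, which is what a
            // non-Unicode compiler like the old Borland C++ expects.
            w.WriteLine("const char* STR_YES = \"Да\";");
        }
    }
}
```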

The source is C++ in a VS 2008 solution. Have a go at it.

In furtherance of Mac pimping: RAID

What do you give to a Mac Pro that has it all? A hardware RAID card, of course. Outside of the sheer pimping factor, there’s a business reason, too, of course. Yeah. Like less danger of losing stuff.

Before getting one, I studiously read forums, user groups, ratings, and so on, and came away with the clear impression that all of them have some problems. Among all the stories, it seemed that the Apple card had the fewest features and the most small problems, but also the fewest really huge problems. So I went with Apple, as usual. The Mac Pro RAID card is the most expensive one, by a fairly wide margin, and it frequently has battery problems. Also, it won’t allow your Mac Pro to sleep fully, and if you sleep it partially, it often hangs and won’t wake up. But since my Mac, even with a full complement of drives, isn’t exactly noisy (you can’t hear it at all beyond a couple of meters), I can live without sleep. (You can take that several ways, all of them intentional.) I do shut it down for the night, though.

The card itself is impressive. Fairly heavy, reinforced by an aluminium profile along the edge, and with that distinctive Apple hardware smell. (Wonder what that is, actually…) The big chunk of aluminium you see in the pic is the battery. In the images, the battery cable is still unconnected.

The card comes with a very good hardware installation manual and a few little tools, but to work comfortably you’ll need a really long Phillips-type screwdriver, which is not supplied. The installation involves taking out the front fan assembly to get at the iPass cable (a cable connecting the drive backplane to the motherboard, which needs to be rerouted to connect to the RAID card instead), but that’s not a big deal. How much you need to disassemble depends on whether you have an early 2008 8-core (I do) or an older Mac Pro; with the older model you also need to loosen and slide the memory cage. I didn’t have to do that.

I did go beyond the recommended procedure and took out the first SATA backplane connector as well, which made it a whole lot easier to untangle the iPass cable. Then I put the connector back in, of course. This maneuver provided for a whole lot more slack than the manual described, so I’m sure I could have put the card in any slot, not just the top slot. (The specs say that the card has to go in the top slot due to the short iPass cable.) Not that I see a reason to put the card anywhere else, but who knows?

It’s widely reported that the battery charges very slowly. In my case, it took some six hours to reach a full charge; I let it run overnight. According to the specs, it’ll go through a deplete/recharge cycle every three months. During that cycle, the write cache will be disabled, slowing the system down somewhat.

The system I installed it on had four 500 GB drives in it. I emptied drives 2 through 4, backing up to a couple of 320 GB Western Digital “My Passport” USB drives. I started the backup procedure days ahead of time, of course. It takes forever. I also backed up the system boot drive using SuperDuper to two external drives.

Apple mentions that you can migrate the system drive to the RAID array, but only if that drive has already been formatted using the RAID card. In other words, if you have a Mac Pro already running without a RAID card, you can’t migrate its system drive. You have to reformat with the card, then restore from a backup.

Reformatting and then preparing the single volume spanning the 4 × 500 GB took something like nine hours altogether. After that, I did a restore of the system drive, which took a while as well (two hours? I don’t remember). The disk utility on the Leopard install DVD lets you restore SuperDuper images, making it all very convenient. Total space is around 1.2 TB in one single volume.

Result?

Well, the system now takes much longer to boot, maybe a minute or two. According to miscellaneous group postings, that’s due to the RAID card spinning up the drives one by one. Makes sense. But once the login box comes up, things really take off. Going from logging in to a fully live system with all the menu bar items populated is a matter of seconds now. Jeez. You could see the socks fly.

Since I develop on Windows, I run anything between two and five Windows instances under Parallels and VMware Fusion on this machine. Opening a saved Windows XP instance with 1 GB of RAM takes around 7 seconds now. It takes around 4 seconds to quit it all the way until Parallels has exited. I can get two XPs and a 2000 up and fully running in less than 15 seconds, using Butler. You get the message: this machine has become unbelievably snappy.

The drawbacks?

The slow initial boot doesn’t bother me much. The lack of a sleep function bothers me a little more, since I can’t just leave everything open like I used to. I used to reboot this machine every three weeks or so. On the other hand, I used to have a heck of a time remembering where I got all those open windows from, where I started each process, and so on, so just having to do it again every day makes me a little more aware of what I’m actually doing. I see that as a good thing, in a way.

I have two 23″ Apple Cinema screens on this machine, and the only way of powering them down now without shutting down the system is to enable the power switch on the side of the screens. For these switches to work, the USB cables to the screens have to be connected, but there aren’t that many USB connectors on this system (two in front, three in back) and only the ones in the back can be reached from the pigtail cable arrangement these monitors have. I ended up connecting one of them to a backside port, and using a USB extension cable to connect the other one to the back of the first monitor. I use the other two backside USB ports to connect two external USB hubs, not wanting to chain them too much. The two USB ports up front aren’t very convenient for fixed cabling, so I leave those for the occasional USB stick.

Upgrading?

This card provides RAID level 5 (and 1 and 0 and 1+0, if I remember correctly), which doesn’t allow expansion on the fly. That is, if I want to replace the 500 GB drives with 1 TB drives or more later, I have to copy off the entire 1.2 TB volume to external storage, switch drives, initialize the new drives and volume, then restore. Uh-oh… sounds like a project to me. You can’t even take out the old drives, put them in another cabinet, and restore from there, since they can only be read using the Mac Pro RAID card now. But this problem is common to all RAID 5 implementations. Raid-X, Raid-X2, Drobo’s stuff, and other proprietary solutions do get around it in various ways, but that’s all for external NAS storage.

Pimp state?

In summary, the system looks like this now: Mac Pro early 2008, dual quad core 2.8 GHz, 16 GB RAM, 1.2 TB single volume on a Mac Pro RAID card, NVidia 8800 GT, and two 23″ Apple Cinema displays. Hm… what’s next?

Meatloaf code

“Meatloaf code” is code that has been there for a long time; nobody remembers why it’s there, but everyone still respects it and keeps writing things that way. I call it “meatloaf code” based on the following anecdote that I read in one of my books, except I can’t remember which one, so I apologize for not attributing it correctly… oh, now I do, it must have been one of the Richard Feynman books.

“My mother often made meatloaf for us kids. One day I was watching her rolling the meatloaf, cutting off both ends and placing it in the pan. I asked her why she cut off the ends, and she said that her mother had taught her to do it that way, but she didn’t know why. After a bit of arguing back and forth, we called her mother and asked, and got the same story: her mother in turn had taught her to do that, but she didn’t know why. A week later we went to visit my mother’s grandmother and asked her why she had taught her daughter to cut off the ends of the meatloaf, and she said: because my pan was too small back then. Don’t tell me you’re still cutting off the ends, are you?!”

Today I saw this: a text file that serves as a printer template begins with a tag like “[START]” and ends with a tag like “[END]”. All templates have to have them. Before sending the template to the printer, you have to strip them out. Nobody knows why, but everyone does it.

Now that is meatloaf code.

Real developers…

… can read and understand several books with contradictory or complementary content without having their heads explode.

… and thus fear the one-book-religion as much as it deserves being feared.

… can understand, appreciate, and follow more than one methodology at the same time.

… know that no single book or methodology or language or tool or great tip will ever improve quality or output by any more than a couple of percentage points.

… know that any single book or methodology or language or tool or great tip used to the exclusion of common sense will reduce quality and output by close to 100%.

… truly understand the saying “A foolish consistency is the hobgoblin of little minds”.

… know the Orders of Ignorance and apply that knowledge.

… love writing lists of what real developers do and know.

MS is blazing the trail…

…on new innovative ways to make an install fail.

After upgrading my SQL Server to 2005 SP2, I went to download the update for the Books Online using the recommended link from the SP2 installer. Downloaded the msi to my desktop, double-clicked it, and ran it. Then, after a while, got this message: “A network error occurred while attempting to read from the file:….”

Network error dialog

“Network error”? Which “network”? The one between the screen and the desktop? Searched the web, found nothing. Started thinking deep. Read that message again and again and started noticing something weird about the filename. The file in the error message is called SqlServer2K5_BOL_Sep2007[1].msi, while the file on disk is called SqlServer2K5_BOL_Sep2007.msi, no “[1]” in there. So I renamed the file accordingly.

And lo and behold, now the installer ran fine! At a certain point, while cleaning up temp files, it sits there spinning seemingly forever, but that’s ok, normal MS behaviour, so don’t panic, it gets its knickers untwisted in due time:

The point of seemingly eternal wait

The flip side of TDD

There is a problem with Test Driven Development (TDD) and security. Even though I’m a staunch proponent of TDD and do my own development (largely) that way, I notice a strong conflict between good architecture and TDD. I’ve also seen this effect mentioned in the journals lately, so I’m not alone in this.

What happens is that TDD promotes doing an early and minimal implementation, then iterating over it until you get everything to work. Fine, everyone loves that. But early “ready-to-run” code usually implies a simplistic architecture. Not necessarily, but usually, please note.

Now, you start out writing all these tests, ostensibly free from architecture and design assumptions, only specifying the actual requirements. But you aren’t as free from assumptions as you’d like, since just by writing the test in a particular place, you’ve already made an architectural decision. Once the tests are in place and your code runs fine, you’re very free to refactor and improve your code safely, in a kind of localized way, class by class, method by method. But as soon as you make serious changes to the architecture, your fine unit tests are usually blown away and have to be refactored or even rewritten. That hurts, and humans try to avoid things that hurt.
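A small made-up illustration of how a test cements an architectural decision (all names hypothetical, and a plain console check stands in for a test framework):

```csharp
// The test news up the concrete SqlOrderStore itself. The moment you move
// persistence behind an IOrderStore interface, a factory, or a service
// layer, this test has to be rewritten even though the behavior under
// test (summing line items) is unchanged.
using System;
using System.Collections.Generic;

class SqlOrderStore
{
    public List<decimal> LineItems(int orderId)
    {
        return new List<decimal> { 10m, 20m };   // stand-in for a DB query
    }
}

class OrderService
{
    private readonly SqlOrderStore store;        // concrete type, cemented by the test
    public OrderService(SqlOrderStore store) { this.store = store; }

    public decimal Total(int orderId)
    {
        decimal sum = 0;
        foreach (var item in store.LineItems(orderId)) sum += item;
        return sum;
    }
}

static class OrderServiceTests
{
    static void Main()
    {
        var service = new OrderService(new SqlOrderStore());
        Console.WriteLine(service.Total(42) == 30m ? "PASS" : "FAIL");
    }
}
```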

After a few incidents like that, you get gun-shy and tend not to change your architecture unless you really and truly have to. And there are very few instances where you really have to do that just to make your system work (which is the only criterion your stakeholders care about). So the architecture of TDD-developed systems tends to be monolithic, or at least simplistic and kinda smacked together, and guess what: there’s nothing more important than good architecture for secure systems. Forget about buffer overruns and unsafe APIs. It’s the architecture that makes your system fragile or resilient. The rest is just dust and filling. You can make systems run fine and bug free without a solid architecture, but you can never make them really robust.

Personally, I refactor my architecture anyway, over and over, but my only incentive is that I’m obsessive-compulsive, and people tend not to appreciate it (until years later, that is). Every external incentive there is tells me not to do it. Timeboxing also increases the pressure to leave the architecture unchanged.

So I think we’re in the process of discovering the Achilles heel of TDD: even if the code is great, it much too easily leads to a poor and insecure architecture. I think we need to take that seriously and try to come up with answers to this problem.

And, no, BDUF isn’t the answer.

Strongly typed constant parameters in C#

After a bit of searching, I found a way to have strongly typed constant parameters for C# functions. You know the situation: you need to pass one of a limited set of strings or chars or other values to a function, and you want to make sure somebody doesn’t just go and pass any old thing they find lying around the place. Enums are pretty good for this kind of thing, but it gets hairy if you need to translate the value to anything else, like a string or a char.
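To see why the enum route gets hairy, consider the translation chore it drags in (a made-up example):

```csharp
// With a plain enum, every conversion to the "real" value needs a switch
// like this, which has to be kept in sync with the enum by hand.
enum RecType { Deleted, Inserted }

static class RecTypeStrings
{
    public static string ToDbString(RecType t)
    {
        switch (t)
        {
            case RecType.Deleted:  return "DELETED";
            case RecType.Inserted: return "INSERTED";
            default: throw new System.ArgumentOutOfRangeException("t");
        }
    }
}
```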

Any solution also needs to pander to intellisense, making it easy to use and kinda idiot-safe (I’m talking about myself a couple of hours after defining any constant, which usually leads to me behaving like the idiot user I had a hard time envisioning just hours earlier).

I think I found a good system for doing this, and as an example, I’ll invent a function that takes a string parameter, but it has to be just the right kind of string. To do that, I first declare the constant strings in a separate module this way:
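The original post showed the code as screenshots, so here’s a sketch reconstructed from the names in the text; ConstRecType, ConstRecTypeValue, and the “DELETED” string are from the post, everything else is a guess:

```csharp
// A value type that can only be constructed in this module, plus a static
// class holding the only legal instances.
public class ConstRecTypeValue
{
    private readonly string value;
    internal ConstRecTypeValue(string value) { this.value = value; }
    public string Value { get { return value; } }
}

public static class ConstRecType
{
    public static readonly ConstRecTypeValue Deleted  = new ConstRecTypeValue("DELETED");
    public static readonly ConstRecTypeValue Inserted = new ConstRecTypeValue("INSERTED"); // hypothetical second member
}
```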

Then I write my function, the fictional “Rechandler” that takes a parameter of the ConstRecTypeValue kind. And then I write a function that calls it. Now, while writing the caller, I want intellisense to do its thing, and it does:
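Again a sketch of what the screenshots plausibly showed, with made-up bodies:

```csharp
static class RecDemo
{
    // The fictional Rechandler accepts only a ConstRecTypeValue; the caller
    // can only get one from the ConstRecType class.
    static void Rechandler(ConstRecTypeValue recType)
    {
        System.Console.WriteLine("Handling record type " + recType.Value);
    }

    static void Caller()
    {
        Rechandler(ConstRecType.Deleted);  // intellisense offers exactly the legal members
    }
}
```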

As you can see while typing the call, it obediently pops up a tooltip to tell me only a ConstRecTypeValue is accepted here. As soon as I start to type that, it recognizes the ConstRecType static class name and intellisensively lets me choose which constant member I want, which I complete the usual way.

The callee (Rechandler) then easily recovers the string hiding inside the passed value (in this case “DELETED”) and continues on its merry way.

Naturally, you can use chars, doubles or entire collections of values instead of the string value in this example and still achieve the same effect.

You can also take it one step further along the path to universality, by using a generic base class for the value type:
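Something like this, again reconstructed rather than copied from the post:

```csharp
// A generic base class that carries the wrapped value for any constant type.
public abstract class ConstValue<T>
{
    private readonly T value;
    protected ConstValue(T value) { this.value = value; }
    public T Value { get { return value; } }
}
```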

If you have this guy in reach in your project somewhere, you can now simplify the definition of the value class like so:
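With the base class in place, the value class plausibly shrinks to little more than a constructor:

```csharp
// The value class now just derives from the generic base (sketch).
public class ConstRecTypeValue : ConstValue<string>
{
    internal ConstRecTypeValue(string value) : base(value) { }
}
```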

…while everything else stays just the same.

I love it.