Vista has no respect for my work

Just this once, I opened up Windows Vista (64-bit Ultimate) to run Ikea’s planning tool (I actually used the Swedish version of it). Ikea only makes it for Windows, but since I run Parallels and Fusion, I don’t really mind which OS an app is for. I run Vista more or less in its default state, mostly because I haven’t spent any time configuring it and don’t even know which nooks and crannies to dig through.

So I downloaded and installed the Ikea planning tool. Admirably, Vista asked for the admin credentials to do that. Great. Then I started using the Ikea tool.

After a while, I leaned back thinking about what to do next and noticed in the task bar that “Windows Update” was running. It wasn’t signalling me in any way, but I idly clicked anyway and this is what I saw:

Vista reboot warning

CRIPES!!! It’s like in those movies when you hear a sweet voice over the PA system, saying “Countdown to self-destruct… 60 seconds remaining…”. And there I was with a layout for my living room unsaved in the Ikea tool, and I didn’t even know how to save, this being the first time I’d used it. Trying to click “Postpone” or change the “Remind me in:” dropdown was futile. They’re disabled. A close button? No. Just a countdown.

I found the save function and saved the file in Ikea’s app; it took about ten seconds. Then I intentionally left the Ikea app in the foreground, just to see what kind of warning and choice I’d get from Vista when the countdown reached zero. Well, nothing. When the time expired, Vista rebooted without presenting any information or choices. This must be the most intentionally hostile action I’ve seen from an OS yet.

I run as a non-admin, of course, which may explain why Vista spits in my face (running as non-admin being, in the Windows world, a despicable thing to do, it seems), but still, I can’t say I find it an admirable way of treating the user. It’s possible I could have terminated the update through the task manager, but since I’m running as non-admin, I doubt it.

Be warned. Be afraid. Either dig into the update settings and disable that crapola before it clobbers your work, or save every minute. Also, always keep an eye out for “Windows Update” in the taskbar. Never leave your computer unattended without saving everything first.
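If you’d rather not rely on vigilance, there is reportedly a policy that tells Automatic Updates not to reboot while someone is logged on. I haven’t verified it on this particular Vista install, so treat the following as a hedged sketch rather than gospel; it needs admin rights, and the key path and value name (NoAutoRebootWithLoggedOnUsers) are the ones usually cited for XP and Vista:

    # Sketch: set the "no auto-restart with logged on users" policy for Automatic Updates.
    # Run as an administrator; assumes the documented policy applies to this Vista build.
    import winreg

    path = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, path) as key:
        winreg.SetValueEx(key, "NoAutoRebootWithLoggedOnUsers", 0, winreg.REG_DWORD, 1)

Whether Vista honours it in every case I can’t promise, so keep saving anyway.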

And this is progress?

LinkedAvoid

Browsing through the people LinkedIn recommends I link to, people it thinks I may know, I just discovered that LinkedIn not only flags people I might enjoy contacting, but often clearly flags people to avoid.

There are a few people around that I’ve had a very bad experience with. Anyone who recommends any of these guys, or is recommended by them, is automatically very suspect to me. One explanation for the recommendation is that they’re similar in character (a bad sign). Another is naïveté (not a much better sign).

In other words, LinkedIn may be a valuable tool for filtering out people you should avoid and not waste time on. Maybe even more so than the other way around. When someone you know and trust links to someone you know and mistrust, it tends to diminish the trusted person more than it enhances the mistrusted one. Recommendations in particular have this effect. Simple links may not mean that much.

So, be careful who you link to, and especially who you recommend. Or who you accept recommendations from (you can decline, you know). It may reflect badly on you.

A call to (telescopic) arms

Medical technology is evolving, and one particular area where a lot is happening is robotic surgery. By moving the surgeon a couple of feet away from the operating table and into a comfy chair, we accomplish a few goals: a relaxed surgeon, a better view using keyhole techniques, filtering of movements, etc. But it’s only a step on the way to telesurgery, and that is where the real benefits reside. Imagine, for instance, being able to get the best surgeon for a procedure regardless of location, any time of day or night. Or being able to get any surgeon at all, for that matter, to operate at the scene of an accident or in a little village somewhere. All you need is the robot on the spot and a good network connection. And that’s where we run into trouble.

The requirements on a network for telesurgery are pretty horrific, and no current network, as far as I know, is designed to fulfill any such requirements. The network needs to be absolutely secure, and by that I mean it needs to be highly resistant to outages and delays, and it must ensure data integrity at all times. It also needs to protect privacy, of course, but that’s almost an afterthought.

For us security people, telemedicine networks are a new challenge, and one I think we should spend more effort specifying and creating. For instance, we need to find ways to ensure the following characteristics:

Max latency

For instance, we know that turnaround delays above a hundred milliseconds or so make telesurgery very difficult and dangerous.
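To get a feel for how tight that budget is, here’s a little back-of-the-envelope sketch (my own illustrative numbers, nothing from a standard): light in fiber covers roughly 200 km per millisecond, so sheer distance eats into the budget before a single router, video codec, or robot actuator has had its say.

    # Rough feasibility check: how much of a ~100 ms round-trip budget does
    # propagation alone consume at various surgeon-to-patient distances?
    SPEED_IN_FIBER_KM_PER_MS = 200      # light in glass travels at roughly 2/3 of c
    BUDGET_MS = 100                     # the round-trip tolerance mentioned above

    for distance_km in (500, 2000, 8000):   # regional, continental, intercontinental
        propagation_ms = 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS   # there and back
        remaining_ms = BUDGET_MS - propagation_ms
        print(f"{distance_km:>5} km: {propagation_ms:5.1f} ms in the fiber, "
              f"{remaining_ms:5.1f} ms left for everything else")

At intercontinental distances, 80 of the 100 milliseconds are gone before we’ve done any work at all.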

Redundancy and resilience

Obviously, we don’t want the network to go AWOL during an operation. And if it does, we need to fail safe. Both the surgical instruments and the procedure as such need to fail in a safe manner.

Integrity

Data integrity is of the utmost importance. When we ask for a 3 mm incision, we don’t want it to turn into 3 meters by accident.

Authentication

We want to make sure only the right surgeon is on the line.
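As a tiny illustration of the last two points (a sketch only, not a proposal for a real protocol, and the key handling is deliberately naive), appending a keyed MAC to every command lets the robot verify both that a command arrived unaltered and that it came from someone holding the shared key:

    import hmac, hashlib, json

    SHARED_KEY = b"pre-shared key between console and robot"   # illustrative only

    def sign_command(command: dict) -> dict:
        payload = json.dumps(command, sort_keys=True)
        tag = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return {"payload": payload, "tag": tag}

    def verify_command(message: dict) -> bool:
        expected = hmac.new(SHARED_KEY, message["payload"].encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, message["tag"])

    msg = sign_command({"instrument": 2, "incision_mm": 3})
    assert verify_command(msg)                                   # intact command passes

    msg["payload"] = msg["payload"].replace('"incision_mm": 3', '"incision_mm": 3000')
    assert not verify_command(msg)                               # a 3 mm -> 3 m corruption is rejected

A real system would of course also need replay protection, key management, and much more, but the principle is that simple.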

Discussion

These are just a few issues I could think of right off the bat. The Internet Protocol, for instance, is well suited to the resilience requirement, but its lack of guaranteed delivery times is a problem. I do think we need a separate network with the desired functionality and characteristics, and it may in part be based on current protocols and infrastructure, but the problem hasn’t yet been attacked at a holistic level. I’m also sure that the current Internet structure will not suffice to carry telemedicine applications. In other words, it’s time we went over these requirements and started coming up with real solutions, or the next step in the evolution of medicine will never get started.

One pilot missed the point

I have this “letter to the editor” on my desk that is too good to throw away, yet I don’t know what to do with it. So I’ll just translate it freely from Swedish and post it here for your enjoyment.

The letter is a response to another letter to the editor from “LS” and goes like this:

“I’ve been a pilot all my professional life, largely with SAS. For the last five years I flew long-haul with the Boeing 767 and Airbus A340 to, among other places, Thailand. LS’s statement that a trip to Thailand corresponds to a release of two tons of carbon dioxide per passenger is an exaggeration of colossal magnitude!

“An A340 weighs at most 260 tons when it departs from Copenhagen, carrying 261 passengers and 11 crew. That means 272 people on board.

“Releasing two tons of carbon dioxide per person would come to 544 tons. The fuel load is about 100 tons, of which six tons remain after landing in Bangkok.

“To ensure that the discussion on environmental impact remains credible, such absurd statements as those of LS must be avoided.”

Carbon dioxide molecule

Now, I must admit that such incredibly unscientific remarks from a professional airline pilot scare me more than a little. I never saw a response to the above letter to the editor, so I assume a lot of readers swallowed it whole.

With the numbers above, two tons of carbon dioxide per passenger is entirely possible. If you don’t see how, go back to your high school chemistry, or ask any high school kid, and they’ll tell you how this works out.

To be entirely fair, the number comes to almost 1.3 tons per passenger, so LS exaggerated a bit, but I think the pilot who wrote the letter assumed it came to 94 tons divided by 272, that is, 0.35 tons per passenger. It sure looks like that from his letter.
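For the curious, here’s the back-of-the-envelope version. The trick the pilot’s letter misses is that burning fuel adds oxygen from the air: every 12 grams of carbon in the fuel leave the engine as 44 grams of carbon dioxide. A rough sketch (mine, and it treats the fuel as essentially pure carbon, which somewhat overstates things since kerosene is only about 86% carbon by mass):

    fuel_burned_t = 100 - 6        # tons of fuel consumed between Copenhagen and Bangkok
    people = 272                   # 261 passengers plus 11 crew

    # Combustion: C + O2 -> CO2, so 12 g of carbon become 44 g of CO2.
    co2_per_kg_fuel = 44 / 12      # ~3.7, with the pure-carbon simplification above

    total_co2_t = fuel_burned_t * co2_per_kg_fuel    # ~345 tons of CO2
    per_person_t = total_co2_t / people              # ~1.27 tons per person

    print(f"{total_co2_t:.0f} tons of CO2 in total, {per_person_t:.2f} tons per person")

With the more realistic carbon fraction you land closer to 1.1 tons per person, still a long way from the 0.35 tons the pilot’s reasoning implies. The mass of the oxygen is what his letter forgets.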

And, no, I’m not going to publish the pilot’s name, even though he signed his letter to the editor in full. He ought to be ashamed of himself.

PS: this isn’t an April Fools’ joke either.

Cleaner Windows

Windex spray bottle

I’m getting more and more convinced that MS will start over with a Unix-based OS (I’d call it Windex). If they’d include a virtualization system allowing them to run Windows apps on it, similar to what Fusion and Parallels are doing on the Mac, they would be able to transition, gradually replacing old Windows apps with new Windex apps. If they’d integrate the virtual Windows apps more tightly into the new OS than what Parallels or VMware can do, they’d come out ahead.

It’s the only way out and it would put them ahead of Linux and OSX again. I’m sure they’re working on it. They’d be crazy not to.

PS: not an April Fools’ joke, I mean it.

Why apps will get slower

New machines come with multicore processors. Mine has eight cores, which ought to be plenty fast. Unless an app only uses one of them, of course. Since the number of cores goes up pretty quickly with each generation, while the speed of each core remains more or less the same, and the workload of the apps goes up, the net effect for a single-threaded app is that its performance goes down with each new generation of hardware. So, please, fellow developers, get a grip and go multithreaded now.
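To make the point concrete, here’s a hedged sketch (in Python, purely for illustration) of the kind of job that parallelizes trivially: a batch of independent archives to unpack, farmed out to a pool of worker processes instead of ground through one after another. The unpacker command and file names are made up; the point is the shape of the code, not the specific tool.

    from concurrent.futures import ProcessPoolExecutor
    import subprocess

    # A dozen hypothetical archives, like the exercise files mentioned below.
    archives = [f"exercise_{i:02d}.sitx" for i in range(1, 13)]

    def expand_archive(path: str) -> str:
        # Stand-in for whatever actually unpacks one archive; "unpack-tool"
        # is a placeholder, not a real command.
        subprocess.run(["unpack-tool", path], check=True)
        return path

    if __name__ == "__main__":
        # One worker per core: all eight cores get something to do, instead of
        # one core sitting at 100% while the other seven idle.
        with ProcessPoolExecutor(max_workers=8) as pool:
            for done in pool.map(expand_archive, archives):
                print(f"finished {done}")

Each archive is independent, so no clever synchronization is needed; this is the low-hanging fruit.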

For the last half hour I’ve been watching grass grow, or rather the Mac OSX Stuffit Expander unpack exercise files from Lynda.com. These exercise files are for Final Cut Express HD and consist of 12 sitx files, each around 240 Mb. …ah, it just finished. Looking at how the CPUs are loaded during the execution of the Expander, it’s no mystery why it’s so slow:

As you can see, there’s a 100% CPU hogger walking from core to core. It’s even clearer just as the walking ends and the process is done:

Interestingly, during this half hour, Safari hung (which in itself isn’t too unusual), and Parallels, which was running two XP instances in the background that were doing nothing and that I did nothing with, crashed. Normally this machine is stability itself, apart from the occasional Safari hangup, and I’ve never before seen Parallels crash like this, so I think there’s a connection.

Now, if you look at how a righteous app like iMovie ’08 works, you’ll see something like this (while creating movies):

I could run WoW with totally normal performance even while iMovie was going full blast. No crashes or hangs either. I wouldn’t be surprised if the system is most stable when all cores have some headroom left, while 100% load of any core is destabilizing. I’m just guessing here.

Memory mix on Mac Pro

I have a hard time finding info on exactly which combinations of memory do what on the Mac Pro 8-core. As far as I understand, full memory speed is only achieved with four memory modules, since the machine can access four in parallel. I did get four 2 Gb modules (800 MHz) from OWC and I’m very happy with them, but there are also the two 1 Gb modules (also 800 MHz) that were delivered with the machine, and what do I do with those? If I plug them into the top memory board as the manual advises, will accesses to these two modules be slower than accesses to the rest of memory, or will all memory access slow down?

To find out, I did a very unscientific test. I plugged in the two 1 Gb modules, so I had a total of 10 Gb of RAM in the machine, then booted it up. Ran a few programs and then started WoW. Had my character run around in a circle in Gadgetzan and saw a consistent jerkiness. The framerate according to WoW was 29 fps.

Powered down, pulled the two 1 Gb modules, booted up again and went back into WoW. Again ran around in a circle in Gadgetzan and had 60-65 fps all the time. Gone was the jerkiness. Conclusion: it’s not worth it to plug in pairs of RAM you’ve got lying around. Stick to foursomes.

Oh, BTW, if your machine becomes sleep-challenged (reboots when you try to wake it from sleep) after swapping memory, do the pull-all-cables-wait-15-seconds-then-plug-in-again routine, plus the parameter RAM reset (cmd-opt-P-R), one or more times. At least, that fixed it for me.

Mac Vista

Since I have both Parallels and Fusion running, I found it useful to try out Vista under Fusion. According to tests I’ve seen, Vista runs better under Fusion, while Parallels’ forte is XP. No, I haven’t verified that; I’m entirely happy assuming those tests are right.

Windows Vista is kinda pretty, even though I don’t see any aqua or aero effects, or whatever they’re called. (Here’s a full-size view of how it looks while it is huffing and puffing its way through the initial 49 updates of the OS.)

But even the Mac Pro I’m running now, with its 8 Gb of RAM, is starting to page out when I’m running Vista. Admittedly, I’m running a few more things at the same time…

task bar, pretty full

… as you can see. Two XPs (1 Gb and 768 Mb respectively) are running, plus Photoshop, Transit, Fusion with Vista (1 Gb), Skype, Safari, OmniFocus, Mail, NeoOffice Writer, iTunes, Preview, and Activity Monitor. As you can see from the activity pie chart, there’s just a gig of blue and nothing green to be seen. I guess the next move will be to fill ’er up to 16 Gb of RAM, and if that’s not enough, I’ll have to go to 32 Gb, which this machine is supposed to handle. I’m sooo spoiled…

Just like on OSX, I now created a separate admin account and changed my regular account from admin status to “standard”. I’m very curious to see if it’s workable. While running as admin, I got a deluge of “Please approve this action” dialog boxes. Let’s see what happens when I’m not an admin and I try to install OpenOffice.

First, it blocked the download and warned me about the dangers of the internet, but it was easy enough to approve and proceed. Then it warned me about the installation program being unsigned, fair enough. Then it asked me for an admin logon to install the program (perfect!). And then the installation threw up an error box:

Failed Open Office installation

Did a quick Google and didn’t find anything about this error (“Wrapper.CreateFile failed with error 123”). Interestingly, after clicking “OK”, the installation proceeded, where I would have expected it to abort, rewarding me with:

Installation Completed dialog box

And, hey, it seems to work! Ok, so far so good under a limited user account. That is definitely good news.

Next little test, let’s open Task Manager:

Task manager

No problem, up it comes. Fine. Now let’s click “Resource Monitor”, which I know only admins can use:

Needs permission

Darn it, I’m bloody impressed! Instead of having to do that cumbersome “RunAs…” stuff, Vista does exactly what OSX does: it asks for admin credentials (it even put in the right admin user name, which I rubbed out with a bit of yellow mud just above the password prompt). And up comes the Resource Monitor.

Phew, I never thought I’d have to say I like Vista after this admittedly very limited first look, given what I’ve heard about it. I think I’ll keep exploring it.

Now, let’s look at what Vista sees as the underlying machine, that is what Fusion pretends to be:

Basic info about computer

It sees a dual 64-bit processor and 1 Gb of RAM. Nice. If you look at the very bottom, you see a new MS policy of automatically activating the OS instead of letting the user do it. Normally I wouldn’t care, but if you’re running MSDN copies you should be aware of this, since you often don’t want to waste activations on every installation you do. Vista isn’t going to wait for you to approve, if I interpret that statement correctly. (I’ll wait and see whether it goes ahead without asking after another three days, and will keep you updated.)

So, what about the “Windows Experience Index”? It says “1.0” here, can’t be lower than that. Hm. Better check out the details:

Windows Experience details

Ah, now I see: the overall rating is equal to the lowest rating, which I got for gaming graphics. I have enabled “3D graphics” but I get no Aero. I think Fusion doesn’t support Aero yet, but I couldn’t find anything on the web to confirm that, so I may be wrong. Apart from that, I find the above scores pretty impressive. Vista, at first blush, seems usable on this machine, not too sluggish, but then nothing is sluggish on this setup, really.

Parallels or Fusion?

Way back when, I used VMware’s desktop product on my Dell for development. When I switched to the Mac, I naturally selected Parallels Desktop to let me run Windows instances under OSX. A couple of days ago I was offered a review license for VMware Fusion, so I tried it out to see if it’s better than Parallels, even though I actually have very few complaints about Parallels.

So what I’m comparing here is Parallels Desktop 3.0 for Mac and VMware Fusion 1.1.1. My comparison isn’t in any way exhaustive, just a first impression after a few days of use and for a fairly limited application, namely software development and backups and stuff.

Parallels about
VMware Fusion about

The machine I’m running these guys on is my brand new Mac Pro with dual quad-core Xeons at 2.8 GHz and 8 Gb of RAM. On this machine it’s hard to have any software perform poorly, so I wouldn’t be able to detect much in the way of inefficiencies, if there are any. Nice for me, but it hobbles my advice somewhat. For details on the machine, see my earlier entries on “Mac XP”.

I’m running an old Win 2000 and two instances of Windows XP under Parallels. One XP is equipped with MS SQL Server Developer Edition and Visual Studio 2005, while the other one harbours Visual Studio 2008. They have 1 Gb and 768 Mb of RAM respectively allocated in Parallels. The Win 2000 has just 512 Mb, but I don’t use that one much.

For this comparison, I created a third Win XP and gave it 512 Mb of RAM. I plan on using this VM as a “utility VM”, containing stuff like backup software. The first thing I installed in it was Retrospect 7.5 for Windows, which came bundled with my Netgear ReadyNAS+ (5 clients included), and then I purchased and added a further 5 client licenses. So it can now back up 10 clients, a mix of Mac OSX and Windows clients.

It turns out that Fusion is a pretty good choice for this “utility” VM, since it allows me to allocate 2 virtual CPUs. Retrospect does exploit multiple CPUs if you have them, so this allows Retrospect to use two of the eight cores I have in the machine. Parallels would limit Retrospect to just one core.

Retrospect

Running Retrospect in one of the other XP VMs would make that VM go very slowly. All its activities would be limited to one core on the Mac, and I would have a nasty time working in Visual Studio in the same VM at the same time. Having Retrospect run on two cores in its own VM allows me to work in the other VMs without noticing any slowdown at all. It’s great! In all fairness, running Retrospect in a Parallels VM of its own would have had the exact same result, except Retrospect itself would have run slightly slower.

I usually write quite a bit of multithreaded code, which makes it practically necessary to run on a multiprocessor to flush out subtle bugs. That would seem to mandate Fusion. But I don’t know how faithfully it emulates two CPUs. Does it interrupt right in the middle of memory accesses like a true multiprocessor machine would, or is it more civilized than that? The lack of documentation about this is a problem, just as it is for hyperthreaded CPUs. In both cases, it’s very unclear how closely they mimic a true multiprocessor machine.
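As an aside, here’s a toy sketch (nothing to do with either product, just an illustration of the kind of subtle bug I mean): two threads doing an unsynchronized read-modify-write on a shared counter. On a forgiving scheduler this can pass for a long time; the more genuinely concurrent the execution, the sooner the lost updates show up.

    import threading
    import time

    counter = 0   # shared state, deliberately unprotected

    def work(iterations: int) -> None:
        global counter
        for _ in range(iterations):
            value = counter        # read
            time.sleep(0)          # invite a thread switch between read and write
            counter = value + 1    # write back; increments from the other thread can be lost

    threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # With proper locking this would print 200000; without it, usually far less.
    print(counter)

This is exactly the sort of thing I’d want a virtual dual CPU to expose as reliably as real hardware does.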

As far as simply running most software goes, I think both these products do a grand job. I’ve not encountered any problems with either, but remember I’ve done much more on Parallels than on Fusion. The network setups are also practically identical, with choices for shared, bridged, and host-only networking. The intricate and flexible network configurations we see in VMware’s Windows product aren’t found in Fusion (yet). What’s also lacking is decent snapshot management in Fusion. You can take snapshots and revert, but there’s no management of multiple snapshots like in the Windows product or in Parallels.

Miserable keyboard handling

Now for my real beef with both of these products: the keyboard. Obviously, most keystrokes should be passed on to the virtual machine, some should be converted, and some intercepted and sent to the host OS. Both these products have made a mess of this even though it ought to be simple to get right.

In Parallels, the command and control keys swap nicely on the left side of the keyboard while remaining unswapped on the right side. Kinda confusing, but I don’t mind getting used to it. But the function keys are a real problem. Sometimes I succeed in getting them through to the VM using different combinations of command and control or something, and then I can’t remember exactly what I did. It also varies according to exactly which function keys we’re talking about. The function keys that have predefined uses for Dashboard, Exposé, and the like behave differently from other function keys. Parallels does have a menu where one can select magic key combinations to send to the VM, but it would have been great to have these keystrokes pass right through under their own steam, so to speak. Having to select function keys from a menu is fine once or twice, but gets old real quick. This is how it looks under the “Actions” menu in Parallels:

The Actions menu

Under VMware, there’s a setting in the “Preferences”, which means it’s the same for all VMs:

Preferences in VMware

As you can see, there’s a single checkbox, “Enable Mac OS keyboard shortcuts”, and it works admirably. A little too admirably, in fact. Once you deselect it, all keystrokes go to the VM, including command-tab. Now, there’s no point in passing command-tab to the VM, since Windows doesn’t know what a command key is, but it does make sure I can’t easily switch between apps on the Mac. This is ridiculous, since Windows reacts to alt-tab, so they could just as well have left command-tab for OSX. The new Mac keyboards also have a special “Fn” key where the useless “Help” key used to be. That ought to be exploited by Parallels and Fusion somehow, but isn’t.

I don’t know which of the two, Parallels or VMware, got the key settings more wrong; it’s a close call. To me it’s obvious they really should spend a little effort getting this right, since it’s the one thing that makes working with VMs hard; everything else is almost perfect. An option allowing all keystrokes to pass to the VM except command-tab, and possibly control-space (which I use for Quicksilver), would be absolutely great. Allow the user to freely define another couple of magic combinations that should not pass to the VM, and you’re set.

Converting: defeated by Mickeysoft

Both products are able to convert a VM from the other product to their own format. Both of them take forever to do it, but ultimately seem to do a good job of it. But Windows isn’t happy about it, since it sees the conversion as a move to another machine and then insists on a new activation. Just to be a real PITA, Windows only gives you three days to reactivate if it was already activated. Considering that you get 60 days to activate (for the MSDN version of XP), you are actually severely punished for having activated your XP before the conversion in the first place. How very nice of MS. In fact, WGA being what it is, it’s not a good idea to convert Windows installations from Parallels to VMware or vice versa at all. If you can avoid activating altogether, that’s even better, but it limits you to 60 days per setup.

Dock difference

Parallels shows an actual live image of the VM’s screen in the dock and in the task switcher, so even though a VM is hidden behind a stack of other apps, I can keep an eye on the dock icon and see if some compile has finished or a dialog box is waiting for input:

Parallels dock icon

Fusion, on the other hand, just shows a Fusion logo, missing an opportunity to display something useful:

Fusion dock icon

Conclusion

My current conclusion is that both products are great and work just fine. Both need serious work on their keyboard handling. Fusion has dual virtual CPUs, a major advantage, especially on a multicore machine. Parallels has better snapshot handling and really useful dock icons.