Category Archives: Technology

Instant-on OS: a bad solution to a legitimate problem

Laptop manufacturers have recently started marketing a new technology which will soon be available to customers (if not already available). There are a number of different approaches but the general idea is to provide the user with a BIOS which contains an OS and applications tailored for speedy access. So the user can, instead of booting into Windows or Linux as usual, boot into this “instant-on” OS (which is usually Linux). The advantage is that this “instant-on” OS boots in a few seconds, which is quite fast compared to a regular Windows or Linux boot. Depending on the specifics of the technology used there can be other advantages, like extended battery life. I think the problem is real and legitimate: if a user just wants to check something quickly on the web or read email, booting up can introduce a significant delay. However, I think the solution of providing a customized OS in the BIOS is a bad one. I really wish the engineers had spent that time making the boot process faster for regular OSes. At this moment, I’m convinced that the “instant-on” OS feature is going to be a failure. Here’s a list of reasons in no particular order:

1. Boot times can already be mitigated by avoiding the boot process in the first place. Put the laptop to sleep instead of turning it off. Sure, the idea of the “instant-on” OS is to turn off the laptop completely and rely on the ability to turn it on instantly, so in theory the user would keep the laptop off more often, which should make the “instant-on” OS more environmentally friendly. But are the trade-offs of using the “instant-on” OS worth the savings?
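As an aside, sleep does not require any special support from the manufacturer; on a typical Linux laptop it can be triggered from the command line. A minimal sketch, assuming the pm-utils package is installed (command names may differ across distributions):

```shell
# Suspend to RAM instead of shutting down (fast resume, small power draw).
sudo pm-suspend

# Suspend to disk (hibernate): slower to resume, but draws no power at all.
sudo pm-hibernate
```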

2. Missing features in the “instant-on” OS are going to be an irritant. Browse, browse, browse and bang: you can’t read a web page properly because the browser is out of date or lacks a plugin. Or someone sends an email with an important attachment which can only be read with software not in that “instant-on” environment. Users will say “screw it” and boot into their regular OS right off the bat rather than wait and see whether they’ll hit a brick wall again.

3. This is yet another environment to babysit. Yes, users of smartphones already deal with this. I myself have a Treo. The bookmarks on my Treo are not the bookmarks I have in Firefox in Linux. I’ve had to configure email access again on my Treo and then I stopped using it because the email software was just too old.

Someone somewhere will scream “but you can upgrade or install another email reader, you idiot!” Sure, but that’s my point: this “instant-on” OS thingy represents yet another environment which must be managed. Is it really worth managing that thing on top of my regular OS and my smartphone? Users of smartphones will turn to their smartphones rather than to a laptop with an “instant-on” OS when they want “instant-on” capabilities, downsides and all. People who do not already have smartphones will have to make the mental shift towards managing multiple environments. For most people, we’re talking about dealing with a different OS than the one they are used to: Linux is not Windows.

Of course, some of this babysitting can be eliminated by relying on web-based applications. For instance, a user who has Gmail as their only email application does not have to reconfigure email readers everywhere. Bookmarks can also be managed online. But there is always a minimum of configuration which must be replicated across environments.

4. How secure is this feature going to be? Are updates addressing security issues going to be released frequently? How is my data going to be stored? It is probably not going to be encrypted. Is this going to be the low-hanging fruit for people looking for sensitive data on laptops?

5. What about added costs? Nothing is free.

6. A laptop is not the best vehicle for “instant-on” computing. People need computing devices which fit into a pocket and can be used instantly. They understand that such a small size comes at a cost. There is a monetary cost. There is also a technological cost because these devices are not full-fledged computers. People also need portable full-fledged computers. But the “instant-on” capability on laptops fulfills neither need. The laptop is too big to fit into a pocket, so it does not replace the need for pocket-size computing devices. On the other hand, the “instant-on” OS does not give access to all the capabilities of the laptop. That is, a laptop with a regular OS already fulfills the need for a portable full-fledged computer; the addition of an “instant-on” OS does nothing towards fulfilling this need. What I mean to say here is that it is unlikely that customers will think that such a feature is a must.

I need to reiterate here that I think the problem is legitimate and should be solved: it would be desirable for laptops to go from off to full functionality in a matter of 2-3 seconds. However, engineers should aim to achieve this by booting the regular OS users want rather than a BIOS-embedded special environment.

Backup software for Linux… distressing

Edit: I should preface this by first saying that I think there are plenty of backup solutions for Linux. It is just that the combination of features I want does not seem to be widely available yet: if a piece of software is great at encryption, then it does not do continuous backups; if it does continuous backups, then it is not good at encryption; and so on.

I’m currently researching and testing backup solutions for Linux. I stumbled upon this post, in which the author comments:

In the last years several projects were started to provide user friendly solutions for the backup of Linux desktop machines. A year ago I already reported about SBackup. Also, the Ubuntu team developed the solution TimeVault and last but not least there is flyback which I used for several months to keep a backup of my thesis. But despite their advantages they all suffer from stalled development: all mentioned projects are effectively dead at the moment.

This is distressing. I was looking forward to TimeVault and Flyback becoming mature solutions but it seems that this won’t happen any time soon. What I’m looking for is:

  1. end-to-end encryption: with ID theft, I’m not comfortable with leaving unencrypted copies of my files around.
  2. client-initiated backups: I need to back up laptops which are not always on, so the client must initiate the backup.
  3. continuous backup (similar to what TimeVault and Flyback provide).
  4. support for a backup store located on a network.
  5. user friendly: desirable but not essential.

I realize that neither Flyback nor TimeVault offered all of this but it looked like they were going to really tackle the continuous backup problem head-on. Right now, I’m testing boxbackup and I’m also keeping an eye on duplicity. I’m not sure yet which one I want. I know that duplicity does not (yet?) support continuous backups but it has other advantages that may make up for it.
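To give an idea of why duplicity is attractive against the list above, here is a hypothetical invocation (the user, host and paths are made up for illustration): duplicity encrypts with GPG by default, is initiated from the client, and can push to a network store over SFTP.

```shell
# GPG-encrypted, client-initiated backup of a home directory to a
# network store over SFTP (host and paths are hypothetical).
# Runs are incremental; --full-if-older-than forces a fresh full
# backup once a month.
duplicity --full-if-older-than 1M \
    /home/me sftp://backupuser@backup.example.com/laptop-backup
```

What this does not give me is continuous backup: the command still has to be scheduled, e.g. from cron.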

No more DjVu for me…

Today I reviewed again the advantages and disadvantages of DjVu and PDF for scanning and archiving old books, and I’ve decided to abandon DjVu. While googling, I found a post which confirmed that the situation with DjVu is bad (the post is from Feb 2007 but I have no reason to believe that the situation is better today).

When I started using it several years ago (my oldest DjVu files date from late 2004), support was sparse, but I understood that the format was still relatively new and I hoped it would catch on. It seems, however, that the situation has not improved much since then. Yes, there is more software available for it, but most of it is not free (in any sense of the word) and there does not seem to be any kind of end-to-end support for all the features the format allows. For instance, I have no tool in Ubuntu which allows me to add comments to a DjVu file like I do with PDF files. So I’ve given up on DjVu. It used to be that the size advantages more than made up for all the other disadvantages, but PDF has evolved and gotten better at compressing images (by using more advanced compression algorithms), disk space is not the issue it used to be, and PDF is just better supported.

That’s very unfortunate because I think if the creators of DjVu had opened it fully, it would most likely have displaced PDF.
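For whoever wants to migrate an existing collection, djvulibre’s ddjvu tool can render DjVu files to PDF. A sketch (filenames are hypothetical; note that this rasterizes the pages, so any hidden text layer is lost):

```shell
# Render a single DjVu file to PDF with djvulibre's ddjvu.
ddjvu -format=pdf old-scan.djvu old-scan.pdf

# Batch-convert a directory of scans.
for f in *.djvu; do
    ddjvu -format=pdf "$f" "${f%.djvu}.pdf"
done
```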

OOHanzi 0.3 released

Change Log

  • 20080302:

    • Updated for OOHanzi 0.3: the only new functionality is the addition of an “About…” menu item.
  • 200802??:

    • Added some code to make things a bit more user friendly when a JRE is not properly installed.
    • Modified the way web browsers are launched.
    • Changed the nomenclature of menus and some functions.
    • General fixes to improve stability in Windows.


This documentation deals with version 0.3 of OOHanzi. This software is very much at the Alpha stage of its life-cycle. Expect bugs. Expect nonsensical design decisions. Expect quirks.

Imagine a paper even before it is at the draft stage, when it is still just a bunch of thoughts quickly put together. Or notes taken at a conference. At this stage, OOHanzi is very much the programmatic equivalent of that paper or those notes.


The pain of writing OO extensions

I’ve been working on some Chinese extensions for OO but at every step of the way I have to fight with obscure documentation and really strange design decisions. Here’s the latest example. Want to display an image in a dialog? We’re not talking about anything fancy here, just one single static image. There’s nothing dynamic about this. So can you just put a relative path in the dlg:src parameter to indicate where the image lives (e.g. dlg:src="../image.jpg")? No way! That would be way too simple and would violate the spirit of OO, which is “why make things simple when you can make them complicated?” Instead, you have to create two additional XML files to tell Open Office where to find the image in your extension, and then at run time you have to query Open Office to find where the image really lives and load it into your dialog. Yay! The reason for this is that you do not know ahead of time where your extension is going to reside on disk. You’d think a relative path would be rock solid precisely because it is relative to where your extension is located, but no: that does not work. You have to file extra paperwork with Open Office to declare the existence of the images.

(I searched through the dialog files bundled with Open Office to see if I could find something useful in there but what I found were paths like “file://D:/…”. Oops, I guess even the Open Office developers are having a hard time keeping their paths portable.)

And this is just the latest in a loooooooooooooooooooong series of grievances. Here’s a new motto: “you don’t know the meaning of bondage-and-discipline programming until you’ve tried to write extensions for Open Office.” Open Office is like a bureaucrat: you can’t do anything without filing multiple forms to announce what you want to do and justify it.

References: here, here and here.

OOHanzi now packaged for Ubuntu

I’ve packaged all of OOHanzi for Ubuntu and I’m using Launchpad to host the packages. Follow the link for information about the sources that must be added to your /etc/apt/sources.list to use my repository. It is also possible to use the web interface to download all 3 packages individually and install them one by one. If you add the repository to your configuration, just installing oohanzi (e.g. apt-get install oohanzi) should pull in everything needed. If you install individually, you need to install, in order:

  • java-unihan-lib
  • oounihan
  • oohanzi

Or you can issue a single dpkg -i command with all 3 listed. If the installation system complains that it cannot complete the installation, issue “apt-get -f install” after the installation.
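Putting the two routes together, the installation looks roughly like this (the repository must already be listed in /etc/apt/sources.list, and the .deb file names are only indicative):

```shell
# Route 1: through the repository; dependencies are pulled automatically.
sudo apt-get update
sudo apt-get install oohanzi

# Route 2: individually downloaded packages, installed in dependency order.
sudo dpkg -i java-unihan-lib_*.deb oounihan_*.deb oohanzi_*.deb
# If dpkg stops on unmet dependencies, let apt finish the job:
sudo apt-get -f install
```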

People who have already downloaded the files individually and who would like to switch to the Ubuntu packages should first uninstall the old OOHanzi extensions and the unihan java library.

People who want to keep abreast of developments can subscribe to an RSS feed that contains only OOHanzi announcements.

Note for people who want to edit OOHanzi’s OOBasic code in OpenOffice’s IDE: If you use the Ubuntu packages, there is no way to edit the OOBasic code. If you want to install the extension so that you can modify the code as needed, you can install the java-unihan-lib and oounihan Ubuntu packages but you must install the .oxt for oohanzi manually as described in the previous release notes.

Moving from Dotclear to WordPress: no regrets

In June of last year, I moved from Dotclear to WordPress to manage my blog. I have not regretted the move one bit. This morning I quickly took a look at the Dotclear web site and found that Dotclear 2 is still in beta. If I had stayed with Dotclear, I’d still be waiting for version 2! Boy, did I make the right decision when I decided to switch to WordPress!

An inherent problem with DRM

This post about Adobe’s DRM being unsupported on the Mac generated a lot of comments. Many commenters made the issue an “Apple vs Microsoft” one but I think they are missing the real problem.

No information structure is ever totally impervious to eventual obsolescence. Even ASCII files will become obscure one day. However, the simpler a structure is, the more chances it has to survive longer and the more chances it has to be supported in a variety of environments. An ASCII file will still be easily readable long after everybody has stopped producing readers for PDF files and it is readable on more computing platforms than PDF files are.

There are many problems with DRM, but the one I want to focus on here is the fact that it needlessly increases the complexity of the information structure. By “needlessly” I mean that the person accessing the information infected with DRM does not need the DRM: the information would be just as usable without it. Of course the proponents of DRM argue that DRM fulfills a need, namely the need of whoever owns the information. However, as a user of information, I find that the DRM is just an obstacle to my goals. But here is the fundamental problem: DRM makes the information structure it infects more fragile. Implementing the external infrastructure needed to properly process the DRM information embedded in a file is not trivial. Because of this, information structures infected with DRM are more likely to become unusable in the future than those not infected with DRM. They are also more likely to receive narrower support across diverse computing environments. That is precisely the case in the Adobe issue reported by Consumerist: Adobe’s DRM is supported in Windows but not in Mac OS.

Now, Windows users may glibly boast that at least on their platform Adobe’s DRM is supported, but they’ve got to realize that their files are more fragile than if they were not infected with DRM. They must also realize that even though they can use the information now, they still do not own it, and it is only a matter of time before their DRM-infected files become unusable.

The Impact of Dead-Tree Magazines on the Environment

Chris Anderson, editor at Wired, posted a blog entry claiming the following:

So by this analysis dead-tree magazines have a smaller net carbon footprint than web media. We cut down trees and put them in the ground. From a climate change perspective, this is a good thing.

I can’t help but read his conclusion and his post as a self-serving rationalization to a) deflect the criticism raised against paper-based publishing and b) keep the status quo in place. In other words, the message is “publishers (and magazines such as Wired) are not doing something environmentally detrimental by relying on print-media.” There are several flaws in his logic. I’m going to concentrate on only a few of them here:

  • Most of what he puts up is conjecture and a lot of it is based on vague scenarios. Some of the guesses are clearly overoptimistic. It is true that the USPS would not disappear if print magazines did not exist but he sees the impact of print media on the USPS as essentially non-existent: “we print and bind that paper into magazines, which are delivered mostly by the US Postal Service, which runs the same routes whether they’re carrying our magazines or not.” Yes, but print magazines have to be sorted and carried by the mail trucks and mail workers. I can’t believe that if magazines were eliminated the USPS would use exactly the same resources they are using now.
  • The carbon footprint is not the only environmental impact of print publishing. He focuses on the carbon footprint because he wants to talk about the climatic impact, but I think this is misleading. Other forms of pollution must also be taken into account. I doubt that print comes out ahead when the entire environmental impact is considered.
  • He points out that “trees take carbon out of the air”. But then he associates that benefit with the print industry only. Somehow, cutting down a tree and then planting another one, which is what the forestry companies should ideally do, is better than not cutting down a tree in the first place.
  • Even if the claim that “print publishing is carbon neutral whereas web publishing is detrimental” were true, the reality is that magazines like Wired and publishers are not currently doing one or the other. They are already doing both. What must be demonstrated is not that print publishing in the abstract is environmentally equal or better than web publishing in the abstract but that engaging in both print publishing and web publishing at the same time is environmentally equal or better than web publishing alone. To take one element of the production line as example, the comparison is not between printing presses on one hand and web servers on the other but between printing presses and web servers on one hand and only web servers on the other.

Then he takes a study of the Royal Institute of Technology, Stockholm as confirmation that his guesses were right. But there are problems here too:

  • Anderson says:

    [The study] compared printed newspapers to people reading those newspapers on the web, and concluded that for the same time reading (30 minutes) the printed newspaper has a lower carbon footprint.

    However, he conveniently fails to mention that print came out ahead only in the European scenario. The researchers also crunched the numbers for a Swedish scenario and found that print was worse than everything else.

  • This difference between the European and Swedish scenarios brings to mind a problem with the study: there is no formal discussion of error. It seems to me that whatever estimates the researchers came up with should have had some margin of error associated with them. There is presently no formal way to know how reliable their numbers are. They do not formally explain how the dependent variables would be affected by variations in the independent variables used in their study. If they had over- or under-estimated the energy consumption of web servers by 1%, how would this affect the results? What about all the other variables that are part of this study? This is not insignificant because some models show large variations in output even for small changes in the input values. In other words, it is possible that if their guesses are off even by a little, the results could be dramatically different. The difference between the European and Swedish scenarios suggests to me that their model is indeed fragile.
  • How does this apply to the US situation? Given the difference between the European and Swedish scenarios, I’m not keen on extrapolating the results to the US scenario.
  • Because publishers are not about to turn back the clock and go print-only, the question for me is “is it environmentally detrimental for a publisher who publishes electronically to maintain its print publishing operations in addition to the electronic operations?” I think the answer is yes. Anderson’s musings do not convince me to think otherwise.

Kudos to Dell for their recycling efforts

I typically write about Dell to point out what I perceive to be problems with the way they conduct their business. Today, however, I want to point out Dell’s involvement with recycling used computers. Ideally, Dell’s involvement should not be needed: there should be easily accessible recycling facilities everywhere and citizens should have enough environmental awareness to take action. My wife and I are lucky to live in a city with well advertised recycling facilities that accept computers. Not everyone has such luck, and some people won’t think about recycling until a big name like Dell makes a fuss about it. So kudos to Dell for facilitating the process.