2009-12-30

The Software Development Mismatch Problem

How often do you feel that school did not prepare you well for real life?

It often happens to me that I discover areas where, in several subjects, I did not learn the things that really matter. For my career as a software developer, however, I feel I got a quite good start. Later, at university, I experienced a bigger gap between what I was taught and what I found useful. That said, I must admit that I am a practically oriented person: I do not deal much with theory, although I know it can be useful at times.

Still, I have thought several times that there is a mismatch problem (see Malcolm Gladwell on the mismatch problem) in the software development field as a whole.

Recently I listened to two podcast episodes that confirm my suspicion that in many cases the results of software development do not fit customer needs or economic concerns. I found them on se-radio.net: "Episode 149: Difference between Software Engineering and Computer Science with Chuck Connell" and "Episode 150: Software Craftsmanship with Bob Martin".

I agree with most of the content - for instance about the fact that computer science is way different from software engineering - see also the article about Software Engineering <> Computer Science from the show notes of the podcast - or more articles about "Beautiful Software".

In my opinion (also):
  • A lot of time is wasted later when the software architecture is bad and the code is not kept clean.
  • The developer needs to choose his primary tools wisely and master them - the programming language(s) of choice and the IDE.
  • The focus must be on the code, not on the use of certain tools or on too many documents.
I also find it very useful where Chuck Connell draws the dividing line between software engineering and computer science (paraphrased from my understanding): as soon as people are involved in a crucial way (compare a custom development project for a desktop or web application with developing or analyzing an encryption algorithm), you are on the software engineering side.

What I can see from all this is as a personal conclusion:
  1. It is extremely important to think well and communicate well with the customer before creating the architecture of the software.
  2. Don't forget the importance of creating clean, readable and well-documented code (code comments are more important than other documentation created alongside your implementation, as the latter tends to become outdated more easily).
  3. Experience is an important factor. An experienced developer is worth a lot!
Related posts: About agile software development, The good, the bad and the ugly.

2009-12-20

Going Linux

It has now been more than four months since I decided to switch completely to Linux (not only at home, where I have been using Linux since about 2006). And for about three months now I have been running only Linux (Ubuntu in my case) at my workplace. This has been a bit more of a challenge than just using Linux at home, because at work I have quite a few more requirements than "just" e-mail, web surfing, writing some letters, scanning, and storing and printing my photos.

One thing is that I need more applications: mind mapping, taking screenshots (with annotations), creating videos (showing bugs or how to configure particular pieces of software), creating virtual machines (for legacy Windows development and for testing software in different environments), software development, writing concepts, investigating files (diff viewing and hex editing), remote support and more.

A second thing is integrating with the environment in the office and getting remote access to customers, some of whom use VPN and some other applications like TeamViewer (for which unfortunately no Linux version is available yet).

I started with Ubuntu 9.04 Jaunty Jackalope and upgraded to 9.10 Karmic Koala right after its release (so I have been running that for nearly two months now). My notebook is a Dell Latitude E5500.

The installation and the first impressions were much better than with Windows 7, for example (where already the first use of IE or the first software updates were full of annoyances). Furthermore, a lot of important things were already there. However, I am not exactly a standard user, and there were several things I wanted to have different from the default installation...

Additional applications:
Many of the additional applications I wanted were just a few clicks away using the Synaptic package manager (this is one of the first additional installs I recommend). This is far more efficient than on Windows, where you first have to go to Google and then to the appropriate websites, searching for a place to download additional core tools (like a better zip tool). So getting up and running with the most important apps is much faster on Linux (or at least on Ubuntu in my case).

And last but not least, your software is downloaded from trusted sources, with all packages PGP-signed (something a "normal" Windows user does not even think about while installing all kinds of stuff from whatever source).

Regarding the applications, the biggest issue is knowing your options and choosing an application that fits your needs. For many needs there are at least two options (often more), but in some areas it is difficult to find even one program that fits. For example, I searched a long time for a screenshot tool that suits me. I need to take a lot of screenshots with annotations for documentation or as part of user support, so a very efficient tool is essential for me. Finally I found Shutter, which fits my needs. It was a longer search because it is not in the standard repositories - to get it you need to add an additional repository from shutter-project.org. So although many applications are already in the repositories, for some more specialized applications you might need to add further, optional repositories. But once the repositories are added, the applications are seamlessly integrated into your package management (and hence updated along with all other updates).
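As an illustration, here is a minimal sketch of how adding such an extra repository typically looks on Ubuntu 9.10 (the PPA name is given from memory and may differ, so check the instructions on shutter-project.org; on 9.04 you would add the deb line to your software sources manually):

# add the project's repository, refresh the package lists and install Shutter
sudo add-apt-repository ppa:shutter/ppa
sudo apt-get update
sudo apt-get install shutter

From then on, Shutter is updated together with everything else.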

There remain a few tools that you might want to install and update manually, because you are contributing by testing or you need the very newest versions. In my case NetBeans is such an application: it is currently being developed really fast, and since I have been on Ubuntu I have already done the second upgrade - so I am on 6.8 now, while the distribution version would "still" be 6.7.1. For NetBeans I do it this way because I want to decide separately when to switch to a new version.


Reliability and stability:
I can now work the whole day (8 hours and many more) without noticing the system getting slower and slower. Even Firefox runs faster and more smoothly under Linux (I use pretty much the same plugin set as under Windows). Some software I download and install does not run well, but that usually only happens when I install older, poorly maintained software that does not seem to be widely used. In such cases the software can be uninstalled without leaving clutter on the system. So in general I do not fear that my system will slowly get cluttered and broken as time passes. That makes it no big deal to try out a lot of different applications while looking for a particular feature set.

Even though I have already done a major distribution upgrade, I still have the impression of having a clean system. A Windows PC, on the other hand, I would never upgrade - I would always do a clean install of the OS. That said, I have lately read that many people recommend a clean install for Linux too. I noticed that the upgrade did not migrate my file system to ext4, so I am still on ext3. I will probably do a clean install next time as well, but before that I will test how quickly I can transfer my data, configuration and the same set of applications.

Another big advantage is that under Linux there is no fragmentation slowly building up as time passes - an issue that slows down every Windows PC after a while, so that from time to time you have to defragment the machine overnight (because it is a long process).

But the most important thing is: in general, things either work or they don't. So far I have not experienced issues of the "sometimes it works, sometimes it doesn't" kind.
Either there is a bug, or you are missing some library (which might not have been correctly added to the dependencies). But applications do not interfere with each other in the sense that installing one thing breaks another (as known from other so-called operating systems) - at least I have never faced such an issue.

I have had a single exception so far - recently I experienced an issue of the "sometimes it does not work" type with my mobile Internet stick, which I sometimes had to unplug and plug in again. That happened on Ubuntu 9.04 Jaunty Jackalope. After upgrading, it only worked if it was already plugged in when the machine started up. And this is really the only thing I can complain about regarding reliability: this was/is a problem with some Huawei modems no longer working on Ubuntu 9.10 Karmic Koala - a Linux kernel problem in general (see bugs #446146 and #449394). For me (with an E160G that shows up in lsusb as E220/E270 - at least in my case), the problem was solved by a firmware update of the Huawei modem, which I found strange, as it had been working on Ubuntu 9.04 before. But anyway, this is a serious problem that should not happen. There are many people for whom the mobile Internet connection is the only one they have, and by breaking it they cannot even get their updates (and a probable fix for this problem) any more. That said, I have also seen Windows machines where mobile Internet was not working either.
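If you are hit by something like this, the first useful piece of information (it is also what the bug reports above ask for) is how the stick shows up on the USB bus:

# list the USB devices and filter for the modem (the vendor name here is just an example)
lsusb | grep -i huawei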

But this and some other minor issues I initially had with Karmic Koala were solved by the second update wave about two weeks after the release. I have never seen a first service pack for Windows within two weeks of a release!


General architecture:
I consider the Linux operating system simple and straightforward. Although there are plenty of modules, services and corresponding configuration files, there are rules about what is saved where, and most configuration files (if not all) are stored in a human-readable text format.

The driver system also seems much better to me on Linux (from the little I know about it) - but this might just be my personal impression.

On Linux the permission system is less flexible, but it is clearer. I had so many permission issues on Windows where searching for the missing permission was painful work. The downside is that for some requirements it might be a little more difficult to get the effect you want if you need highly sophisticated permissions. For that there is the "acl" package with its commands getfacl and setfacl to manage extended permissions (if the core file permission features are not enough for you).
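A small sketch of what I mean (the user and file names are made up, and on ext3 the file system may need to be mounted with the acl option):

sudo apt-get install acl
# grant one additional user read/write access on top of the normal owner/group/other bits
setfacl -m u:alice:rw shared-notes.txt
# show the resulting extended permissions
getfacl shared-notes.txt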

Linux was purely or mostly command line for a long time, and what that still means today is: everything can be done on the command line. As a consequence, integration with the OS can be achieved by calling system commands - integrating against system APIs is not necessarily required.
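A tiny illustration of that point - any script or application can integrate with the system simply by calling ordinary commands and reading their output:

# read how full the /home partition is and report it (this could just as well feed a desktop notification)
usage=$(df -hP /home | awk 'NR==2 {print $5}')
echo "Home partition usage: $usage"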


Hardware compatibility:
This is really the biggest issue: not all hardware is supported. I have tested Linux (different distributions) on machines where I got only a black screen. If you are thinking of moving to Linux, you should take care to buy compatible hardware (and gadgets).
Before blaming Linux for that, think for a moment: Microsoft "outsources" most driver development to the vendors, and Apple has a very small set of hardware pieces they need to support (what you get there is more vendor lock-in). On Linux, a large part of the drivers is developed by the community and by the smaller set of vendors that have noticed people's rising interest in Linux.


Software compatibility:
On Linux the focus is on open standards and open formats. This is a big advantage if you think of long-term archiving of your data or long-term compatibility of software components. I have files created with applications that I can no longer read with current software, or where the newer versions that still run on recent Windows releases have become too expensive for me. One thing, though: there are several multimedia formats with legal restrictions glued onto them. To make them work on Linux, some additional packages must be installed manually on most distributions (Linux Mint, which is Ubuntu-based, ships them by default).


Overall conclusion:
I am very happy with my decision and cannot imagine switching back.

Using Ubuntu at work is clearly a bigger challenge than just using it at home, in particular if the company infrastructure does not pay attention to compatibility. I was lucky, because at the company where I work the network printers in use are all HP, and HP printers are generally well supported under Linux - for your company you might want to have a look at linuxprinting.org.

The biggest issues I had were with the various remote desktop support tools used by our customers. I had not noticed before how many remote desktop support products have no Linux version available. One of the most widely used - TeamViewer - did not work for me even under Wine, although people in forums claimed it should. I found Yuuguu (it is like Skype for screen sharing), which is free and available for different operating systems, and I now use it wherever possible.

There are a few apps left (like TeamViewer or some internally developed applications) that only run (well) on Windows. For those I have set up a VirtualBox VM running Windows (with a license I got from the company).

A very important consideration: if you would like to switch to Linux as a company or as an advanced Windows user, this is a long-term project. In my case I had been keeping an eye on alternatives to Windows since - let's say - about 2005. When I noticed how quickly Linux improved from release to release (at that time mostly looking at Fedora), I started to prefer applications that are available on Windows AND on Linux whenever I searched for new applications to cover my IT needs. So when I finally switched, most of the applications I now use on Linux I had already been using on Windows for a longer time (like Firefox, Thunderbird, OpenOffice, VLC, VirtualBox, FreeMind, MySQL and so on). For a company that heavily relies on some applications that are chained to Windows, one option is always to use a Windows terminal server and let users do their Windows work that way (which is common practice for companies switching over). In general, a company should favor applications that work on both platforms, because even if it still favors Windows under current conditions, that might change. So take the safe route and stay platform independent.

BTW: The title of this post was borrowed from goinglinux.com - a podcast I can highly recommend for beginners and even for more advanced users. I am a subscribed listener and enjoy every episode (only the Computer America episodes contain a lot of interruptions - even with the commercials stripped out - so I often skip them).

Related posts: Why Linux?, Why I switched to Ubuntu, Cross platform solutions, About Dell, The operating system, The Open Source idea, New year's IT resolutions, Software on speed, Ubuntu 10.04 experiences, Small Business Boom, Ubuntu compatible hardware, The Dell Latitude 2110 and Ubuntu 10.04.1, User lock down, The community, Popular Ubuntu desktop myths, Why companies do not use Linux on the desktop, Distribution choice.

2009-11-14

About agile software development

Somehow it seems to me that agile software development and extreme programming are becoming a hype (if they are not one already). I am no friend of long political and organizational disputes, and I like flexible cooperation and collaboration between developers and customers. I make sure that all the people who are (or will be) directly affected by the project during development, or by the final outcome, actually meet and communicate.

So from that, it looks as if agile software development and/or extreme programming is something for me. But I have serious concerns about those techniques, or at least about how some people understand agile software development or extreme programming (which in reality are new names for old ideas).

The reason for my concerns is that for many people this is the go-ahead to start development without thinking much about the software design and possible future goals. Seeing the goals and purpose of a piece of software as a "moving target" is also a commonly adopted attitude.

And this even though a bad design is the biggest obstacle later on when enhancing the product!

When evaluating products, the design and the underlying technology are often not taken into consideration, although they are the core that usually cannot be changed during the whole life cycle of the product. This is similar to a vendor lock-in - it is a technology lock-in. All technologies have their advantages and their flaws, and once you have decided to go with a particular product, you have to live with the corresponding drawbacks.

As I understand them, agile development and "extreme programming" (the longer I think about it, the more I dislike the phrase "extreme programming") are not (necessarily) "programming without thinking", but somehow people see them as the opposite of "old-fashioned", big, documentation-heavy project management. In times of economic crisis, as companies try to shorten their time to market and reduce their costs, projects are narrowed down to just coding, because that is the one activity that most obviously cannot be omitted...

Related post: Web vs Thick client.

2009-11-05

Cross-platform solutions

Recently, when searching the Internet for platform-independent solutions for particular requirements, I got annoyed and angry about what people write on their websites. It seems that most of them do not understand what cross-platform means.

In many cases, software vendors who offer a tool or solution for Windows and Mac consider that sufficient to call it cross-platform. My opinion: that is pretty poor grounds for labeling it "cross-platform". If you look at Wikipedia, for instance, you will see not only Windows and Mac but also all kinds of Linuxes (and there really are a lot - see distrowatch.com), including the widely used Ubuntu/Debian and Red Hat/Fedora, as well as Solaris, BSD and more (well, we will skip DOS, Amiga and the like here ;-) ).

So: cross-platform is NOT just Windows and Mac! At least the major Linux distributions should also be included in the list of supported platforms!

I simply cannot consider software that only runs on Windows and Mac to be cross-platform... Those vendors should write "Blablabla - for Windows AND Mac!" instead of "Blablabla - cross-platform solution!".

Related posts: Why Linux?, Why I switched to Ubuntu, Going Linux, New year's IT resolutions.

Virus scanning on Ubuntu

You might need to download files on your Ubuntu machine (or any Linux running GNOME) that are intended for Windows machines, or you might be afraid of catching one of the few Linux viruses (yes, there are some).

There is the free ClamAV software. On Ubuntu (at least 9.04 Jaunty and 9.10 Karmic) the following packages are available:
clamav
anti-virus utility for Unix - command-line interface

clamav-base
anti-virus utility for Unix - base package

clamav-daemon
anti-virus utility for Unix - scanner daemon

clamav-freshclam
anti-virus utility for Unix - virus database update utility

clamtk
graphical front-end for ClamAV

klamav
KDE frontend for ClamAV

nautilus-clamscan
Antivirus scanning for Nautilus

There is one problem with clamtk (the GNOME GUI tool for scanning) - at least under Karmic, used together with Thunderbird: Thunderbird tries to open all files with clamtk, and if you change that, the change is not remembered. And this although I do not want to scan everything automatically (not even text files).

And there is a problem with nautilus-clamscan (at least under Karmic): it hangs in an infinite scan of the first file.

So whatever I tried, something was annoying. I solved it the following way:
sudo apt-get remove clamtk
sudo apt-get install nautilus-actions (if not already installed)
Then I created a new nautilus action (via System->Preferences->Nautilus Actions Configuration) with the following command (enabled for files and folders and multiple selections):
path: gnome-terminal
parameters: -x /opt/clamscan.sh %M
And I created /opt/clamscan.sh with the following content (set to be readable and executable for everyone):
#!/bin/bash
# scan whatever Nautilus passed on the command line...
clamscan "$@"
# ...then wait for Enter so the terminal window stays open until the result has been read
read line

Last but not least:
chmod 755 /opt/clamscan.sh


Now I can scan files with ClamAV on demand using the context menu in Nautilus.
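The same thing of course also works without Nautilus, for instance to check a download folder from a terminal (the clamav-freshclam package normally keeps the signatures up to date automatically):

# scan a folder recursively
clamscan -r ~/Downloads
# update the virus signatures manually if needed
sudo freshclam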

Related posts: Stationary under Ubuntu, Change hostname on Ubuntu.

2009-10-29

Change hostname on Ubuntu

In business it is a common pattern to install one user workstation that becomes the template for others.

After cloning such a template machine (e.g. using Clonezilla), the most important thing to do after starting the clone is to change the hostname - and here is how to do it:

There are two files to edit - and you should edit them in this order - changing just the hostname (it should occur three times in total):

sudo vim /etc/hosts
127.0.0.1 nowthenewhostname localhost.localdomain localhost
127.0.1.1 nowthenewhostname

sudo vim /etc/hostname
nowthenewhostname

For those not familiar with vim: just press "i" to enter edit mode; when finished, press "Esc", type ":x" and press "Enter".

And of course you can use gedit instead of vim.
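If you clone machines more often, the same change can be scripted; here is just a small sketch (it assumes the old name is still the one the running system reports):

#!/bin/bash
# usage: sudo ./set-hostname.sh nowthenewhostname
OLD=$(hostname)
NEW="$1"
sed -i "s/$OLD/$NEW/g" /etc/hosts    # replaces all occurrences of the old name
echo "$NEW" > /etc/hostname
hostname "$NEW"                      # apply immediately, without a reboot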

See also more information on Cloning Ubuntu to Different Computer Hardware.

Related posts: Virus scanning on Ubuntu, Stationary under Ubuntu.

2009-10-27

Damn Fast Linux

I was visiting my parents at the weekend (as usual, once a week), and as I had stumbled across some small Linux editions dedicated to running on older computers, I gave them a try, since I have some older computers there. I even have a 486 with only a small amount of RAM. I had not known that there are even distributions specialized for the 486 architecture.

Unfortunately, trying Linux on the 486 was not successful, because... because the BIOS did not support the CD drive in the boot order. I did not expect that. OK, it was the era of 5.25-inch and 3.5-inch disks...

Then I tried another machine that was a lot younger - some Pentium with XP installed, which was already annoyingly slow just to start up. I tried Puppy Linux on it and played around a little with the live CD.

IT WAS SO DAMN FAST - even though it was running from CD!

So my hint: before you think about buying a new computer just because yours is old and slow - and you do not do much more than e-mail, browsing and writing some documents - really consider giving Linux a try!

A list of Linux distributions that should also support older architectures can be found at DistroWatch.com. I would start with Puppy Linux, MEPIS, Absolute Linux or Damn Small Linux, for example.

Related posts: Why Linux?, Why I switched to Ubuntu, The sad thing about Linux, The operating system.

2009-10-06

Stationary under Ubuntu

Nowadays, instead of printing business documents and sending them by mail, in many cases people send a PDF by e-mail. What is missing then is usually the stationery paper from the printer tray, on which the company headers and footers are pre-printed.

If you want a nice, easy way to keep using your existing word processor templates and add a stationery PDF afterwards - here is what you can do:

Preparations:
sudo apt-get install nautilus-actions
sudo apt-get install pdftk

Then create a new action as:
Path: /bin/sh
Parameters: /home/username/stationary.sh '%d/%f'

And here is the content of the script stationary.sh (do not forget to make it executable):
#!/bin/sh
# put the stationery PDF behind the content of the selected document...
pdftk "$1" background /home/username/stationary.pdf output "$1.new.pdf"
# ...and replace the original file only if the merged output was actually created
if [ -f "$1.new.pdf" ]
then
rm -f "$1"
mv "$1.new.pdf" "$1"
fi

All you need now is to save your stationery as a PDF to /home/username/stationary.pdf.
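By the way, the same merge can also be done directly on the command line, without the Nautilus action (the file names are just examples):

pdftk letter.pdf background /home/username/stationary.pdf output letter-with-letterhead.pdf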

Related posts: Document file format, Virus scanning on Ubuntu.

2009-10-05

Why Linux?

I do not consider it a good move that the FSF started bashing Windows with Windows7Sins. Apart from the fact that this is bad behavior, they should have asked people why they keep using Windows.
Here are some arguments I hear from people who do not want to switch operating systems:
  1. "I don't want to invest time into learning a new system. Computers are already a big time waster."
    Learning something new is additional effort. Although it might be an investment with a short ROI, it depends on how people use the computer (and how much money they have to pay others to fix their problems).
  2. "I heard that it is pretty common to experience incompatibilities with the hardware, and I worry about my photo printer, scanner or mobile device not working with it."
    Yes, this is indeed true. Big steps forward have been made in recent years, but there are still plenty of gadgets that do not work well. If you are planning to buy new hardware, you should prefer items that are known to work with Linux - just in case you might want to switch in the future. Anyway, with some manual work I was able to get my Canon photo printer and my Canon scanner to work with Fedora back in 2006 already. Nowadays it should work better.
  3. "I need to share a lot of documents in Word, Photoshop and other proprietary formats with people I collaborate with. I worry about incompatibilities."
    This is indeed another thing to take into consideration. In the best case it is just an additional click, but in the worst case you cannot read or write the needed format any more. Although some Windows applications run fine under Wine (a Windows compatibility layer), many others do not. Although people should prefer open standards, this is not always possible.
  4. "I am a gamer and the coolest games run on Windows."
    Also true: although there are many free games for Linux, vendors produce new games almost exclusively for Windows. Dual boot is an option, but it means maintaining two systems - and you still pay for the Windows license.
  5. "I have plenty of friends around who can help me when I have trouble with my Windows machine, but I have no or very few options in case I then have problems with the Linux box."
    This is also true for many people, as there are simply still more people around who know Windows than who know Linux. It does not really make sense to try to convince a person who worries about issues that really exist.
I think that most people who switched to Linux did so because of suffering, not for philosophical or political reasons. I personally suffered - so here are my top reasons for switching to Linux:
  1. Stability & Reliability
    I had several key experiences with Windows within a short time frame, like these: a machine starting to behave strangely, being slow and hanging, with nothing in the event logs and no indicator of whether it was the hard drive, the motherboard or "just" a software problem; burned CDs that were not readable on a different Windows machine (but Linux could read them); blue screens on startup that turned out to be a problem with the CD drive (hell, I booted from the hard disk, so why can't it do without the CD drive?); files silently getting corrupted over time - I could go on, and that is without mentioning the many issues I had trying to get reliable (restorable) backups/images of the OS.

    There are also some stability issues on Linux - I have already had some sudden restarts of X - but I am simply less in panic mode on Linux. In case of emergency, the file system is remounted read-only to prevent further corruption. Apart from that: Firefox runs more smoothly and stably on Linux, the memory usage of the complete system is lower, ...
  2. All the important stuff is right there after install.
    After installing a Linux desktop distro, you can right away burn CDs and DVDs, write your documents (a full office suite is included) and e-mails, and more. With a few clicks you can start developing software, do mind mapping, manage databases, do sound and video editing and much, much more - without paying a dime!
  3. You are legal without entering a single license key.
    I hate the annoying entering of serial numbers as well as the annoying registration over the Internet. I have seen too many people who could not find their original CDs any more (even in companies), and I have seen too many problems with registering over the Internet - even the loss of the only developer license of a product after setting up a new notebook because the old one had died, just because the vendor had stored hardware information.
  4. Software management.
    On Linux there are managed repositories of software. That basically means you have a list of available software that you can browse or search, and you can install a piece of software just by putting a checkmark in front of it. Dependencies of files and packages are handled by the package manager. The result is that there is very little chance of killing your machine just by installing some software. I have had only a very few cases where two applications conflicted, and that was already quite a while ago. Removing both and reinstalling the one you like more then works.
  5. Intention and business strategy.
    The intention of the makers of Linux is to create a reliable and helpful system that helps you get things done - at least that is how it feels to me. If features are missing, they are not ready yet; they are not missing because of business considerations. This makes a big difference in the user experience. PDF is a good example: on Windows there is a huge market around PDF, so there is no interest in integrating PDF features into the core OS. On the other hand, there are "features" like DRM in the multimedia area that would be better left out. In many cases it feels as if I only have the "light version" and need to pay more for the productivity improvement I actually need. Furthermore, the business strategy of Microsoft and Apple, for example, shows that they do not miss a chance to drive the customer into a dependency and try to block every attempt by communities to improve interoperability. Although Microsoft shows some cooperation, they put big rocks in the way elsewhere. Microsoft did a better job than all the other vendors over the last roughly 20 years in helping to improve productivity on the client, but currently I can only see that they fully exploit their market position, and customers are suffering from that. There are many messages in the news pointing out untrustworthy behavior by Microsoft and even Apple (e.g. Microsoft capturing ISO, or their usual strategy when entering product categories; Apple removing Google Voice).
    There is a saying: "Fool me once, shame on you. Fool me twice, shame on me."
However, I can understand people who stick with Windows, as Windows has its advantages - for instance the richer Explorer configuration, offering many more columns to show in the detailed view. The above list of pros and cons is not exhaustive and just reflects some general thoughts on particular features. For me personally, the Linux ln command for creating symbolic links is a really cool feature. It allows you to link a file or folder to a file/path somewhere else in your file system, and this works really seamlessly (unlike the .lnk files in Windows). You can use it to link parts of your profile to a completely different location (e.g. an external USB drive). But this is just one of the small nice things - like the default support for multiple desktops.
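A small example of what I mean with the profile linking (the paths are made up, and ~/Pictures is assumed not to exist yet):

# keep the photo collection on an external drive but reach it from the usual place
ln -s /media/external-disk/Pictures ~/Pictures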

The more I see people quarreling about which OS is best (Windows, Linux, Mac or others), the more I come to the conclusion that those quarrels will not produce a winner - at least not nowadays. I think everyone should accept that there are different operating systems and that sooner or later we will find ourselves in a multi-OS environment. We should improve interoperability and find ways to live together in peace.

Related posts: Windows 7 RC, My application set on Windows, Installing your PC from scratch, Apple worse than Microsoft?, The operating system, The Open Source idea, Why I have chosen Fedora, Why I switched to Ubuntu, Going Linux, Ubuntu 10.04 experiences, Small Business Boom, Ubuntu compatible hardware, User lock down, A few Linux related videos.

2009-09-03

Paying for free and Open Source

When talking with other people about free and open source alternatives, I am often asked how the companies behind such products make money. It is nice that people worry about how other companies make their living - but sometimes they have concerns about investing time into a product that might no longer exist in two years because there is nobody behind it any more.

I want to emphasize what I think is the core idea behind creating software declared as free and open source:
You should pay people for implementing the features you want and fixing the bugs you suffer from - you should pay for actual work - and not for ??? (maybe doing something new and innovative, doing nothing, doing something else, ...).

This is like planting a tree: you pay somebody to plant it, and then you let it live on its own.

Yes, a certain amount of money is needed to provide the infrastructure for offering the software downloads, and some money is needed for media (for your working data and backups, as well as for the case that someone wants a CD). One option is to include that sum in what you charge for your work or in what you charge for support contracts.

But anyway, the open source way is just another business model. Nowadays most companies providing free and open source software earn their money (in addition to charging for the work) by selling support or additional (non-free) features. I see this as a form of software leasing: you pay a certain amount for support, which you expect to need more of in the beginning. When the product is well established and you do not want or need to invest more, you stop buying support but you still have the product.

Yet another model is the complete rental of software as a service - well, there are also people who are fine with that.

With the ordinary licensing model you used to pay for the existing product, but the company can/could plan better, and in general you have many people contributing through license fees to a common pot of money that is - or could be - used for bug fixing and innovative new enhancements. What many people have a problem with today is that they cannot be sure what happens with their money, because there is no contract that says how the money from license fees has to be invested or used. I think this is one main reason for the rush towards open source. Another reason is the greater dependency on the vendor: customers do not have the source code, and if the vendor goes bankrupt they cannot do the maintenance on their own. However, there are also companies that sell the source code in addition to the normal license fee.

The drawback of open source software is that people have to get well organized if you want to have - let's say - 50 people each contributing a small amount of money to get a larger feature implemented that all 50 would like to have. But the advantage is that if you do get organized, you have more influence on the product, as everybody gets the things implemented that he or she pays for. And with the open source strategy you usually get a bigger community, with some members implementing features themselves for their (and maybe also your) needs. The number of developers working on the product is simply higher.

So whether you are a customer or a vendor - check which model fits you best. If you trust the vendor (to remain innovative in the future) and he also sells you the source code (open source is not necessarily free), there is nothing wrong with that model either.

Related posts: The Open Source movement, The Open Source idea, The license keys, IT projects cost explosion, The small software vendors, The community.

2009-08-17

Why I switched to Ubuntu

Before talking about why I left Fedora I want to emphasize that Fedora is also a great distribution - I like it in general, especially that
  • there is no general focus on server or workstation use - so it can be used for either purpose. For Ubuntu there is a separate image for servers (even if I guess the difference is "just" the default packages being installed, and I cannot change that during the install process).
  • I can choose my desired focus at installation time (server, developer machine, etc.).
  • I liked the graphical firewall configuration tool better (compared to those available on Ubuntu).
  • GSmartControl is installed by default (it will also be available for Ubuntu 9.10, Karmic Koala).
  • I prefer yum over apt on the command line because yum offers better output.
  • There were more updates for Fedora than for Ubuntu, so I also had the impression that security updates were available faster - but this might be an impression only.
But nevertheless I have chosen Ubuntu for future notebook installations because (in the order of relevance):
  1. My mobile Internet stick (from "Three") was not working on Fedora 11. I tried a lot of manual steps to get it to work, but nothing helped. On Ubuntu 9.04, Jaunty Jackalope, it was a plug-and-play thing - even fewer clicks than on Windows.
  2. A lot of applications I use are already available as packages in the repositories (e.g. FreeMind). Some of those I had to install "by hand" on Fedora. There are simply more packages available for Ubuntu than for Fedora (maybe because it is Debian-based), so fewer manual installations of separate rpms/debs that are not in the repositories are needed.
  3. It was less tricky to get Skype to work (I especially had microphone problems under Fedora). Although I got Skype to work even on Fedora 11, fewer manual changes were needed on Ubuntu.
  4. There are a lot more cool applications in the repositories that I did not have available on Fedora (at least not with just a few clicks). Just to give an example: with "alien" it is even possible to convert rpm packages to Debian/Ubuntu packages - so you can get and install Fedora packages on Ubuntu as well (a small sketch follows right after this list).
  5. It seems that there is more Ubuntu than Fedora on client machines, and the community seems to be larger.
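Regarding alien (mentioned in point 4), the basic usage looks roughly like this; the package names are only illustrative, and converted packages do not always integrate cleanly, so this is a last resort rather than the normal way:

sudo apt-get install alien
# convert an rpm to a deb and install the result
sudo alien --to-deb some-tool-1.0.rpm
sudo dpkg -i some-tool*.deb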
Most of the reasons for choosing Fedora also apply to Ubuntu. And as I already mentioned, both distributions are good - and if you are coming from Windows you might also like Linux Mint. I gave it a short test, and for the mobile Internet stick, for instance, it took even fewer clicks than on Ubuntu. Linux Mint is Ubuntu-based and compatible with the Ubuntu repositories, with the main goal of providing a more complete out-of-the-box experience by including browser plugins, media codecs etc. - so Linux Mint is a hot tip for newcomers. Maybe I will switch to Linux Mint in a few months. ;-) However, it tries to be more Windows-like (which I maybe do not want, now that I am familiar with Linux ;-) ).

Related posts: Why I have chosen Fedora, The operating system, Why Linux?, Going Linux, Small Business Boom, Distribution choice.

Installing your PC from scratch

I used to hear that from time to time you need to install your PC from scratch. In my experience this only applies to Windows PCs. My Linux machine at home has already "survived" two major system upgrades, which is comparable to getting your Windows 95 upgraded to Windows 2000 and then to Windows XP without ever setting it up from scratch along the way. The only difference is that there is not as much time between major releases for Linux.

And what I noticed lately: when a Windows PC has to be installed from scratch (there is no real difference with Windows 7 here), you lose a lot of time searching for and downloading the newest versions of all the tools and applications you have installed and used over time. With the application repository (or repositories) on Linux this task is much faster (and can even be automated - see the sketch below). Not to mention the fact that a lot of applications on Linux are already there with the default install, without having to install them later. Someone who does "just" e-mail, web browsing, instant messaging, writing documents, viewing and editing images, burning CDs/DVDs and watching videos gets everything needed with the default installation (only for MP3 and the like do some codecs have to be installed afterwards - at least on most distributions).
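A small sketch of how that automation can look when moving to a freshly installed machine (both machines should be on the same release for this to work well):

# on the old machine: save the list of installed packages
dpkg --get-selections > my-packages.txt
# on the new machine: replay the list and let apt install everything in one go
cat my-packages.txt | sudo dpkg --set-selections
sudo apt-get dselect-upgrade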

Related posts: The operating system, Why Linux?.

2009-08-04

Your holy machine

One of the arguments I hear from people who want to emphasize the advantages of web applications is usually that you can access your application and data (e.g. webmail) from any computer.

But think about it: which PCs do you really trust? For me it is very few - not more than three or four. When I am at a customer's site, at a friend's place or somewhere else, the only PC I really trust is the one I bring along.

And thinking of GTD: on a different PC you will not have your keyboard shortcuts configured or the other options set that allow you to be efficient. You cannot really be highly productive without having set up your personal environment with the tools, settings and shortcuts that give you a boost. On a foreign PC my problems sometimes already start when grabbing the mouse, as many people have different mouse acceleration settings than I do.

Using a USB stick with your portable applications installed and configured is not a real solution either, as it at least binds you to a particular OS being present on the client machine.

So it really makes sense to take your own machine with you. With the rise of netbooks it has become even easier to carry your machine along - and you will have it configured to make you most productive and efficient.

Related posts: Web vs Thick client, The mobile device, Web application security, Pros and cons of cloud solutions, User lock down.

2009-07-29

About Dell

"Finally" I can tell a little about personal experience with DELL hardware and their service:

The issue: after about two months, the motherboard of a Dell Latitude E5500 suddenly died. That was fairly clear, since there was no noise from the fan or hard disk and no BIOS message - just the Caps Lock LED blinking. I called the support hotline and explained what had happened and what I had already tried.

It took the guy on the phone quite a while to accept my guess that it must be the motherboard, but anyway - better to be sure. The last thing to figure out was which particular mainboard type was built in, because there were two options. I did not know the details, because Dell assembled the thing, not me.

To keep a short story short: "next business day" at Dell really means "fixed on the next business day" - it is not just the first reaction that happens on the next business day. A guy from a partner company came to me and the notebook with two spare boards and repaired it. Other "local" support contracts expect you to bring the thing to some shop more or less in your neighborhood.

It is a real pity that the quality of computer hardware parts in general is not what it was 15 years ago. Unfortunately I see the reduced quality everywhere, which is why good service is essential nowadays.

Related posts: The hardware, Going Linux, Ubuntu compatible hardware, The Dell Latitude 2110 and Ubuntu 10.04.1, The truth about hardware support.

2009-07-07

Windows 7 RC

I needed a Vista test workstation today, and as there was none available in our office I had to install one myself. Since I figured Windows 7 would also be fine for my tests, I got a Windows 7 RC and installed that. I was curious about the changes, as I had also found a benchmark on the net comparing it with Ubuntu - and that promised core improvements in Windows 7.

But after the installation was finally finished, I got several update notifications with a certain delay after the restart. And after it seemed to have applied them and asked me to restart, I had to wait a good while, because on shutdown it was "configuring" the updates.

What I still notice is that after startup the machine is not really started up: disk access keeps rattling on for a while, starting services in the background. That has been annoying for a very long time, and I love knowing that if Ubuntu/Fedora/... is up, it is up and I can work.

Well, and I was curious about the applications that come along with Windows 7 - here is the list:
  • Browser (IE8)
  • Windows Media Player
  • Paint
  • Desktop Notes
  • "Snipping tool" (a little screenshot tool)
  • Wordpad
  • Notepad
  • Calculator
  • Audio Recorder
  • DVD Burner
  • Fax Viewer
  • XPS Viewer
  • Tool for remote desktop connection
That's it - not much. Now I would have to install a pile of other software. I just installed the VirtualBox Guest Additions and - guess what - I was not asked for administrator privileges. I think too many people were annoyed by that dialog always popping up on Vista, so they probably removed it - or it was just a bug that the dialog did not come up.

My first impression is that they have cleaned up the system compared to Vista. At least some important system settings seem to be easier to find.

I tried to run the DVD burner, but it told me that my video card does not meet the minimum requirements. DVD burner and video card? Hmmm. Well, that sounds familiar, remembering the installation of some applications under XP...

I did not do any further general testing because I did not find any really exciting news - but for some, better performance (at least while nothing is installed yet) is already exciting enough...

Related posts: Why I have chosen Fedora, Why I switched to Ubuntu, Why Linux?, The Microsoft logic.

2009-06-16

The License keys

Arriving at the office today, I was told about a notebook that a customer had brought in to be reinstalled. Unfortunately, nobody can find an install CD that matches the license key on the label at the bottom of the notebook - neither the customer nor our technical staff.

...yes, it is a Windows installation. The license key variant must match the variant of the CD used.

It is typical - especially for home users - that people cannot find their (correct) install CDs when needed. Unfortunately, many people are not well organized.

Similar things happen with other kinds of software. For instance, I used the Wise Installation System, and we bought an upgrade quite a while ago. When I later reinstalled my development machine, I first had to install the older version, enter its license key and then - you might guess - upgrade and enter the new upgrade license key. I do not know whether current versions still work this way. When reinstalling a complete machine, you can easily fly into a rage entering serial numbers (typos included).

There was a time when I argued against software automatically connecting to the Internet and retrieving its license from there. But now I think this is maybe the less annoying variant. However, we have also bought software designed that way in the past, and we often had to argue that the old machine had been replaced with a new one and that the vendor could be sure only one license was still in use.

What companies do to ensure that people use their products legally has caused me to avoid commercial software wherever possible. And indeed, even on my Windows machine at work I mainly use free and open source software. This is the best way to avoid pain when a reinstall is required.

Related posts: The Open Source movement, Paying for free and Open Source, The Open Source idea.

2009-06-09

The hardware

When thinking of hardware, one of the first things that comes to my mind is hard disk capacity. A few weeks ago I saw the first terabyte hard disks available at a discounter in the neighborhood. Although my notebook has "only" a 160 GB hard disk, I have gone through about four hard disks in roughly a year! Not to mention that Windows did not warn me in any of those cases...

The hard disk has always been the part of a computer that fails most often - it is usually the weak point in a computer hardware system. But with the price pressure on memory manufacturers nowadays, I fear that the quality of main memory will also go down in the coming years...

Lately I was at my parents' place with my one-and-a-half-year-old son, who very much likes pushing buttons - so "by accident" he pushed the power button of my very old 486, which I had left there in 1997. I wondered why it was still connected to the power supply. I watched the machine boot DOS 6.22 and was expecting a scary noise and an immediate hard disk crash (manufacturers also say you should not leave a hard disk unused for too long). But nothing. Even Windows 3.11 started up. Then I got really curious and started a complete surface check of the two hard disks - if I remember correctly, a 120 MB and a 420 MB disk, something like that. Result: not a single bad sector! And I had used the newer disk at least from 1995 to 1997.

I am convinced that with the increase in hardware capacity and performance, while at the same time reducing size, manufacturers are pushing hard against the physical limits (as currently known).

Furthermore, I now go through about one mouse per year, and although keyboard quality is a very important factor for me when choosing a notebook, my three-year-old HP notebook is now developing problems with its keys and their accuracy - and so is the external HP keyboard.

When I look at current special offers in the neighborhood, some keys of the demo machines are usually already missing or broken. There are only two explanations: either they break that easily, or people are stealing them because the keys on their models at home are broken and they are looking for spare parts.

So what I miss is a little more focus on reliability and quality of the hardware in general!

And when looking for models that fit my needs, I find notebooks with glossy screens (they claim the colors are more brilliant - but reflective screens are much more annoying than less brilliant colors...), horrible keyboard layouts (nothing for power keyboard users) and strange combinations of hard disk size, main memory capacity and other hardware options. Netbooks are currently getting better, but for my needs they are too small for daily intensive work. They are something for people who are permanently on the road, or somewhere else without much of a workplace, and who only want to check their mail or surf the Internet occasionally (IMHO). And last but not least, most vendors automatically include a preinstalled Microsoft Windows. For my private life I am not willing to pay that tax - I want a clean piece of hardware.

So when I went to buy a new private notebook a few weeks ago, it was difficult to find one fitting my needs, and the only vendor I found that was flexible enough was Dell (in the end I got something assembled that was not officially available in their web shop). Although I cannot speak to long-term experience yet, the overall quality seems better than anything else I have seen lately. So fortunately it is still possible to get a reasonable configuration.

But why does it have to be so difficult? I think hardware vendors have no right to complain about bad business if they do not produce what people really need.

In many cases vendors produce configurations and focus on features that do not fit the real-world needs of their customers.

Related posts: The operating system, The mobile device, The features, Bronce age of IT, About Dell, Ubuntu compatible hardware, The Dell Latitude 2110 and Ubuntu 10.04.1, Use cases for netbooks, The truth about hardware support.

2009-05-08

The Open Source movement

The economic crisis seems to boost the demand for open source solutions, at least if you look at the news, blogs and magazine articles.

To me it seems like one of those keywords that gets put as a headline on IT fairs, like DMS, CMS, ECM and others in previous years.

One important thing I notice is that even IT people often understand: open source = free. This is not necessarily the case. Open source means that if you get the product, you also get the source. Open source = free only applies to software licensed under the GPL or similar licenses. Especially now that the open source topic is hyped, many companies look for a way to put that keyword on their sites so they get found, and so I have stumbled upon a lot of open source products that cost a large amount of money to own.

Today I had a look at Google Trends, comparing the search terms "free software" and "open source software". Surprisingly, it seems to be important to people that software is free, not that it is open source - although this is also an example of how market analysis often does not fit the real world. In this particular case I think the difference is due to private users who are just searching for tools for their own use and know that they will not dig into the code (besides those who do not even know what open source is).

Although open source is a very good thing, a company thinking about using open source alternatives should ask the following questions:
  • Will I ever hack into the code or let others hack into the code to apply changes or will I just use the product as it is?
  • Do I want to heavily rely on the product?
  • Is there a long-term future for the particular product I am thinking of?
From my point of view, open source supports long-term IT planning and investment. It gives you the opportunity to support, fix and adapt a product to your needs even if nobody else is using it any more. This gives you a certain independence from market evolution.

But there are some concerns:
  • It might be a big effort to implement the open source product - this can come along with high costs for buying services from other companies that install and/or support the product.
  • If the product is not well documented and the code is not clean enough, a huge effort can be necessary to make code changes.
  • If there is no company behind the product doing the main development and support, you have nobody to blame or hold responsible when you have a serious problem with it.
  • Depending on the community, the product might evolve in a direction that creates incompatibilities with your own changes.
  • You may need to have the manpower and knowledge to develop the product in-house (depending on how the product evolves over time).
So open source is no guarantee of being cheap or of existing for a long time. But it gives you the opportunity to do everything yourself, and with a large community the chances are higher that you will find people who know the product.

Related posts: IT investment, Economic crisis and IT, IT project costs explosion, The License keys, Paying for free and Open Source, The Open Source idea, The community, Ignorance of the different, Microsoft & Skype.

2009-05-06

Utility libraries

While everybody thinks of frameworks when it comes to rapid application development (RAD), my experience is (and I used RAD tools back when nobody was talking about it, and I did not even know it was RAD ;-) ) that all kinds of frameworks draw borders that limit your flexibility. Well, if you look at it, the name already says it all - I cannot help thinking of a photo frame: it has a certain size, and if your picture does not have that size, it does not fit...

There were application development frameworks like dBASE and Magic II, and Microsoft jumped onto that train later with Microsoft Access - which is now the most popular tool in that realm. I worked with all of those, and while I found Magic II to be the most effective and efficient framework (back in the late eighties and early nineties), it was also the most expensive one. Magic II seems to have evolved further and is now called uniPaaS - I have not tried it yet, but from the screenshots it looks somewhat familiar to me :-) .

Anyway, while working with those RAD tools in the past, I was very fast at creating first application versions, but sooner or later, when it came to special customer wishes, I was limited by the frameworks and often had to tell the customer that I could not do it in the desired way. Mostly I found other acceptable solutions that were also fine, but nowadays there are two important changes in the IT world:
  • In former days IT consisted of a lot of different, isolated applications. Today everything must be highly integrated and pluggable.

  • Customers (or CEOs) assume that nowadays "everything" is possible in IT. So when you tell the customer you cannot do it, you are out. You must be able to do it (but fortunately you have the consultants to tell the customers or CEOs that what they desire is not the best solution to their problem :-) ).
Both changes make life hard for RAD tools. My experience from roughly the last 15 years is that there is another - better - approach to increase development speed while staying flexible:

Build utility libraries on different levels

What I mean by this is:
  1. It sounds strange, but every programming language lacks very basic functions needed in nearly every application. - The best example I stumbled upon lately is that there is no one-liner to copy a file in Java. Can't be, you think? - I browsed the library documentation for several hours and searched the internet, without success. But I found several samples of ways to do it (and wondered how many options I have to copy a file...). Finally I wrote two methods with a different focus: one method to copy a small file in a single step, loading everything into memory, and a second one to copy a large file using a buffer (a sketch of both follows further below). You need to build a library of very basic methods that you often need and can reuse in whatever type of application you are going to create. Try to have no or very, very few dependencies on that level.

  2. Logically - as many things are already missing at the basic level - when it comes to higher levels, even more is missing. Maybe having those levels served by utility classes would result in a bloated language core. But anyway, you need this stuff - we are talking here about comprehensive toolboxes with utilities for particular realms that are not necessarily used in every application.
    So build several specialized libraries that offer utilities for different needs like database access, GUI building, sending/handling email, transferring data over the net and so on.

  3. Having different libraries often results in more effort needed to put everything together. This is a major problem that frameworks try to solve. But if you consider the connection issue right at the beginning of building your libraries, you can create your own interface objects that the different libraries on the different levels know how to handle. An example is to use an array or list of key-value pairs (a Map) that you can pass around to the different libraries, so that you can receive data in such a form from the GUI library (after displaying a dialog, for instance) and pass it to the database utility library to have the set of data saved to a table (a second sketch further below shows this).
    So create interface objects and make them "well known" in all your libraries to easily connect your libraries together.
Especially when building the more specialized libraries, prefer convention over configuration (of your modules or components).
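
A minimal sketch of the two copy methods from item 1 could look like this (the class name, method names and buffer size are my own choices for illustration, not taken from any existing library):

    import java.io.DataInputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    /** Very basic file helpers with no dependencies outside the JDK. */
    public final class FileUtil {

        private FileUtil() { }

        /** Copies a small file in one step by loading it completely into memory. */
        public static void copySmallFile(File source, File target) throws IOException {
            byte[] content = new byte[(int) source.length()];
            DataInputStream in = new DataInputStream(new FileInputStream(source));
            try {
                in.readFully(content);
            } finally {
                in.close();
            }
            OutputStream out = new FileOutputStream(target);
            try {
                out.write(content);
            } finally {
                out.close();
            }
        }

        /** Copies a file of any size using a fixed-size buffer. */
        public static void copyLargeFile(File source, File target) throws IOException {
            InputStream in = new FileInputStream(source);
            try {
                OutputStream out = new FileOutputStream(target);
                try {
                    byte[] buffer = new byte[64 * 1024];
                    int bytesRead;
                    while ((bytesRead = in.read(buffer)) != -1) {
                        out.write(buffer, 0, bytesRead);
                    }
                } finally {
                    out.close();
                }
            } finally {
                in.close();
            }
        }
    }

With such a class in place, FileUtil.copySmallFile(new File("in.txt"), new File("out.txt")) is the one-liner I was missing, and both methods sit on the lowest, dependency-free level of the library.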

The big advantage of the utility library approach is that you can put things together from different levels as needed, but also do things differently where you want to (for instance to improve performance for particular tasks). So you have a tool set, but you stay flexible.
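
The interface object from item 3 is easiest to show in code. In this rough sketch the class and method names are made up for illustration only - the point is just that a plain Map is the "well known" object that travels from the GUI utility to the database utility:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class InterfaceObjectExample {

        // Hypothetical GUI utility: returns the values entered in a dialog
        // as a simple key-value map (the "well known" interface object).
        static Map<String, Object> showCustomerDialog() {
            Map<String, Object> record = new LinkedHashMap<String, Object>();
            record.put("name", "ACME Inc.");
            record.put("city", "Vienna");
            return record;
        }

        // Hypothetical database utility: saves the same map as one row of the
        // given table, using the keys as column names. A real implementation
        // would build an INSERT statement and bind the values as parameters.
        static void saveRecord(String table, Map<String, Object> record) {
            System.out.println("INSERT INTO " + table + " " + record);
        }

        public static void main(String[] args) {
            Map<String, Object> customer = showCustomerDialog();
            saveRecord("CUSTOMER", customer);
        }
    }

Because both libraries only agree on the Map, neither needs to know about the other - which is exactly the loose coupling that keeps the utility approach flexible.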

Related post: The IDE and the libraries.

2009-05-05

IT Outsourcing

Outsourcing of IT services - especially software development - has become a very common behavior of companies. We even see support being outsourced separately, so that in the end you have a local call center with a lot of students just picking up the calls and taking notes, plus software development very far away in a very cheap eastern country.

I never really understood the big benefit of outsourcing, as there are serious drawbacks on the other hand, like:
  • Know-How is pulled away from the company.
  • Communication channels get inefficient (e.g. several people involved while nobody is communicating in mother tongue, physical distance, different time zones, ...).
  • The cheaper employees often have fewer qualifications.
  • Cultural differences may introduce additional obstacles during the collaboration.
One major fact that companies should understand: good software developers are difficult to find. You can find a lot of cheap coders, but there is usually a difference in output quality (clean code, sufficient comments, good component design, ...). This applies to other IT services as well, but especially to software development.

Outsourcing can make sense under the following circumstances:
  1. There is a clear and compact technical specification of what to implement.
  2. The project is small and/or does not need to be tightly integrated with a lot of other projects.
  3. Little dynamic adjusting of the application at the customer's site is necessary (so the further away the work is from the customer, the better suited it is for outsourcing).
I have experienced that the time from the need to the finished implementation of a solution is much shorter when development is moved towards the customer (so doing the contrary of outsourcing). The response to changes in priorities and goals is much faster as well - even more so if there are no consultants in the middle. From a productivity point of view the direct connection between developers and customers is the best - but attention: there are prerequisites and pitfalls:
  • A developer who is in close contact with the customer needs more qualifications than just being a good coder and a good software designer: communication skills, project management skills, an understanding of the business point of view etc. are necessary.

  • The biggest mistake often made at this point is to act faster than you think! What I mean is that both sides should think twice before ordering and implementing a change request.
While in big software projects (or big software companies) the progress of development often slows down massively, very small projects and companies often show shortcomings in software design.

Related posts: IT project costs explosion, The features, The small software vendors.

2009-04-09

IT project costs explosion

It is a well-known problem that the costs of IT projects often rise far above the initial estimates. This happens with non-IT projects too, but the reasons may be different. And there is a difference between the reasons why IT projects fail and why they cost more!

I tried to summarize the most important reasons why IT project costs are higher than expected - from my experience and in order of relevance:
  1. Unclear, unrealistic and permanently changing requirements (missing, unrealistic, vague or insufficiently stable goals, objectives and milestones).
    Solution: Invest enough time in evaluating the requirements, relate them to the solutions that already exist on the market and define clear and concrete milestones that build a path to your final goals.

  2. Insufficient or wrong knowledge about the solution options (mostly from a technical point of view).
    Solution: see above - and also consider developing (or having developed) your own solution, which is always an option worth thinking about.

  3. Insufficient knowledge about the overall effort needed to implement the solution (= unrealistic estimation of time and resources).
    Solution: If you lack experience in a particular field, ask experts or develop prototypes. And never beat your suppliers' prices down (to unrealistic values), because then you provoke final costs that are higher than expected.

  4. Insufficient knowledge about the destination environment where to implement the solution.
    Solution: Invest enough time in analysis.

  5. Side effects of quick-and-dirty implementations of the solution (or parts of it).
    Solution: Make quality your focus, not being cheap. If you create a bad product that nobody really wants, you waste more time and effort than if you had created a smaller product that is well done.

  6. Unqualified interference or intervention of upper management.

  7. Missing GTD-like (personal organization) and technical skills of the people involved.

  8. Bad time management (schedules, time reserves).

  9. Missing or insufficient quality assurance (clean design, review, testing, documentation, ...).

  10. Communication / teamwork insufficient or inefficient.
Related posts: Economic crisis and IT, Bronze age of IT, IT investment, Features, IT outsourcing, The Open Source movement, Paying for free and Open Source, Scope of IT projects, The small software vendors.

2009-03-26

Bronze age of IT

There is a big myth around: IT has existed for such a long time now, and there are so many companies and software products around, that almost everything can be done.

Nope! I often hear this from customers, who sometimes believe it is just a matter of price to solve every IT problem nowadays. And it is also the IT consultants and some vendors who tell them this. The sad truth is: we are still in the bronze age of IT.

I agree that much is already possible - and on one hand much has been done. That has created a large variety of file formats, protocols, standards etc.

On the other hand, many software products suffer from legacy, for example, and some newer ones are very similar - "me too" products driven by the requirement of a short "time to market".

From my point of view there are still many construction sites. Just reinstalling and reconfiguring all the applications I need when reinstalling my PC is a nightmare on Windows. At least on Linux it has gotten better (that is why I said we are in the bronze age and not the stone age ;-) ). Software development gets more and more complicated because of the very different interfaces, technologies and languages. A newly arriving programmer faces a jungle of things. I like diversity, but too much is too much.

The Internet is probably the most important thing that has led us out of the stone age. Now we at least have the possibility to connect to each other, find solutions by thinking globally and not reinvent the wheel many times over.

But the saddest thing I see these days is not the technical possibilities - it is the behavior of companies. They just want to produce software cheaply, or refuse to give support, or keep buying other companies, destroying organizations and products.

Related posts: Economic crisis and IT, IT project costs explosion, The hardware, Reinventing the wheel.

2009-03-17

Document file format

Yesterday my wife received a request for a poll by email. She agreed to participate and received ... - a Word document. Not a link to a website, not a PDF form, not a text file - no, a Microsoft Word document with form fields. And we are on Linux.

Although Open Office does a great job importing Word documents, some form fields (meant as pull-down menus) did not show the available alternatives. My wife told me she thinks that initially something showed up for a moment. I have to say that I had no interest in investigating this problem, for one single reason:
People should write documents in a portable, platform independent standard format.
Especially when a document is intended for a large number of people, it is important to use a format that can be displayed (and edited where necessary) by everybody.

I had an interesting talk a few years ago with a project manager - who also teaches and trains other project managers - about the format of documentation created during a project. He told me that prior to a project start they analyze the environment at the different partners involved, and if those use different versions of Microsoft Office products, they estimate higher project costs due to compatibility issues, because different versions of Microsoft Office behave differently and display contents differently. They often use the RTF format to overcome some of these drawbacks. However, using RTF reduces the available formatting features.

I know these problems too. More than once a co-worker wasted half a day correcting the layout of a concept that had been written as a Word document and then edited by a customer.

So already years ago I switched to Open Office for all project-relevant documentation. After evaluating the options, using the Open Office formats ODT, ODS and ODG seemed the best alternative to me.
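
As a side note on why these formats are so open: an ODT, ODS or ODG file is technically just a ZIP archive containing XML files (content.xml, styles.xml, ...), so even the standard Java library can look inside one. A tiny sketch (the file name is of course only an example):

    import java.io.File;
    import java.util.Enumeration;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    public class ListOdtEntries {
        public static void main(String[] args) throws Exception {
            // Open the ODF document as what it really is: a ZIP archive.
            ZipFile odt = new ZipFile(new File("concept.odt"));
            Enumeration<? extends ZipEntry> entries = odt.entries();
            while (entries.hasMoreElements()) {
                // Prints entries like content.xml, styles.xml, meta.xml, ...
                System.out.println(entries.nextElement().getName());
            }
            odt.close();
        }
    }

Nobody is locked out of such a format - any tool that can read ZIP and XML can process the documents.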

The major advantages are:
  • Platform independence
    I can read and edit the documentation at work under Windows and at home with Linux.

  • No (or at least very few) compatibility issues.
    Using different platforms or versions of Open Office does not cause display or layout issues - or at least not to the extent known from Microsoft Office.
    In the past, for major changes in the document features, Open Office changed the file extension, keeping compatibility and transparency.

  • No license fees
    Open Office is free and would also help companies save a lot of money.

  • Direct PDF creation with bookmarks and links
    Open Office can directly create PDFs (without the use of an external PDF printer driver), preserving links from the table of contents and other links within the document. Furthermore (when using the appropriate heading format templates), bookmarks are automatically created in the resulting PDF, which makes navigation easy when reading the document on screen.
For mind mapping I also use FreeMind which is a good supplement when writing project documentation.

Especially in IT or other more technically/industry-focused companies there are also other types of documents to be written that cannot be written efficiently using Open Office. For those cases I recommend looking at OSALT.COM. That site helps you find Open Source alternatives to various commercial products - e.g. DIA as an alternative to Visio (although - with some limitations - Open Office Draw can be used as well). The Open Source alternatives do not always offer the same huge feature set as the commercial products, so there are sometimes relevant limitations. But in other cases the free alternative covers everything that is actually needed.

Related posts: Data file format, Why I hate ribbons, Stationary under Ubuntu, Ignorance of the different, Popular Ubuntu desktop myths.

2009-03-02

Social networking sites

There were several events and discussions that caused me to test several social networking sites over time. Finally I came to some conclusions.

I tested the following sites: XING, Facebook, LinkedIn, Orkut, StudiVZ/MeinVZ and Ning (see the detailed results below).
More social networking sites can be found on Wikipedia.

My focus was on the following features:
  • Keep the contact
    I have a lot of work and a family - both need much attention. I don't have time for regular meetings with my friends. I know many people that I would like to have contact with more often, but usually I see my good friends only once every few months. So one big challenge is not to completely lose sight of the people I see very seldom.
  • Efficiency
    As I have no time and as I am a life hacker using GTD methods, efficiency is a major priority for me. If the application does not bring a big benefit with minimum effort, I will not use it.
  • Annoyances
    It is easy to annoy me with unnecessary extra clicks or time spent searching for a specific feature (e.g. changing the password, writing to someone's wall etc.). I looked for the tool that minimizes those annoyances.
  • Security
    HTTPS access to the site, at least for the login, is a must. If the whole site can be used over HTTPS, even better.
  • Members from your network
    It matters how many of the people you know are already using the particular site. Although some people have accounts on several social networking sites, they hardly really use them all. And even if you can convince somebody to sign up for a different site, it is not clear whether this person will continue to use it. So it is better if your contacts are already actively using the site.
And my results are:
  • XING (Business focus):
    Good:
    • Complete navigation can be done in https.
    • Many of my colleagues, partners and customers do have an account there (talking about Austria/Europe).
    • The available features can be used quite easily.
    Bad:
    • It is very annoying that many features are only available to those who pay. Status updates and so on are limited to a certain number of messages, so if you want to stay informed you have to look at XING at the very least once a day - very annoying.
  • Facebook (Private focus):
    Good:
    • What I like most is the fact that you can easily stay in contact with your friends and still feel connected somehow, even if you see them very seldom. You get notified about what they like, what they are currently interested in and what they are doing. You can discover shared interests you weren't aware of. Many of my friends are already there - many are not (but most of those can't be found anywhere else either).
    • If you use Facebook, you IMHO don't need Twitter - micro-blogging is already included via status updates.
    Bad:
    • The login is not HTTPS if you do not manually type it into the address bar. This can be solved by saving the appropriate link in your bookmarks.
    • What I hate most on Facebook is the spam. There is a lot of stuff like karma, snowballs and pets that gets thrown at you. There is a similar problem with the huge amount of unwanted applications - I have no other word than spam for this.
  • LinkedIn (Business focus):
    Good:
    • Focus on finding the right business partners - appropriate features like recommendation of others.
    Bad:
    • Tries to keep you from adding people you do not personally know (well, that can also be an advantage).
    • Limited features for those who don't pay.
    • The recommendation features are basic and do not reflect uncommon situations (e.g. I have a friend who is also a business partner and who runs two separate businesses - neither combination is supported).
  • Orkut (Private focus):
    Good:
    • Tightly integrated with GMail/Google as part of Google applications collection.
    Bad:
    • Poor feature set.
    • Almost none of my friends/contacts are there.
  • StudiVZ/MeinVZ (Private focus):
    Good:
    • Focus on the main features for linking contacts (so it does not have the spam level known from Facebook).
    Bad:
    • Very annoying separation of StudiVZ and MeinVZ. What if I run a business and also study? How do I change later if I first used StudiVZ and then finished studying? - Maybe a change is easy, but I currently do not know whether this is well supported. Also, when searching for a friend I have to consider whether to search StudiVZ, MeinVZ or both.
    • Needs a certificate exception added to Firefox to somehow get HTTPS to work.
  • Ning (Different focus): Ning has a somewhat special status - it is not one common social networking site; instead it offers the possibility to create more specialized networks, for instance to get all the Star Trek fans together, to create a community for all vendors and customers of a certain product, or to create communities for a specific hobby.
    Good:
    • Completely different areas for different communities can be created.
    • Combines social networking, forum and blogging features.
    Bad:
    • If I have multiple networks I may need to write some common personal information twice.
    • The forum features are quite limited, although I think that for the more specialized communities the forum is the most important feature.
Conclusion:
For staying connected with friends you see only once a year or even less, Facebook is IMHO the best choice. When you meet you can focus on the most important or interesting topics to talk about, and you can save a lot of time otherwise spent "just bringing your friend up to date about a lot of minor changes in your life" - those things can be published on Facebook. I would only be careful not to publish overly private and personal details. Another good thing is the mobile app that is available for most mobile phones - so you can post on Facebook what you are doing while on the road (I use this because being on a train or waiting somewhere is the best time for such activities - you don't lose other important productive time then).

For business it may depend on your location which site is more relevant for you - it might be XING or LinkedIn. Both have business-focused features, so it makes sense to have one private and one business-focused social networking site in use. However, all the business-focused sites offer only limited features without payment - logically (you want to do business using the site, so they also want their share of it).

Regarding personal information posted on social networking sites:
This is an often discussed issue, so I want to give a short statement on it. People often warn against using social networking sites and posting too much private information.
If somebody uses drugs or is an alcoholic, (s)he won't publish such information anywhere - not on paper and not somewhere on the internet (exceptions granted for people who want to show in public how to quit).

I also had a discussion regarding the "ill will" of others. There are always people who don't like you, even if you wish love, peace and success to everybody. As you can't satisfy everybody, there will be people who don't like you. The more information they can get about you, the easier it is for them to harm you. - However, if somebody really wants to harm or harass you, your phone number and address are the most important pieces of information. Most social networking sites allow fine-grained privacy settings. On Facebook, for instance, posted information is only published to those you have confirmed as friends. And if somebody who does not like you talks badly about you to your friends, they are not your friends if they believe it without at least asking you for a statement to hear both sides.

On the other hand, you would like your old friends to find you again, you want to keep in touch with friends you see only seldom because you don't have much time, and you might also want head hunters to find you to offer you a good job. So being present on the web and on social networking sites also has a lot of advantages. Anyway, you decide whether and where to be present - just remember: a big part of success is "showing up".

Related posts: Why I don't need Twitter, The mobile device, Surveillance, privacy (NSA, PRISM, ...) and encryption.