2009-08-04

Firefox Upgrade Broke Eclipse?

Doesn't sound likely? That was what i thought too, and so i failed to draw the link until some further diagnosis and web-surfing.

i had been using Ubuntu 9.04, with Firefox 3.0 and Eclipse 3.4 Ganymede running on the same machine without any apparent problem (then again, having Firefox and Eclipse on the same machine is such a common combination that none should be expected). One day, i decided to upgrade to Firefox 3.5, and after that... well, actually, Eclipse still worked without a hitch - or so i thought.

The problem only surfaced when i tried to open a new empty workspace in Eclipse. It froze with an empty dialog box after the splash screen. Being eager to press on with some development work, i simply copied an existing workspace and cleaned out the existing projects to get a "new" workspace. Without further information, linking this issue to the Firefox upgrade was the last thing on my mind, and since a workaround was available (copying existing workspaces), i just put this problem aside.

What got me more intrigued was what happened after i installed Fedora 11: yes, the frozen empty dialog box surfaced again when i tried to start a new workspace. Fedora 11 comes with Firefox 3.5 preinstalled, but again, that did not figure in my considerations. (Unless one has some knowledge of the internal workings of Eclipse, who would have linked Firefox and Eclipse?) However, i got curious and decided to investigate further.

So i ran Eclipse (on a new workspace) with some debugging arguments:

$ eclipse -debug -consoleLog

and got something like this (together with the frozen dialog box):

!ENTRY org.eclipse.ui.workbench 4 0 2009-08-04 23:27:24.066
!MESSAGE Widget disposed too early!
!STACK 0
java.lang.RuntimeException: Widget disposed too early!
    at org.eclipse.ui.internal.WorkbenchPartReference$1.widgetDisposed(WorkbenchPartReference.java:171)
    at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:117)
    at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
    ...
    at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:549)
    at org.eclipse.equinox.launcher.Main.basicRun(Main.java:504)
    at org.eclipse.equinox.launcher.Main.run(Main.java:1236)

!ENTRY org.eclipse.osgi 4 0 2009-08-04 23:27:24.089
!MESSAGE Application error
!STACK 1
org.eclipse.swt.SWTError: XPCOM error -2147467262
    at org.eclipse.swt.browser.Mozilla.error(Mozilla.java:1638)
    at org.eclipse.swt.browser.Mozilla.setText(Mozilla.java:1861)
    at org.eclipse.swt.browser.Browser.setText(Browser.java:737)
    ...
    at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:549)
    at org.eclipse.equinox.launcher.Main.basicRun(Main.java:504)
    at org.eclipse.equinox.launcher.Main.run(Main.java:1236)


It was only upon seeing the terms "Mozilla" and "browser" peppered throughout the second stack trace that i started to think it might have something to do with Firefox (though the upgrade still did not occur to me). i also recalled that some introduction screen is shown whenever Eclipse is started in a new workspace, but still wondered what it had to do with Firefox.

So i did a search and found that the problem was due to an API change in a later version of xulrunner (https://bugzilla.redhat.com/show_bug.cgi?id=483832).

The page also contains a simple-enough workaround:

Create a file (e.g. noWelcomeScreen.ini) containing the line:

org.eclipse.ui/showIntro=false

and when starting eclipse, include the -pluginCustomization argument:

$ eclipse -pluginCustomization noWelcomeScreen.ini
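If you would rather not type the argument every time, the same setting should also work from the eclipse.ini file next to the Eclipse executable. This is a sketch i have not verified on every Eclipse version: each argument and its value go on separate lines, before any -vmargs line, and the path shown here is a made-up placeholder (an absolute path is safer):

```
-pluginCustomization
/path/to/noWelcomeScreen.ini
```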

No need to copy existing workspaces anymore. HTH.

2009-07-09

The Benefits of Using HTTPS - It's Not Just for the Encryption

Out of good habit, i access websites that require authentication using the HTTPS protocol whenever possible (i.e. whenever it is supported by the site). These websites include Yahoo! Mail, Gmail and Facebook.

Yesterday, when i tried to access Facebook (using HTTPS), Firefox gave me a warning:

www.facebook.com uses an invalid security certificate.

The certificate is only valid for a248.e.akamai.net

(Error code: ssl_error_bad_cert_domain)


Fearing that my DNS cache had been polluted, i decided to compare the DNS lookup for www.facebook.com with the result of the same lookup using OpenDNS.

OpenDNS resolved www.facebook.com to:

69.63.184.31
69.63.176.15
69.63.176.15
69.63.187.11
69.63.184.142
69.63.186.12

While doing a nslookup www.facebook.com on my own machine returned

Non-authoritative answer:
www.facebook.com canonical name = www.facebook.com.edgesuite.net.
www.facebook.com.edgesuite.net canonical name = a1875.w7.akamai.net.
Name: a1875.w7.akamai.net
Address: 125.56.199.40
Name: a1875.w7.akamai.net
Address: 125.56.199.89
Name: a1875.w7.akamai.net
Address: 125.56.199.11
Name: a1875.w7.akamai.net
Address: 125.56.199.19


The fact that www.facebook.com mapped to www.facebook.com.edgesuite.net seemed pretty phishy (pun intended) to me, though the domain akamai.net looked vaguely familiar.

Doing some searching online revealed that the edgesuite.net domain is a result of Facebook making use of a web application acceleration service provided by the company Akamai. It was definitely a relief to know that neither my DNS cache nor my home machine had been compromised.

To be on the safe side, however, i configured my modem/router to use OpenDNS's nameservers. Now nslookup www.facebook.com returns a result that looks more normal:

Non-authoritative answer:
Name: www.facebook.com
Address: 69.63.184.142


The key thing here is that if this had indeed been a good phishing attempt, and if i had not been using the HTTPS protocol, i would not have known that the URL that i trusted had taken me to a bogus site injected into my DNS cache.

2009-07-04

Apache Commons IO - Full of Simple IO Goodness

If you are writing Java applications that use IO (e.g. through file or socket operations), you should get familiar with the API of the Apache Commons IO library. It is not a complex framework that helps you adhere to coding best practices, nor an underlying implementation breakthrough that boosts the performance of your applications. However, it will save you a lot of coding time, and it is very easy to use, as we shall see.

The library can be broken down into four main areas: filters, comparators, streams/readers/writers, and utilities. Each is described in further detail below.

Filters (in Package org.apache.commons.io.filefilter)

Implementations of the java.io.FileFilter and the java.io.FilenameFilter interfaces.

These are useful if you are getting a list of files or filenames (e.g. the contents of a directory) and want to include only certain types of files. An example is the HiddenFileFilter class, which contains the singleton instances HiddenFileFilter.HIDDEN and HiddenFileFilter.VISIBLE for filtering files by whether they are hidden or visible, respectively.

In other words, if you want to get the names of the hidden files in a directory, you can do this:

File directory = new File("/some/directory");
String[] filenames = directory.list(HiddenFileFilter.HIDDEN);
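Filters can also be combined. Here is a sketch (assuming the commons-io jar is on the classpath; the directory path is made up) that lists only the visible files whose names end in .txt:

```java
import java.io.File;

import org.apache.commons.io.filefilter.AndFileFilter;
import org.apache.commons.io.filefilter.HiddenFileFilter;
import org.apache.commons.io.filefilter.SuffixFileFilter;

public class FilterDemo
{
    public static void main(final String[] args)
    {
        File directory = new File("/some/directory");

        // Visible files AND names ending in ".txt"
        String[] filenames = directory.list(
                new AndFileFilter(HiddenFileFilter.VISIBLE, new SuffixFileFilter(".txt")));

        for (final String filename : filenames)
        {
            System.out.println(filename);
        }
    }
}
```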


Comparators (in Package org.apache.commons.io.comparator)

Implementations of the java.util.Comparator interface.

These may save you time writing your own implementations when you are looking to sort a list of files by their metadata. Examples are the ExtensionFileComparator (which helps you sort files by their file extensions) and the SizeFileComparator (which helps you sort files by their sizes).

So, if you want to sort a list of files by their size, you only need to do this:

List<File> filesList = ... // Obtain a list of files from somewhere
Collections.sort(filesList, SizeFileComparator.SIZE_COMPARATOR);


Actually, with the SizeFileComparator, it doesn't end there. If you have a list of directories, and you want to sort them by the total size of their contents (recursively), you can use it like this:

List<File> directoriesList = ... // Obtain a list of directories from somewhere
Collections.sort(directoriesList, SizeFileComparator.SIZE_SUMDIR_COMPARATOR);


Streams and Readers/Writers (in Packages org.apache.commons.io.input and org.apache.commons.io.output)

Subclasses of the java.io.InputStream, java.io.OutputStream, java.io.Reader and java.io.Writer abstract classes.

A useful one is the TeeInputStream (and, conversely, the TeeOutputStream). This is a decorating wrapper that adds tee-like functionality to an underlying input stream. Its constructor takes an input stream (the stream to wrap) and an output stream. When you call any of its overloaded read methods, the bytes read are also written to the output stream that you passed in through the constructor.
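As a quick sketch of how TeeInputStream behaves (the byte-array streams here are just stand-ins for real sources and sinks, and the commons-io jar is assumed to be on the classpath):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

import org.apache.commons.io.IOUtils;
import org.apache.commons.io.input.TeeInputStream;

public class TeeDemo
{
    public static void main(final String[] args) throws Exception
    {
        ByteArrayInputStream source = new ByteArrayInputStream("hello".getBytes("UTF-8"));
        ByteArrayOutputStream copy = new ByteArrayOutputStream();

        TeeInputStream tee = new TeeInputStream(source, copy);
        String read = IOUtils.toString(tee, "UTF-8"); // drain the stream

        // Every byte read through the tee has also been written to "copy"
        System.out.println(read);                   // hello
        System.out.println(copy.toString("UTF-8")); // hello
    }
}
```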

Another interesting one is the CloseShieldInputStream. You may find it useful in such a scenario:

InputStream inputStream = ... // Obtain input stream from somewhere
someThirdPartyObject.processStream(inputStream); // Call a third-party method to do some processing

int b = inputStream.read(); // The third-party method is done but you need to do some further reads

// But OOPS! The method from the third-party library has actually already responsibly closed the input stream!


What you can do is wrap the inputStream in a CloseShieldInputStream, and pass the CloseShieldInputStream instance to the third-party method instead. When close() is called on the CloseShieldInputStream instance, the call does not propagate to the underlying inputStream object, leaving you free and safe to operate on it when the method returns.
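A minimal sketch of the shielding behaviour (again with a byte-array stream standing in for the real one):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

import org.apache.commons.io.input.CloseShieldInputStream;

public class ShieldDemo
{
    public static void main(final String[] args) throws Exception
    {
        InputStream inputStream = new ByteArrayInputStream(new byte[] { 42 });
        InputStream shield = new CloseShieldInputStream(inputStream);

        shield.close(); // e.g. a third-party method responsibly closing "its" stream

        // The underlying stream is still open and readable
        System.out.println(inputStream.read()); // 42
    }
}
```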

Utilities (in Package org.apache.commons.io)

This is, in my opinion, the most useful area of the library, and one that any Java programmer must really get to know. If you have done any IO programming in Java, you will find that many of the common few-line code fragments you have written and rewritten can be replaced by a single call to a method in one of these utility classes.

A few examples:

- Closing an input stream without throwing an exception:

Instead of writing

if (inputStream != null)
{
    try
    {
        inputStream.close();
    }
    catch (final IOException ignore)
    {
        //
    }
}


you can write

IOUtils.closeQuietly(inputStream);

- Copying a file:

Instead of getting the source file input stream, the target file output stream, reading/writing bytes from/to the input/output stream, and cleaning up (closing) the resources, you can write

FileUtils.copyFile(sourceFile, targetFile);

- Deleting a directory and all its content, recursively:

Instead of writing a recursive method and calling that method, you can write

FileUtils.deleteDirectory(directory);
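Putting a few of these together in one runnable sketch (the file names are made up for illustration; writeStringToFile and readFileToString are two more FileUtils methods along the same lines):

```java
import java.io.File;

import org.apache.commons.io.FileUtils;

public class FileUtilsDemo
{
    public static void main(final String[] args) throws Exception
    {
        File directory = new File("fileutils-demo");
        File source = new File(directory, "source.txt");

        FileUtils.writeStringToFile(source, "hello");                // creates parent directories as needed
        FileUtils.copyFile(source, new File(directory, "copy.txt")); // one-line file copy

        System.out.println(FileUtils.readFileToString(new File(directory, "copy.txt"))); // hello

        FileUtils.deleteDirectory(directory); // recursive delete, no hand-written recursion
    }
}
```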

In short, Apache Commons IO is a library that any Java developer should really get to know intimately, as it will save a lot of unnecessary time re-inventing the wheel. HTH.

2009-06-19

Running Oracle Universal Installer on Red Hat Enterprise Linux 5

i have not updated this blog for more than a month now, due to heavy work commitments. Here is a lesson learnt in the course of that work.

If you're trying to run the Oracle Universal Installer (version 10.2) on Red Hat Enterprise Linux 5, you may run into the following error:

[me@myhost client]$ ./runInstaller
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be redhat-3, SuSE-9, redhat-4, UnitedLinux-1.0, asianux-1 or asianux-2
                                      Passed


All installer requirements met.

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2009-06-18_09-58-12AM. Please wait ...[me@myhost client]$ Exception in thread "main" java.lang.UnsatisfiedLinkError: /tmp/OraInstall2009-06-18_09-58-12AM/jre/1.4.2/lib/i386/libawt.so: libXp.so.6: cannot open shared object file: No such file or directory
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(Unknown Source)
        at java.lang.ClassLoader.loadLibrary(Unknown Source)
        at java.lang.Runtime.loadLibrary0(Unknown Source)
        at java.lang.System.loadLibrary(Unknown Source)
        at sun.security.action.LoadLibraryAction.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.awt.NativeLibLoader.loadLibraries(Unknown Source)
        at sun.awt.DebugHelper.(Unknown Source)
        at java.awt.Component.(Unknown Source)



The solution is to install the libXp-1.0.0-8.1.el5.i386 package. Yes, make sure that it's the i386 package, EVEN IF you are trying to install a 64-bit Oracle product on a 64-bit system. This is because the Oracle Universal Installer is a 32-bit application.

Download the libXp-1.0.0-8.1.el5.i386.rpm package and install it by doing

rpm -iv libXp-1.0.0-8.1.el5.i386.rpm

The Oracle installer should now run fine. HTH.

2009-04-28

I Didn't Know That - MySQL, MyISAM and Auto-Commit

i have been using MySQL - on and off - for the past four years or so, and have become fairly comfortable with its normal usage scenarios (i.e. usage that does not involve clustering, replication, etc). Hence, i was quite embarrassed to be stumped by a problem which, on hindsight, should be common knowledge to someone familiar with the database.

i had set up a MySQL instance, added a new schema, and created a few tables using the standard DDL CREATE statement:

CREATE TABLE TBL_MASTER
(
    MASTER_ID    VARCHAR(15),
    FIELD        VARCHAR(255),
    PRIMARY KEY (MASTER_ID)
);

CREATE TABLE TBL_CHILD1
(
    CHILD1_ID    VARCHAR(15),
    MASTER_ID    VARCHAR(15),
    FIELD        VARCHAR(255),
    PRIMARY KEY (CHILD1_ID),
    FOREIGN KEY (MASTER_ID) REFERENCES TBL_MASTER (MASTER_ID)
);

CREATE TABLE TBL_CHILD2
(
    CHILD2_ID    VARCHAR(15),
    MASTER_ID    VARCHAR(15),
    FIELD        VARCHAR(255),
    PRIMARY KEY (CHILD2_ID),
    FOREIGN KEY (MASTER_ID) REFERENCES TBL_MASTER (MASTER_ID)
);


Then, i wired up the MySQL instance to an Apache Tomcat servlet container by using the MySQL JDBC connector driver and configuring the Tomcat datasource to point to the database schema that was created. Most importantly, i had configured the datasource to set the database connections to auto-commit = false, as i would need a group of separate database update statements to be invoked as a single transaction.
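For reference, this kind of setting goes on the datasource definition in Tomcat's context configuration. The sketch below is illustrative only: the resource name, URL and credentials are made up, and defaultAutoCommit is an attribute of the DBCP connection pool that Tomcat uses for its datasources:

```xml
<Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/myschema"
          username="someuser" password="somepassword"
          defaultAutoCommit="false"/>
```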

A basic connectivity test (SELECT 1 FROM DUAL) showed that it was working perfectly at this point.

Next, it was time for the web application deployed in Tomcat to actually do something, so i had the application run the following (simplified) Java code fragment. (Note: datasource is the JDBC datasource object obtained from the Tomcat container via JNDI.)

Connection connection = null;
Statement stmtMaster = null;
Statement stmtChild1 = null;
Statement stmtChild2 = null;

try
{
    connection = datasource.getConnection();
   
    System.out.println(connection.getAutoCommit()); // For debugging purposes
   
    stmtMaster = connection.createStatement();
    stmtMaster.executeUpdate("INSERT INTO TBL_MASTER (MASTER_ID, FIELD) VALUES ('000000000000001', NULL)");
   
    stmtChild1 = connection.createStatement();
    stmtChild1.executeUpdate("INSERT INTO TBL_CHILD1 (CHILD1_ID, MASTER_ID, FIELD) VALUES ('000000000000001', NULL)");
   
    stmtChild2 = connection.createStatement();
    stmtChild2.executeUpdate("INSERT INTO TBL_CHILD2 (CHILD2_ID, MASTER_ID, FIELD) VALUES ('000000000000001', NULL)");
   
    connection.commit();
}
catch (final SQLException ex)
{
    if (connection != null)
    {
        try
        {
            connection.rollback();
        }
        catch (final SQLException ignore)
        {
            //
        }
    }
   
    ex.printStackTrace();
}
finally
{
    if (stmtMaster != null)
    {
        try
        {
            stmtMaster.close();
        }
        catch (final SQLException ignore)
        {
            //
        }
    }
   
    if (stmtChild1 != null)
    {
        try
        {
            stmtChild1.close();
        }
        catch (final SQLException ignore)
        {
            //
        }
    }
   
    if (stmtChild2 != null)
    {
        try
        {
            stmtChild2.close();
        }
        catch (final SQLException ignore)
        {
            //
        }
    }
   
    if (connection != null)
    {
        try
        {
            connection.close();
        }
        catch (final SQLException ignore)
        {
            //
        }
    }
}


In the code fragment above, the line

stmtChild1.executeUpdate("INSERT INTO TBL_CHILD1 (CHILD1_ID, MASTER_ID, FIELD) VALUES ('000000000000001', NULL)");

was obviously wrong. The number of column values that i was trying to insert was less than the number of columns declared; i had carelessly missed out one column value. So when the application was run, an exception occurred, and the stack trace correctly pointed out the errant line above as the culprit. And there should be no record added into any of those three tables, since all changes were rolled back as part of the exception handling. Or so i thought.

Looking into the contents of the TBL_MASTER table using the MySQL Query Browser, it appeared that the record ('000000000000001', NULL) had in fact been inserted and committed. But yet, the line

System.out.println(connection.getAutoCommit()); // For debugging purposes

had printed out the value false, indicating that auto-commit had been correctly turned off. So what was going on here?

After some research, i found out that:

1. When i set up the MySQL instance, the default database engine was initialised to MyISAM, and this setting was not changed.

2. When creating the tables, since the storage engine was not explicitly specified, the default engine - in this case MyISAM - was used.

3. MyISAM is a non-transaction-safe storage engine, meaning that all statements are committed immediately, regardless of the auto-commit mode. Hence, rollback would not work here. (See more information on the MyISAM storage engine here, and also, Comparing Transaction and Non-Transaction Engines.) As a comparison, the InnoDB storage engine - another popular storage engine in MySQL - is a transaction-safe storage engine.

Hence, in order to achieve what i wanted with the code fragment above, i would need those three tables to use InnoDB (or another transaction-safe storage engine). And in order to create a table that uses the InnoDB storage engine (instead of MyISAM), i could do one of three things:

1. Change the default storage engine to InnoDB. This can be done via the MySQL Administrator. Under Health > System Variables > Table Types, change the value of the variable table_type to InnoDB. This will take effect on ALL tables created from this point on, whenever the storage engine is not explicitly declared during table creation.

2. Before executing the DDL statement to create the table, set the storage engine by executing

SET storage_engine=InnoDB;

This will take effect on all tables created in the current session (as long as not overridden by (3) below).

3. Declare the storage engine explicitly when creating the table itself, e.g.

CREATE TABLE TBL_MASTER
(
    MASTER_ID VARCHAR(15),
    FIELD VARCHAR(255),
    PRIMARY KEY (MASTER_ID)
) ENGINE = InnoDB;
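And for tables that already exist (as mine did), the engine can be checked and changed in place. A sketch; note that ALTER TABLE rebuilds the table by copying its data, so it can take a while on large tables:

```sql
-- Check which engine an existing table currently uses
SHOW TABLE STATUS LIKE 'TBL_MASTER';

-- Convert it to InnoDB
ALTER TABLE TBL_MASTER ENGINE = InnoDB;
```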


HTH.

2009-04-24

Tweet...

If you're keen on the occasional small dose of random tech rambling, feel free to follow me on Twitter at http://twitter.com/edwinlee11.

2009-04-23

Ethernet Card Issue When Using AMD 64 Architecture of Linux

i would first like to give a shout out to fellow members of the Slugnet Mailing List - especially Patrick Haller - who helped me to diagnose and figure out the root cause of the problem. It was a real community effort! :-)

Previously, i was running Ubuntu Intrepid Ibex (8.10) i386 and did not encounter any networking issue. When the release candidate for Jaunty Jackalope became available, i decided to try it out (with a fresh install), opting for the AMD64 architecture so as to fully utilise the 4 GB of RAM that i have on my system.

Installation went smoothly, but after starting up, the NetworkManager applet reported that a network connection could not be established (i.e. it could not get a DHCP lease from my router).

Thereafter, a series of diagnostic and troubleshooting steps ensued, taking up the best part of the weekend:

1. ifconfig showed that the ethernet card was detected (eth0), listing the correct HWaddr (MAC address). But of course, there was no IP address for the interface, since it had failed to get a lease via DHCP.

2. Networking worked fine with the i386 architecture of Jaunty Jackalope (tried using live CD). Furthermore, the NetworkManager settings on the AMD64 run were the same as those on the i386 run.

3. Doing a grep on dmesg showed that the ethernet card was correctly detected, and that the link became ready. The relevant lines were also the same across the AMD64 and the i386 runs.

4. Doing a tcpdump (while the NetworkManager applet was trying to establish a connection) showed that DHCP request packets were being correctly sent out, but with no offer coming back.

5. Statically setting the IP address, netmask, gateway and default route (instead of relying on DHCP) did not work either, regardless of whether they were set via the NetworkManager applet or via the ifconfig and route commands. i still could not reach my router by ping, telnet, or the browser (router configuration web page). Furthermore, running arp -an after unsuccessfully pinging the router (at 192.168.1.254) gave the response (192.168.1.254) at <incomplete> on eth0.

6. The same issue surfaced when using the x86_64 architecture of Fedora 10 (again, tried with the live CD).

At that point, suspicions had started to narrow towards the ethernet card itself and/or its driver. In a subsequent post to the mailing list, i mentioned the model of the motherboard that i was using (Asus M2A-MVP) as well as its onboard LAN (Marvell 88E8001). There were apparently problems reported when using it with Ubuntu 7.04 x86_64 (http://hardware4linux.info/component/5811/).

This led to Patrick coming up with the winning entry: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/131965 (in particular, this comment - https://bugs.launchpad.net/ubuntu/+source/linux/+bug/131965/comments/19). Booting up my system with mem=3G (and having a working network connection) showed that it was the same issue as what Patrick had identified. Final confirmation came with Anton's contribution of http://kerneltrap.org/mailarchive/linux-netdev/2009/2/10/4944484.

So that was that! In hindsight i might have done better by searching for reported issues with the particular model of network card, but i suppose i was thrown off the hardware / device driver trail when tcpdump told me that the ethernet card was sending out DHCP requests correctly. For now, i am running my system on 3 GB of memory, but i will probably get a new ethernet card soon.

2009-04-02

Recovering a Harddisk Using the Freezer

My portable harddisk had given way over the weekend, and i got an interesting piece of advice from a fellow member of the Slugnet Mailing List. He had suggested that i try and get it started one last time by placing it in a freezer for a few hours, in order to recover any important bits of data left from it.

i gave it a go then, more out of curiosity than anything else (since i had done a backup of the more important data a day ago and was generally satisfied). And what do you know? It worked wonderfully! i was able to copy out the less important stuff from it as well.

One point to note is that i had to run the USB cable into the freezer to connect the harddisk so as to avoid having to take the disk out of the freezer just to mount it. Given the humidity of the region, the condensation would definitely have dealt it one final blow before i had a chance to attempt any recovery.

2009-02-09

VirtualBox - Broken Mouse Integration

VirtualBox mouse integration had previously been working for me after i installed Guest Additions (running a Linux guest on a Linux host). But recently, after i updated some packages on the guest machine, it stopped working.

This baffled and inconvenienced me, and after doing some searching, i found a couple of solutions (from MakeTechEasier and Tombuntu) which involved editing the /etc/X11/xorg.conf file.

The first suggestion is to add the line

Driver "vboxvideo"

to the "Device" section (if that line is not there). In other words, your "Device" section should look like:

Section "Device"
    Identifier "Configured Video Device"
    Driver "vboxvideo"
EndSection


The other possible solution i found is to add an "InputDevice" section to the configuration file (if the section is not present), and to ensure that it uses the "vboxmouse" driver (if the section is already there).

So your xorg.conf file should have an "InputDevice" section like this one:

Section "InputDevice"
    Identifier "Configured Mouse"
    Driver "vboxmouse"
    Option "CorePointer"
EndSection


Both solutions sound logical. Unfortunately, they did not work for me. i stumbled upon the cause when i tried to mount the shared folder in the guest machine and failed (no such device). This led me to conclude that the updates that i did must have broken the Guest Additions installation.

Installing Guest Additions again instantly did the trick for me, and this is something you may want to try if you run into the same problem. If re-installation does not fix it, try examining your xorg.conf, and ensure that the relevant sections look like the ones listed above.

2009-01-20

Mounting Shared Folder in VirtualBox - Linux Host With Linux Guest

i am using VirtualBox to run a Linux guest virtual machine in a Linux host (both Ubuntu Intrepid Ibex), and utilise VirtualBox's shared folder to copy files between host and guest.

Sometimes, i would get the following error message when trying to mount the shared folder in the guest machine:

/sbin/mount.vboxsf: mounting failed with the error: Protocol error

Searching on Google for the cause and solution led me to a few suggestions:

1. The kernel module may have not been loaded - try doing a modprobe vboxvfs.
2. The folder name in the guest machine cannot contain upper case letters (e.g. /mnt/Share).
3. The folder name in the guest machine cannot contain a dash (e.g. /mnt/sh-are).

After some experimentation, i may have found the solution (or perhaps "workaround" sounds more appropriate), which will hopefully help others facing the same issue. i have to add, though, that i am using an Ubuntu 8.10 guest on an Ubuntu 8.10 host, so i am unsure whether the same solution works for other combinations. Also, though the suggestions above did not work for me, one of them may just do the trick for people in slightly different situations (a different combination, or a different root cause, perhaps).

First, responses to the three suggestions above:

1. Doing modprobe vboxvfs did not work for me. i still faced the very same error message after that. In my case, the kernel module had most probably already been loaded beforehand.
2. The folder name i am using does not contain any upper case letters. In any case, after applying the workaround (to be described below), i tried and could use upper case letters without problems.
3. Same findings as for suggestion (2).

Now, the solution (or workaround) i found is that the mount command cannot be issued from the directory that contains the mount point itself. For example, if the shared folder is mounted at /mnt/share, the mount command cannot be issued from the /mnt directory (even if the full path is provided, i.e. sudo mount -t vboxsf share /mnt/share). As another example, if the mount point is /share, the command can be issued from any working directory except /.
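So, as a concrete sketch (assuming the shared folder was named share in the VirtualBox settings, and that it is to be mounted at /mnt/share):

```shell
sudo mkdir -p /mnt/share
cd /tmp                                # any working directory except /mnt
sudo mount -t vboxsf share /mnt/share
```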

Strange, but this finding works for me, and i hope you will find it useful as well.

2009-01-07

My Conky Configuration

i have just discovered Conky. It is an elegant, unobtrusive, lightweight, yet powerful system monitoring application that can just sit in the background of your desktop. i use it to display the current time, weather conditions, as well as various system statistics.

In this post, i shall share my Conky configuration, in the hope that it will be helpful in getting you started on this tool.

First, a screenshot of Conky on my desktop:



One thing to add is that, besides the standard Conky variables, i also make use of the conkyForecast Python script to display the current weather information. Basically, i use the execpi call to periodically execute the Python script and obtain an output based on the conkyForecast template file. The output is then parsed by Conky and displayed along with all the other information.

Here are the various configuration files that i used:
    .conkyrc - the main Conky configuration file
    .conkyForecast.config - configuration file for conkyForecast
    .conkyForecast.template - conkyForecast template file, for formatting the weather section
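Those exact files are not reproduced here, but as a rough sketch of what a minimal .conkyrc along these lines might look like (the conkyForecast location code and template path below are made up; check the conkyForecast documentation for its actual options):

```
update_interval 2.0
own_window yes
double_buffer yes
alignment top_right

TEXT
${time %a %d %b %Y, %H:%M:%S}
${execpi 1800 conkyForecast --location=SNXX0006 --template=/home/me/.conkyForecast.template}
CPU: ${cpu cpu0}%   RAM: ${memperc}%
Up: ${upspeed eth0}   Down: ${downspeed eth0}
```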

Have fun experimenting with Conky!