Category Archives: Linux

Linux Mint 18 “Sarah” MATE Edition on Acer Aspire E5-575-33BM

Just completed the installation of Linux Mint 18 on a new Acer Aspire E5-575-33BM as a Christmas present for my wife. Her 2007 Intel Core Duo MacBook 2,1 running LM 17.2 was getting too cranky and its battery life kept shrinking, so it was time for an update/upgrade. I’d considered getting her a Chromebook, but the price delta between a 15.6″ Chromebook (which has printing issues) and this machine, which includes a 1TB disk, was not that much. She can always use my Chromebook by simply logging in with her Gmail account, so there was little to lose by going with the more capable machine.

She has become very comfortable using Linux over the past year-plus on her Mac, and it has met all her computing needs, so it was a no-brainer to avoid the intrusive solution that is Win 10 and go Linux on this unit too. The following are the steps I took to make it work. Got some clues from here and here.

  • Update Acer Firmware. The update tool works only under Windows, so I did this before modifying the machine. Booted into Win 10, did only a basic system setup (including disabling ALL automatic updates before connecting to the web) and then ran the FW update downloaded from the Acer site. The latest FW (jumping two versions) installed fine.
  • Twiddled some FW settings (press F2 at power on) to allow Legacy-mode booting off a Linux Mint 18 MATE live USB image and tested basic functionality – pretty much everything worked fine. Cool. Shut down and removed the back panel to do some HW work.
  • Swapped the Samsung EVO 850 SSD from her old MacBook into the new Acer, removing the 1TB disk for safekeeping with the Windows 10 install. Just swapped the applicable disk caddies and put the SSD in. For giggles I twiddled the FW once more to use Legacy (BIOS) mode and was able to boot the existing LM 17.2 image just fine on the new machine, but the wireless did not work and it was a 32-bit install, so it was time to install a new version for the 64-bit machine.
  • Updated FW settings once more to enable USB boot for the installation and disabled Secure Boot (while leaving UEFI enabled).
  • Booted the live USB for installation. For some strange reason I had not followed my usual practice of setting up a separate home partition in the old LM 17.2 installation, so I needed to migrate the contents of home to someplace safe. I used GParted to create and resize partitions for both home and the UEFI boot files (and to create backup images), used FSArchiver to create a backup disk image of the old 17.2 environment (Partimage doesn’t work on ext4 file systems, which my prior install was on), and migrated the home directory files to the new /home partition using Grsync. Installed LM 18 from the live USB, which took a surprisingly short time over a wired Ethernet connection.
  • Brought up FW setup once more and enabled Secure Boot again, and within the FW “trust”ed the UEFI partition files for Secure Boot. Rebooted into the new LM 18 environment. Using Driver Manager, installed the intel-microcode firmware to support the i3 processor, then rebooted as required by Driver Manager.
  • Ran Mint Update Manager and installed all 167 offered updates. The onboard wireless worked, but was incredibly slow (1 Mb/s) even with all available updates installed. Tried a bunch of online solutions to update the Atheros ath10k firmware and kernel, etc., which didn’t work – and ended up causing some problems when I tried backing them out – so I repeated the Mint installation once more, from the FW/Secure Boot settings through to the reboot required by Driver Manager above.

I have on hand an Edimax USB WiFi dongle with a Realtek chipset, so I tried that one via Network Manager and it connected at 54 Mb/s right away. So for the time being we’re sticking with that one, and pretty much everything now works – sound, volume control via function keys, wireless, video playback, the Jacquie Lawson Christmas Web Advent Calendar card (HTML5 or Flash-based?). Brightness control does not work via the function keys yet but does work with the software control.
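When chasing this kind of wireless trouble, it helps to confirm which kernel driver actually claimed the card. A small filter over `lspci -k` output does the trick – a sketch; the function name is mine and the device strings are just examples:

```shell
# Print the kernel driver bound to the first network controller found.
# On a live system, feed it real data:  lspci -k | net_driver
net_driver() {
  awk '/Network controller/ { grab = 1 }
       grab && /Kernel driver in use/ { print $NF; exit }'
}
```

On this Acer it would be expected to report something like ath10k_pci for the onboard Atheros chip, or a Realtek driver once the Edimax dongle is in play.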

The machine is very snappy under Linux (much faster than it seemed under Windows 10) and idles with just single-digit percentages of CPU core use and a small fraction of the 4GB DIMM capacity in use. The screen is very nice. Battery life is still TBD under normal use (as I was hitting it pretty hard with all the installation work), but it definitely goes for something like 6+ hours.

Think this one will work out well for some time into the future – the OS is supported until 2021!

Creating OpenVPN .ovpn Files for Android (Any?) Clients

In another post I cover setting up an OpenVPN server on a Tomato-powered router and making client connections to that server.

In setting up a new phone, I see the OpenVPN for Android app will now import yourVPNclient.ovpn files (much easier than transferring and importing the separate key and cert components as covered in my prior post). It took a bit of Googling to find out how to create the .ovpn files, but now that I’ve found the file format, setting one up turns out to be a piece of cake. Here’s the template:

client
proto udp
remote your.dyndns.hostname
port 1194
dev tun

key-direction 1

<ca>
# insert base64 blob from ca.crt
</ca>

<cert>
# insert base64 blob from client1.crt
</cert>

<key>
# insert base64 blob from client1.key
</key>

<tls-auth>
-----BEGIN OpenVPN Static key V1-----
# insert ta.key
-----END OpenVPN Static key V1-----
</tls-auth>

I edited the “remote” directive to point to my VPN (router’s) dynamic DNS address and then copied the specified parts of the files from the /etc/openvpn directory, as created in my prior post, into this template. Then saved the consolidated file as myserver+clientname.ovpn.txt on my Linux box.
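The copy-and-paste assembly can also be scripted. Here’s a minimal sketch, assuming the file names from my prior post (ca.crt, client1.crt, client1.key, ta.key under /etc/openvpn) and a hypothetical dynamic DNS hostname – substitute your own:

```shell
#!/bin/sh
# Glue the CA, client cert/key and TLS-auth key into one inline-style .ovpn.
# $1 = directory holding the PKI files, $2 = output file name.
make_ovpn() {
  dir=$1 out=$2
  {
    printf 'client\nproto udp\nremote myhome.dyndns.example 1194\ndev tun\nkey-direction 1\n'
    printf '<ca>\n';       cat "$dir/ca.crt";      printf '</ca>\n'
    printf '<cert>\n';     cat "$dir/client1.crt"; printf '</cert>\n'
    printf '<key>\n';      cat "$dir/client1.key"; printf '</key>\n'
    printf '<tls-auth>\n'; cat "$dir/ta.key";      printf '</tls-auth>\n'
  } > "$out"
}
```

For example, `make_ovpn /etc/openvpn myserver+client1.ovpn.txt` – with the .txt suffix for the Bluetooth transfer trick described below.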

Why with *.txt extension? Because otherwise the bluetooth file transfer from my desktop linux box to my phone would fail (unsupported file type). Text file transfer is supported, .ovpn is apparently not.

I then simply renamed the file on my Android phone to drop the .txt suffix and imported the result into the OpenVPN for Android app. (It turns out you can leave the suffix, but the app will include that text in the connection name by default, so now I just trim it there.) I still needed to set some options in the app to match my server config (LZO, persistent TUN, etc.), but the heavy lifting was already done.

Connected successfully on my first try! I see no reason why the same file setup would not work in NetworkManager on Linux or some other client, but I haven’t tried it myself. Good luck!

Credit for the .ovpn template content goes to this ServerFault discussion thread.

OpenVPN on Tomato with Android and Linux Clients

I’ve been wanting to do this for a very long time. When away from home I sometimes need access to the systems (or data residing on those systems) back at home. I wanted a secure means to access the machines behind my router’s firewall, and one of the most versatile and secure ways to do that is with a Virtual Private Network (VPN). The problem was that this stuff is pretty complicated, and even though the open source firmware we run on our router has long had a VPN-enabled version available, I’ve been loath to try implementing it.

Well, the garage control system project I was recently working on had a hardware failure such that I could not implement it in the way originally intended (until I replace the CAI WebControl board central to it). The board failed in such a way that it would not accept PLC programming but would still respond through the default web interface – which unfortunately is not sufficiently secure to expose to the internet directly. However, we were going away for an extended period and I needed to be able to access it while away. A perfect application for VPN technology: I could keep the “vulnerable” system firewalled behind the router and poke a secure hole through it with the VPN to control it from afar when needed. Just the shove I needed to get going on the VPN!

Curiously enough, in googling I was able to find various basic tutorials about setting up a Tomato VPN-enabled router (which is Linux based) as a VPN server with Windows clients, creating the certificates and keys on Windows, but pretty much nothing simple about doing so with other platforms like mine: Android (again, Linux based), Linux and Mac. The ones about setting up a VPN with Linux all seemed to want you doing everything down in the weeds of config files and installing VPN packages on your own server (not a router). Not what I wanted.

The good news for you and me is that I figured out how to get this done with minimal effort, and it pretty much worked perfectly on the first try, so I’m writing it up here for future reference and to share with any others following this path. Looking back, it wasn’t that hard, but the lack of clear guidance made it all confusing. All that said, here’s some clarity on how to get it done:

Creating Certificates and Keys

On a Linux Mint LMDE (Debian Linux) workstation, using Synaptic or another package manager, install the openvpn and easy-rsa packages.

This will install the easy-rsa scripts into /usr/share/easy-rsa.

Taking note of the instructions, I did the following:

Copy the easy-rsa files to another location that will persist across package upgrades (note: this location already existed as a result of the openvpn installation and contained the single file update-resolv-conf, so maybe that claim is misleading?) and cd into that directory:
sudo cp -R /usr/share/easy-rsa/* /etc/openvpn/
cd /etc/openvpn

Edited the vars file with vi to set the KEY_COUNTRY, KEY_PROVINCE, KEY_CITY, KEY_ORG and KEY_EMAIL parameters, which were at the bottom of the file for me. From what I can tell, the two email entries are identical but for the quote symbols. I presume the quoted one is meant to be the “real life name”, but nothing I could easily find via Google confirmed or contradicted this – so I just set them both to the same address.

export KEY_CITY="MyCity"
export KEY_ORG="My ORG Name"
export KEY_EMAIL=""

I then completed the rest of the steps at the above link using root/sudo privileges, creating the certificate authority, the server certificate and key, and then the client certificates and keys. What I found online was not very informative on this point, but the Common Name (CN) must be entered each time you build these items and should be varied so as to be descriptive. So, for each command:

./build-ca
For this I specified my own name as the Common Name (I’m my own certificate authority) and it generated two files, ca.crt and ca.key (note, these are not named after the Common Name given, unlike the following).

./build-key-server ServerName
I gave my intended VPN server name as the ServerName, which it then used as the Common Name, and it generated ServerName.crt, ServerName.csr and ServerName.key, plus a 01.pem file, and changed the index.txt and serial files in the keys directory.
NOTE: here I also encountered something different from what is laid out at the above URL; for each key it asked me for:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
which I simply pressed Enter through (to set them blank, I presume, as no password was asked for later).

./build-key ClientNameHere
I gave a unique descriptive name for each client and it created files similar to the server ones above, named per the client names given, created sequentially numbered .pem files, and updated the index.txt and serial files.

./build-dh
Returned: Generating DH parameters, 1024 bit long safe prime, generator 2
This is going to take a long time
(But it didn’t – it was less than 5 seconds on my 64-bit Linux workstation.)

The Arch Linux wiki entry for Easy-RSA stated a need to convert the server certificate to an encrypted .p12 format for use on Android. I found this not to be needed with the OpenVPN for Android client from the Google Play Store.

To provide additional TLS security and to protect against potential denial-of-service attacks on my router/VPN server, I also set up an HMAC signature key:
openvpn --genkey --secret ta.key
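For reference, the client side uses key-direction 1 (as in my .ovpn template post), so the server needs the complementary direction. A sketch of the server-side directive, which on Tomato goes into the OpenVPN server’s custom configuration (the path is wherever you put ta.key on the router):

```
# server side: HMAC/TLS-auth with direction 0 (clients use key-direction 1)
tls-auth ta.key 0
```

If the directions don’t complement each other (0 on one end, 1 on the other), the TLS handshake will silently fail.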

Setting the server and clients up…

As I created all the certificates, keys, etc. on my main Debian workstation, I needed to transfer those files to the associated machines. First I used my browser and the Tomato-powered router (VPN server) web interface to set up the VPN server following the info here *except* for using TUN instead of TAP. Installing Tomato is covered in my other blog post. Here are screenshots of my settings (click on them to enlarge):

Connecting with a Linux machine. I then set up the test client on my Mint LMDE/Debian laptop by installing the NetworkManager OpenVPN plugin, which dragged along a bunch of other required packages including openvpn, easy-rsa, etc. I imported the certificates and keys when setting up the VPN connection using NetworkManager. Trying to connect via this initially failed. I thought this might be because I was on my home network at the time, so I proceeded to set up my phone as a client to see if I could use the cellular network to test outside access.

Connecting via Android. I installed OpenVPN for Android from the Google Play store onto my cell phone, copied the certs and keys over via USB cable and set up the connection in the app. It took a bit of twiddling to figure out where everything went and which boxes to check, but it connected quickly once set up. I could now access private resources behind the router firewall! I went on to set up my Android tablet with the same app.

Connecting from an Outside Network. Brought my phone, Linux laptop and Android tablet along for a drive to find an available Xfinity Wi-Fi connection. Tried each client against the VPN once connected to some poor folks’ wireless access point (why folks stand for Comcast doing this, I don’t know), and they all connected quickly and could reach my Garage Control System web interface on my home network… success!

Note that in none of these set-ups did I need to manually edit or create any configuration files on the clients or server, despite lots of other tutorials making great points of this! It appears each of the OpenVPN server and client implementations I used took care of that for me.

The only bit of weirdness was that at first I could not figure out how to directly disconnect from the VPN using NetworkManager under the MATE desktop on my Linux laptop (I could drop the VPN only by dropping the wireless connection overall). There should be a “Disconnect VPN” option within NetworkManager, but I didn’t see it while connected (it was there when I wasn’t!).

Update: I’ve since found the disconnect option in the VPN menu under NetworkManager, and that can be used to drop the VPN connection. The Android clients have a connection status entry in the notifications list which provides a disconnect option once tapped. All good to go now!

Spicing Up Your Dance Collection with Pepperplate

In this post I will lay out some history of how I have managed my dance collection and what I am currently exploring as a way to greatly simplify the work of building and maintaining that collection and making it available whenever and wherever I am.

I’ve been calling (leading/prompting) contra dances for several years now. One of the things you need to figure out pretty quickly when you decide to become a caller is how you’re going to record and organize your dances.

Like most beginning callers, at first I recorded dances by hand on index cards (or all too frequently, any spare sheet of paper I could find at a dance). As my collection grew (and I started actually having to use these compositions to lead others through the dances) my requirements for standardization and legibility grew. (In all fairness, my background includes quite a bit of business process work – so I’m a bit of a process wonk.)

So for a couple of years now I’ve been working with a system that typically involves too much work. Why? Because I chose to standardize on 3×5 inch index cards, which turn out to be pretty darn small. The 3x5s have let me carry my core card collection easily in my dance bag if I want, but the small size means I really have to re-work a lot of the material I gather, abbreviating or summarizing it into the standardized shorthand format I record on my cards. The real estate has been very constraining (though in fairness it has made me really good at slimming down language clearly). And when someone else wants to see my card, or I post a dance to a discussion group, there are sometimes questions about the notation.

I have been creating these cards using a template I set up in Open Office Writer (now LibreOffice), which I would laser print 4-up on card stock as I added new ones or as a given card wore out or was revised, then cut to size with a paper trimmer. I could also export the cards to a large PDF file containing my whole collection. I kept both the PDF and original files backed up and synchronized across several computers via SpiderOak so I would never fear losing my collection. An example card:
A dance card format example.
My workflow for finding dances and transforming them into a usable card was essentially:

  1. Find a dance I liked. This could be from dancing or seeing one danced or based on something in email from a group/forum, etc.
  2. If I got it in person, I originally would scribble it down. Sometimes a caller would offer to email it to me. My latest trick has been to either take a cell phone picture of the caller’s card or quickly get the dance name and/or moves entered into Google Keep on my phone.
  3. If via email, I tag the email with a “Dance to Collect” tag in GMail which becomes a queue to transcribe from.
  4. Discover my dances in queue (Keep, email or photos) and review them for quality/suitability. Was I just in a dance trance and got carried away or is it really a good one? Will I actually call it? If all good, continue on. If not, delete or recycle the paper.
  5. Process worthy dances into standard format, adding them to the master Writer file. Queue them in the “dances to review” section and when there’s a suitable chunk, print on recycled regular sheets of paper to try out.
  6. Kitchen Validate. Try dancing my transcribed card in our kitchen. If needed, cajole other family members to run through it with me. Apply my now standard set of QA checks to the dance (progresses? work for both roles? etc.) and create teaching notes as required for when I’d call it.
  7. Dance Validate. When a suitable opportunity presents, call the dance. Note any key learnings on the card and mark it as validated as applicable. Factor in any dancer or musician feedback (often noting the tune chosen, if I’m sharp enough to ask).
  8. Update Cards. When I’d think of it, I would drag out my cards and scan them for ones with handwriting on them and record that information back into the electronic copy. If significant, I’d reprint the card(s).

As you can likely tell, that’s a lot of work. However, my cards enabled me to do a pretty good job calling even material that was new to me. I often got positive comments from musicians I worked with about the cards being very usable.

If you’re a caller, you might ask why I wasn’t using one of the existing caller tools to capture my cards, like Caller’s Companion or Dance Organizer. Well, the answer is that I don’t have any iThings or WinThings. I run Linux on all my computers, and my cell phone and tablet currently run Android. Sadly, neither of the established caller solutions supports any of what I’ve got.

So in a fit of frustration the other day I launched into yet another of my ~yearly reviews of the caller/leader software out there and found the dedicated-applications landscape essentially unchanged. I thought briefly of setting up something on my own domain to do this as a database application, but that would be limited to wherever I could get on a network. So, as an open source enthusiast, I started thinking creatively (we often need to, as popular “local app” tools frequently omit Linux in particular). My breakthrough was asking “what is a dance card?” and my answer was “it’s effectively a recipe for a dance.” With that insight, I started researching the recipe management software solutions out there. Again, I found a lot of stuff for OS X and Windows, even Android, and a bit for Linux. As I looked into it I realized my criteria basically boiled down to:

  1. Being able to add or edit dances anywhere I was on any device
  2. Being able to print them to hard copy if needed
  3. Being able to organize them into a program for an evening.

These were the core requirements, several ancillary ones flowed from there. These included the ability to classify dances in standard ways for filtering, searching to quickly find one, and managing my work queue. Also important was the ability to work offline when a web connection was not available (and sync that work when connected again).

The end result of my search was finding the Pepperplate recipe suite. It supports all my electronic devices, either through local apps or website tools. It supports tagging, filtering and search. The dance-and-meal analogy extends nicely: a dance is a dish, a program is a menu and a booked event is a planned meal. It supports sections (parts) of a recipe – a sauce maps well to an A1 figure – with ingredients and instructions standing in for moves and calls/teaching points. Pepperplate provides for adding dishes to a menu, and menus to a meal. I find that the analogy fits pretty well and I can use this tool to do most of what I want for my dance collection seamlessly. It also supports sharing recipes (dances) in a couple of easy ways.

The biggest difference from what I’ve otherwise found in the caller tools space is that this will work with pretty much all popular (and even unpopular) devices and that it automatically syncs across them. And not that it really matters given the relatively low cost of the existing dance leader applications, but it is also free.

I’m in the early stages with Pepperplate and tried calling from it for the first time just this past weekend. I only have a limited set of dances in the tool so far but it has been doing pretty well. I’m no longer severely space constrained! I do have some criticisms and have discovered some workarounds (mostly Android settings) to get around them. And BTW, there’s a big plus for me: the Android app includes a timer for each dish (dance) in a menu (program), so I can set it for how long I want to run the dance and a “can’t miss” message pops up to keep me on track.

In fairness, there are some risks and glitches with using Pepperplate for a dance collection beyond the obvious. These include a dependence upon a business whose revenue model is not entirely clear. They might also not be happy with it being used this way (though from a quick review of their Terms of Service it appears not to be a violation, and doing so just provides more eyeballs for the ads they serve). However, the data is stored locally on the device for offline use and (at least on Android) is in a format that can be backed up and extracted/manipulated should Pepperplate go belly up.

Is it ideal? No, but it’s ~85-90% of the way there, IMO. Until something better comes along, I think I’ll be using Pepperplate to manage my dance collection going forward.

In a later post I’ll cover my experiences and tips with using the tool for this application: limitations I’ve found as a dance organizer (and even as a straight recipe) app, how I’ve set things up for ease of use/applicability and how I’ve fit Pepperplate into the dance collection workflow I lay out above. A quick preview: it has made things much easier!

gLabels Avery 5167 Template Problem

I was having trouble printing some Avery 5167 return address labels using gLabels. The alignment was significantly off in my set-up using the default predefined template installed with gLabels on my Linux Mint LMDE netbook.

In comparing the template definition file with the stock measurements I found several things to be off slightly. In addition, my Samsung ML-2851ND laser printer appeared to be shifting the page image a bit also.

I created a custom template, adjusted for what I was experiencing, and now I can print consistent, cleanly formatted labels within the stock outlines. Should you be experiencing similar issues, you could use my custom 5167 template. Just save it into a file named as your_filename_here.template in the location set by your distribution (for Linux Mint LMDE, I discovered that was ~/.config/libglabels/templates).
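Installing a custom template is just a copy into that per-user directory; a small sketch (the helper name is mine, and the destination assumes the Mint LMDE path above):

```shell
# Drop a custom gLabels template file into the per-user template directory
# (~/.config/libglabels/templates on Mint LMDE), creating it if needed.
install_glabels_template() {
  dest="$HOME/.config/libglabels/templates"
  mkdir -p "$dest"
  cp "$1" "$dest/"
}
```

After copying, restart gLabels so it re-reads the template directory.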

BTW, should you need to customize the template further, see this documentation.

Good luck!

Recovery of Files from an Unbootable VirtualBox VDI

I do most everything computer-wise with open source software, but the one holdout remaining that requires a proprietary OS is TurboTax. As a result, TT ran in a Windows XP virtual machine under VirtualBox on my Linux desktop. Unfortunately, after completing our most recent return, I got a little excited about basic housekeeping and tried to merge snapshots from the VM in order to save some disk space. The attempt resulted in an error from VirtualBox that basically amounted to “you’re really screwed, buddy,” put in much geekier terms with a bits-and-bytes error code. Later attempts to re-merge or boot the VM did not work either: the virtual machine claimed that key Windows files (like the kernel) were not available. Argh!

OK, so I’m usually pretty careful to save off critical files from the Windows VM to the Linux host. Sadly, I had not done that for the very-last-as-filed TurboTax working file (I had an interim copy from several hours earlier, but I know we made changes later). I had the PDF copies of our returns but not the final version of the .tax2011 file, which normally carries key details over to next year’s return. And of course, I hadn’t yet set up SpiderOak to back up the files from within the VM to the cloud. Double argh!

As the VM would not boot, I tried various alternative boot scenarios to get at the files, using either a Windows install CD or a Linux live CD image within the VM, but none of them worked. Furious Googling finally turned up a working solution to allow access to the files on the Virtual Disk Image (VDI) associated with the VM. I was then able to copy the needed files out of the virtual Windows environment to native Linux file storage. Phew, dodged that bullet! Here’s what I did under Linux Mint LMDE 64-bit to get access and then clean up afterwards:

Install Required Packages
Using Synaptic, installed the qemu-utils package, which dragged along a bunch of dependency packages.
bridge-utils (1.5-6)
ipxe-qemu (1.0.0+git-20120202.f6840ba-3)
libaio1 (0.3.109-4)
libiscsi1 (1.4.0-3)
libspice-server1 (0.12.4-0nocelt1)
libusbredirparser0 (0.4.3-2)
libvdeplug2 (2.3.2-4)
qemu-keymaps (1.1.2+dfsg-6a)
qemu-kvm (1.1.2+dfsg-6)
qemu-utils (1.1.2+dfsg-6a)
seabios (1.7.3-1)
sharutils (1:4.11.1-2)
vgabios (0.7a-3)

Gain Access to the Disk Image
Within a terminal window, executed the following commands:
lsmod | grep -i nbd
Nothing was returned, so the nbd module was not loaded already. Loaded it:
sudo modprobe nbd max_part=16
Run qemu-nbd to expose the entire unbootable image as a block device named /dev/nbd0, and the partitions within it as subdevices.
sudo qemu-nbd -c /dev/nbd0 WinXP_VirtualBox.vdi
The referenced blog posting/commentary said to issue a partprobe command, but I got an error about it not being available; I didn’t seem to need it, as the partitions were visible without it. Could see this by:
ls -l /dev/nbd*
To determine partition details:
sudo fdisk /dev/nbd0
and press p
This revealed the desired Windows NTFS partition from the virtual disk:
Disk /dev/nbd0: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xdc94dc94

Device Boot Start End Blocks Id System
/dev/nbd0p1 * 63 20948759 10474348+ 7 HPFS/NTFS/exFAT

Access and Copy Off Files
OK, so create a mount point for the virtual disk and mount it READ ONLY:
cd /
sudo mkdir RECOVER
sudo mount -t ntfs -r /dev/nbd0p1 /RECOVER

Finally I could look at that mount point and recover the files:
cp -p /RECOVER/path/to/needed/files /final/linux/resting/place/

Cleaning Up
Once I got all that I needed off the VDI, unmounted the image and shut down the qemu-nbd service:
sudo umount /RECOVER
sudo qemu-nbd -d /dev/nbd0
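The attach and teardown steps above are easy to get out of order, so a pair of small wrapper helpers can keep them honest. A sketch, assuming the same device, partition and mount point names used above (and the helper names are mine):

```shell
# Attach a VDI via qemu-nbd and mount its first NTFS partition read-only.
# Usage:  vdi_attach WinXP_VirtualBox.vdi   ...   vdi_detach
vdi_attach() {
  vdi=$1
  sudo modprobe nbd max_part=16          # make sure the nbd module is loaded
  sudo qemu-nbd -c /dev/nbd0 "$vdi"      # expose image as a block device
  sudo mkdir -p /RECOVER
  sudo mount -t ntfs -r /dev/nbd0p1 /RECOVER
}

vdi_detach() {
  sudo umount /RECOVER                   # release the mount first...
  sudo qemu-nbd -d /dev/nbd0             # ...then disconnect the nbd device
}
```

Running vdi_detach before removing the qemu packages avoids leaving a stale /dev/nbd0 connection behind.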

Then used Synaptic to remove all the qemu packages I’d just installed, to prevent the accretion of bloat hopefully never needed again. I’m trying to keep this Mint LMDE install tidy and avoid an OS reinstall for a good long time!

Train Firefox mailto: to use Google Apps – Take 2

In a prior post I detailed a method of using a javascript entry to add an external mail resource, allowing clicks on mailto: links to use the Google Apps version of Gmail. Unfortunately, when I tried to repeat that method on my newly reloaded netbook running Linux Mint LMDE with the default Firefox 20, it didn’t work: I’d enter the javascript string in the browser URL bar, but nothing happened this time. I wonder if it had something to do with copying the text from my prior blog post without a proper HTML entity for the ampersand (‘&’) character. In any case, I found another way to fix it that’s a little more geeky but actually easier to do, as no about:config action is required.

My solution was to track down where these options are set and then manually edit the mimetypes.rdf file in the user’s firefox profile folder with all instances of Firefox closed. Enabling the Google Apps selection required adding both a
NC:possibleApplication RDF:resource= and a
RDF:Description RDF:about="urn:handler:web:
entry. Once completed, the agent was selectable in the preference Applications setting and worked properly for me.

Here are the entries I made (NOTE: substitute your own Google Apps domain in the entries below):

Find <RDF:Description RDF:about="urn:scheme:handler:mailto"
and add the following alongside the other similar entries within it:
<NC:possibleApplication RDF:resource="urn:handler:web:"/>

Find <RDF:Description RDF:about="urn:handler:web:"
and add below that entry the following:
<RDF:Description RDF:about="urn:handler:web:"
NC:prettyName=" email thru Gmail"
NC:uriTemplate="" />

Restart Firefox and change your application preferences for mailto: links to use the new agent and you’re all set.

Arch Linux and 1-Wire on a Seagate DockStar

Outline for now. This is currently in process, but I’ve made much more progress than shown below – I now have all but the data logging/graphing set up, and everything autostarts with new systemd service files. Yay!

Reinstall latest Arch following instructions.

Modifications to that installation process:

  • Create the system partition as ext3 instead, using mke2fs -j /dev/sda1, and make sure the boot loader knows to use ext3: /usr/sbin/fw_setenv usb_rootfstype ext3
  • Perform the fw_setenv mods for rootdelay and an additional stop/start on the USB drive/bus (figured this out last time; required to ensure the USB drive comes ready before the DockStar tries to boot from it): /usr/sbin/fw_setenv usb_rootdelay 10 (should experiment to see if this can be reduced with the next item in place) and /usr/sbin/fw_setenv bootcmd 'usb start; usb stop; usb start; run force_rescue_bootcmd; run ubifs_bootcmd; run usb_bootcmd; usb stop; run rescue_bootcmd; run pogo_bootcmd; reset'. Otherwise the DockStar may boot into the original PogoPlug OS instead.

Change the root password. Update hostname and locale per the instructions in the Arch Beginner’s Guide (HW reboot required for the hostname to take effect).

Update the system: pacman -Syu

Install owfs, lighttpd, FastCGI and PHP: pacman -S owfs lighttpd fcgi php php-cgi (digitemp not available as a package yet, see AUR)

Set up lighttpd (including PHP and FastCGI support, but DO NOT make the first set of mods shown right under the FastCGI heading; those are for enabling Ruby on Rails, are incomplete, and will bork the server start-up)
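For reference, here is the general shape of the FastCGI/PHP wiring in /etc/lighttpd/lighttpd.conf. This is a sketch only; the php-cgi path and socket location are assumptions based on typical Arch packaging, so verify them against your install:

```
server.modules += ( "mod_fastcgi" )

fastcgi.server = ( ".php" => ((
    "bin-path" => "/usr/bin/php-cgi",       # assumed location of php-cgi
    "socket"   => "/run/lighttpd/php.sock"  # assumed socket path
)))
```

After editing, lighttpd -t -f /etc/lighttpd/lighttpd.conf will syntax-check the file before you restart the service.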

Set up passwordless login via key:

On your local machine, copy over your local public key to the new server using
user@localmachine ~ $ ssh-copy-id root@remotemachine
root@remotemachine's password:
Now try logging into the machine, with "ssh 'root@remotemachine'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

Modify /etc/ssh/sshd_config to disable password authentication (without this, the passwordless authentication will work, but others could still try to log in with the root password):
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes

and restart the sshd service:
systemctl restart sshd
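A quick way to sanity-check those directives before restarting sshd is a grep loop. This sketch demonstrates against a temporary sample file so it runs anywhere; on the real machine, point cfg at /etc/ssh/sshd_config instead:

```shell
# Build a sample config to demonstrate against
# (use cfg=/etc/ssh/sshd_config for the real check)
cfg=$(mktemp)
printf '%s\n' 'PasswordAuthentication no' \
              'ChallengeResponseAuthentication no' \
              'PubkeyAuthentication yes' > "$cfg"

# Report whether each hardening directive is present exactly as intended
for want in 'PasswordAuthentication no' \
            'ChallengeResponseAuthentication no' \
            'PubkeyAuthentication yes'; do
  if grep -qx "$want" "$cfg"; then echo "OK: $want"; else echo "MISSING: $want"; fi
done
```

Anything reported MISSING should be fixed before the restart, since a typo here can lock you out of remote logins.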


  • Get owfs suite working and create the proper config and daemon files to have it autostart and keep running [DONE, details to be added here – but all the magic happens via /etc/systemd/system].
  • Create web page(s) to autodisplay the local 1-wire sensors data as well as interesting data from a chosen wunderground feed [DONE, using the json API for wunderground, details to be added here].
  • Automate the data collection and graphing for sensors. [PENDING]
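Pending the full write-up, here is the general shape such a systemd unit could take, e.g. /etc/systemd/system/owserver.service. This is a hypothetical sketch, not my actual file; the binary path, port, and 1-wire adapter device are assumptions to adjust for your setup:

```
[Unit]
Description=1-Wire network server (owserver)
After=network.target

[Service]
# -p: port owserver listens on; -d: serial 1-wire bus master device
ExecStart=/usr/bin/owserver --foreground -p 4304 -d /dev/ttyUSB0
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable owserver; companion units for owhttpd and the owfs mount follow the same pattern.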

Windows Freedom Round 2: HomeBank and JStock

This is just a stub of a future post regarding more progress in ditching MS Windows altogether. In a prior Windows-freedom post I covered my list of remaining programs that kept me booting a virtual machine installation of Windows XP in order to get things done.

That post included my list of remaining programs that I’d yet to find effective Linux alternatives for:

  • Quicken
  • TurboTax/HR Block at Home
  • GoToWebinar

Well, now Quicken is crossed off the list, as I’ve found HomeBank (banking) and JStock (stock accounting, basis tracking). More info on using/configuring these will come later.

Clearing out the Cruft with Linux Tools and Best Practices

As covered in an earlier post, I’ve got some serious work under way to sync and backup my computer files across several computing platforms and devices. I’m reinstalling some machines as part of that work. At that time, I’m cleaning out a bunch of accumulated cruft in the form of duplicate files and folders – plus old OS and application configuration data – some of which has been carried over from as far back as my Windows 98 and SunOS 4 days!

So here’s my acquired wisdom on how to get this done under Linux.

  • Delete or archive off all dot (.) files. UNIX/Linux type operating systems store configuration and option information in hidden “dot” files in the user home directory. When moving to a new version of the OS, it is best to start with fresh dot files in most instances (prior files may confuse newer versions of programs, etc.) – there are just a few which are desirable to copy back (like your ssh keys and .mozilla (Firefox/Thunderbird) configurations). If you uninstall programs, the associated dot files may be left behind, taking up space. So delete or archive these files off and copy back only what you need after the new installation is done.
  • Eliminate Duplicates. Over time, I did things like copy over from one machine to another a copy of an important directory, or upload the content of our camera’s memory card. This often results in duplicate files and folders. The best way to fix this is not to do it (which is what will be fixed with my sync/backup solution) but it can happen nonetheless. There are two good tools I’ve found to help clean this up.
    • FSlint: Among the capabilities of this tool is a duplicate files finder. It doesn’t just check for duplicate file names but does more comprehensive comparisons so that it will catch the same file under different names (even extensions!) and eliminate false positive matches using checksumming, etc. You can then delete off duplicate files, if desired, or replace them with hard links to a single copy to save space.
    • Meld: This tool includes a directory comparison capability, for up to three directories at once. It will tell you where the files in the directories are the same, where files exist in only some of them and/or if the file attributes match (eg: permissions, modification date or size). The tool allows for merging/moving files to consolidate down to a single “master” copy. Way better than trying to do the same through command line or file manager tools. Highly recommended!
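If you want a feel for what FSlint's duplicate finder is doing under the hood, here is a minimal command-line equivalent (GNU coreutils assumed): checksum every file, then print only the files whose checksums occur more than once.

```shell
# Checksum all files under the current directory, sort so identical sums
# are adjacent, then show every file whose first 32 chars (the md5) repeat.
find . -type f -exec md5sum {} + | sort | uniq -w32 -D
```

FSlint layers on safeguards like size pre-filtering and byte-for-byte confirmation; this one-liner is just the core idea, so review its output before deleting anything.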

Will add more later on ways to:

  • Slim down applications data.
  • Eliminate unused languages/localizations.
  • Clean out cached information.

Password and Files Encryption/Sync/Backup: Gettin ‘er Done!

One of my to-do list items for quite some time now has been to get my computer files organized and to set up automated backup and synchronization across my computing devices.

I’d kept putting this one off because I wanted to deal with some foundational issues first:

  • pruning down my files and eliminating duplicates both within my desktop machine’s file system as well as with my netbook’s files
  • selecting a sync solution
  • converting my password safe from my former J-Pilot/Palm solution
  • etc.

I did a ton of research and would get close to doing something then another priority would take charge and it would get put on the back burner again. Well, in recent months I’ve finally selected and put in place several needed building blocks:

  • Password Sync: originally I’d selected KeePassX but I then looked further into LastPass, which does much the same thing and has many more features bundled in – the key one being a native cloud sync and backup capability for all our passwords. Works on effectively every platform I would ever consider, including my Debian (Linux Mint Debian Edition) and Linux Mint machines, my wife’s Mac and a possible future smart phone/tablet, etc. Firefox, Safari, Chrome, etc. all supported! Done on my machines, pending on the Mac (which got KeePassX in the interim). [Update 1 Jan 2014: Mac is finally done, had to update OS X to enable Safari to update to a version supported by LastPass. I’m not a Mac expert and it is just different enough from Linux/Unix that I had to figure a bit out.] Use my referral code and we’ll both get a free month of LastPass Premium!
  • File Sync and Cloud Backup: I selected SpiderOak because of great cross-platform support. Think of it as Dropbox but with built-in cloud encryption so no worries about the files being compromised on the server/network. I’d considered rolling my own solution using a power-sipping always-on Linux ARM-based device with rsync and/or a PogoPlug but realized SpiderOak did what I needed in much easier fashion. Done on my desktop, pending on the others.
  • Local File Encryption: Protecting our sensitive files in case of having a machine fall into someone else’s possession. I selected TrueCrypt because of (getting to sound like a broken record?) similar excellent cross-platform support. I’d considered other solutions including the built-in Windows, Linux and Mac filesystem encryption options, but what I wanted was a single solution that would work with all of them plus enable syncing the secured files across all our devices using SpiderOak. The kicker for me was when I figured out that what really required the local protection of encryption was actually quite small compared to our number of overall files – I don’t care if someone finds out what I paid for our gas bill or my various basic correspondence, yet our financial account details, tax records and similar would need protection (these files end up taking well under a Gig of space). Remember, LastPass protects all our passwords separately. Done on my desktop, others pending.

The best part of all this is that every one of these solutions is free for the basic features we need and they all work across all the machines we have and anticipate being interested in at any point in the future. If/when we grow to need additional features or capacity, they are priced quite attractively (SpiderOak and LastPass). TrueCrypt is totally free for all features. Most are open source too.

Once I realized the amount of encrypted storage required was so small, my interest in consolidating and eliminating file duplicates became a nice-to-have vs. a need (I had previously been concerned that syncing a large TrueCrypt volume over the internet would be a significant performance issue). Getting to a secure solution was more important and a brief scare when I left my netbook behind at a public dance a few weeks ago (with several financial files on it) pushed me to make that part happen sooner rather than later.

Getting a NewEgg mailing with a Shell Shocker special on a 500GB hybrid (solid state and conventional platter) drive for under $80 put the final bit in place – now my netbook could have more space than my total desktop disks, so it would all fit as is without further winnowing. And with the SSD portion of the new drive used as OS and program storage, the machine promises to scream along compared to before and last a lot longer on battery power.

In order to make this solution the best it can be, my first major consolidating step will be to start over with a totally fresh install of the latest Linux Mint Debian Edition (LMDE) with the new disk on my Asus EeePC 1000HA and then layer in the individual pieces as described above. I’m starting on that work now and will post more when done.

I’m excited to have this work finally coming to fruition! After my netbook is done, I’ll be moving on to finishing the same things on my desktop machine and my wife’s Mac (after a required Snow Leopard update there – to update Safari – to support LastPass). Wish me luck!

Tomato Router Update Triggers SSL Error

After updating my Asus WL-520gU router to the latest version of the Tomato firmware (with OpenVPN support), I ran into a strange error. While trying to access the admin interface via https:, I got the following error in my Firefox browser:
Cannot communicate securely with peer: no common encryption algorithm(s)
(Error code: ssl_error_no_cypher_overlap)

I couldn’t access via http: either (which was expected, as that’s how I’d set up the router with the prior FW version to enforce security).

Googling for the error didn’t turn up anything really useful. I at first thought that the update had somehow gone bad, but I was able to get out to the internet through the router so that brought some hope. I was also able to ssh in to the router so all seemed to be OK in general. Only problem was I couldn’t access the router’s controls.

On an off chance, I decided to check out the Firefox settings for SSL security. Under the Advanced tab, I tried turning off and on the SSL and TLS checkboxes. Nothing changed. Then I decided to delete/remove the Certificate entries for my router and try again. That turned out to be the trick. For some reason Firefox didn’t like the security certificate any more – this time I got the familiar “This connection is untrusted” (or effectively similar) warning and was able to accept the security exception for my self-signed SSL certificate once more and all was fine.

Just in case someone else runs into the same problem… try the above.

Palm GnuKeyring Conversion to KeePassX

I was a very early user of the original PalmPilot device. Way back when I actually had the PalmPersonal syncing with my ’90s era Sun Microsystems SPARCstation 4 work calendar and email, etc. I eventually moved on to a Treo90 which I think was the optimal personal organizer of its era (I ended up owning three of them over time, ultimately).

Sadly, the Palm solution no longer is feasible, even under Linux. The deal breaker for me was the lack of being able to dependably sync my google-based calendar, etc. with the Palm. So time to move on, which I did for most everything, but…

I had been using J-Pilot’s Keyring plug-in to manage my set of passwords – I hung on to this handy tool until I finally became unable to use J-Pilot to sync via USB with my Treo and was forced to manually sync my password info across my desktop and netbook. Enough became enough!

Research discovered that the excellent Windows application KeePass had been ported/reinvented for Linux, Mac (and even Win) as KeePassX. As a free open source application with excellent encryption, it was an obvious solution to fit my Linux-based environment (and my wife’s Mac). A side benefit was that there was even a KeePass version available for my J2ME-based mobile phone, so the Palm-type “on hand at all times” capability could be available once more. All these versions could work from the same password database file format, so syncing a file across them would enable the info to be always up to date anywhere I would be!

My final concern was how to get all my existing Keyring data into that KeePassX solution. Well, it turns out that someone else named Wouter blazed my trail there through a similar migration, and it only required minor changes to work perfectly for me. Here’s what I did to modify Wouter’s method to suit my needs.

Note: when Wouter refers to extracting the file saxon.jar from the Saxon downloaded zip file, the actual file name is saxon9.jar. Also the Jochen Hoenicke conduit to export the Keyring file to XML is actually named export.jar, not xmlexport.jar as in Wouter’s command line.

So I gathered all the files into the working directory as Wouter recommended. I then executed the (modified) command line
java -jar export.jar Keys-Gtkr.pdb MY-KEYRING-PASSWORD-HERE > keyring.xml
which created the keyring.xml file.

I paused here to go into the XML file and make edits as required to clean up my old Keyring data, as it was much faster to do it here in bulk rather than the one-record-at-a-time editing that would be possible in the KeePassX GUI application. For instance, in Keyring there was no dedicated URL field like in KeePassX, so I had put them all in a notes field before. Now I moved them all over to the dedicated field. In other places I had comments in the user name or password fields, but these totally screw up the Autotype function in KeePassX, so I moved or deleted them. Once this was done I could move on to the next step from Wouter.

I executed
java -jar saxon9.jar -xsl:keyring-to-keypassx.xsl -s:keyring.xml -o:keypassx.xml
to create the final KeePassX XML import file. This then opened in KeePassX successfully, with all my data in the categories I had originally set up, etc. Great stuff – thanks, Wouter!

Next step is to get KeePassX installed on my other machines and set up a Dropbox or similar sync mechanism to keep them all aligned automagically. That will have to wait for tomorrow!

Linphone: a VOIP Softphone for Linux (and others)

In an earlier blog post I mentioned I was using Twinkle as a softphone client for my VOIP service from Galaxyvoice (GV). As I also mentioned in this entry I’ve now switched my netbook over to a Debian Linux. I’d not yet got around to (re-)installing a VOIP client on the renewed EeePC. So when I saw that GV was now recommending something called Linphone, which seemed to be very cross-platform (Win, Lin, Mac, Android, etc.), I decided to check it out.

Turns out Linphone is available in the Debian repository, so it was a trivial task to install via Synaptic. Since GV recommended Linphone, they also provided account settings info – so about 3 minutes later, I was making my first call from the netbook. Worked great!

I’ve not yet tried out the video calling, but the camera preview looks smooth and lag-free so I expect it will be great as well.

I recommend you check out Linphone!

Linux Mint Debian Edition (LMDE) on an Asus EeePC 1000HA

I’m a long time user of UNIX-based computers and have been using Linux exclusively for my primary computing for close to 10 years now. For the past couple of years Linux Mint has become my favorite distribution for desktop and laptop use.

This EeePC netbook had been running Eeebuntu Linux, which was fantastic. Eeebuntu 3.0 was based on Ubuntu 9.04 and Debian Unstable. Built with customization to various packages and a modified kernel it provided support to this netbook that was a perfect fit. That project enabled all the function keys to work and had outstanding power management that kept the machine running on battery for extended use.

Sadly, the Eeebuntu project seems to have broken down as they pursued new goals. IMHO they lost direction and got sidetracked with developing a fancy website for their proposed new release and significantly expanding their project’s scope. In the end this stalled any real end-user progress. In the meantime the old Eeebuntu became outdated and, being based on Ubuntu 9.04, stopped getting any updates. So things like Flash stopped working, etc. I waited as long as I could for their new release, but needed to move on.

Getting tired of the need for repeated reinstalls required by both Windows and Ubuntu-based Linux, I became very interested in Debian Linux. Debian is a rolling release, meaning updated software is available regularly for your existing installation. In practice, this means a software environment that should never require reinstallation but will still keep up with application development! And Linux Mint happened to announce the availability of a version based on Debian (called LMDE)… this gave me the push needed to give Debian a go on this netbook.

So I installed the latest available image of LMDE on my Eeepc in Fall 2011 from a USB stick. Everything went smoothly, no real hiccups at all. There was a minor issue with a package due to an upstream Debian problem which was fixed by marking one package to not update (this was covered in a note on the LMDE page). When installed, I had a good working system with most of the standard function keys working – the machine was totally usable but the dedicated keys for webcam switching, etc. did not operate (unlike how they had under Eeebuntu) and the power management was not tuned for battery preservation.

Luckily one of the former Eeebuntu developers (Andrew Wyatt, a.k.a. fewt) has made available an applet for power management of EeePCs (and other machines) that could be installed. Called Jupiter, it allows switching the CPU to one of three power scaling modes automatically on power events, enabling much longer battery life. It has other functions as well including video mode/external monitor selection and touchpad control.

The combination of LMDE and Jupiter have become a great solution for this netbook and I look forward to using them together for a long, long time to come!

Using DeposZip Under Linux (Mint 11/Ubuntu 11.04)

Our new credit union provides the capability to do on-line check deposits using an application called DeposZip. Of course, their web site only mentions support/instructions for Windows and MacOS, not Linux. Well, the application is actually server-hosted and uses a Java applet (or some ActiveX thing if on Windows) to get things done.

If it goes as planned, the application can work with your TWAIN-enabled scanner to get the check images directly within the application. Sadly, this did not work for me – it produced a pop-up window saying only “SK.gnome.TwainManager.getDefaultSource()LSK/gnome/twain/TwainSource;”. I figure this is referring to a value that is supposed to be defined somewhere (and is not?), but in looking at the file system and googling I came up with nothing. OK, so the application offers two more options under the applet: copying the image from the clipboard (which also did not work – with no error this time) and loading an image file, which does work.

To create the image file, I scanned the front and back of the check separately using XSane and saved each as a .png (or jpeg) image. I then loaded these images as requested by the application. DeposZip took the 200 dpi color scans and further processed them into what looked like high-contrast greyscale or B&W images shown in a preview. The rest worked OK from there; the deposit was accepted for processing.

BTW, DeposZip also offers a “zero client” version as a link in the footer of the applet. This loads another page that does away with the Java applet entirely, instead using a standard web form with an upload link for the image files. It works similarly to the above, but the image preview you would see in the applet doesn’t appear until the next step in the process.

So long as you follow the endorsement instructions exactly (which unfortunately require you to write a whole lot of stuff on the reverse of the check) and the check is below $1500, the deposit will go through fine. Nice way to avoid a drive to the bank or ATM!

Linux Mint 11 (was Debian/Xfce) on a HP Pavilion ze4600

My brother’s Win XP laptop died. He has limited computer needs, really just needs to use some web applications like Facebook, Yahoo mail, Hulu and Youtube. In the past he has had significant virus issues under Windows and I’ve been proposing to him for years to move over to Linux. This happenstance caused him to be finally open to it.

This HP laptop is fairly old, it has an old AMD mobile processor, USB 1.1 and no built-in wireless hardware. This meant that the operating system had to be fairly lightweight to make this solution work well. I personally use Linux Mint (currently Mint 11/Gnome) as my own desktop and was aware of the new Debian Edition of Mint, which is available in a version using the Xfce desktop (again, lightweight resource use) which I thought would be very suitable. Plus, I wanted to get more personal experience with Debian. 😉

So I launched a project to install Mint LMDE Xfce edition on this machine. This proved to be quite difficult. For some reason, the installer would run extremely slowly – but curiously, would speed up if I kept the mouse moving. Since I only discovered that trick the second time around (after letting the installer run overnight the first time), it was little help. I ended up installing LMDE twice, because the first time it would not work properly. The second time worked, and the machine was quite nice and snappy, despite its paltry resources.

So all was good, and I got things set up well and everything he would need to use was working. Delivered the unit, he was happy. Great. Project over…

Not quite. A few days later I hear he is having trouble. It is difficult to troubleshoot remotely because there seems to be some sort of permissions issue that is preventing him from running even the tools I would normally use to connect to the machine from my home. It was almost as if SELinux was somehow in play and blocking stuff, but it had all worked before and I did not create his account with privs to change anything sensitive.

I never did get LMDE back working on the machine. Instead I chose to reinstall from scratch using Linux Mint 11 LXDE. That went smoothly (and much quicker!) and the machine has been running trouble-free since. And I was smart enough to create an image of the install this time as a backup to slap back on the machine should he have any other problems. Everything will be right back to working state in just a few minutes.

In all fairness, Mint LMDE is new and “not for your average user”, so my having trouble is really not that unexpected. I’d hoped to be able to get it running and stable and then lock it down from any changes that would destabilize it, but that proved to be insufficient. I really do want to move to a Debian base to avoid the major reinstalls periodically required with Ubuntu-based systems (Debian systems have “rolling” upgrades which keep fresh without the need to reinstall) but I think that will be best attempted with my own desktop or netbook in the future. Best to keep the others I support on the more frequently traveled path.

Monster Desktop Renewal with Linux Mint 11

I’ve been having a spot of trouble with my Mint 9 desktop machine recently, where something would lock up Gnome/X periodically. Somehow it seemed related to running OpenOffice and Firefox at the same time with something “video” happening. (Now, to be honest, I’d done my Mint 9 installation in a “messy” way — I was too lazy to reconfigure stuff, so I just reused my home directory leaving all the “dot” (configuration) files in place — so I probably caused the problem myself.)

After having it happen to me several times in one day, I decided this was the perfect excuse to upgrade my 5+ year old hardware to more recent stuff. My pals at NewEgg were great in setting me up with some new gear I could swap into my existing box:

  • AMD Phenom II x4 925: 4-core processor
  • Asus M4A78LT-M Motherboard
  • 8 GB G. SKILL Ripjaws DDR3-1600 Memory

I was able to re-use all my other existing stuff, so I was able to jump up to a monster but energy efficient system for less than Netbook dollars.

I ended up choosing Linux Mint 11 (the release candidate version) as my OS to install. The great thing about Mint is their Mint Backup tool. Not only will it allow you to do simple home directory backups, but it provides an easy mechanism to move to a new installation and preserve your installed software packages selection. Even across architectures (which is what I was doing, moving from a 32-bit install to 64-bit).

The installation proceeded in a pretty much painless way, and in a short while I ended up with a system that can do pretty much anything I need with all my old files and applications in place. All my old HW worked without issue. Mint 11 includes Firefox 4 and the internet screams on this thing. Hulu/Flash worked out of the box. Only problem was needing to install the Gnome Alsamixer to mute the sound card capture until the TV tuner was started.

I’ll be continuing to make a few tweaks and bring back some of the old dot files for my prior customizations, but it looks like the sailing is going to be smooth. Especially notable given that this is not a final release. Thanks Mint Team!

Update 6/15/11: Mint 11 is now released, and all the packages updated automatically for me from the RC versions. Everything is still working smoothly but for one issue: my new Cyber Power CP1500 AVR UPS is apparently not playing well with the system.

Periodically I get a notification that the UPS battery is low and the system automatically hibernates. The battery is not low, and the UPS knows it based upon what shows on its built-in display. I did not install Cyber Power’s linux software previously because it appeared that everything was already working out of the box (there was an added tab under the GNOME Power Settings for what actions to take while on UPS power). I’ve now installed their SW to see if it makes any difference. My quick work-around was to unplug the USB connection so the system can’t get a power low signal and therefore doesn’t hibernate, but I’d rather use the automatic shutdown capability properly. So far the shutdowns seem to have stopped so the SW seems to be working, will see if it does so from here forward. Their documentation is clearly by a non-native English speaker so it is a bit tricky to understand, FYI.

1-Wire/owfs on Seagate Dockstar under PlugApps Linux

This post is currently just a set of notes as I blaze the trail to get this working. Ignore for now, unless you like just reading random technical thoughts from someone puzzling their way through something they don’t know a lot about… I’ve started a true step-by-step description at the end below as I make my way through this for a second time. Hopefully this will be completed and cleaned up shortly.

PlugApps is based on Arch Linux and follows its start-up sequence, which is loosely based on BSD’s. The file /etc/rc.conf is where a lot of the main settings are made. Daemons are initiated based on an array entered at the bottom of that file. The daemons exist as bash scripts in the directory /etc/rc.d. Config files and similar stuff appear to normally be set/stored in /etc/___. See here and here for the Arch info.

The owfs package now created for PlugApps includes the owfs core applications/commands PLUS temploggerd, but it only creates the related option and template files for temploggerd. No similar files are created for the core owfs stuff. I’ve verified that owserver and owhttpd can be started from the command line with the applicable options, and those basically work.

Next steps:

  • figure out the option/config files for owserver, etc. under PlugApps based on prior work with NSLU2 (these were in /opt/etc/owfs there)
  • does my application require FUSE and the sensor array to be mounted as a file system? (A: YES, it is handy and by using owserver as a front end it does not cause a burden.) if not, skip owfs itself, otherwise script creation of /tmp/1wire and do related stuff to make the array available (A: will need to script this, it will be part of the rc.d script).
  • figure out temploggerd operation – can it be verified without web server access at start? (A: Yes, but web server is now set up)
  • determine what web server to use – thttpd? something lightweight but secure! (A: Cherokee is available, installed easily and works well with little load on the server. Plus it has a GUI for admin!)
  • generate temploggerd templates (reclaim from NSLU2 installation?), or do I want to use another prettier graphing toolset like (A: stay with temploggerd for now)
  • make the system survive a power outage -> install (what?) to NAND? or simply configure a static IP locked to MAC address on router and reboot from pogo OS if stuck?

Misc. Info:
From default installation of the owfs package on PlugApps:

[root@chicago /]# find . -name *emplog*
[root@chicago /]# find . -name owfs*
[root@chicago /]# find . -name owft*
[root@chicago /]# find . -name owht*
[root@chicago /]# find . -name owse*
[root@chicago /]#

How to Install PlugApps Linux on a Seagate Dockstar and Enable owfs and temploggerd

  1. Obtain ssh access for your Dockstar
  2. Perform basic installation of PlugApps
  3. Update pacman itself:

    [root@Plugbox ~]# pacman -Syu
    :: Synchronizing package databases...
    core 35.5K 172.4K/s 00:00:00 [######################] 100%
    extra 382.1K 457.7K/s 00:00:01 [######################] 100%
    community 371.6K 489.2K/s 00:00:01 [######################] 100%
    aur 5.9K 146.4K/s 00:00:00 [######################] 100%
    :: The following packages should be upgraded first :
    :: Do you want to cancel the current operation
    :: and upgrade these packages now? [Y/n] Y
    resolving dependencies...
    looking for inter-conflicts...
    Targets (1): pacman-3.5.1-1.2
    Total Download Size: 0.79 MB
    Total Installed Size: 2.72 MB
    Proceed with installation? [Y/n] Y
    :: Retrieving packages from core...
    pacman-3.5.1-1.2-arm 804.7K 583.5K/s 00:00:01 [######################] 100%
    checking package integrity...
    (1/1) checking for file conflicts [######################] 100%
    (1/1) upgrading pacman [######################] 100%
    >>> The pacman database format has changed as of pacman 3.5.0.
    >>> You will need to run `pacman-db-upgrade` as root.
    !!!!!! SERIOUSLY! Run pacman-db-upgrade or PACMAN WILL NOT WORK! !!!!!!
    [root@Plugbox ~]# pacman-db-upgrade
    ==> Pre-3.5 database format detected - upgrading...
    ==> Done.

  4. The Dockstar does not have a hardware clock, so the date will always be wrong at start-up unless you take action to fix that. The easiest way is to set up a Network Time Protocol client. Install openntpd (an automatic time-sync client) and start it before the password change below, to avoid lockout due to password aging (30+ years will seem to have passed between the default date in 1970 and now).
  5. Change the ssh login password for security (optional: install an ssh public key in ~/.ssh/authorized_keys for more security [and disable password authentication in /etc/ssh/sshd_config for even more security, plus no more typing in your long, strong password])
  6. Install ddclient (a dynamic DNS updater) if you are going to want to reach the machine over the internet despite a DHCP-assigned address.
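The lockout warning in the clock/NTP step is easy to see in numbers: with no real-time clock and no NTP sync, the box believes it is January 1970, so any password set before the first sync instantly looks decades stale. A quick sanity check of the arithmetic:

```shell
# With no RTC the Dockstar boots at the Unix epoch (Jan 1 1970).
# Password aging counts days since the epoch, so once the clock
# finally syncs, a pre-sync password appears thousands of days old.
days=$(( $(date +%s) / 86400 ))
echo "about $days days separate the epoch default date from today"
```

On a freshly synced clock this prints a five-digit day count, which is exactly what trips the 30-year password-aging lockout.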

Now that we have the Dockstar basically set up and functioning under PlugApps, we can move on to the 1-Wire and owfs related items. (more to come…)
[root@chicago ~]# owserver -F -s 4304 -d /dev/ttyUSB0
[root@chicago ~]# owhttpd -F --readonly -s 4304 -p 3001
[root@chicago ~]# mkdir /tmp/1wire
[root@chicago ~]# owfs -F -s 4304 /tmp/1wire
[root@chicago ~]# ls /tmp/1wire

Post-install: disable telnet under pogoplug os, in case of PlugApps reboot failure (provide details here – did enabling ssh via pogoplug portal already do this?)

pyRenamer Rules!

Just a quick shout out to the creator of pyRenamer — a supremely useful renaming tool written in Python, which installs with a quick sudo apt-get install pyrenamer on Ubuntu Linux (but is also available for OS X & Windows).

I love your tool.

I do a lot of computer work that involves managing files. For example, financial statements or other “store it away” sorts of files you download from a company. For some reason, the default file name of these downloads is almost always either unusable (every file named the same!) or silly (“statement 27 July 11.pdf” – why does nobody responsible for these things use a sortable pattern like “statement_2011-07-27.pdf”? Well, almost nobody – Discover Card actually does – thanks, random IT person!). As a result, I’m always having to rename stuff I’m going to save, so I can see my statements in order when I look in the directory.

With pyRenamer, I have a simple yet powerful GUI tool to get this done on large numbers of files at one time. I can do it based on patterns in the file name, so I can bulk rename all those statement files for last year to be sortable in no time. It will even rename image or music files based on their tags data! It works very quickly and lets you preview what you’re about to do, before you cause any harm to your prior stuff. This is one of my go-to applications for getting stuff done.

Put simply, nice work!

BTW, if you use pyRenamer, you may not know that the “Patterns” tab supports more regexp qualifiers than the tooltips suggest! For instance, you can enter ^{#}.pdf as a pattern to work with just the pdf files that start with a number. Omitting the caret (^) matches any pdf file that contains a number anywhere in its name, which may be what you want, or not…
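For the curious, the same kind of pattern rename can be hacked together in the shell – a rough sketch that only handles the July files mentioned above (file layout assumed; pyRenamer generalizes this far more gracefully):

```shell
# rename "statement 27 July 11.pdf" -> "statement_2011-07-27.pdf"
# (sketch: hard-codes July/month 07 and a 20xx year)
fix_statements() {
  for f in statement\ *\ July\ *.pdf; do
    [ -e "$f" ] || continue                      # glob didn't match anything
    day=$(echo "$f" | awk '{print $2}')          # e.g. "27"
    yy=$(echo "$f" | sed 's/.*July \([0-9][0-9]*\)\.pdf/\1/')   # e.g. "11"
    mv "$f" "$(printf 'statement_20%s-07-%02d.pdf' "$yy" "$day")"
  done
}
fix_statements   # run in the directory full of downloads
```

which is exactly why a GUI tool with a live preview is the safer bet for anything fancier.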

Another rung on the Windows-freedom ladder: gLabels

As you might have gathered from other posts on this blog, I’m an open source fan with a liking for Linux. I’ve used UNIX, VAX, MS Windows of all flavors, OS X, etc. but have come to prefer Linux (currently Mint 9 and Ubuntu 10.10). There are just a few applications which have tethered me to the need to keep a Windows machine (virtual machine in my case) around.

These applications are:
TurboTax/HR Block at Home
Avery Labelmaker/ID Automation Barcode Labeling

Today I discovered an excellent open source replacement for that last entry, gLabels. This Linux application is very actively maintained and was just a free Synaptic download away. In under 15 minutes I’d downloaded the app and used it to create the Christmas card address list labels for next year. It was intuitive and worked great!

Just three more apps to go…

Fun with 1-Wire Devices in Linux using Digitemp and owfs

I’m currently working on a Christmas present for my new in-laws.

The goal is to be able to remotely monitor the temperature and humidity at another location over the internet. I want to do this at a reasonably low cost, using very little power and not taking up a lot of space in the target location.

To meet these goals, I’ve designed a solution using Dallas/Maxim 1-Wire devices. These are low-cost sensors that interface over a very simple single-data-line bus, wired (with, despite the name, 2-3 conductors) over either a standard cat-5 ethernet cable or RJ-11/RJ-12 style cable. I’m going to drive them using a small embedded Linux system for silent and low-power operation.

The sensors and adapters arrived a couple of days ago. I had other projects under way, so playing with them had to wait until today. I’m posting the following because the adapter I’m using is relatively little known, meaning that almost everything I could find for software usage examples did not cover this adapter. After several hours, I have now found the secret sauce and I’m going to pass along the recipe to you (and me, in case I forget!).

Here’s the parts I’m using:

  • Linksys NSLU-2 Network Storage Link, hacked to run Unslung 6.8 (a modification of the original embedded Linux provided by Cisco on this unit, which allows running many other SW packages)
  • iButtonLink LinkUSB-SD 1-Wire Bus Adapter
  • iButtonLink T-Sense-SD Temperature Sensor
  • and, eventually, an AAG TAI-8540 humidity sensor

I’d planned to use a DS-9490R adapter, which is very well documented for SW use, but found them sold out everywhere I looked. Same deal for the TAI-8540 (but we could always add the humidity sensor later). Looking for alternatives so the gift would be ready on time got me to iButtonLink. Their LinkUSB series are supposed to be superior-performing bus masters, and they were available.

My first task was to determine the required software and get it installed. My original desire was to use the oww (1-Wire Weather) package, which was developed to work with a dedicated hobbyist weather-station product that Dallas once had available. It has since expanded to support other standalone sensors and is available in the Unslung ipkg feed, so it can be installed and used without needing to be compiled. It has a bunch of great features, including the ability to upload/push data to a server for viewing, so no firewall and DNS shenanigans would be involved. This all would be great, but I’ve not yet been able to get it to work with the LinkUSB adapter. So the search continued… I landed on two other packages: owfs (1-wire file system) and DigiTemp. Again, both of these are available as native ipkg feeds for Unslung (the selection is impressive).


I tried owfs first, as there were many on-line comments about DigiTemp being limited to temperature measurement only, and I knew we wanted to do humidity too. I installed the software according to the instructions on the owfs site. Here is what I did, running as root:

ipkg update
ipkg install owfs
ipkg install owshell owcapi

This automatically installed the start-up scripts too, so owfs would be there after a reboot. Unfortunately, the parameters it used were not compatible with this adapter, and no sensors showed when viewing the resulting summary web page. I also found the output on that page to be very confusing at first glance and not made plain on their site, so I couldn’t figure out why I wasn’t seeing any sensors and temps. I poked around using lsusb, etc. but couldn’t find the 1-wire file system entries that were supposedly created by this package. After a frustrating period, I punted and moved on to DigiTemp. But I’d be back later… mere SW can’t beat my persistence!


DigiTemp is a package designed to do just what it sounds like: grab temperatures from digital devices – in this case, 1-Wire sensors. Some people have gone all out with it. We had more modest goals, and it turns out that the humidity sensor I wanted to use is actually supported under DigiTemp with a serial (not USB) adapter. There might be hope yet…

So I install DigiTemp following the instructions on the NSLU2-Linux site, but run into the non-DS9490R-adapter problem once more (all the examples were for that one). It turns out there is no single digitemp command, but rather a different one for each adapter type, and mine isn’t listed among them. Out of frustration, I decide to try all of them. Amazingly, the last one works! I can see devices!

# digitemp_DS9097U -w -s /dev/ttyUSB0
DigiTemp v3.5.0 Copyright 1996-2007 by Brian C. Lane
GNU Public License v2.0 -
Turning off all DS2409 Couplers
Devices on the Main LAN
280097A702000098 : DS18B20 Temperature Sensor
28848BD50200005C : DS18B20 Temperature Sensor

So here’s my commands to install, configure and use DigiTemp with the LinkUSB adapter:

# digitemp_DS9097U -i -c /etc/digitemp.conf -s /dev/ttyUSB0
DigiTemp v3.5.0 Copyright 1996-2007 by Brian C. Lane
GNU Public License v2.0 -
Turning off all DS2409 Couplers
Searching the 1-Wire LAN
280097A702000098 : DS18B20 Temperature Sensor
28848BD50200005C : DS18B20 Temperature Sensor
ROM #0 : 280097A702000098
ROM #1 : 28848BD50200005C
Wrote /etc/digitemp.conf
# digitemp_DS9097U -q -c /etc/digitemp.conf -a
Dec 03 18:55:38 Sensor 0 C: 19.50 F: 67.10
Dec 03 18:55:39 Sensor 1 C: 21.19 F: 70.14

Yay, we see sensors and actual temperatures! It works. Now from this, I get some clues that I think might help with owfs. So back to crack that nut.
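Looking ahead to the data-collection goal, a crontab entry along these lines would log a reading every five minutes (the /opt/bin path is where ipkg normally installs on Unslung, and the log location is just an example):

```
*/5 * * * * /opt/bin/digitemp_DS9097U -q -c /etc/digitemp.conf -a >> /var/log/temperatures.log
```

Each run appends timestamped lines like the ones above, ready to be graphed later.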

OWFS Redux

From the above, I know I have a device that looks like a DS9097U to software, and it appears on /dev/ttyUSB0. Armed with this, I poke around owfs and force it to use settings other than those the default start-up scripts supply. This means killing all owftpd, owhttpd and owfs processes (via ps -ef and kill -9). I then restart them using /dev/ttyUSB0:

owfs -F -d /dev/ttyUSB0 -m /tmp/1wire
owhttpd -F -p 3001 -d /dev/ttyUSB0
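For the record, the kill-everything step doesn’t strictly need ps -ef and manual PIDs; pkill can do it in one pass (a sketch – errors are ignored if nothing is running):

```shell
# stop any already-running 1-Wire daemons so the explicit
# /dev/ttyUSB0 restarts don't collide with the stock init scripts
stop_ow_daemons() {
  for d in owftpd owhttpd owfs; do
    pkill -9 -x "$d" 2>/dev/null || true
  done
}
stop_ow_daemons
```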

Note: I’d actually installed a second adapter and sensor in between activities here, so I modified those commands to add ttyUSB1 as well (you will see those in the output below). And finally I see something in the /tmp/1wire directory!

# ls
28.0097A7020000 28.75BAA7020000 28.848BD5020000 alarm bus.0 bus.1 settings simultaneous statistics structure system uncached

You can then get some data from a sensor by treating it as a normal text file under Linux, e.g.:

# cat 28.75BAA7020000/temperature

And when I look at the web interface, I can finally browse these devices as well!
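Since every sensor is just a directory of files, a short loop can dump all the temperatures at once (assuming the same /tmp/1wire mount point used above):

```shell
# print every DS18B20 reading (family code 28.) under the owfs mount
read_all_temps() {
  for d in /tmp/1wire/28.*; do
    [ -e "$d/temperature" ] || continue   # no sensors mounted
    printf '%s: %s\n' "${d##*/}" "$(cat "$d/temperature")"
  done
}
read_all_temps
```

Handy for a quick eyeball check before wiring it into anything fancier.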

Other Magic

There is one other thing I did a couple of times along the way, but I can’t now remember the exact order in which I last did it. This may or may not impact the above for you. I got the clue from this oww page, which has a bunch of info on using the NSLU-2 with oww – covering that more standard adapter, of course, but listing other possibilities too. From that, I installed the kernel modules that I thought applied to the LinkUSB:

ipkg install kernel-module-usbserial
ipkg install kernel-module-ftdi-sio

After doing so, I then had to load those modules (note the underscore instead of dash here – tricky!):

insmod usbserial
insmod ftdi_sio

After this, the NSLU-2 reported via dmesg seeing the LinkUSB adapters (which are connected via a USB hub):

hub.c: new USB device 00:01.0-1, assigned address 3
Device descriptor:8 bytes received.
Device descriptor:18 bytes received.
hub.c: USB hub found
hub.c: 4 ports detected
hub.c: new USB device 00:01.0-1.1, assigned address 4
Device descriptor:8 bytes received.
Device descriptor:18 bytes received.
usbserial.c: FTDI FT232BM Compatible converter detected
usbserial.c: FTDI FT232BM Compatible converter now attached to ttyUSB0 (or usb/tts/0 for devfs)
hub.c: new USB device 00:01.0-1.3, assigned address 5
Device descriptor:8 bytes received.
Device descriptor:18 bytes received.
usbserial.c: FTDI FT232BM Compatible converter detected
usbserial.c: FTDI FT232BM Compatible converter now attached to ttyUSB1 (or usb/tts/1 for devfs)
hub.c: new USB device 00:01.0-1.4, assigned address 6

So this was most likely required for DigiTemp (and/or owfs) to see these devices on ttyUSBn. I will verify this later, when I have more time to set up the rest of this project: collecting the sensor output, turning it into pretty information and making it available for remote access.

Off to dinner, but more to come!

Adobe Acrobat Connect Pro with Firefox on Linux

I’m a PMI-certified Project Management Professional. As part of my PMI membership, I have access to Communities of Practice (CoP). Among these, I participate in the Innovation and New Product Development CoP.

We had a webinar today on Design for Innovation in Manufacturing that planned to use an uncommon package, Adobe Acrobat Connect Pro (vs. GoToMeeting or WebEx, etc.) for the meeting. So I try the typical “test your system’s connection” page with my primary Linux desktop, and everything else passes but the “Acrobat Connect add-in test” fails. The suggested solution is to install the add-in. But of course, none is available for Linux.

So off to my Windows XP virtual machine which I keep for these sorts of situations, and I go through the gymnastics of installing the add-on in Windows and participate in the webinar.

Afterwards, I’m struggling to find a way to download the presentation slides (which, it turns out, is impossible – they have to be viewed through another “Adobe Presents” thing and can’t be saved from there). As part of this process, I tried to see whether the slides link would behave differently under Linux.

It doesn’t — it opened the presentation slides right there in FF on Linux as if I was on Windows. OK, that’s interesting. So I try the original webinar URL in Linux, and darn if that doesn’t work the same as well.

So, bottom line:

  • No Adobe Acrobat Connect Pro add-in appears to be required — so why do they make you download it on Windows/Mac?
  • Adobe Acrobat Connect Pro and Adobe Presents works fine on my relatively recent Linux box with Firefox – one less reason to head over to the virtual Windows world.

Update May 2011: Sadly, the above is no longer the case. It appears that Adobe changed something in their Connect Pro application which now makes it unworkable with Linux out of the box. It now requires a higher level of Flash than before, so neither my existing machine nor my new Ubuntu 11.04/Mint 11 install (which has that Flash revision) will work. The application loads but cannot connect to the meeting room server. They apparently have also added a download for installing their add-in on Linux (reported to be for hosting meetings), which I have not tried, but others on the Adobe forums report having no success with it either. Note, however, that the Adobe Presents application still works (for now, at least).

Update December 2011: I was again able to connect to a PMI webinar via Adobe Connect today on my Linux Mint 11 x64 box, using Firefox 8 and Flash (current Mint default installed versions). The connection process seemed to hang several times in the browser (a prolonged “Waiting for…” in the status bar). In parallel I had connected to the meeting using my Windows XP virtual machine, so I knew it was in process. So I stopped the page load in Mint and then reloaded it, which moved things along. I had to do this a couple of times but was then able to join successfully to get both the audio and meeting materials.

AccessLine TeleDesk/iControl on Linux

While I was a remote worker for Sun Microsystems, AccessLine was the chosen solution for integrating us into the corporate telephone systems. This employer was a major UNIX supplier and user, so they ensured there was a version of AccessLine’s TeleDesk/iControl application (which I’ll call TDiC hereafter – this controlled the service and provided incoming call notifications) created for their UNIX version, Solaris.

As this is a Java application and Linux is a close cousin of UNIX, I figured I could get this version working on my Linux box, which proved true (on several versions of Ubuntu and its derivatives). But there was a little magic required to do so (much less so in the latest version, thankfully). In case you are an AccessLine user looking for a Linux solution, you should be able to make it work for you too by following these steps. The below examples are on a Linux Mint 9 (derivative of Ubuntu 10.04) box, and assumes you have the appropriate Java installed.

Get the software

My former employer was acquired in 2010 and the combined firm has since ceased contracting the AccessLine service. However, as of today, the link to download the UNIX version is still available here. Download this file and save it to your home directory – the following instructions will expect that location, so if you do otherwise, you’ll need to adjust accordingly.

The downloaded file is a script for installing the software.

Run the Installation Script

I set executable permission and tried both double-clicking and right-clicking on the file in Nautilus to execute/run the script, without success (it would offer to open it in Gedit/Text Editor and would then complain about character encoding, but not offer to run it). So the easiest way around this was to execute the following command in a terminal window:

sh ~/

This starts the installation. You have two places to make decisions or answer questions:

Installation for AccessLine TeleDesk 2.3-010-1
Copyright (c) 2002-2009 AccessLine Communications Corporation, Bellevue, WA (USA)
Java found in PATH environment.
Enter the install directory for TeleDesk.
[default (return) /home/username/AccessLine/TeleDesk]:

I was perfectly happy with the default, so I hit enter/return here and installation continued.

Creating /home/username/AccessLine/TeleDesk ...
Installing in /home/username/AccessLine/TeleDesk ...
Uncompressing installation package using 'gzip -d '...
Extracting installation package...
Update PATH in your shell's profile? [yes or no]

The answer to this determines whether you will need to specify the full path to the TDiC application when you want to run it. If yes, then the path will be added to your defaults and you can start TDiC by simply typing TeleDesk at any terminal prompt.

Update PATH in your shell's profile? [yes or no] yes
/home/username/.profile modified
Installation of TeleDesk complete.

Opening the Application

Once installed, start the application:
username@system:~$ TeleDesk
which will open a window:

You can then enter your applicable information and configure the selected sounds for alerts, etc. through clicking the SETUP icon. This will open another window:

From here forward, it works like it is documented on the AccessLine site for other platforms. The one problem is you’ll need to open a terminal and type that command each time you want to run it, which is inconvenient. No problem, we can fix that by creating a Launcher.

Adding a Launcher under the Applications Menu
You’ll want to have a nice image for your launcher, so you could use this one: . Save it as AccessLine.png in the same AccessLine subdirectory under your home as above.

For a GNOME desktop, you can add a launcher to the Applications menu by clicking on System => Preferences => Main Menu.

Here we select the Office section and click on New Item. Make the entries in the new window look like below.

To add the nice icon to the launcher, click on the launcher (platform on a spring) icon to open the Choose an icon dialog, and pick that image file you downloaded:

and then click Open to set it as the image. Click OK on the Create Launcher window. You can now launch TeleDesk from the main menu:

Adding a Launcher to a GNOME Panel

Two ways to do this. If you didn’t add the launcher to the main menu as above, then you can right-click on an open space on a panel and select Add to Panel...

then select Custom Application Launcher and Add, which will continue just like the main menu approach above to create the launcher.

But if you did add TeleDesk to the main menu, then the easiest way to create the panel launcher is to simply drag the TeleDesk icon from the main menu to where you want it on the panel. It will be added there automagically.
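Under the hood, both launchers boil down to a .desktop entry. If you’d rather create one by hand, a file like ~/.local/share/applications/teledesk.desktop with roughly these contents should do it (paths follow the install example above; the exact Exec binary name is an assumption – adjust to whatever the installer created):

```
[Desktop Entry]
Type=Application
Name=TeleDesk
Comment=AccessLine TeleDesk softphone controller
Exec=/home/username/AccessLine/TeleDesk/TeleDesk
Icon=/home/username/AccessLine/AccessLine.png
Categories=Office;
```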

Now you can enjoy having TeleDesk on your Linux box!

Linux Tweaks & Programs Selections

This is mostly a list for my own use, but I hope you may find utility in it as well. I’m currently running Linux Mint 9 on my desktop (with Windows in a VM) and Eeebuntu 3 plus Win XP, dual boot, on my netbook.

This post will be updated over time, and some form of organization may appear. 😉

  • Change default google search in Firefox on Mint: Mint uses a google custom search engine to fund their work on the distro. Unfortunately, they change the search results, such that it is unusable in my opinion. “Mint searches” may return nothing while the same terms on the google homepage bring results. To fix, click the down arrow next to the search box in FF and select “Manage Search Engines”. Search for a normal google one (I used the SSL version) and set that as default. Now send Mint a small donation, to offset the revenue that they’ve lost.
  • pdf-shuffler: allows pdf files manipulation: add/remove and rearrange pages
  • pdftk (and pdftk-gui): another set of pdf tools, best for command line use, like removing the useless pages from bank and credit card statements automatically. pdf-shuffler now does most pdf things easier on a one-off basis.
  • gpicview: very fast image browser – does a great job as a photos viewer when resources are tight or files are large/high res. Unfortunately, no auto slide show capability.
  • Kiosk use: xscreensaver can play videos. This could be used for a NEFFA kiosk, by using a video to entice people to complete the festival survey that automatically stops playing once they hit a key, to let them do so.
  • gscan2pdf: a great tool to create efficient/small pdf files from optically scanned pages. It also outputs to other formats and enables Optical Character Recognition (OCR) as well.
  • GoogSysTray: a nice all-in-one cross-platform notifier for google services like calendar, mail, voice, etc. Makes sure you know about new appointments, mails, etc. without having to have your browser running.

Disk Partition Cloning with Live Linux Tools

This post is currently more of an in-process note to myself to remember stuff I had to just rediscover. Last time I did this I was going from Ubuntu 6.06 x86 to 8.04 x64. Should it help you too, all the better! (And yes, this will work on Windows files just as well, I used it when moving from XP to Linux originally.)

  • Use live Linux CD to boot system (this time, used an old 5.x series Knoppix CD I had already — the latest Knoppix 6.2.1 seems to have changed dramatically and did not leave me feeling familiar enough to recreate what I’d done the last time, x years ago)
  • Make sure the disk partitions to be imaged aren’t mounted, to prevent activity changing their contents. With this version of Knoppix it was easy, with the disk icons showing on the desktop
  • Use df -h to confirm what is mounted where. I had originally tried doing all this with the SystemRescueCD v0.2.19 but I must have been doing something wrong with mounting the partitions, which I was able to do correctly here
  • Use sudo to mount /media/sdc3 partition (the target destination on removable USB disk) as root, then sudo partimage to start the PartitionImage tool

PartImage’s use is pretty straightforward, but you need to be sure to specify the entire path to the image file! E.g.: /media/sdc3/IMAGE-FILE-BASE-NAME. Pretty much all the rest was intuitive, just using the defaults, which produces image files in appx. 2GB chunks (so a further backup to DVDs for offsite storage would have 2 or 4 of them per single- or double-layer disk, respectively).

Don’t walk away until you have confirmed the result of the disk check, or it will just wait for you and do nothing more!

Backing up the entire appx. 4GB Ubuntu 8.04 system file set took just 1-2 minutes to create the resulting compressed 1.4GB IMAGE-FILE-BASE-NAME.gz.000 file. Doing the same for the appx. 40GB of /home directories of the same installation is predicted by PartImage to take about 1.5 hours and is well underway. Given it is currently at 37% completion and the image files are now about 14GB, there is likely to be much less compression benefit for these files. This makes sense, as much of the content is already compressed music, photo and video files.

This is a great free (in all meanings) tool set. Once done, I will have great confidence that installing a fresh Linux Mint 9 or Ubuntu 10.04 on my main desktop will be without serious risk to recovery of my precious prior work.

Train Firefox mailto: to use Google Apps

Update 5/10/13: With Firefox 20 the method below of using a javascript entry to add the external mail resource apparently did not work on Linux Mint LMDE. Instead, I found it necessary to manually edit the mimetypes.rdf file in the user’s profile folder to add the resource. See my later post Train Firefox mailto: to use Google Apps – Take 2 for details on this method.

This post’s text is lifted very closely from a similar one (which appears to have been similarly lifted from others going back to a Lifehacker article). Why repost? To make note of what actually worked for me (to remind myself later, when upgrading requires it again) and to hopefully inform you. I used the correct wordpress tagging to make sure the commands and javascript come through correctly here (without WP mangling the quotes), so all copy & pastes will actually work.

In your browser address (URL) bar, type: about:config

Answer “yes” to “you will be careful”

In the filter bar on the screen type (or copy and paste): gecko.handlerService.allowRegisterFromDifferentHost

This will bring up the specific entry. Double click on the line to change the value from “false” to “true”.

Do the same as the above, in the filter bar, for: network.protocol-handler.external.mailto, toggling it from “true” to “false”.

Go back to the browser address (URL) bar and copy and paste the string below. Be sure to change “” to your actual domain name and the label at the end similarly:

javascript:window.navigator.registerProtocolHandler("mailto","","V.C at Google Apps")

Hit return and a message line above the page should appear (like for pop-up blocking messages) in the browser asking if you want to add this application to your mail. Click on “yes”.

Go to Firefox Edit > Preferences > Applications (using Linux FF – for Windows it is under Tools > Options > Applications), and select your new Google Apps entry as your default mailto: handler.

Return to about:config in the address (URL) bar and reset the values for the two variables to their original default values by repeating what you did before, reversing the toggling of values.

Close and restart Firefox, log in to your Google apps account and then try clicking on a mailto: link on any other web page. You should get a compose mail window loading, using your Google apps account. Yay!

Previously, I’d figured out how to have it open a new window to compose your message – but I can’t figure it out right now. Will update this post, if I remember how I did that.

Wireless Woes Resolved

In an earlier post, I’d detailed how Tomato on the Asus WL-520gU brought the internet back to our home in a delightful way. However, there had been an ongoing wireless problem with my EEE PC running Eeebuntu linux 3.0.

I’d been puttering with those issues on and off for a while, having got to a state where things mostly worked, most of the time. My biggest complaint was that I’d have to try multiple times to connect to some access points, but others would work straight away. Unfortunately, the router at the in-laws was one of those, and the new Asus at home turned out to be even worse. All the other clients liked these routers just fine, so I knew the issue was with this PC.

Sooo, long story short, I figured out that there were multiple drivers left over from all my attempts, competing to handle the same wireless connection. I ended up disabling all of them except the ath5k driver (which connects great, but has a speed-fluctuation issue) in /etc/modprobe.d, and then forced the connection speed with an iwconfig setting. Now the PC has a great connection all the way to the full extent of our property, and streaming Hulu is glitch-free.
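In case it helps anyone in a similar spot, the fix boils down to a blacklist file plus a rate pin. The module names below are illustrative (check lsmod for the actual competitors on your box):

```
# /etc/modprobe.d/blacklist-wireless.conf -- keep only ath5k
# (example module names; blacklist whatever competing drivers lsmod shows)
blacklist ndiswrapper
blacklist ath_pci
```

followed by something like sudo iwconfig wlan0 rate 11M to pin the connection speed.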

Another minor irritation apparently solved. 🙂

Wireless Tomato

The other day the internet died. One moment all was good and the next, nothing. Gasp!

Poked around and found all the lights on the router were out. Of course, this happens right as we’re flying out the door. So out comes the volt-ohm meter, and it turns out the power brick failed. No problem, I’ve got a universal brick with changeable tips! I’ll just swap that in and we can get on our way… well, not exactly. Router still dead. Seems the connector is mating fine… funny. OK, now we really need to get out the door. Drop in a basic router from way back (no wireless) so the VOIP phone is back up, and off we go.

Later, I use the V-O-M again to find my error. I had set one polarity on that universal brick, but that was not the one the router wanted. So I swap and plug it in — lights — yay! But it won’t stay up for more than a few seconds at a time now. Either the old brick killed it before, or I gave it the final shove with that reversed power.

OK, so now we’re short wireless. We’ve got a few mobile devices here and some of them have no ethernet jacks, so it’s off to NewEgg, where I see there’s a very popular inexpensive router (Asus WL-520gU) on sale and with a rebate. It seems to be a snap to convert it to a very full-featured router/print server/NAS device by use of an open source firmware package. Within just a couple of days I have the new toy. Googling ensues to find the best way to get the firmware updated.

The open source firmware package “Tomato” is already popular, but a person called “teddy_bear” created a custom version for this router to enable USB support for the print server and NAS capabilities. Before I install it, I try out the router with Asus’ own package. It seems pretty nice, but somewhat confusing in a Chinese-English language hybrid sort of way, and I can’t get the router to hold an internet connection to the WAN. If I reboot, it works for a few minutes and then goes away. Thinking Comcast may want a specific MAC address, I clone it from my original PC. Still no joy. Time to toss in a little Tomato.

There are some complicated processes on line for updating this router to Tomato by using the windows CD that comes with the router, and then loading another open source firmware, DD-WRT, and then using that to update the firmware to teddy_bear’s version of Tomato. Luckily, I found another post indicating success in downgrading the router’s own firmware from v to and then renaming the Tomato image file to v and loading that, all using the Asus web page interface.

Well, my unit came with v Wondering if the downgrade was solely to get the router to accept a “higher” revision number, I try renaming the Tomato image to fake a v and load that. No dice. Firmware update fails. Meanwhile I’m having other issues with the linux laptop I’m using, so I think that is the cause. After futzing with it and then booting into WinXP to try again with the same result, I finally decide to just try the massive downgrade. I load version on the machine and it works (albeit providing a very primitive interface)! I then load Tomato, renamed to fake a v, and it works straight away!

Tomato is definitely the secret sauce for this machine. Way, way easier to navigate through and the performance is now rock solid, keeping an internet connection (with the native MAC address, even) with no problem. All the devices but the linux laptop seem to love the wireless*, and the wired connections all work great.

I haven’t tried the print server function yet (I have a NSLU-2 unSLUng box that does that still) but the NAS function works fine and it even supplies an ftp service. And I was finally able to set up Mrs. V’s Mac to print wirelessly via the router to NSLU-2 as well.

One more set of tasks checked off. 😉

*This EEE PC netbook has had wireless issues all along, both under WinXP and linux. It seems to just not like certain routers. I can change drivers under linux, using either ath5k or ndiswrapper, and the solution will work for some and not for others. The opposite set up will work with those others. Go figure.

And Twinkle Takes the Call…

Finally found a reasonably up to date SIP client/VOIP softphone that will work directly under Ubuntu 9.04 on the EeePC:

Twinkle!

Tried many, many alternatives without success (most commercial clients that are otherwise available for Windows or Mac have not been updated for Linux for years, and the GPL/open ones are not so much better).  This one pretty much worked straight out of the “apt-get”/Synaptic box for me.  A little playing with settings to figure out the right places to put in my Galaxyvoice account info and I was off.  Phew, another task checked off.

BTW, I really like Galaxyvoice.  They’ve provided my home phone service for several years now.  Much better call quality than Vonage in my experience.  Billing is weird — the on-line account records never match what my CC gets charged… but since it is never more than $10 in a given month at my usage level, I don’t care.  Sometimes it is as low as $2!  Really, $2 to make and receive phone calls for a month – and they have a profitable ongoing business. And real people in Massachusetts take my calls for service.  You have an alternative…