You really can do bash completion with spaces!

April 28th, 2016 No comments

A short while ago, when I was working on my little ec2ssh project, I ran into the issue of tab completion not working for entries with spaces in them.

This was my first foray into the world of bash completion, so I incorrectly thought "this can't be that hard - everything has completion now."

After Googling for hours, trying various things, and finding statements claiming it wasn't possible - I stumbled upon the solution. It works properly and was surprisingly easy.

The compgen command splits its word list on whitespace by default, including spaces. What I had to do was make sure the source data was separated by something else, such as a linebreak (\n), and then tell compgen (via IFS) that was the separator.

So this line before the compgen command was key:

local IFS=$'\n'

Next, the complete command still won't work without the -o filenames option. This seems to be critical - telling it to behave as if we're dealing with filenames makes readline escape the spaces properly. Note that I couldn't even find the original example that led me to this solution just now. It's that undocumented.

complete -o filenames

If you'd like to see this put all together in a very easy to read, semi-documented autocomplete script, check it out here:
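To show the two pieces working together, here's a minimal, self-contained sketch - the `connect` command and its host list are hypothetical stand-ins:

```shell
#!/usr/bin/env bash
# Hypothetical completion for a "connect" command whose candidates contain spaces.
_connect_complete() {
  # Newline-separated candidates (in real life these might come from a file or API).
  local candidates=$'web server 1\nweb server 2\ndb primary'
  # The key line: make compgen treat newlines, not spaces, as the separator.
  local IFS=$'\n'
  COMPREPLY=( $(compgen -W "$candidates" -- "${COMP_WORDS[COMP_CWORD]}") )
}
# -o filenames is the other critical piece: it makes readline escape the spaces.
complete -o filenames -F _connect_complete connect
```

Typing `connect web<TAB>` would then offer "web server 1" and "web server 2", properly escaped.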

Categories: PHP

CrashPlan in Docker on Synology

December 4th, 2015 3 comments

TL;DR - fun stuff is at the bottom.

It's time to spend a few minutes to share something that hopefully will solve a headache of mine - and maybe one of yours.

I own Synology DS2411+ units. They're very space efficient, and quiet - the form factor is great, and something I haven't found anywhere else (WHY?!?!) - there is a premium for their specifically designed chassis and software - and in the past I've found it to be worth it. Nowadays with ZFS or Btrfs around, cheaper hardware and such... I'm not as happy, but I'm also not in the market for a new setup. 🙂

Anyway - one of the main things I want to do is to try to back up everything on my physical units to the cloud. The first barrier to entry is that there is only one truly "set-it-and-forget-it" *unlimited* provider that has a Linux client - and that is CrashPlan. The pricing is great, but the client is bloated, buggy and weird. I've begged Backblaze to create a Linux client for years and there was some chatter about it, but still nothing public. At this point with B2 being launched, I'd be surprised if they do it at all, or just have a B2-based app instead (in which case it will be utility based billing. I want unlimited!)

Back to our only option - CrashPlan.

Due to the terms of distribution, it looks like CrashPlan cannot be officially packaged by Synology - which is where the PC Load Letter package came in handy. However, it had some issues over time, and the Java distribution would require periodic reinstalls. Ultimately, it wasn't the most reliable solution.

So I decided I'd use those instructions to guide a manual install - one that isn't subject to issues with the package manager or Java changes. After following the instructions on the Synology wiki and getting what I thought was a clean, successful installation that worked for a couple of weeks, it too crashed. Somehow, during one of CrashPlan's own updates (I believe), it wound up wiping all the .jar files away. Trying to reinstall the same version of CrashPlan actually failed after that, for some unknown reason (how?!?!)

Recently I had some issues trying to set up a Redmine install for a client. Even following the exact instructions, it wouldn't work - no idea why. So I decided to look into a Docker-based setup, and wouldn't you know, I found a package that worked perfectly out of the gate. Docker to the rescue.

I realized not too long ago that Synology added Docker to its package center. While I had dismissed it as more bloatware and attention paid to the wrong aspects (just make a rock-solid storage platform - I don't need an app store concept on my NAS!), I decided to take a peek at the possibility of running CrashPlan in a Docker container. That way, it would be self-contained in a richer Linux environment, with its own management of the Java runtime stuff.

As of right now, I will say "Docker to the rescue" again. After working out the right command line arguments and finding a Docker image that works well, it's been running stably and inherited my old backup perfectly. I use -v to expose /volume1 of my Synology to the container, and it picks up exactly where it left off.

That's quite a lot of explanation for something that boils down to two commands. Here is the working image and my command line arguments to it, exposing the ports and volumes properly. Enjoy.

docker pull jrcs/crashplan
docker run -d -p 4242:4242 -p 4243:4243 -v /volume1:/volume1 jrcs/crashplan:latest

Add more -v's if needed, and change the ports if you wish. Remember to grab the /var/lib/crashplan/.ui_info file to get the right key so you can connect to it from a CrashPlan desktop application (one of my other complaints with CP)
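On my install, that .ui_info file is a single comma-separated line of port, auth token, and listen address - so pulling the token out is a one-liner. The file contents below are a made-up example:

```shell
#!/bin/sh
# Hypothetical .ui_info contents: "port,token,listen-address" (as observed on my install).
ui_info="4243,c8f0ef35-1234-5678-9abc-def012345678,0.0.0.0"
# The second field is the token the desktop client asks for.
token=$(printf '%s' "$ui_info" | cut -d, -f2)
echo "$token"
```

In practice you'd read the real file first, with something like `docker exec <container> cat /var/lib/crashplan/.ui_info`.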

Categories: Software

Speeding up Drupal cache flushing

April 21st, 2015 No comments

Are your cache clears slow?

Do you use Features? It's great, but it can become quite the beast when you run a "drush cc all" or a "drush fra" - and today we figured out why the "drush cc all" is an issue. When hook_flush_caches() runs, unless you explicitly set this variable to FALSE, it will run the equivalent of "fra" for you inside of your cache clear!

Add this to your settings.php:

$conf['features_rebuild_on_flush'] = FALSE;

Since we run our fra separately, we've disabled this, and noticed quite a reduction in time to "flush caches" (which was not only flushing caches but also reverting features, apparently!)

It's that unknown-to-us-before-today variable in the snippet below...

/**
 * Implements hook_flush_caches().
 */
function features_flush_caches() {
  if (variable_get('features_rebuild_on_flush', TRUE)) {
    // Don't flush the modules cache during installation, for performance reasons.
    if (variable_get('install_task') == 'done') {
      features_get_modules(NULL, TRUE);
    }
  }
  return array();
}

Categories: Drupal

Custom XHProf run IDs

December 21st, 2014 No comments

XHProf is an awesome tool, but ships with a very annoying restriction out of the box. It names the run IDs in a cryptic randomized fashion (which makes sense - should be no collisions, and there's no "good" default way to name them otherwise.)

The good news however is that it does support custom run IDs to be named in the save_run() method. Awesome! Except for an odd "bug" or "feature" - a custom run ID won't display in the runs UI if it has non-hex characters in it (see the end of this post for information about that.)

This makes profiling specific requests a lot easier - you can put in a unique query string parameter and see it come up in the run ID, if the URL isn't already obvious. You could also simply define a standard query string parameter for your users/developers to use instead of the entire [scrubbed] URL as a run ID. Since I run a multi-tenant development server with a bunch of developers each having the ability to have a bunch of unique hostnames, it makes the most sense to use every piece of the URL for the run ID.

To start using custom XHProf run IDs, enable XHProf the standard way, at the earliest point in your application (at the top of the front-controller index.php, for example):

if (isset($_GET['xhprof']) && !empty($_SERVER['XHPROF_ROOT'])) {
  include_once $_SERVER['XHPROF_ROOT'] . '/xhprof_lib/utils/xhprof_lib.php';
  include_once $_SERVER['XHPROF_ROOT'] . '/xhprof_lib/utils/xhprof_runs.php';
  // Begin profiling.
  xhprof_enable();
}

The key to the custom naming is when the run output is saved. In this example, I made a simple function that takes the URL, changes all special characters into dashes, removes any repetitive "dashing", and passes the result as the third parameter to the save_run() method - which is the run ID.

if (isset($_GET['xhprof']) && !empty($_SERVER['XHPROF_ROOT'])) {
  function _make_xhprof_run_id() {
    if (isset($_SERVER['HTTPS'])) {
      $run_id = 'https-';
    }
    else {
      $run_id = 'http-';
    }
    $run_id .= urldecode($_SERVER['HTTP_HOST'] . '/' . $_SERVER['REQUEST_URI']) . '-' . microtime(TRUE);
    // Turn every non-alphanumeric character into a dash.
    $run_id = trim(preg_replace('|([^A-Za-z0-9])|', '-', $run_id), '-');
    // Collapse any runs of dashes down to one.
    while (strstr($run_id, '--')) {
      $run_id = str_replace('--', '-', $run_id);
    }
    return $run_id;
  }
  $xhprof_data = xhprof_disable();
  $xhprof_runs = new XHProfRuns_Default();
  $run_id = $xhprof_runs->save_run($xhprof_data, 'xhprof_testing', _make_xhprof_run_id());
}
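As an aside, the sanitizing that _make_xhprof_run_id() does can be illustrated with a rough shell equivalent (this is just an illustration of the character handling, minus the microtime suffix - it isn't part of the XHProf setup):

```shell
#!/bin/sh
# Rough shell analogue of the PHP sanitizer: non-alphanumerics become dashes,
# runs of dashes collapse to one, and leading/trailing dashes are trimmed.
sanitize() {
  printf '%s' "$1" | sed -e 's/[^A-Za-z0-9]/-/g' -e 's/--*/-/g' -e 's/^-//' -e 's/-$//'
}
sanitize 'https://dev.example.com/some page?x=1'
# -> https-dev-example-com-some-page-x-1
```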

If you wanted to use a simple query string parameter instead, I would still use the same type of safeguards so that the filename comes out sane (no high-ASCII or other characters the filesystem wouldn't handle well) - for example, using a "run_id" parameter (I haven't tested this code, but it *should* work :))

if (isset($_GET['xhprof']) && !empty($_SERVER['XHPROF_ROOT'])) {
  function _make_xhprof_run_id() {
    // Return NULL and XHProf will name the run as usual.
    if (!isset($_GET['run_id'])) {
      return NULL;
    }
    $run_id = trim(preg_replace('|([^A-Za-z0-9])|', '-', urldecode($_GET['run_id']) . '-' . microtime(TRUE)), '-');
    while (strstr($run_id, '--')) {
      $run_id = str_replace('--', '-', $run_id);
    }
    return $run_id;
  }
  $xhprof_data = xhprof_disable();
  $xhprof_runs = new XHProfRuns_Default();
  $run_id = $xhprof_runs->save_run($xhprof_data, 'xhprof_testing', _make_xhprof_run_id());
}

NOTE: Both of these methods require one thing to be fixed on the display side. Currently, the XHProf display UI that comes with the stock package *will* list all the runs, but won't load ones whose run ID contains non-hex characters. I don't know why this naming limitation is enforced. I've filed a bug[1] to ask why, and to propose removing the restriction (ideally) - for now I've commented out those three lines and everything seems to work fine so far.


Categories: PHP

Facebook Messenger and the illusion of privacy

August 27th, 2014 No comments

There are a lot of people who continue to spread the FUD around Messenger, and it is making me crazy.

Anyone who thinks they have privacy on a third party service needs a reality check.

As Scott McNealy said in 1999, "You have zero privacy anyway. Get over it." Anyone who still believes using a service someone else owns (especially for free) entitles them to privacy is sadly mistaken. Most services will try to do the right thing, but there will always be a chance of an accidental data breach or hack, even from the most trustworthy services. You can only hope that the company wants to do good by the consumer.

Simply put, if you don't control it, you can't expect it to be controlled for you.

There are two primary complaints I've heard during this whole Messenger debacle.

The first is that Messenger terms now allow them to spy on you and you can incriminate yourself by using it. First, companies are required to comply with law enforcement if they want to do business in this country. Second, if you are doing anything illegal, you shouldn't be discussing it on something someone else owns anyway. This has always been the case with your telephone, text messaging, etc... that's just common sense.

The other complaint is about unrestricted access to your camera and photos and contacts. Once again, that's nothing new - you authorize your apps all the time for access. Let's take some popular apps for example: Snapchat? You probably authorized it for camera, photo album and contact access. Apps need photo album access to be able to get past photos. They need camera access to take pictures/videos inside of the app. Instagram? You most likely authorized those for camera and/or the photo album too. You probably authorized LinkedIn to your contacts. You most likely authorized the original Facebook app to your camera and photo album (otherwise you can't post any images from inside the application!) - you probably even authorized it to access your contacts.

I feel bad for Facebook having to deal with such unoriginal claims. Yes, it sucks to have to install yet another app, but they have reasons for it and it works well in conjunction with the original Facebook app. It's free, and it isn't a space hog, so there's no real "cost" associated. Battery drainage is the only complaint I consider to be reasonable.

Just remember - nothing is truly private. Even your encrypted end-to-end messaging - someone can take a screenshot or save it and share it. It comes back to what Jon Voight said in Enemy of the State, "The only privacy that's left is the inside of your head."

Categories: Consumerism

How to compile eggdrop on Ubuntu 14.04.1 LTS "trusty"

August 8th, 2014 1 comment

It's been a while. Updates? I'm going to try to post more, and I also put the entire site on SSL. I've switched hosts to Linode as well. Isn't that amazing?

Back to compiling eggdrop - specifically, when configure gives you that annoying paragraph:

configure: error:

Tcl cannot be found on this system.

Eggdrop requires Tcl and the Tcl development files to compile. If you already have Tcl installed on this system, make sure you also have the development files (common package names include 'tcl-dev' and 'tcl-devel'). If I just wasn't looking in the right place for it, re-run ./configure using the --with-tcllib='/path/to/' and --with-tclinc='/path/to/tcl.h' options.

See doc/COMPILE-GUIDE's 'Tcl Detection and Installation' section for more information.

Install these two packages (and let the dependencies install as well), and then this configure string works. For some reason we no longer get a plain old libtcl.so in /usr/lib, and eggdrop requires BOTH of these options to be provided - neither one works alone...

sudo apt-get install -y libtcl8.5 tcl8.5-dev
./configure --with-tcllib=/usr/lib/x86_64-linux-gnu/ --with-tclinc=/usr/include/tcl8.5/tcl.h

Oh yeah, this does assume you're running x86_64. Because there's really no reason you wouldn't be anymore.
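If you're not on x86_64 (or you're on a different Ubuntu release), the paths will differ - something like this should track down where your distro put the library and header (the 8.5 version numbers are assumptions based on trusty):

```shell
#!/bin/sh
# Locate the Tcl shared library and header to feed to ./configure.
find /usr/lib -name 'libtcl8.5*' 2>/dev/null
find /usr/include -name 'tcl.h' 2>/dev/null
```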

Categories: Tech Tips

Docker HTTP proxy settings in Upstart

October 10th, 2013 No comments

This was driving me crazy. There are some bug reports about it, but nobody has a plain and simple example, so here's mine - first the stock job, then the same job with the proxy variables added. Enjoy.


description "Run docker"

start on filesystem or runlevel [2345]
stop on runlevel [!2345]


script
  /usr/bin/docker -d
end script


description "Run docker"

start on filesystem or runlevel [2345]
stop on runlevel [!2345]


env HTTP_PROXY="http://your.address:port"
env HTTPS_PROXY="http://your.address:port"

script
  /usr/bin/docker -d
end script
Categories: Software

My first Scout plugin!

September 8th, 2013 No comments

I'm digging Scout so far, and it has almost all the plugins I would want already in its plugin directory. However, I did want to add a Solr "healthcheck", since we've noticed some oddities with our search index.

Here is a quick-and-dirty way to get the number of results for an empty search (i.e. the entire index) on a single Solr core on localhost. Maybe this will help somebody else out there. I suppose it could be parameterized with hostnames, search strings, etc., and from the looks of it, that wouldn't be hard either.


Filename: solr.rb

class SolrResultCount < Scout::Plugin

  needs "rubygems"
  needs "json"
  needs "net/http"

  def build_report
    url = "http://localhost:8983/solr/select?q=&rows=3&fl=bundle&wt=json"
    r = Net::HTTP.get_response(URI.parse(url))
    parsed = JSON.parse(r.body)
    # Report the total number of documents the empty query matched.
    report(:result_count => parsed["response"]["numFound"])
  end
end

Categories: Development

Setting up chrooted SFTP-only access on a Synology DiskStation

August 28th, 2013 No comments

This has been on my list to figure out for a long time. I wanted SFTP-only access for specific accounts, with the ability to chroot them. It took me a while and various attempts, only to wind up landing on the most basic solution, of course.

I originally tried scponly and scponlyc (which I've used in the past) and rssh, however none of them worked properly for me.

Sure enough, the openssh package from optware worked right out of the box.*

ipkg install openssh openssh-sftp-server

Then edit /opt/etc/openssh/sshd_config, and put in:

Match User username
        ChrootDirectory /some/directory
        ForceCommand internal-sftp

Also edit the user account in /etc/passwd, change the home dir to the /some/directory, and give it "/bin/sh" for a shell.

Voilà... when sshd is restarted next time, it will just work.

The guys at optware made a neat startup script that will start their sshd on boot. So nothing to do there.

Make sure to disable Synology's built-in SSH (Control Panel > Terminal) or you'll probably be hitting the wrong one!

If you are concerned about privileges: the way Synology runs its units isn't very UNIX-permission friendly (most files are world-writable on the filesystem, with the expectation that the daemons will properly control access). I wound up creating a little cron job that chmods and chowns files to keep the secondary account I created as a "read only" account for that directory.
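That cron job is nothing fancy - here's a sketch of the idea (paths and policy are specific to my setup, and the chown-back-to-owner half needs root, so it's omitted):

```shell
#!/bin/sh
# Strip world-write bits so the secondary account stays effectively read-only.
fix_perms() {
  chmod -R o-w "$1"
}

# Quick demo against a throwaway directory.
demo=$(mktemp -d)
touch "$demo/file"
chmod 666 "$demo/file"
fix_perms "$demo"
ls -l "$demo/file"
```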

* As always with my tips, YMMV - this worked fine on my Atom-based DS2411+ unit, but when I tried the same setup on a DS213, it didn't seem to work. No idea why - there aren't many diagnostics or logs to go on. Sorry.

UPDATE: After running this on the "working" NAS unit for a bit, it stopped working. The culprit: the ChrootDirectory had become owned by the user, not by root:root. Changing it back (chown root:root /some/directory) fixes it, so it looks like OpenSSH requires that ownership for the chroot to work. That could have been the issue mentioned in the previous paragraph (I couldn't test it anymore)

Categories: Software

Tackling the "to-do" list problem

August 8th, 2013 No comments

I have a to-do list (surprise!) - actually I have a couple. Possibly even a few... I've even got an item on one of them to consolidate the lists together.

It is so easy to keep adding new items, and it isn't as easy to mark them off. Tasks change status or become mini "projects" with multiple steps (get car oil changed = schedule oil change appointment, which leads to an appointment being booked, which will be the item until it can be marked complete.) At some point I really want to visit the whole "to-do item status" concept as well.

Anyway, this is how the inner workings of my engineer brain function. Some tasks are insanely simple, but they're mixed in with more complex ones that require prerequisite tasks, specific times of day (business hours, for example), specific people, or specific locations that may or may not be easy to get to.

I've struggled with trying to tame the never-ending lists. Last week I had a night open and plans to be really productive and knock some things off my list - which wound up not happening, but other "productive" tasks did get taken care of. Those were not planned, but still helpful. Someone said "what a productive day" and I felt like it wasn't the "productive" I actually wanted.

Working in the world of software/web development, this kind of stuff has parallels in the engineering world. I guess you can call that "technical debt" - new stuff is coming up and old stuff isn't being taken care of.

That won't work for me. I need to be making progress, I've got a lot of tasks that need to be finished. That's why they made the list to begin with.

While I've always had the desire to finish these things, I haven't had the proper personal accountability for actually checking things off the list(s) - I love marking things done, but I don't make it a regular habit of checking them enough.

Enter Beeminder - a personal accountability system for reaching goals. A co-worker introduced me to the site. I started thinking about the usual suspects - like losing weight and thought about some of the things he had put in - like making sure he spends X hours a day being productive on personal projects. What I liked most is the work spent on the system to make it statistically sound, measurable (a goal needs to be measurable), capable of automating, and with appropriate notifications to remind me to log whatever data isn't being supplied automatically via other devices or services (your scale can talk to it, for example.)

I started thinking about other ideas to put down as goals. One of them was being "productive" by marking one thing off my to-do list per day. However, that can vary. Some items are easier than others, and what if I have a day where other "productive" tasks come up that take away from that list (as a lot of days do...) - so I began thinking more about it.

I came up with the idea that each task should have some amount of effort associated with it. The effort derives from the amount of work the task requires and/or the amount of coordination needed due to location, time constraints, people constraints, etc. For example, getting my passport - that has been on my list for over a year. I paid for it and filed the paperwork; I just need the photo and the actual submission. I tried a few times last year but never had all my ducks in a row, and then it fell by the wayside (I had no real need for it and got busy with other things.)

Being obsessed with completing things like I am, it's still on my list, and I want to get it done. I won't be surprised if my payment is now forfeited due to government accounting needing to close the books each year, or something like that. Nevertheless, it's something I should have anyway, so it's staying on the list. That would get a value of 1.0 - the highest a task can have, in my world. It requires finding a place that is open, a place that does photos, possibly haggling with them to honor last year's payment, etc. - a lot of possible effort, along with time constraints (business hours, sort of) and location (specific locations handle specific things.)

Once I thought about it in that way, I could put in a Beeminder task of "finish at least 1.0 units of effort per day" or something. I started thinking of other tasks on my list, and it looked like that would be an easy way to try to knock things out. It almost becomes a game.

Then I started expanding the idea further. Those random unplanned tasks that come up, such as helping my parents move? That took a lot of physical work and time, and I can't take care of other items while I'm busy helping them. I don't want to be penalized for it. So we can introduce the concept of "bonus units" - I could say "I didn't do anything on the list, but that was definitely worth 1.0" - and still feel like the day is fulfilled.

I believe people should do something every day to advance their life (or someone else's), and this way of tracking makes it easier to be accountable.

Extending it even more, random daily tasks that may or may not be done, based on laziness, distractions, whatever - those can have some value too. Maybe you don't go to the store that often - so going to the store is 0.2, or getting a haircut is 0.2. What about laundry? Those things do count. They are productive. Most of us probably wouldn't think of giving ourselves some credit for them when introducing something as detailed as this.

Those people who prefer the stick vs. the carrot could even put in negative units. Maybe you want to stop watching so much TV, so you deduct 0.1 units per hour. Combine that with earning 0.2 units per hour of gym time, and you are doing the equivalent of calorie counting but with productivity.

The gamification makes it kinda fun, and it might make it easier to adopt as part of a daily routine. Tying it to a system such as Beeminder could be useful too, if you like their method of being punished for non-compliance. Some of us need that, and I may wind up using their site for tracking each day's units.

I tried to come up with a name for a unit; the best I could manage was "Personal Productivity Unit" (or PPU, since it needs an acronym). This idea has been brewing for a while, and I think I've now come up with enough structure to give it a shot in real life and see how it fares.

Finally, I am sure I am not the first one to think of something like this, there are probably books written on this, but I haven't seen anything myself and it makes my engineering brain happy to be able to weigh tasks and set a goal based on that. If anyone else has any more ideas on the subject, I'm all ears!

Categories: Lifestyle