Automatically installing New Relic on Elastic Beanstalk

July 10th, 2016

Nearly 100% automated - the only piece I haven't solved cleanly is grabbing an environment variable at install time, which currently seems to require some nasty hacks (sketched after the config below).

Other than that one hard-coded value, this seems to work well: it sets up the New Relic configuration as desired on instance creation, assuming the stock PHP 5.x Amazon Linux AMI.

Just drop this in .ebextensions/00_newrelic.config or something of the sort, and rebuild your environment.

files:
  "/etc/php.d/newrelic.ini":
    mode: "00644"
    owner: root
    group: root
    encoding: plain
    content: |
      extension=newrelic.so

      [newrelic]
      newrelic.appname = "`{ "Ref" : "AWSEBEnvironmentName" }`"
      newrelic.browser_monitoring.auto_instrument = true
      newrelic.capture_params = true
      newrelic.enabled = true
      newrelic.error_collector.enabled = true
      newrelic.error_collector.record_database_errors = true
      newrelic.high_security = false
      newrelic.license = "YOUR LICENSE KEY HERE"
      newrelic.transaction_tracer.detail = 1
      newrelic.transaction_tracer.enabled = true
      newrelic.transaction_tracer.explain_enabled = true
      newrelic.transaction_tracer.explain_threshold = 2000
      newrelic.transaction_tracer.record_sql = "raw"
      newrelic.transaction_tracer.slow_sql = true

  "/tmp/install-newrelic.sh":
    mode: "00555"
    owner: root
    group: root
    encoding: plain
    content: |
      #!/bin/bash
      rpm -Uvh http://yum.newrelic.com/pub/newrelic/el5/x86_64/newrelic-repo-5-3.noarch.rpm
      yum -y install newrelic-php5

commands:
  00_install_newrelic:
    command: "/tmp/install-newrelic.sh"
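
For what it's worth, the "nasty hack" I'm alluding to would be something like the sketch below - sourcing the platform's env file (if your platform version has one at that path) and sed-ing the key into the ini after the fact, e.g. appended to the install script or run as another command. The NEW_RELIC_LICENSE_KEY name and the envvars path are assumptions on my part, not something I've verified on every AMI:

#!/bin/bash
# Hypothetical post-install step: pull the license key from an environment
# variable (if the platform exposes one) and substitute it into the ini,
# instead of hard-coding it above. Path and variable name are assumptions.
if [ -f /opt/elasticbeanstalk/support/envvars ]; then
  . /opt/elasticbeanstalk/support/envvars
fi
if [ -n "$NEW_RELIC_LICENSE_KEY" ]; then
  sed -i "s/YOUR LICENSE KEY HERE/$NEW_RELIC_LICENSE_KEY/" /etc/php.d/newrelic.ini
fi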
Categories: AWS, PHP

You really can do bash completion with spaces!

April 28th, 2016

A while ago, when I was working on my little ec2ssh project, I ran into the issue of tab completion not working for entries with spaces in them.

This was my first foray into the world of bash completion, so I incorrectly thought "this can't be that hard - everything has completion now."

After Googling for hours, trying various things, finding statements saying that it wasn't possible - I stumbled upon the solution. It works properly and was surprisingly easy.

The compgen command splits words on spaces by default. What I had to do was make sure the source data was separated by something else, such as a newline (\n), and then tell compgen - via IFS - that that was the separator.

So this line before the compgen command was key:

local IFS=$'\n'

Next, the complete command still won't work without the -o filenames option. This seems to be critical: telling it to behave as though we're dealing with filenames is what makes the space handling work properly. Note that I couldn't even find the original example that led me to this solution again. It's that undocumented.

complete -o filenames
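
Putting the two pieces together, here's a minimal sketch of the pattern. The _demo_complete function, the democmd command, and the ~/.democmd_hosts file are made up for illustration - the real, working version is in the script linked below:

# Complete "democmd" from a file of candidates, one per line, spaces allowed.
# Function/command names and the data file are hypothetical placeholders.
_demo_complete() {
  local IFS=$'\n'                           # candidates are newline-separated
  local cur="${COMP_WORDS[COMP_CWORD]}"
  COMPREPLY=( $(compgen -W "$(cat ~/.democmd_hosts)" -- "$cur") )
}
complete -o filenames -F _demo_complete democmd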

If you'd like to see this all put together in a very easy to read, semi-documented autocomplete script, check it out here:
https://github.com/mike503/ec2ssh/blob/master/ec2ssh.bash_completion.sh

Categories: PHP

CrashPlan in Docker on Synology

December 4th, 2015

TL;DR - fun stuff is at the bottom.

It's time to spend a few minutes to share something that hopefully will solve a headache of mine - and maybe one of yours.

I own Synology DS2411+ units. They're very space efficient, and quiet - the form factor is great, and something I haven't found anywhere else (WHY?!?!) - there is a premium for their specifically designed chassis and software - and in the past I've found it to be worth it. Nowadays with ZFS or Btrfs around, cheaper hardware and such... I'm not as happy, but I'm also not in the market for a new setup. 🙂

Anyway - one of the main things I want to do is to try to back up everything on my physical units to the cloud. The first barrier to entry is that there is only one truly "set-it-and-forget-it" *unlimited* provider that has a Linux client - and that is CrashPlan. The pricing is great, but the client is bloated, buggy and weird. I've begged Backblaze to create a Linux client for years and there was some chatter about it, but still nothing public. At this point with B2 being launched, I'd be surprised if they do it at all, or just have a B2-based app instead (in which case it will be utility based billing. I want unlimited!)

Back to our only option - CrashPlan.

Due to the terms of distribution, it looks like CrashPlan cannot be officially packaged by Synology, which is where the PC Load Letter package came in handy. However, it had issues over time - the Java distribution would require a reinstall periodically - and ultimately it wasn't the most reliable solution.

So I decided - I'll use the instructions to guide me on a manual install, one that isn't subject to issues with the package manager or Java changes. After using the instructions on the Synology wiki and what I thought was a clean, successful installation that worked for a couple weeks, it too crashed. Somehow during one of CrashPlan's own updates (I believe) it wound up wiping all the .jar files away. Trying to reinstall CrashPlan (the same version) actually failed after that for some unknown reason (how?!?!)

Recently I had some issues trying to set up a Redmine install for a client. Even following the exact instructions it wouldn't work - no idea why. So I decided to look into a Docker-based setup. Wouldn't you know it, I found an image that worked perfectly out of the gate. Docker to the rescue.

I realized not too long ago that Synology had added Docker to its Package Center. While I had dismissed it as more bloatware and attention paid to the wrong aspects (just make a rock-solid storage platform - I don't need an app store concept on my NAS!), I decided I should take a peek at the possibility of running CrashPlan in a Docker container. That way it would be self-contained in a richer Linux environment, with its own management of the Java runtime stuff.

As of right now, I will say "Docker to the rescue" again. After fixing up the right command line arguments and finding a Docker image that seems to work well, it's been running stably and inherited my old backup perfectly. I use -v to expose /volume1 from my Synology to the container, and it picks up exactly where it left off.

That's quite a lot of explanation for what boils down to two commands. Here is the working image and my command line arguments, exposing the ports and volumes properly. Enjoy.

docker pull jrcs/crashplan
docker run -d -p 4242:4242 -p 4243:4243 -v /volume1:/volume1 jrcs/crashplan:latest

Add more -v's if needed, and change the ports if you wish. Remember to grab the /var/lib/crashplan/.ui_info file to get the right key so you can connect to it from a CrashPlan desktop application (one of my other complaints with CP).
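
If you need that key, one hedged way to grab it - assuming the jrcs/crashplan image keeps it at the stock Linux path, which it appeared to for me - is to read it straight out of the running container:

docker ps                                       # find the container ID or name
docker exec <container> cat /var/lib/crashplan/.ui_info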

Categories: Software

Speeding up Drupal cache flushing

April 21st, 2015

Are your cache clears slow?

Do you use Features? It's great, but it can become quite a beast when you run a "drush cc all" or a "drush fra" - and today we figured out why the "drush cc all" is so slow. When hook_flush_caches() runs, unless you explicitly set the features_rebuild_on_flush variable to FALSE, Features will run the equivalent of a "fra" for you inside of your cache clear!

Add this to your settings.php:

$conf['features_rebuild_on_flush'] = FALSE;

Since we run our fra separately, we've disabled this, and noticed quite a reduction in time to "flush caches" (which was not only flushing caches but also reverting features, apparently!)

It's that unknown-to-us-before-today variable in the snippet below...

/**
 * Implements hook_flush_caches().
 */
function features_flush_caches() {
  if (variable_get('features_rebuild_on_flush', TRUE)) {
    features_rebuild();       
    // Don't flush the modules cache during installation, for performance reasons.
    if (variable_get('install_task') == 'done') {
      features_get_modules(NULL, TRUE);
    }
  }
  return array();
}
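
With the rebuild-on-flush behavior turned off, the revert just becomes its own explicit step in the deploy - roughly this (a sketch; adjust ordering and flags to taste):

# revert features explicitly, then clear caches - no hidden rebuild in the flush
drush fra -y
drush cc all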
Categories: Drupal

Custom XHProf run IDs

December 21st, 2014

XHProf is an awesome tool, but ships with a very annoying restriction out of the box. It names the run IDs in a cryptic randomized fashion (which makes sense - should be no collisions, and there's no "good" default way to name them otherwise.)

The good news, however, is that it does support custom run IDs via the save_run() method. Awesome! Except for an odd "bug" or "feature": a custom run ID won't display in the runs UI if it has non-hex characters in it (see the end of this post for more about that.)

This makes profiling specific requests a lot easier - you can put in a unique query string parameter and see it come up in the run ID, if the URL isn't already obvious. You could also simply define a standard query string parameter for your users/developers to use instead of the entire [scrubbed] URL as a run ID. Since I run a multi-tenant development server with a bunch of developers each having the ability to have a bunch of unique hostnames, it makes the most sense to use every piece of the URL for the run ID.

To start using custom XHProf run IDs, enable XHProf the standard way, at the earliest point in your application (at the top of the front-controller index.php, for example):

if (isset($_GET['xhprof']) && !empty($_SERVER['XHPROF_ROOT']) ) {
  include_once $_SERVER['XHPROF_ROOT'] . '/xhprof_lib/utils/xhprof_lib.php';
  include_once $_SERVER['XHPROF_ROOT']. '/xhprof_lib/utils/xhprof_runs.php';
  xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);
}

The key to the custom naming is the point where the run output gets saved. In this example, I made a simple function that takes the URL, changes all special characters into dashes, collapses any repetitive "dashing", and passes the result as the third parameter to the save_run() method - that third parameter is the run ID.

if (isset($_GET['xhprof']) && !empty($_SERVER['XHPROF_ROOT'])) {
  function _make_xhprof_run_id() {
    if (isset($_SERVER['HTTPS'])) {
      $run_id = 'https-';
    }
    else {
      $run_id = 'http-';
    }
    $run_id .= urldecode($_SERVER['HTTP_HOST'] . '/' . $_SERVER['REQUEST_URI']) . '-' . microtime(TRUE);
    $run_id = trim(preg_replace('|([^A-Za-z0-9])|', '-', $run_id), '-');
    while (strstr($run_id, '--')) {
      $run_id = str_replace('--' , '-', $run_id);
    }
    return $run_id;
  }
  $xhprof_data = xhprof_disable();
  $xhprof_runs = new XHProfRuns_Default();
  $run_id = $xhprof_runs->save_run($xhprof_data, 'xhprof_testing', _make_xhprof_run_id()); 
}

If you wanted to use a simple query string parameter, I would still use the same type of safeguards so that the filename comes out in a sane way (no high ASCII or other characters which the filesystem wouldn't handle well) - for example, using the "run_id" parameter (I haven't tested this code, but it *should* work :))

if (isset($_GET['xhprof']) && !empty($_SERVER['XHPROF_ROOT'])) {
  function _make_xhprof_run_id() {
    // return null and it will handle it like usual
    if (!isset($_GET['run_id'])) {
      return null;
    }
    $run_id = trim(preg_replace('|([^A-Za-z0-9])|', '-', urldecode($_GET['run_id']) . '-' . microtime(TRUE)), '-');
    while (strstr($run_id, '--')) {
      $run_id = str_replace('--' , '-', $run_id);
    }
    return $run_id;
  }
  $xhprof_data = xhprof_disable();
  $xhprof_runs = new XHProfRuns_Default();
  $run_id = $xhprof_runs->save_run($xhprof_data, 'xhprof_testing', _make_xhprof_run_id()); 
}

NOTE: Both of these methods require one thing to be fixed on the display side. Currently, the XHProf display UI that ships with the stock package *will* list all the XHProf runs, but won't load them if there are non-hex characters in the run ID. I don't know why this naming limitation is enforced. I've filed a bug[1] to ask why, and ideally to propose removing the restriction - for now I've commented out the three lines that enforce the hex check, and everything seems to work fine so far.

[1] https://github.com/phacility/xhprof/issues/58

Categories: PHP

Facebook Messenger and the illusion of privacy

August 27th, 2014

There are a lot of people who continue to spread the FUD around Messenger, and it is making me crazy.

Anyone who thinks they have privacy on a third party service needs a reality check.

As Scott McNealy said in 1999, "You have zero privacy anyway. Get over it." Anyone who still believes using a service someone else owns (especially for free) entitles them to privacy is sadly mistaken. Most services will try to do the right thing, but there will always be a chance of an accidental data breach or hack, even from the most trustworthy services. You can only hope that the company wants to do good by the consumer.

Simply put, if you don't control it, you can't expect it to be controlled for you.

There are two primary complaints I've heard during this whole Messenger debacle.

The first is that Messenger terms now allow them to spy on you and you can incriminate yourself by using it. First, companies are required to comply with law enforcement if they want to do business in this country. Second, if you are doing anything illegal, you shouldn't be discussing it on something someone else owns anyway. This has always been the case with your telephone, text messaging, etc... that's just common sense.

The other complaint is about unrestricted access to your camera and photos and contacts. Once again, that's nothing new - you authorize your apps all the time for access. Let's take some popular apps for example: Snapchat? You probably authorized it for camera, photo album and contact access. Apps need photo album access to be able to get past photos. They need camera access to take pictures/videos inside of the app. Instagram? You most likely authorized those for camera and/or the photo album too. You probably authorized LinkedIn to your contacts. You most likely authorized the original Facebook app to your camera and photo album (otherwise you can't post any images from inside the application!) - you probably even authorized it to access your contacts.

I feel bad for Facebook having to deal with such unoriginal claims. Yes, it sucks to have to install yet another app, but they have reasons for it and it works well in conjunction with the original Facebook app. It's free, and it isn't a space hog, so there's no real "cost" associated. Battery drainage is the only complaint I consider to be reasonable.

Just remember - nothing is truly private. Even your encrypted end-to-end messaging - someone can take a screenshot or save it and share it. It comes back to what Jon Voight said in Enemy of the State, "The only privacy that's left is the inside of your head."

Categories: Consumerism

How to compile eggdrop on Ubuntu 14.04.1 LTS "trusty"

August 8th, 2014

It's been a while. Updates? I'm going to try to post more, and I also put the entire site on SSL. I've switched hosts to Linode as well. Isn't that amazing?

Back to compiling eggdrop. When it gives that annoying paragraph:

configure: error:

Tcl cannot be found on this system.

Eggdrop requires Tcl and the Tcl development files to compile. If you already have Tcl installed on this system, make sure you also have the development files (common package names include 'tcl-dev' and 'tcl-devel'). If I just wasn't looking in the right place for it, re-run ./configure using the --with-tcllib='/path/to/libtcl.so' and --with-tclinc='/path/to/tcl.h' options.

See doc/COMPILE-GUIDE's 'Tcl Detection and Installation' section for more information.

Install these two packages (letting the dependencies install as well), and then the configure string below works. For some reason there is no longer a plain old libtcl.so, and eggdrop requires BOTH of these options to be provided - supplying just one of them doesn't work...

sudo apt-get install -y libtcl8.5 tcl8.5-dev
./configure --with-tcllib=/usr/lib/x86_64-linux-gnu/libtcl8.5.so --with-tclinc=/usr/include/tcl8.5/tcl.h
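
From there, the rest of the build is the usual eggdrop routine (from memory - check doc/COMPILE-GUIDE if your version differs):

make config        # choose which modules to compile
make
make install       # installs under ~/eggdrop by default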

Oh yeah, this does assume you're running x86_64. Because there's really no reason you wouldn't be anymore.

Categories: Tech Tips

Docker HTTP proxy settings in Upstart

October 10th, 2013

This was driving me crazy. There are some bug reports about it, but nobody has a plain and simple example. So here's mine. Enjoy.

Old:

description "Run docker"

start on filesystem or runlevel [2345]
stop on runlevel [!2345]

respawn

script
  /usr/bin/docker -d
end script

New:

description "Run docker"

start on filesystem or runlevel [2345]
stop on runlevel [!2345]

respawn

env HTTP_PROXY="http://your.address:port"
env HTTPS_PROXY="http://your.address:port"

script
  /usr/bin/docker -d
end script
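
To pick up the new environment, bounce the job and sanity-check that a pull actually goes out through the proxy (the image name here is just an example):

sudo stop docker
sudo start docker
sudo docker pull busybox    # should succeed from behind the proxy if the env vars took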
Categories: Software

My first Scout plugin!

September 8th, 2013

I'm digging Scout so far, and it has almost all the plugins I would want already in their plugin directory. However, I did want to add in a Solr "healthcheck", since we've noticed some oddities with our search index.

Here is a quick-and-dirty way to get the number of results for an empty search (i.e. the entire index) on a single Solr core on localhost. Maybe this will help somebody else out there. I suppose it could be parameterized with hostnames, search strings, etc. - it wouldn't be that hard from the looks of it.

Enjoy.

Filename: solr.rb

class SolrResultCount < Scout::Plugin

  needs "rubygems"
  needs "json"
  needs "net/http"

  def build_report
    url = "http://localhost:8983/solr/select?q=&rows=3&fl=bundle&wt=json"
    r = Net::HTTP.get_response(URI.parse(url))
    parsed = JSON.parse(r.body)
    report(:results=>parsed["response"]["numFound"])
  end

end
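
If you want to eyeball what the plugin will report before wiring it into Scout, the same query is easy to hit with curl (adjust the URL if you're not on a default single-core localhost install) - numFound in the JSON response is the value that gets reported:

curl 'http://localhost:8983/solr/select?q=&rows=3&fl=bundle&wt=json'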
Categories: Development

Setting up chrooted SFTP-only access on a Synology DiskStation

August 28th, 2013

This has been on my list to figure out for a long time. I wanted SFTP-only access for specific accounts, and the ability to chroot them. It took me a while and various attempts, only to wind up landing on the most basic solution, of course.

I originally tried scponly and scponlyc (which I've used in the past) and rssh, however none of them worked properly for me.

Sure enough, the openssh package from optware worked right out of the box.*

wget http://wizjos.endofinternet.net/synology/archief/syno-mvkw-bootstrap_1.2-7_arm-ds111.xsh
sh syno-mvkw-bootstrap_1.2-7_arm-ds111.xsh   # run the downloaded bootstrap to get ipkg (skip if you already have it)
ipkg install openssh openssh-sftp-server

Then edit /opt/etc/openssh/sshd_config, and put in:

Match User username
        ChrootDirectory /some/directory
        ForceCommand internal-sftp

Also edit the user account in /etc/passwd: change the home dir to /some/directory, and give it "/bin/sh" for a shell.

Voila... the next time sshd is restarted, it will just work.

The guys at optware made a neat startup script that will start their sshd on boot. So nothing to do there.

Make sure to disable synology's built-in ssh (Control Panel > Terminal) or you'll probably be hitting the wrong one!

If you are concerned about privileges, the way Synology runs its units isn't very UNIX-permission friendly (most files are world-writable on the filesystem, and the expectation is that the daemons will properly control access.) I wound up creating a little cron job that chmods and chowns files to keep the secondary account I created effectively read-only for that directory.
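
Mine is roughly along these lines (a rough sketch - the path, schedule, and modes are placeholders; note the UPDATE below about leaving the chroot directory itself owned by root:root):

# /etc/crontab entry (Synology's crontab includes a user field):
# strip group/other write below the share so the SFTP account stays read-only
*/10 * * * * root chmod -R go-w /some/directory/*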

* As always with my tips, YMMV - this worked fine on my Atom-based DS2411+ unit, but when I tried the same setup on a DS213, it didn't seem to work. No idea why; there aren't many diagnostics or logs to go on. Sorry.

UPDATE: After running this on the "working" NAS unit for a bit, it stopped working. The culprit was that the ChrootDirectory had become owned by the user, not by root:root. Changing it back (chown root:root /some/directory) fixes that, so it looks like OpenSSH requires that ownership for the chroot to work. That could also have been the issue mentioned in the previous paragraph (I couldn't test it anymore.)

Categories: Software