Log configuration for Laravel multi-line backtrace logs in Datadog

April 12th, 2021 Comments off

Thanks to this link I was able to find a snippet that helped quickly. I wanted to share it with the world, since it was the only result Google had for this.

    log_processing_rules:
       - type: multi_line
         name: new_log_start_with_date
         pattern: \[\d{4}\-(0?[1-9]|1[012])\-(0?[1-9]|[12][0-9]|3[01])
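
For context, Laravel's default log entries begin with a bracketed date/timestamp, while the backtrace lines that follow do not - so any line that doesn't match the pattern above gets folded into the previous entry. A rough, made-up example of one multi-line entry:

    [2021-04-12 10:15:32] production.ERROR: Example exception message
    #0 /home/forge/example.com/app/Http/Controllers/WidgetController.php(42): someFunction()
    #1 {main}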

Here is my full YAML file as an example (in a custom config directory; filename: /etc/datadog-agent/conf.d/custom.d/conf.yaml), based off a Forge-created server. I used wildcards in the path so it applies universally to all hosts and sites.

logs:
  - type: file
    path: /home/forge/*/storage/logs/*.log
    service: laravel
    source: laravel
    log_processing_rules:
       - type: multi_line
         name: new_log_start_with_date
         pattern: \[\d{4}\-(0?[1-9]|1[012])\-(0?[1-9]|[12][0-9]|3[01])
Categories: PHP

CrashPlan - what a joke.

August 22nd, 2017 No comments

Why would I be using CrashPlan, one might ask? Well, I've been waiting years for Backblaze to come up with a consumer/unlimited Linux client for my NAS boxes, and now that B2 is out, I doubt that's happening. 🙂 CP was the only unlimited + Linux option there was.

I've been getting "convert to CrashPlan Pro" emails for a month or two now. I didn't have any interest. But now it is being forced, as they've announced they are exiting the consumer market. They provide an "easy" way and a 12-month discount to migrate you over to their "small business platform" - except, get this:

Some cloud backups will be restarted

Some of your devices have more than 5 TB of data backed up to CrashPlan Central. Due to technical platform constraints of the migration process, each device's cloud backup must be less than 5 TB.

If you continue, these cloud backups will be permanently removed after you migrate your account to CrashPlan for Small Business, and these devices will start new cloud backups.

So two of my NAS units are actually close to 100% backed up (it only took YEARS) - one at 14.7TB, the other at 23TB. Both of those backups, which took forever, are going to be removed. Even though consumer CrashPlan could support it, and CrashPlan Pro talks about unlimited backups ("it can be gigabytes or terabytes!"), they give me no way to migrate them. They have to start over from scratch.

Need I remind you that CP's upload performance is abysmal, and the client is bloated, "Java-ey", and resource intensive? The whole thing is garbage. Sadly, it is still the only Linux-capable unlimited client out there.

If you don't need Linux, don't touch CP. Go with Backblaze. CP has tons of complaints about lost or broken backups, horrible speeds, and then this kind of shit.

To allow you time to transition to a new backup solution, we've extended your subscription (at no cost to you) by 60 days. Your new subscription expiration date is 09/21/2017.

How is that 60 days? It's 30 days from today.

I just feel bad for all the people complaining on Twitter about prepaying and CP's policy is no refunds. Their backups will sit on the "Crashplan for Home" system until it expires, I guess - but a backup system with an expiration isn't much of a backup system at all. 🙂

Categories: Consumerism, Software

More WordPress woes - degradation over time

March 13th, 2017 No comments

Another day, another WP headache.

One of my clients contacted me saying his site was not performing properly and had been getting worse. It's on a server I set up, so the server shouldn't be the issue, but I wanted to make sure there wasn't something weird going on with it, and see if I could find some causes.

He's using my CloudFlare full page caching method - so that's good. His frontend performance (for cached pages) is lightning fast. But the backend and uncached pages (and 404s) are abysmal.

I notice the standard stuff - UberMenu, Genesis theme - both very general-purpose things, which we all know sacrifice performance for flexibility. UberMenu can be a beast; "I can do everything" themes often are too. Anyway, the site WAS just fine previously with those, so it's something else. In the PHP-FPM slowlog and New Relic I see "maybe_unserialize" coming up over and over. That function ideally would never exist if it weren't for WP storing serialized and unserialized strings in the same places and having to detect which it's working with (such an amazing design...) - but either way, it shouldn't be as slow as it is.

That function is used all over the place, but there are a couple of key places to look - especially when cron is disabled in wp-config.php yet I am seeing calls to wp_get_schedule, wp_next_scheduled, and other cron-looking functions on what seems like nearly every page load. I wound up finding the culprit. The client had used a plugin called "Easy Social Metrics", which would add at least one entry per post to the cron queue. So there were hundreds if not thousands of "easy_social_metrics_update_single_post" one-time cron items being autoloaded on EVERY PAGE LOAD - because the "cron" option is an autoloaded option. Every page load on the WP frontend or in the admin area was loading tons of serialized text in this single variable, checking if it was serialized, unserializing it, possibly updating/modifying it, reserializing it, and so on.
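
If you suspect the same problem, here's a rough sketch of how you might measure it (run it from a throwaway mu-plugin or via wp eval-file) using WordPress's own get_option() and maybe_serialize():

<?php
// Sketch: how many scheduled hook entries live in the autoloaded "cron" option,
// and how much serialized data gets loaded (and unserialized) on every request?
$cron   = get_option( 'cron' );
$events = 0;
if ( is_array( $cron ) ) {
    foreach ( $cron as $timestamp => $hooks ) {
        if ( is_array( $hooks ) ) { // skip the non-array 'version' element
            $events += count( $hooks );
        }
    }
}
printf( "cron entries: %d, serialized size: %d bytes\n", $events, strlen( maybe_serialize( $cron ) ) );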

The sad thing was he didn't like the plugin so he had deleted it. But it left behind all the residue.

I've seen similar issues with other options - the "rewrite_rules" one was abused by another plugin in the past on a different site (I don't have the details anymore) - but this is the second time I've had to forcefully kill off an option to rescue performance. Sadly, this is nothing any normal end user who only interacts with their site through plugins would ever notice, unless they installed some sort of cron manager and saw all these excess items hanging around. Not many people would install a cron manager though, or should need to. So they're left having to hire someone to diagnose why this magical CMS that can do anything is so abysmally slow.

It does have a nice-looking admin interface and a low barrier to entry for end users, but the foundation is horrible. The plugin and theme ecosystem seems rich because of the number of options and downloads, but the quality and performance of most are lacking.

Here's a chart of the performance. You'll see the first time I killed the "cron" option around 3:13pm - it was rewritten in full again, which caused the second spike - and then when I finally purged it for good around 3:17pm. The site is back to a sub-1-second average and has remained that way, which is about as good as you're gonna get with 20+ plugins including UberMenu and a kitchen-sink theme. 🙂

Categories: WordPress

GoDaddy SSL cert limitations - 10 domains? Not really.

February 14th, 2017 No comments

Just a quick heads up: if you're planning on buying a UCC cert (totally overpriced!) that claims 10 domains, you actually only get ten FQDNs total - use them wisely. Consider this example, which only covers five domains:

  1. a.com
  2. www.a.com
  3. b.com
  4. www.b.com
  5. c.com
  6. www.c.com
  7. d.com
  8. www.d.com
  9. e.com
  10. www.e.com

This is why we use things like CloudFlare and Let's Encrypt, depending on your needs. It's 2017 - who actually needs to buy SSL certs anymore?

Before my customer purchased the certificate I had to ask, since "domain" means something different to me (you know, an actual domain), and this is what support told me - I verified it after the fact. As the rep stated, "the primary covers both [www and non-www]", so you can squeeze in an 11th hostname, but since I needed the domains fully covered, I could only use it for five domains either way.

Categories: Consumerism

Beware: S3 doesn't support "+" in filenames (properly)

January 12th, 2017 1 comment

Discovered this issue the other day.

You can upload a file with a plus ("+") in its name - S3 won't reject it. However, you cannot directly access it afterwards: you can only reach it by encoding the "+" as "%2B". So unless you are doing that in your links, you'll wind up with an "access denied" S3 XML error, because the "+" is interpreted as a space instead of a literal "+".
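
If you're building the links in PHP, here's a minimal sketch of the workaround (the bucket and key names are made up) - rawurlencode() turns "+" into "%2B", so encode each path segment of the key:

<?php
// Sketch: build an S3 object URL, percent-encoding each segment of the key so
// a "+" becomes "%2B" instead of being read back as a space.
function s3_object_url($bucket, $key) {
    $segments = array_map('rawurlencode', explode('/', $key));
    return 'https://' . $bucket . '.s3.amazonaws.com/' . implode('/', $segments);
}

// Prints: https://example-bucket.s3.amazonaws.com/uploads/photo%2Bfinal.jpg
echo s3_object_url('example-bucket', 'uploads/photo+final.jpg');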

While they could fix this, it looks like they won't at this point, as it would change behavior that everyone is used to. I found some threads from years ago complaining about it; even back then there was little expectation the behavior would change.

Total complaints with S3: 3
Total happiness points with S3: 1,292

Categories: AWS

nginx rate limiting with a combination of IP and user agent

September 7th, 2016 No comments

Here's a quick and dirty way to use IP-based rate limiting (very common) but override it for specific user agents. Basically, this is just a method of chaining geo {} and map {} (and other things) together - you recycle each variable as the following statement's "default" value.

# whitelisted IP ranges - will not have limits applied
geo $geo_whitelist {
  default 0;
  1.2.3.4 1;
  2.3.4.5/24 1;
}

# whitelisted user agents - will not have limits applied
map $http_user_agent $whitelist {
  default $geo_whitelist;
  ~*(google) 1;
}

# if whitelist is 0, put the binary IP address in $limit so the rate limiting has something to use
map $whitelist $limit {
  0 $binary_remote_addr;
  1 "";
}

limit_req_zone $limit zone=perip:30m rate=1r/s;
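
Note that limit_req_zone only defines the zone; to actually enforce it you still need a limit_req directive in whatever server/location block you want limited. A minimal sketch (the burst value is just an example):

location / {
  # enforce the "perip" zone defined above; whitelisted IPs/user agents end up
  # with an empty key and are never limited
  limit_req zone=perip burst=10 nodelay;
}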

Categories: nginx

Automatically installing New Relic on Elastic Beanstalk

July 10th, 2016 No comments

Nearly 100% automated - it looks like the only way to grab an environment variable right now is through some nasty hacks.

Other than the one hard-coded value at the moment, this seems to work well. It sets up the New Relic configuration as desired on instance creation, assuming the stock PHP 5.x Amazon Linux AMI.

Just drop this in .ebextensions/newrelic.config or something of the sort, and rebuild your environment. Replace API_KEY with your API key, and change or remove the newrelic.framework line if you're not using WordPress (this page has a list of supported frameworks).

packages:
  yum:
    newrelic-sysmond: []
    newrelic-php5: []
  rpm:
    newrelic: http://yum.newrelic.com/pub/newrelic/el5/x86_64/newrelic-repo-5-3.noarch.rpm
commands:
  "01_configure_sysmond":
    command: nrsysmond-config --set license_key=API_KEY
  "02_start_sysmond":
    command: /etc/init.d/newrelic-sysmond start
  "03_install_php_agent":
    command: newrelic-install install
    env:
      NR_INSTALL_SILENT: true
      NR_INSTALL_KEY: API_KEY
files:
  "/etc/php.d/newrelic.ini":
    mode: "000644"
    owner: root
    group: root
    content: |
      extension=newrelic.so

      [newrelic]
      newrelic.appname = "`{ "Ref" : "AWSEBEnvironmentName" }`"
      newrelic.browser_monitoring.auto_instrument = true
      newrelic.capture_params = true
      newrelic.enabled = true
      newrelic.error_collector.enabled = true
      newrelic.error_collector.record_database_errors = true
      newrelic.high_security = false
      newrelic.license = "API_KEY"
      newrelic.transaction_tracer.detail = 1
      newrelic.transaction_tracer.enabled = true
      newrelic.transaction_tracer.explain_enabled = true
      newrelic.transaction_tracer.explain_threshold = 2000
      newrelic.transaction_tracer.record_sql = "raw"
      newrelic.transaction_tracer.slow_sql = true
      newrelic.framework = "wordpress"
Categories: AWS, PHP

You really can do bash completion with spaces!

April 28th, 2016 No comments

A short while ago, when I was working on my little ec2ssh project, I ran into the issue of tab completion not working for items with spaces in them.

This was my first foray into the world of bash completion, so I incorrectly thought "this can't be that hard - everything has completion now."

After Googling for hours, trying various things, finding statements saying that it wasn't possible - I stumbled upon the solution. It works properly and was surprisingly easy.

The compgen command typically uses spaces as the word separators. What I had to do was make sure the source data was separated by something else, such as a line break (\n), and then tell compgen that was the separator.

So this line before the compgen command was key:

local IFS=$'\n'

Next, the complete command still won't work without feeding it the -o filenames option. This seems to be critical; telling it to behave as if we are dealing with filenames makes it work properly. Note that I couldn't even find the original example that led me to this solution right now - it's that undocumented.

complete -o filenames
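
Combined, a minimal completion script looks roughly like this (the command name, function name, and candidate list below are made up for illustration):

# Hypothetical example: complete "mycmd" against a newline-separated list.
_mycmd_complete() {
    local IFS=$'\n'   # split candidates on newlines instead of spaces
    local candidates=$'alpha host\nbeta host\ngamma'
    COMPREPLY=( $(compgen -W "$candidates" -- "${COMP_WORDS[COMP_CWORD]}") )
}
complete -o filenames -F _mycmd_complete mycmd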

If you'd like to see this put all together in a very easy to read, semi-documented autocomplete script, check it out here:
https://github.com/mike503/ec2ssh/blob/master/ec2ssh.bash_completion.sh

Categories: PHP

CrashPlan in Docker on Synology

December 4th, 2015 3 comments

TL;DR - fun stuff is at the bottom.

It's time to spend a few minutes to share something that hopefully will solve a headache of mine - and maybe one of yours.

I own Synology DS2411+ units. They're very space efficient and quiet - the form factor is great, and something I haven't found anywhere else (WHY?!?!). There is a premium for their specifically designed chassis and software, and in the past I've found it to be worth it. Nowadays, with ZFS or Btrfs around, cheaper hardware and such... I'm not as happy, but I'm also not in the market for a new setup. 🙂

Anyway - one of the main things I want to do is to try to back up everything on my physical units to the cloud. The first barrier to entry is that there is only one truly "set-it-and-forget-it" *unlimited* provider that has a Linux client - and that is CrashPlan. The pricing is great, but the client is bloated, buggy and weird. I've begged Backblaze to create a Linux client for years and there was some chatter about it, but still nothing public. At this point with B2 being launched, I'd be surprised if they do it at all, or just have a B2-based app instead (in which case it will be utility based billing. I want unlimited!)

Back to our only option - CrashPlan.

Due to the terms of distribution with CrashPlan, it looks like it cannot be officially packaged by Synology - which is where the PC Load Letter package came in handy. However, it had some issues over time, and the Java distribution would require reinstalling periodically. Ultimately, it wasn't the most reliable solution.

So I decided - I'll use the instructions to guide me on a manual install, one that isn't subject to issues with the package manager or Java changes. After using the instructions on the Synology wiki and what I thought was a clean, successful installation that worked for a couple weeks, it too crashed. Somehow during one of CrashPlan's own updates (I believe) it wound up wiping all the .jar files away. Trying to reinstall CrashPlan (the same version) actually failed after that for some unknown reason (how?!?!)

Recently I had some issues when trying to set up a Redmine install for a client. Even following the exact instructions it wouldn't work. No idea why. So I decided to look into a Docker-based setup. Wouldn't you know, I found a package that worked perfectly out of the gate. Docker to the rescue.

I realized not too long ago that Synology added Docker to its Package Center. While I had dismissed it as more bloatware and attention being paid to the wrong aspects (just make a rock-solid storage platform - I don't need an app store concept on my NAS!), I decided I should take a peek at the possibility of running CrashPlan in a Docker container. That way, it would be self-contained in a richer Linux environment, with its own management of the Java runtime stuff.

As of right now, I will say "Docker to the rescue" again. After fixing up the right command line arguments and finding a Docker image that seems to work well, it's been running stably and inherited my old backup perfectly. I use -v to expose /volume1 of my Synology to the container, and it picks up exactly where it left off.

That's quite a lot of explanation for what boils down to the magic of it all. Here is the working image and my command line arguments to it, to expose the ports and volumes and such properly. Enjoy.

docker pull jrcs/crashplan
docker run -d -p 4242:4242 -p 4243:4243 -v /volume1:/volume1 jrcs/crashplan:latest

Add more -v's if needed, and change the ports if you wish. Remember to grab the /var/lib/crashplan/.ui_info file to get the right key so you can connect to it from a CrashPlan Desktop application (one of my other complaints with CP).

UPDATE 2017/01/07 - after running this for months, I'll share my script (I believe it actually requires installing bash from ipkg to really be solid). I put it on my data volume (volume1) in the crashplan directory and it seems to persist there. Any time I want to update the container, or something seems to have crashed (CrashPlan can crash often, thanks mainly to Java, the large number of files, and how RAM-heavy it is), I can just run this. It's able to persist the relevant configuration between docker runs, and runs more stably than any other solution I've used or tried. CrashPlan is still much slower than Backblaze, but it's still the only service with a Linux-compatible client and unlimited data (I don't consider ACD to fit the mold exactly yet).

cat /volume1/crashplan/crashplan-docker.sh 
#!/opt/bin/bash -v

docker rm -f `docker ps -qa`
rm -f /volume1/crashplan/log/*

docker pull jrcs/crashplan:latest

# ref: https://hub.docker.com/r/jrcs/crashplan/
docker run \
-d \
--name=crashplan \
--restart=always \
-h $HOSTNAME \
-p 4242:4242 \
-p 4243:4243 \
-v /volume1:/volume1 \
-v /volume1/crashplan:/var/crashplan \
jrcs/crashplan:latest
Categories: Software

Speeding up Drupal cache flushing

April 21st, 2015 No comments

Are your cache clears slow?

Do you use Features? It's great, but it can become quite the beast when you run a "drush cc all" or a "drush fra" - and today we figured out why the "drush cc all" is an issue: when hook_flush_caches() runs, if you don't explicitly define this variable as FALSE, it will run the "fra" for you inside of your cache clear!

Add this to your settings.php:

$conf['features_rebuild_on_flush'] = FALSE;

Since we run our fra separately, we've disabled this, and noticed quite a reduction in time to "flush caches" (which was not only flushing caches but also reverting features, apparently!)

It's that unknown-to-us-before-today variable in the snippet below...

/**
 * Implements hook_flush_caches().
 */
function features_flush_caches() {
  if (variable_get('features_rebuild_on_flush', TRUE)) {
    features_rebuild();       
    // Don't flush the modules cache during installation, for performance reasons.
    if (variable_get('install_task') == 'done') {
      features_get_modules(NULL, TRUE);
    }
  }
  return array();
}
Categories: Drupal