Archive for the ‘Software’ Category

CrashPlan - what a joke.

August 22nd, 2017 No comments

Why would I be using CrashPlan, one might ask? Well, I've been waiting years for Backblaze to come up with a consumer/unlimited Linux client for my NAS boxes, and now that B2 is out, I doubt that's happening. 🙂 CrashPlan was the only unlimited + Linux option out there.

I've been getting "convert to CrashPlan Pro" emails for a month or two now. I didn't have any interest. But now it is being forced, as they've announced they are exiting the consumer market. They provide an "easy" way and a discount for 12 months to migrate you over to their "small business platform" - except, get this:

Some cloud backups will be restarted

Some of your devices have more than 5 TB of data backed up to CrashPlan Central. Due to technical platform constraints of the migration process, each device's cloud backup must be less than 5 TB.

If you continue, these cloud backups will be permanently removed after you migrate your account to CrashPlan for Small Business, and these devices will start new cloud backups.

So two of my NAS units are actually close to 100% backed up (it only took YEARS) - one at 14.7TB, the other at 23TB. Both of those backups, which took forever, are going to be removed. Even though consumer CrashPlan could support them, and CrashPlan Pro talks about unlimited backups ("it can be gigabytes or terabytes!"), they give me no way to migrate them. Those have to start over from scratch.

Need I remind you that CP's upload performance is abysmal, and the client is bloated, "Java-ey" and resource intensive? The whole thing is garbage. Sadly, it is still the only Linux-capable unlimited client out there.

If you don't need Linux, don't touch CP. Go with Backblaze. CP has tons of complaints of lost or broken backups, horrible speeds, then this kind of shit.

To allow you time to transition to a new backup solution, we've extended your subscription (at no cost to you) by 60 days. Your new subscription expiration date is 09/21/2017.

How is that 60 days? It's 30 days from today.

I just feel bad for all the people complaining on Twitter about prepaying, since CP's policy is no refunds. Their backups will sit on the "CrashPlan for Home" system until it expires, I guess - but a backup system with an expiration isn't much of a backup system at all. 🙂

Categories: Consumerism, Software

CrashPlan in Docker on Synology

December 4th, 2015 3 comments

TL;DR - fun stuff is at the bottom.

It's time to spend a few minutes to share something that hopefully will solve a headache of mine - and maybe one of yours.

I own Synology DS2411+ units. They're very space efficient, and quiet - the form factor is great, and something I haven't found anywhere else (WHY?!?!) - there is a premium for their specifically designed chassis and software - and in the past I've found it to be worth it. Nowadays with ZFS or Btrfs around, cheaper hardware and such... I'm not as happy, but I'm also not in the market for a new setup. 🙂

Anyway - one of the main things I want to do is back up everything on my physical units to the cloud. The first barrier to entry is that there is only one truly "set-it-and-forget-it" *unlimited* provider with a Linux client - and that is CrashPlan. The pricing is great, but the client is bloated, buggy and weird. I've begged Backblaze to create a Linux client for years, and there was some chatter about it, but still nothing public. At this point, with B2 launched, I'd be surprised if they do it at all, rather than just offering a B2-based app instead (in which case it would be utility-based billing - I want unlimited!)

Back to our only option - CrashPlan.

Due to CrashPlan's distribution terms, it looks like it cannot be officially packaged by Synology, which is where the PC Load Letter package came in handy. However, it had some issues over time, and the Java distribution required periodic reinstalls. Ultimately, it wasn't the most reliable solution.

So I decided I'd use the instructions to guide me on a manual install, one that isn't subject to issues with the package manager or Java changes. I followed the instructions on the Synology wiki, and what I thought was a clean, successful installation worked for a couple weeks - then it too crashed. Somehow during one of CrashPlan's own updates (I believe) it wound up wiping all the .jar files away. Trying to reinstall CrashPlan (the same version) actually failed after that for some unknown reason (how?!?!)

Recently I had some issues when trying to set up a Redmine install for a client. Even following the exact instructions it wouldn't work. No idea why. So I decided to look into a Docker-based setup. Wouldn't you know, I found a package that worked perfectly out of the gate. Docker to the rescue.

I realized not too long ago that Synology added Docker to its package center. While I had dismissed it as more bloatware and attention paid to the wrong aspects (just make a rock solid storage platform, I don't need an app store concept on my NAS!), I decided I should take a peek at the possibility of running CrashPlan in a Docker container. That way, it would be self-contained in a richer Linux environment, with its own management of the Java runtime.

As of right now, I will say - "Docker to the rescue" - again. After fixing up the right command line arguments and finding a Docker image that seems to work well, it's been running stably and inherited my old backup perfectly. I use -v to expose /volume1 on my Synology to the container, and it picks up exactly where it left off.

That's quite a lot of explanation for something that boils down to two commands. Here is the working image and my command line arguments to it, exposing the ports and volumes properly. Enjoy.

# grab the image and run it, exposing the CrashPlan ports and the data volume
docker pull jrcs/crashplan
docker run -d -p 4242:4242 -p 4243:4243 -v /volume1:/volume1 jrcs/crashplan:latest

Add more -v's if needed, and change the ports if you wish. Remember to grab the /var/lib/crashplan/.ui_info file to get the right key so you can connect to it from a CrashPlan Desktop application (one of my other complaints with CP).
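
A quick way to read that key, for what it's worth (a sketch - this assumes the container is named crashplan, as in the update script below; with the bare docker run above, substitute the container ID from docker ps):

# read the CrashPlan auth token from inside the running container
docker exec crashplan cat /var/lib/crashplan/.ui_info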

UPDATE 2017/01/07 - after running this for months, I'll share my script (I believe it actually requires installing bash from ipkg to really be solid). I put it on my data volume (volume1) in the crashplan directory and it persists there. Any time I want to update the container, or something seems to have crashed (CrashPlan crashes often, thanks mainly to Java, the sheer number of files, and how RAM-heavy it is), I can just run this. It persists the relevant configuration between docker runs, and runs more stably than any other solution I've used or tried. CrashPlan is still much slower than Backblaze, but it's also still the only service with a Linux-compatible client and unlimited data (I don't consider ACD to fit the mold exactly yet).

cat /volume1/crashplan/crashplan-docker.sh 
#!/opt/bin/bash -v

# blow away any existing containers (assumes CrashPlan is the only one on this box)
docker rm -f `docker ps -qa`
# clear out old logs
rm -f /volume1/crashplan/log/*

docker pull jrcs/crashplan:latest

# ref: https://hub.docker.com/r/jrcs/crashplan/
docker run \
-d \
--name=crashplan \
--restart=always \
-h $HOSTNAME \
-p 4242:4242 \
-p 4243:4243 \
-v /volume1:/volume1 \
-v /volume1/crashplan:/var/crashplan \
jrcs/crashplan:latest

Categories: Software

Docker HTTP proxy settings in Upstart

October 10th, 2013 No comments

This was driving me crazy. There are some bug reports about it, but nobody has a plain and simple example. So here's mine. Enjoy.

Old:

description "Run docker"

start on filesystem or runlevel [2345]
stop on runlevel [!2345]

respawn

script
  /usr/bin/docker -d
end script

New:

description "Run docker"

start on filesystem or runlevel [2345]
stop on runlevel [!2345]

respawn

env HTTP_PROXY="http://your.address:port"
env HTTPS_PROXY="http://your.address:port"

script
  /usr/bin/docker -d
end script
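
To apply and sanity-check the change, something like this should do (a sketch, using Upstart's standard tooling):

# reload job definitions and bounce the daemon
sudo initctl reload-configuration
sudo restart docker

# confirm the proxy variables landed in the daemon's environment
sudo cat /proc/$(pidof docker)/environ | tr '\0' '\n' | grep -i proxy
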
Categories: Software

Setting up chrooted SFTP-only access on a Synology DiskStation

August 28th, 2013 No comments

This has been on my list to figure out for a long time. I wanted SFTP-only access for specific accounts, and to be able to chroot them. It took me a while and various attempts, only to wind up landing on the most basic solution, of course.

I originally tried scponly and scponlyc (which I've used in the past) and rssh, however none of them worked properly for me.

Sure enough, the openssh package from optware worked right out of the box.*

wget http://wizjos.endofinternet.net/synology/archief/syno-mvkw-bootstrap_1.2-7_arm-ds111.xsh
# run the bootstrap to set up ipkg (this step was implied; invoking it with sh is my assumption)
sh syno-mvkw-bootstrap_1.2-7_arm-ds111.xsh
ipkg install openssh openssh-sftp-server

Then edit /opt/etc/openssh/sshd_config and add this at the end (directives after a Match line apply only to that match, so these blocks belong at the bottom of the file):

Match User username
        ChrootDirectory /some/directory
        ForceCommand internal-sftp

Also edit the user's account in /etc/passwd: change the home dir to /some/directory, and give it "/bin/sh" for a shell.
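
The resulting passwd entry ends up looking something like this (the username, UID and GID are made up for illustration):

sftpuser:x:1026:100::/some/directory:/bin/sh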

Voilà... the next time sshd is restarted, it will just work.

The guys at optware made a neat startup script that will start their sshd on boot. So nothing to do there.

Make sure to disable Synology's built-in SSH (Control Panel > Terminal) or you'll probably be hitting the wrong one!

If you are concerned about privileges, the way that Synology runs its units isn't very UNIX-permission friendly (most files are world writable on the filesystem, and the expectation is that the daemons will properly control access.) I wound up creating a little cron job that chowns and chmods files to keep the secondary account I created a "read only" account for that directory.
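
Something along these lines in /etc/crontab does the trick (the schedule and permission bits here are a sketch - adjust to taste):

# every 15 minutes: keep everything root-owned and strip group/world write bits
*/15 * * * * root chown -R root:root /some/directory && chmod -R go-w /some/directory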

* As always with my tips, YMMV - this worked fine on my Atom-based DS2411+ unit, but when I tried the same setup on a DS213, it didn't seem to work. No idea why; there aren't many diagnostics or logs to go on. Sorry.

UPDATE: After running this on the "working" NAS unit for a bit, it stopped working. The culprit was that the ChrootDirectory had become owned by the user, not by root:root. Changing it back (chown root:root /some/directory) fixes that. So it looks like OpenSSH wants that in place for the chroot to work. That could also have been the issue mentioned in the previous paragraph (I couldn't test it anymore).

Categories: Software

Upstart script for Apache Solr

July 11th, 2013 No comments

(Apologies for not sharing my technical thoughts for six months now!)

I was trying to figure out the best way to launch Apache Solr on Ubuntu, and was having trouble finding a nice clean way to do it. After fiddling with some misc init scripts, I decided I should look at Upstart. Thankfully someone (mentioned below) already had a working script to start from.

Assumes:

  • Solr will be running as user "solr" group "solr"
  • Solr's root where start.jar is located is /home/solr
  • This works well using Solr 4.3.1, Ubuntu 12.04.2 LTS (precise)

Put this in /etc/init/solr.conf:

start on runlevel [2345]
stop on runlevel [!2345]

kill timeout 30
respawn

setuid solr
setgid solr

script
   # the script stanza runs under /bin/sh, so use cd (chdir is not a shell command)
   cd /home/solr
   exec /usr/bin/java -jar start.jar
end script
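
Once the file is in place, the usual Upstart commands apply (on 12.04 the job's output should also land in /var/log/upstart/solr.log):

sudo initctl reload-configuration
sudo start solr
sudo status solr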

Big thanks to Eric Wilson's blog for the initial script. I tweaked it for my specific user/group/location desires.

Categories: Software

Shout out to CrashPlan!

August 24th, 2012 No comments

While I am typically a Backblaze fanboy ("we'll always be unlimited", and so cost-effective), I somehow stumbled upon CrashPlan. Seeing that it's a Java-based client initially scared me, but the support for Linux and other OSes got me interested. Not only can I stick it on servers and home Linux boxes (and I have now...) but they even give you tips on how to use an SSH tunnel to connect to their local service. So I can launch a desktop application on my Windows machine and connect to my CrashPlan backup daemon on my server at SoftLayer. Neat.
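
The tunnel boils down to a single ssh command; a minimal sketch (the hostname is a placeholder - 4243 is the engine's default service port, and forwarding to local port 4200 avoids colliding with an engine running on the desktop machine; the desktop client is then pointed at that port via its ui.properties):

# forward local port 4200 to the remote CrashPlan engine's service port
ssh -L 4200:localhost:4243 user@my-server.example.com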

It is supposed to be totally unlimited as well, and they only charge $12.99 or something for 2-10 computers, vs. the per-computer model from Backblaze. Also, they don't list support for file shares, but it had no problem backing up one of my Samba mounts. (Please don't fix that if it's a bug!)

So while I still consider Backblaze to be more efficient and easier to use (just "set it and forget it"), I will say that CrashPlan has a lot more options, uses an opt-in policy (akin to Mozy, etc.) instead of an opt-out-by-default policy (Backblaze), and it makes it really easy to list the entire filesystem and select/deselect at any level of it.

The other interesting/neat thing is you can set it up to back up to friends' machines, local storage, attached drives, or their CrashPlan Central cloud (which is what they charge the monthly fee for.)

Since Backblaze isn't playing in the Linux space yet, and has special ways to check whether a filesystem is "local" and such, it looks like I will be using the best of both systems for now. The Java UI does feel a bit "Java-ey", but the price, features and performance of the actual network backups seem well worth it.

So +1 to CrashPlan!

Categories: Software

How I sped up my MySQL restores

February 25th, 2011 No comments

I want to share this with the world, as it would have been helpful to know up front. I had to move a database that is 13 GB on the filesystem (not including the shared ibdata file) - a mixture of MyISAM and InnoDB tables. That's not an extremely large or complex database, and the export only took a couple of minutes. I figured the import would take longer - just not as much longer as it originally did.

I didn't do the math, but it would have probably taken over 10-15 hours to restore the database from the mysqldump. There are a couple of easy tweaks I hadn't been using. For one, I had used --skip-opt, making my mysqldump files full INSERT statements (for verbosity, and the ability to "diff" them if I ever needed to) - that habit was stolen from a backup script I wrote.

If you read the documentation and blogs, they say to use --opt when running mysqldump for faster imports. Well, duh! While I was at it, I also tweaked a couple of other things. Right now it is moving MUCH faster. What did I do?

  • On the source, I used mysqldump --opt (it seemed to dump the database faster too)
  • On the destination, I set innodb_flush_log_at_trx_commit to "0" in my.cnf for the time being. This server isn't used yet, so that's safe.
  • I also put "SET AUTOCOMMIT=0;" at the top of the script, and "COMMIT;" at the bottom. I don't need any commits until the end; this is a fresh import. (See the sketch after this list.)
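
Put together, it looks roughly like this (database and file names are placeholders):

# on the source: dump with --opt for a restore-friendly (if less diffable) file
mysqldump --opt mydb > mydb.sql

# on the destination: temporarily set innodb_flush_log_at_trx_commit = 0 in my.cnf,
# restart mysqld, then run the whole import inside a single transaction
( echo "SET AUTOCOMMIT=0;"; cat mydb.sql; echo "COMMIT;" ) | mysql mydb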

The results are not very scientific, but here's how it breaks down so far (still in the middle of the process)

  • Without these tweaks, at 107 minutes it was only at 2.2 GB out of 13 GB.
  • With these tweaks, at 12 minutes it was already at 4.5 GB out of 13 GB.

I think this will save my bacon; I wish I had done this sooner and not wasted those two hours originally. Someone in #mysql recommended I look at XtraBackup, but it seemed like too much to learn for a first run while I was in the middle of a production migration.

Categories: Software

Mozy stops "unlimited" plan... and I mosey on

February 1st, 2011 No comments

I've been a fan of Backblaze for a long time, and prefer it and recommend it over Mozy time and time again. Mozy was the golden child for a bit, but now the prize goes to Backblaze, with its more efficient backup client, faster network speeds and same price. I've been using Backblaze for over a year by itself on many machines and have been quite happy. For the sake of redundancy though, a couple months ago I decided to subscribe to Mozy as well, just out of paranoia.

Because their service always uses over 100 MB of RAM and seems to continuously get stuck on certain files, I was already planning on getting rid of it soon. Today's announcement made this decision even easier though, as now they've decided to go the way of other companies with tiered pricing models. With how cheap technology continuously gets, any company marking prices up really pisses me off.

So, I give a profane salute to you, Mozy, as you have now joined the ranks of companies I feel personally displeased with, and definitely will not recommend (not that I really did anyway.)

Even AT&T (one of the main companies I despise) let people grandfather in their unlimited plans, and cell networks take a lot more of a beating than a backup service does, especially with hard drive prices going down every day. Adding more servers to a rack is a lot easier than adding cell tower capacity. That type of "next month you'll be forced to change" does not sit well with me.

Dear Michael,

Thanks for being a valued Mozy subscriber. For the first time since 2006, we're adjusting the price of our MozyHome service and wanted to give you a heads up. As part of this change, we're replacing our MozyHome Unlimited backup plan and introducing the following tiered storage plans:

50 GB for $5.99 per month (includes backup for 1 computer)
125 GB for $9.99 per month (includes backup for up to 3 computers)

You may add additional computers (up to 5 in total) or 20 GB increments of storage to either of the plans, each for a monthly cost of $2.00.

While this policy takes effect for new MozyHome customers starting today, your MozyHome Unlimited subscription is still valid for the duration of your current monthly term. In order to ensure uninterrupted service, you'll need to select a new renewal plan.

Categories: Consumerism, Software

True "Incognito" mode for Google chrome

January 8th, 2011 No comments

I hate Windows. I do.

This is a very hacky, no-garbage-collection, but still "works well enough" script. When I asked #chromium on freenode about cookie sharing between Incognito windows, I was told it's been discussed before, and I got the information on how to make sure your Incognito windows don't share information or cookies: force separate user data directories.

I'm not really worried about privacy; I'm more annoyed that when I launch separate Incognito windows, cookies are shared between them, which is sort of against the point. I have to log in to the same sites over and over under different accounts for different clients, and it's a PITA.

Major things to note:

  • This assumes you'll run some sort of "temp directory cleanup" tool on your own for Windows. This script doesn't have any concept of "oh yeah, I have to clean up that temp directory I made".
  • None of your extensions, bookmarks, settings, etc. will be remembered in this session. It's completely barren.
  • You will never (assuming the GUID is unique) get the same session more than once.

As I said, it's hacky, and you'll need to change a couple of the paths. I couldn't figure it out elegantly, and I was getting tired of trying to find script examples on the net (why is it so hard to find code that works together for Microsoft languages?)

Perhaps someday soon Chrome or someone will develop something more robust for this. For now, if you want - this does seem to work, at least on my XP SP3 system.

Note: This is VBScript. Make a file called "incognito.vbs" or something and it should work.

' keep us honest
Option Explicit

' because we have to
Dim strDirectory
Dim strTempDirectory
Dim TypeLib
Dim objFSO
Dim objShell
Dim strChromePath

' change this if you want - anything with spaces has to be wrapped in triple quotes
strChromePath = """C:\Documents and Settings\mike\Local Settings\Application Data\Google\Chrome\Application\chrome.exe"""
strTempDirectory = "C:\Windows\Temp"

' make a clean guid
Set TypeLib = CreateObject("Scriptlet.TypeLib")
strDirectory = TypeLib.Guid
strDirectory = Replace(strDirectory, "{", "")
strDirectory = Replace(strDirectory, "}", "")
strDirectory = strTempDirectory & "\" & strDirectory

' create the directory (resume on errors so we can report them below)
On Error Resume Next
Set objFSO = CreateObject("Scripting.FileSystemObject")
objFSO.CreateFolder(strDirectory)

' launch it, or fail
If Err.Number = 0 Then
   Set objShell = CreateObject("WScript.Shell")
   objShell.run (strChromePath & " --incognito --no-first-run --user-data-dir=" & strDirectory)
Else
   WScript.echo "VBScript Error: " & err.number
End If

' cleanup
Set TypeLib = nothing
set objFSO = nothing
Set objShell = nothing

' quit
WScript.Quit()
Categories: Development, Software

Pidoco - rapid prototyping/wireframing - why didn't I find this before?

November 11th, 2010 No comments

A while ago, I was looking for tools to do prototyping/wireframing so I could explain my ideas a bit better than some crappy sketched out "wireframes" on paper. For some reason, this one did not come up, so I want to help them gain exposure for being so awesome.

I just discovered this tool in the last hour. It not only allows you to make wireframes, but actually usable prototypes - with links to external sites and to other pages inside the prototype; you can pull in external images and content; it has layers like Photoshop... the list goes on and on. Best part: the learning curve was quite gentle. I found another tool which was an Adobe AIR-based app, if I recall, but it was a bit cryptic and hard to use. This one even allows you to invite people to do usability testing on your prototype, record their movements, leave comments, etc. Best of all, the cost is extremely reasonable!

There's simply too much to name off and now all I want to do for the next month is prototype out all my ideas!

https://pidoco.com/

Categories: Development, Software