Provide separate Rails log files for each Unicorn worker

Here's a snippet we're using to keep our Rails 4 Unicorn log files in separate files (one per Unicorn worker process) so that tools like Splunk can parse a single request as an atomic chunk. Without this, the workers' log lines become interwoven and are really hard for tools to separate out.

In your unicorn config file:

after_fork do |server, worker|
  # This log hack provides separate log files for each unicorn worker.
  # Since Unicorn forks worker processes after loggers are already initialized, by this point other
  # things (like ActiveRecord::Base.logger) are already pointing directly at the current
  # Rails.logger instance so we can't just point `Rails.logger` elsewhere.
  logdev = Rails.logger.instance_variable_get(:@logdev)

  ext = File.extname(logdev.filename)
  path = logdev.filename.sub(/#{Regexp.escape(ext)}$/, ".#{worker.nr}#{ext}")

  # open the file in the same way rails does:
  file = File.open(path, 'a')
  file.sync = Rails.application.config.autoflush_log
  logdev.instance_variable_set(:@dev, file)
end
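The path rewrite in the snippet can be exercised in isolation. Here's a hypothetical helper (not part of the actual config) showing how the worker number gets spliced in before the log file's extension:

```ruby
# Hypothetical helper illustrating the rename: production.log -> production.0.log
def worker_log_path(path, worker_nr)
  ext = File.extname(path)
  path.sub(/#{Regexp.escape(ext)}$/, ".#{worker_nr}#{ext}")
end

puts worker_log_path("/app/log/production.log", 0)
# /app/log/production.0.log
```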

Here is a gist.

Keep Splunk's search bar open for multiline queries

I've always wished Splunk wouldn't auto-collapse the search bar for multiline queries. This morning I dove in and figured out how to customize it.

Here's some CSS that will keep the search bar expanded up to about 5 lines (tweak the max-height settings for different amounts).

Append this to
(If application.css doesn't exist already, you'll need to restart splunkweb after creating it.)

Tested on Splunk 4.3.2 and 5.0.1

/* keep search bar expanded to several lines for multiline queries */
.SearchBar .searchFieldWrapperInner {
    height: auto;
    max-height: 80px;
    overflow-y: scroll;
}
.SearchBar .searchFieldWrapperInner.searchFieldActive {
    max-height: none;
}
.SearchBar .searchFieldWrapper {
    height: auto;
    max-height: 88px;
    padding-bottom: 2px;
}
.SearchBar .searchFieldWrapperInner::-webkit-scrollbar {
    -webkit-appearance: none;
    width: 5px;
}
.SearchBar .searchFieldWrapperInner::-webkit-scrollbar-thumb {
    border-radius: 4px;
    background-color: rgba(0,0,0,.5);
    -webkit-box-shadow: 0 0 1px rgba(255,255,255,.5);
}

Monitoring Google Analytics for iOS

While trying to verify some Google Analytics tracking in Animoto's iOS app, I discovered that the GA iOS SDK (at least version 1.3, which we're using) makes itself hard to monitor: it doesn't use the HTTP proxy configured on the iOS device when making its tracking calls, so a debugging proxy like Charles or Fiddler can't observe the traffic. i.e., this method doesn't work.

Instead, I have to share my ethernet internet connection to my iOS device wirelessly and then use a tool like ngrep or Wireshark to monitor the traffic going through the wireless interface (usually "en1").

How I verified:

Using two laptops, I shared my ethernet connection from my first laptop to both my iPod Touch and to my second laptop. I started Charles Proxy on my second laptop and configured my iPod Touch to use it as its HTTP Proxy. Like this:

Using Wireshark to inspect network traffic on both laptops I saw most http traffic coming through on both laptops. However, the GA traffic was showing up only on the first laptop. So, for some reason the GA iOS SDK isn't using the iOS-configured HTTP proxy to send its traffic. I'm not sure if this is a bug or by design, but it does make things harder to monitor.

Interestingly, even when you run the app through the iOS simulator via Xcode, the iOS simulator also refuses to use the OSX-configured proxy for GA traffic (i.e., you still can't use Charles).

So to monitor GA traffic from an iPhone app you have two options:

  1. Using one Mac and one iOS device, share your ethernet internet connection from your Mac to your iOS device
  2. If you have the source code, you can use the iOS simulator via Xcode and the simulator will be using your Mac's internet connection.

And in both cases you need to use ngrep or Wireshark on your Mac to monitor the traffic.

Some discussion on StackOverflow.

Download task for Capistrano


cap download_file FILE=/some/remote/file.ext DEST=/local/save/path.ext

Downloads files from the remote servers and saves them as "/local/save/path.ext-#{hostname}" locally.


desc 'Download a file.  Local filenames are postfixed with the origin server hostname.'
task :download_file do
  abort "Please specify a file or directory to download (via the FILE environment variable)" if ENV["FILE"].to_s.empty?
  abort "Please specify a destination file or directory (via the DEST environment variable)" if ENV["DEST"].to_s.empty?

  file = ENV["FILE"]
  dest = "#{ENV["DEST"]}-$CAPISTRANO:HOST$"

  download file, dest
end

Enumerable#count_by for Ruby

Handy function for inspecting (counting) array-like data. Usage:

> [9,10,11,11,11,10].count_by(&:odd?)
=> {true=>4, false=>2}

>> class Foo <; end
>> ["a"),"b"),"a")].count_by(&:x)
=> {"a"=>2, "b"=>1}

Source: (I put the first of these in my ~/.irbrc file)

# Ruby >= 1.8.7 (incl 1.9.x)

module Enumerable
  def count_by(&block)
    Hash[group_by(&block).map { |key,vals| [key, vals.size] }]
  end
end


# Ruby <= 1.8.6 (Hash#[] behaves differently in <=1.8.6.  note: breaks when group_by key is an array)

module Enumerable
  def count_by(&block)
    Hash[*group_by(&block).map { |key,vals| [key, vals.size] }.flatten]
  end
end

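To see why the <= 1.8.6 variant breaks on array-valued group keys, note that the splat of a flattened pair list destroys the key. A hypothetical illustration:

```ruby
# One group whose key is the array [1, 2], with a count of 2.
pairs = [[[1, 2], 2]]

p Hash[pairs]              # {[1, 2]=>2} -- the Ruby >= 1.8.7 form keeps the key

begin
  Hash[*pairs.flatten]     # flatten yields [1, 2, 2]: the array key is gone
rescue ArgumentError => e
  puts "broken: #{e.message}"
end
```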
mvsum for Splunk: Summing multi-valued fields within a single event

Here's a way to sum up multi-valued fields within events in Splunk.

Splunk provides "mvcount" and "mvjoin" but doesn't provide anything like "mvsum". So I wrote a custom search command (splunk reference docs here) that can be plugged into Splunk to provide functionality like this:

* | rex max_match=10 "(?<nums>\d+)" | mvsum nums as total_nums | table nums total_nums

Which generates, for each event, a total_nums field containing the sum of the values in nums.

Installation is as easy as installing this source code and restarting Splunk. Tested on Splunk 4.3.2.

Github permalink bookmarklet

Linking to a URL on github like

is fragile because "master" moves around. It's much better to link to a specific version of the file like

But, github makes it easy to arrive at the former and hard to transition to the latter. I wish they would make it easy, but until that happens here's a handy bookmarklet I crafted that:

  • Redirects you to the associated permalink page
  • Shrinks the commit hash to the standard 7 character abbreviation
  • Maintains any urls hashes (e.g., highlighted line numbers)
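The URL rewrite the bookmarklet performs can be sketched in Ruby (hypothetical helper; the real bookmarklet is JavaScript and reads the commit SHA off the page, and the URL and SHA below are made up):

```ruby
# Turn a branch-based blob URL into a commit-pinned permalink, shrinking the
# SHA to 7 characters and keeping any #hash (e.g. highlighted line numbers).
def github_permalink(url, sha)
  url.sub(%r{(/blob/)[^/]+(/)}, "\\1#{sha[0, 7]}\\2")
end

puts github_permalink(
  "",
  "1234567890abcdef"
)
# -> .../blob/1234567/railties/CHANGELOG.md#L10
```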

The bookmarklet: github permalink (drag to your bookmark bar)

Use on any "blob" page, such as

Running rspec tests in Sublime Text 2

To be able to run spec tests from within Sublime Text 2:

Menu choice: Tools > Build System > New Build System...

Then enter:

    {
      "cmd": ["bundle", "exec", "rspec", "$file"],
      "working_dir": "${project_path:${folder}}",
      "file_regex": "^(...*?):([0-9]*):?([0-9]*)",
      "selector": "source.ruby"
    }

Thanks to

USB Safety

Hilarious enough to get me to post on my blog. :)

Safety Check

Funny comment from notch8 following Github's production database slip-up.

Installing PostgreSQL for use in Rails testing on Mac OS X Snow Leopard

These are instructions for setting up 64-bit postgres on Mac OS X 10.6, Snow Leopard using MacPorts. My purpose was to be able to run Rails' tests when creating patches for ActiveRecord.

This is a pure Snow Leopard install (not upgraded from 10.5 Leopard). There may be some complications if you upgraded. Good luck. ;)

Install with macports:

sudo port install postgresql84 postgresql84-server

While that's happening, update PATH in the appropriate shell startup file (mine is ~/.bash_login; yours may be ~/.profile, ~/.bash_profile, or something else):

export PATH="/opt/local/lib/postgresql84/bin:$PATH"

Follow the instructions that 'port install' printed out:

"To create a database instance, after install do"

sudo mkdir -p /opt/local/var/db/postgresql84/defaultdb  
sudo chown postgres:postgres /opt/local/var/db/postgresql84/defaultdb  
sudo su postgres -c '/opt/local/lib/postgresql84/bin/initdb -D  /opt/local/var/db/postgresql84/defaultdb'

"Execute the following command to start it, and to cause it to launch at startup:"

sudo port load postgresql84-server

Create a postgres superuser for your username:

sudo -u postgres createuser your-username-here
Shall the new role be a superuser? (y/n) y

Install the ruby gem:

sudo gem install pg

Set up the rails activerecord test db:

cd your-rails-checkout-dir/activerecord
rake postgresql:build_databases

Verify test passage:

rake test_postgresql

It works! Hopefully. :)

Some possibly helpful references

Easy PDF Generation on Heroku

Discovering the sleekness of wkhtmltopdf + wicked_pdf and recalling old wounds incurred in past PDF generation coding made me smile so much I just had to make a sample app out of it. What better way to do so than a plug-and-play Heroku app?

Voilà heroku-pdf. Working example.

Animoto's 5-word acceptance speech

Presenting our CFO:

Animoto's Webby acceptance speech

Surely this is the greatest company on earth :)

Animoto highlights the Webbys

Animoto was asked to create a highlight clip for the Webbys (at which we also accepted an award) --

Highlight Reel 2009

Animoto wins a Webby

The awesome team here at Animoto has taken home an 'oscar of the internet' in this year's Webby Awards in the Services & Applications category. Go Animoto!

Slow Ruby on Rails tests fixed!

I can't thank Barry Hess and Andi Schacke enough for writing and commenting on this post and saving me from all the darkness and despair ;) I've experienced over the past few months as my Rails test performance plummeted on this shiny new MacBook Pro. I recompiled Ruby as per Andi's comment and my test times are now about a fourth of what they were. Thank you both.

Revised Hivelogic instructions:

curl -O
tar xzvf ruby-1.8.7-p72.tar.gz
cd ruby-1.8.7-p72
sudo make install
cd ..

Happy Valentines Day!

Two videos from our family to yours. :)

Enterprise Ruby

(from Maik Schmidt's Enterprise Recipes with Ruby and Rails)

"Back in the mid-90s, an experiment started as a way to make average developers more effective, because the demand continued (as it does today) to outstrip the supply of good developers. If the software industry can figure out a way to make mediocre developers productive, software development can expand to enterprise scales. Thus, we saw the rise of languages like Visual Basic and Java and later C#. These languages were specifically made less powerful than alternatives (like Smalltalk). The goal of the Lockdown Experiment: make tools to keep average developers out of trouble while still being able to write code. But then a couple of interesting things happened. First, creating restrictive tools and languages didn’t really keep average developers out of trouble, because average developers sometimes apply great ingenuity to coming up with ridiculously complex solutions to problems. But while this didn’t really make the average developers better, it put a serious governor on the best developers. The whole industry seemed to be optimizing for the wrong thing: safety at the expense of power, with the stated goal of creating software faster. Yet, we didn’t produce software faster; we just annoyed the best developers. The second effect was this new wave of languages was so restrictive that they immediately had to start supplementing them to get real work done. For example, in the Java world, the second version added a bunch of new features (like anonymous inner classes), and eventually some limited metaprogramming was added to Java via aspect-oriented programming.

The real underlying problem with lots of "enterprise languages" is one that Stuart Halloway of Relevance software summed up brilliantly: ceremony vs. essence. Languages that require you to jump through hoops to achieve results are highly ceremonious, whereas languages that make it easy to do sophisticated things are more essential. At the end of the day, you have to solve problems. You want languages and frameworks that lessen the distance from intent to result. Ceremonious languages sometimes make that distance quite far, requiring lots of work that doesn’t really move your solution forward. More essential languages get out of your way, making the distance from intent to result shorter. "

Merry Christmas!

"True happiness comes only by making others happy—the practical application of the Savior’s doctrine of losing one’s life to gain it. In short, the Christmas spirit is the Christ spirit, that makes our hearts glow in brotherly love and friendship and prompts us to kind deeds of service." 1

Extra Flash crossdomain.xml wrinkles

Had a lovely time trying to figure out why my Flex widget couldn't talk to our server via https. It turns out that (even with a crossdomain.xml file) a SWF served from http cannot access https unless you add an extra special parameter, secure="false", to the crossdomain file. I really wish Flash returned more helpful error messages than 'Security Error'.
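For reference, here's what a minimal crossdomain.xml with that attribute can look like. This is a sketch; you'll want to tighten the domain policy for production rather than allowing "*":

```xml
<?xml version="1.0"?>
<cross-domain-policy>
  <!-- secure="false" allows a SWF loaded over plain http to make requests
       to this https host; without it the request fails with 'Security Error' -->
  <allow-access-from domain="*" secure="false"/>
</cross-domain-policy>
```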

We're using this to allow secure communication from our non-https page for some ajax login & fetch behavior, using the flash widget as a proxy, since the same origin policy for javascript prohibits just about everything if you need secure communication without putting the whole page in https. Ajax requests are prohibited, the script-tag hack doesn't work (login params would have to go unencrypted in the url), and iframes suffer from the same problem. Google uses the iframe trick on some of its pages --

(make sure you're not logged in)

but it seems that that only works because they redirect the whole page when successful, which we didn't want to do. Looks like the flash widget will work.
