
January 06 2014

18:43

Writing Robust Web Applications – The Lost Art of Exception Handling

As developers, we want the applications we build to be resilient when it comes to failure, but how do you achieve this goal? If you believe the hype, micro-services and a clever communication protocol are the answer to all your problems, or maybe automatic DNS failover. While that kind of stuff has its place and makes for an interesting conference presentation, the somewhat less glamorous truth is that making a robust application begins with your code. But, even well designed and well tested applications are often lacking a vital component of resilient code – exception handling.

I never fail to be amazed by just how under-used exception handling tends to be even within mature codebases. Let’s look at an example.


What Can Possibly Go Wrong?

Say we have a Rails app, and one of the things we can do using this app is fetch a list of the latest tweets for a user, given their handle. Our TweetsController might look like this:

class TweetsController < ApplicationController
  def show
    person = Person.find_or_create_by(handle: params[:handle])
    if person.persisted?
      @tweets = person.fetch_tweets
    else
      flash[:error] = "Unable to create person with handle: #{person.handle}"
    end
  end
end

And the Person model that we used might be similar to the following:

class Person < ActiveRecord::Base
  def fetch_tweets
    client = Twitter::REST::Client.new do |config|
      config.consumer_key        = configatron.twitter.consumer_key
      config.consumer_secret     = configatron.twitter.consumer_secret
      config.access_token        = configatron.twitter.access_token
      config.access_token_secret = configatron.twitter.access_token_secret
    end
    client.user_timeline(handle).map{|tweet| tweet.text}
  end
end

This code seems perfectly reasonable; there are dozens of apps with code just like this sitting in production. But let’s look a little more closely.

  • find_or_create_by is a Rails method. It’s not a ‘bang’ method, so it shouldn’t raise exceptions, but if we look at the documentation we can see that, due to the way this method works, it can raise an ActiveRecord::RecordNotUnique error. This won’t happen often, but if our application has a decent amount of traffic it’s likely occurring more often than you might expect (I’ve seen it happen many times).
  • While we’re on the subject, any library you use can throw unexpected errors due to bugs within the library itself and Rails is no exception. Depending on our level of paranoia we might expect our find_or_create_by to throw any kind of unexpected error at any time (a healthy level of paranoia is a good thing when it comes to building robust software). If we have no global way of handling unexpected errors (we’ll discuss this below), we might want to handle these individually.
  • Then there is person.fetch_tweets, which instantiates a Twitter client and tries to fetch some tweets. This is a network call and is prone to all sorts of failure. We may want to read the documentation to figure out which errors to expect, but we know that errors here are not only possible but quite likely (for example, the Twitter API might be down, or a person with that handle might not exist). Not putting some exception handling logic around network calls is asking for trouble.

Our tiny amount of code has some serious issues; let’s try to make it better.


The Right Amount of Exception Handling

We’ll wrap our find_or_create_by and push it down into the Person model:

class Person < ActiveRecord::Base
  class << self
    def find_or_create_by_handle(handle)
      begin
        Person.find_or_create_by(handle: handle)
      rescue ActiveRecord::RecordNotUnique
        Rails.logger.warn { "Encountered a non-fatal RecordNotUnique error for: #{handle}" }
        retry
      rescue => e
        Rails.logger.error { "Encountered an error when trying to find or create Person for: #{handle}, #{e.message} #{e.backtrace.join("\n")}" }
        nil
      end
    end
  end
end

We’ve handled the ActiveRecord::RecordNotUnique according to the documentation and now we know for a fact that we’ll either get a Person object or nil if something goes wrong. This code is now solid, but what about fetching our tweets:

class Person < ActiveRecord::Base
  def fetch_tweets
    client.user_timeline(handle).map{|tweet| tweet.text}
  rescue => e
    Rails.logger.error { "Error while fetching tweets for: #{handle}, #{e.message} #{e.backtrace.join("\n")}" }
    nil
  end

  private

  def client
    @client ||= Twitter::REST::Client.new do |config|
      config.consumer_key        = configatron.twitter.consumer_key
      config.consumer_secret     = configatron.twitter.consumer_secret
      config.access_token        = configatron.twitter.access_token
      config.access_token_secret = configatron.twitter.access_token_secret
    end
  end
end

We push instantiating the Twitter client down into its own private method, and since we don’t know everything that could go wrong when we fetch the tweets, we rescue everything.

You may have heard somewhere that you should always catch specific errors. This is a laudable goal, but people often misinterpret it as, “if I can’t catch something specific, I won’t catch anything”. In reality, if you can’t catch something specific you should catch everything! This way at least you have an opportunity to do something even if it’s only to log and re-raise the error.
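As a plain-Ruby sketch of that fallback (the method names here are illustrative, not from the app above): when we can’t anticipate a specific error, we catch everything, log it, and re-raise so callers further up the stack can still react.

```ruby
require 'logger'

LOGGER = Logger.new($stdout)

# Simulates a call that can fail in ways we can't anticipate
def risky_operation
  raise ArgumentError, 'something went wrong'
end

def perform
  risky_operation
rescue => e
  # We can't be more specific, so catch everything, log, and re-raise
  LOGGER.error("#{e.class}: #{e.message}")
  raise
end
```

The bare `raise` re-raises the exception currently being handled, preserving its class, message and backtrace.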

An Aside on OO Design

In order to make our code more robust, we were forced to refactor and now our code is arguably better than it was before. You can use your desire for more resilient code to inform your design decisions.

An Aside on Testing

Every time you add some exception handling logic to a method, it’s also an extra path through that method and it needs to be tested. It’s vital you test the exceptional path, perhaps more so than testing the happy path. If something goes wrong on the happy path you now have the extra insurance of the rescue block to prevent your app from falling over. However, any logic inside the rescue block itself has no such insurance. Test your exceptional path well, so that silly things like mistyping a variable name inside the rescue block don’t cause your application to blow up (this has happened to me so many times – seriously, just test your rescue blocks).
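To see why, here is a contrived plain-Ruby example (the names are illustrative): a typo inside the rescue block is invisible until the exceptional path actually runs.

```ruby
def fetch_data(fail_now: false)
  raise 'network error' if fail_now
  'data'
rescue => e
  # typo: `mesage` instead of `message` -- hidden until this path executes,
  # at which point it raises NoMethodError and takes the app down anyway
  "failed: #{e.mesage}"
end

fetch_data # happy path: returns 'data', typo goes unnoticed
```

A test that forces the failure (`fetch_data(fail_now: true)`) would surface the NoMethodError immediately, instead of in production.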


What to Do With the Errors We Catch

I’ve seen this kind of code countless times through the years:

begin
  widgetron.create
rescue
  # don't need to do anything
end

We rescue an exception and don’t do anything with it. This is almost always a bad idea. When you’re debugging a production issue six months from now, trying to figure out why your ‘widgetron’ isn’t showing up in the database, you won’t remember that innocent comment and hours of frustration will follow.

Don’t swallow exceptions! At the very least you should log any exception that you catch, for example:

begin
  foo.bar
rescue => e
  Rails.logger.error { "#{e.message} #{e.backtrace.join("\n")}" }
end

This way we can trawl the logs and we’ll have the cause and stack trace of the error to look at.

Better yet, you may use an error monitoring service such as Rollbar, which is pretty nice. There are many advantages to this:

  • Your error messages aren’t interspersed with other log messages
  • You will get stats on how often the same error has happened (so you can figure out if it’s a serious issue or not)
  • You can send extra information along with the error to help you diagnose the problem
  • You can get notifications (via email, pagerduty etc.) when errors occur in your app
  • You can track deploys to see when particular errors were introduced or fixed
  • etc.

For example, reporting the error to Rollbar in addition to logging it:

begin
  foo.bar
rescue => e
  Rails.logger.error { "#{e.message} #{e.backtrace.join("\n")}" }
  Rollbar.report_exception(e)
end

You can, of course, both log and use a monitoring service as above.

If your rescue block is the last thing in a method, I recommend having an explicit return:

def my_method
  begin
    foo.bar
  rescue => e
    Rails.logger.error { "#{e.message} #{e.backtrace.join("\n")}" }
    Rollbar.report_exception(e)
    nil
  end
end

You may not always want to return nil, sometimes you might be better off with a null object or whatever else makes sense in the context of your application. Consistently using explicit return values will save everyone a lot of confusion.
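For example, a minimal null-object sketch (NullTweets is a hypothetical class, not part of the app above): callers get something that behaves like an empty result instead of a nil they have to check for.

```ruby
class NullTweets
  def map; []; end
  def each; self; end
  def empty?; true; end
end

def fetch_tweets
  raise 'Twitter API is down' # simulate the failure path
rescue => e
  # log the error here as discussed above, then return the null object
  NullTweets.new
end
```

Any caller that iterates over the result now works unchanged on the failure path.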

You can also re-raise the same error or raise a different one inside your rescue block. One pattern that I often find useful is to wrap the existing exception in a new one and raise that one so as not to lose the original stack trace (I even wrote a gem for this since Ruby doesn’t provide this functionality out of the box). Later on in the article when we talk about external services, I will show you why this can be useful.
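A hand-rolled sketch of that wrapping pattern (TwitterError and fetch_timeline are hypothetical names; on Ruby 2.1+ the original error is also available automatically via Exception#cause):

```ruby
# A semantic error that carries the low-level one it wraps
class TwitterError < StandardError
  attr_reader :original

  def initialize(message, original = nil)
    super(message)
    @original = original
  end
end

def fetch_timeline
  raise IOError, 'connection reset by peer' # simulate a low-level failure
rescue IOError => e
  # Re-raise a semantically meaningful error, keeping the original around
  raise TwitterError.new("Twitter API call failed: #{e.message}", e)
end
```

Callers can now rescue one meaningful error type while the original class, message and backtrace stay available for debugging.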


Handling Errors Globally

Rails lets you specify how to handle requests for resources of a certain format (HTML, XML, JSON) by using respond_to and respond_with. I rarely see apps that correctly use this functionality; after all, if you don’t use a respond_to block, everything works fine and Rails renders your template correctly. We hit our tweets controller via /tweets/yukihiro_matz and get an HTML page full of Matz’s latest tweets. What people often forget is that it’s very easy to request a different format of the same resource, e.g. /tweets/yukihiro_matz.json. At this point Rails will valiantly try to return a JSON representation of Matz’s tweets, but it won’t go well since the view for it doesn’t exist: an ActionView::MissingTemplate error will get raised and our app blows up in spectacular fashion. And JSON is a legitimate format; in a high-traffic application you’re just as likely to get a request for /tweets/yukihiro_matz.foobar. Tuts+ gets these kinds of requests all the time (likely from bots trying to be clever).

The lesson is this: if you’re not planning to return a legitimate response for a particular format, restrict your controllers from trying to fulfill requests for those formats. In the case of our TweetsController:

class TweetsController < ApplicationController
  respond_to :html

  def show
    ...
    respond_to do |format|
      format.html
    end
  end
end

Now when we get requests for spurious formats we’ll get a more relevant ActionController::UnknownFormat error. Our controllers feel somewhat tighter which is a great thing when it comes to making them more robust.

Handling Errors the Rails Way

The problem we have now is that despite our semantically pleasing error, our application is still blowing up in our users’ face. This is where global exception handling comes in. Sometimes our application will produce errors that we want to respond to consistently, no matter where they come from (like our ActionController::UnknownFormat). There are also errors that can get raised by the framework before any of our code comes into play. A perfect example of this is ActionController::RoutingError. When someone requests a URL that doesn’t exist, like /tweets2/yukihiro_matz, there is nowhere for us to hook in to rescue this error using traditional exception handling. This is where Rails’ exceptions_app comes in.

You can configure a Rack app in application.rb to be called when an error that we haven’t handled is produced (like our ActionController::RoutingError or ActionController::UnknownFormat). The way you will normally see this used is to configure your routes app as the exceptions_app, then define the various routes for the errors you want to handle and route them to a special errors controller that you create. So our application.rb would look like this:

...
config.exceptions_app = self.routes
...

Our routes.rb will then contain the following:

...
match '/404' => 'errors#not_found', via: :all
match '/406' => 'errors#not_acceptable', via: :all
match '/500' => 'errors#internal_server_error', via: :all
...

In this case our ActionController::RoutingError would be picked up by the 404 route and the ActionController::UnknownFormat will be picked up by the 406 route. There are many possible errors that can crop up. But as long as you handle the common ones (404, 500, 422 etc.) to start with, you can add others if and when they happen.

Within our errors controller we can now render the relevant templates for each kind of error along with our layout (if it’s not a 500) to maintain the branding. We can also log the errors and send them to our monitoring service, although most monitoring services will hook in to this process automatically so you don’t have to send the errors yourself. Now when our application blows up it does so gently, with the right status code depending on the error and a page where we can give the user some idea regarding what happened and what they can do (contact support) – an infinitely better experience. More importantly, our app will seem (and will actually be) much more solid.
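A minimal errors controller matching the routes above might look like this (a sketch; the template names mirror the actions, and the layout choice is an assumption):

```ruby
class ErrorsController < ApplicationController
  # Skip the app layout for 500s, in case the error came from the layout itself
  layout 'application', except: [:internal_server_error]

  # Each action renders app/views/errors/<action>.html.erb by convention
  def not_found
    render status: :not_found
  end

  def not_acceptable
    render status: :not_acceptable
  end

  def internal_server_error
    render status: :internal_server_error
  end
end
```

Passing the status explicitly matters: without it, Rails would render the error template with a 200, which misleads both browsers and crawlers.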

Multiple Errors of the Same Type in a Controller

In any Rails controller we can define specific errors to be handled globally within that controller (no matter which action they get produced in) via rescue_from. The question is when to use rescue_from. I usually find that a good pattern is to use it for errors that can occur in multiple actions and that we want to handle the same way. If an error will only be produced by one action, handle it via the traditional begin...rescue...end mechanism; but if we’re likely to get the same error in multiple places and want to handle it consistently, it’s a good candidate for rescue_from. Let’s say our TweetsController also has a create action:

class TweetsController < ApplicationController
  respond_to :html

  def show
    ...
    respond_to do |format|
      format.html
    end
  end

  def create
    ...
  end
end

Let’s also say that both of these actions can encounter a TwitterError and if they do we want to tell the user that something is wrong with Twitter. This is where rescue_from can be really handy:

class TweetsController < ApplicationController
  respond_to :html

  rescue_from TwitterError, with: :twitter_error

  private

  def twitter_error
    render :twitter_error
  end
end

Now we don’t need to worry about handling this in our actions, and they will look much cleaner; we can (and should) log our error and/or notify our error monitoring service within the twitter_error method. Used correctly, rescue_from not only helps make your application more robust, but also makes your controller code cleaner. This makes it easier to maintain and test your code, making your application that little bit more resilient yet again.


Using External Services in Your Application

It’s difficult to write a significant application these days without using a number of external services/APIs. In the case of our TweetsController, Twitter came into play via a Ruby gem that wraps the Twitter API. Ideally we would make all our external API calls asynchronously, but we’re not covering asynchronous processing in this article and there are plenty of applications out there that make at least some API/network calls in-process.

Making network calls is an extremely error prone task and good exception handling is a must. You can get authentication errors, configuration problems, and connectivity errors. The library you use can produce any number of code errors, and then there is the matter of slow connections. I am glossing over this point, but it’s oh so crucial, since you can’t deal with slow connections via exception handling. You need to appropriately configure timeouts in your network library, or if you’re using an API wrapper make sure it provides hooks to configure timeouts. There is no worse experience for a user than having to sit there waiting without your application giving any indication of what’s happening. Just about everyone forgets to configure timeouts appropriately (I know I have), so take heed.

If you’re using an external service in multiple places within your application (multiple models for example), you expose large parts of your application to the full landscape of errors that can be produced. This is not a good situation. What we want to do is limit our exposure and one way we can do this is putting all access to our external services behind a facade, rescuing all errors there and re-raising one semantically appropriate error (raise that TwitterError that we talked about if any errors occur when we try to hit the Twitter API). We can then easily use techniques like rescue_from to deal with these errors and we don’t expose large parts of our application to an unknown number of errors from external sources.

An even better idea might be to make your facade an error-free API. Return all successful responses as-is and return nils or null objects when you rescue any sort of error (we do still need to log/notify ourselves of the errors via some of the methods we discussed above). This way we don’t need to mix different types of control flow (exception control flow vs if…else), which can give us significantly cleaner code. For example, let’s wrap our Twitter API access in a TwitterClient object:

class TwitterClient
  attr_reader :client

  def initialize
    @client = Twitter::REST::Client.new do |config|
      config.consumer_key        = configatron.twitter.consumer_key
      config.consumer_secret     = configatron.twitter.consumer_secret
      config.access_token        = configatron.twitter.access_token
      config.access_token_secret = configatron.twitter.access_token_secret
    end
  end

  def latest_tweets(handle)
    client.user_timeline(handle).map{|tweet| tweet.text}
  rescue => e
    Rails.logger.error { "#{e.message} #{e.backtrace.join("\n")}" }
    nil
  end
end

We can now call TwitterClient.new.latest_tweets('yukihiro_matz') anywhere in our code, and we know that it will never produce an error; or rather, it will never propagate the error beyond TwitterClient. We’ve isolated an external system to make sure that glitches in that system won’t bring down our main application.


But What if I Have Excellent Test Coverage?

If you do have well-tested code, I commend you on your diligence, it will take you a long way towards having a more robust application. But a good test suite can often provide a false sense of security. Good tests can help you refactor with confidence and protect you against regression. But, you can only write tests for things you expect to happen. Bugs are, by their very nature, unexpected. To use our tweets example, until we choose to write a test for our fetch_tweets method where client.user_timeline(handle) raises an error thereby forcing us to wrap a rescue block around the code, all our tests will have been green and our code would have remained failure-prone.

Writing tests doesn’t absolve us of the responsibility of casting a critical eye over our code to figure out how it can potentially break. On the other hand, doing this kind of evaluation can definitely help us write better, more complete test suites.


Conclusion

Resilient systems don’t spring forth fully formed from a weekend hack session. Making an application robust is an ongoing process. You discover bugs, fix them, and write tests to make sure they don’t come back. When your application goes down due to an external system failure, you isolate that system to make sure the failure can’t snowball again. Exception handling is your best friend when it comes to doing this. Even the most failure-prone application can be turned into a robust one if you apply good exception handling practices consistently, over time.

Of course, exception handling is not the only tool in your arsenal when it comes to making applications more resilient. In subsequent articles we will talk about asynchronous processing, how and when to apply it and what it can do in terms of making your application fault tolerant. We will also look at some deployment and infrastructure tips that can have a significant impact without breaking the bank in terms of both money and time – stay tuned.

December 30 2013

16:44

Using New Relic Custom Metrics to Monitor All the Things

When you first get introduced to New Relic, it’s easy to be overwhelmed by all the features. But like with most tools, as you slowly climb the learning curve and get familiar with the features provided out of the box, you begin to wonder how it all hangs together under the hood and if there is more you can do with what you have at hand.

Today I will look at how New Relic actually monitors transactions and how you can hook into the process. We will take a quick glance at the support provided by New Relic for monitoring background jobs and we’ll also dig into New Relic custom metrics, how you would use them and the benefits you get from doing so. By the end you will have a deeper understanding of how New Relic works and be able to utilize it more fully. In other words, we will have a deeper knowledge of our tools—something every developer should strive for.

Let’s begin by taking a quick look at how New Relic actually hooks in to track the performance of your application.


How New Relic Tracks Transactions and How You Can Do the Same

It may seem a little like magic, you include a gem in your Gemfile:

gem 'newrelic_rpm'

And somehow New Relic monitors all your code. Of course it is just code, so let’s see how New Relic actually instruments your app so it can begin monitoring it when the agent gem is required. We will do this in the context of a Rails 4 app.

The first place to look is newrelic_rpm.rb, which has the following relevant code in it:

...
if Rails::VERSION::MAJOR.to_i >= 3
  module NewRelic
    class Railtie < Rails::Railtie

      initializer "newrelic_rpm.start_plugin" do |app|
        NewRelic::Control.instance.init_plugin(:config => app.config)
      end
    end
  end
else
...

So a Railtie gets created when the Rails version is 3 or above; this grabs the NewRelic::Control singleton instance (initializing it in the process) and calls init_plugin. When the NewRelic::Control instance is created, it figures out which framework is running (Rails 4 in our case) and loads some relevant code; we can see this in new_relic/control/class_methods#load_framework_class. The init_plugin method that gets executed lives in new_relic/control/instance_methods. The interesting code here is:

...
if Agent.config[:agent_enabled] && !NewRelic::Agent.instance.started?
  start_agent
  install_instrumentation
  load_samplers unless Agent.config[:disable_samplers]
...

The install_instrumentation call is the important one. The implementation lives in new_relic/control/instrumentation. Skipping over the boilerplate bits, this works out which instrumentation files it needs to load and then requires them one by one. In our case, it will load files under new_relic/agent/instrumentation/rails4. One of the files here is action_controller.rb; when this gets required, it eventually executes the following code via some metaprogramming magic:

executes do
  class ActionController::Base
    include NewRelic::Agent::Instrumentation::ControllerInstrumentation
    include NewRelic::Agent::Instrumentation::Rails4::ActionController
  end
  NewRelic::Agent::Instrumentation::ActionControllerSubscriber \
    .subscribe(/^process_action.action_controller$/)
end

And here we get to the crux of it: ActionController::Base (from which all your controllers inherit) gets a couple of modules included in it, the most important being NewRelic::Agent::Instrumentation::ControllerInstrumentation. This is how New Relic begins to monitor all your controller actions as ‘transactions’. Of course, this is a somewhat simplified view and we’re glossing over a lot of detail, but it gives you an idea of how New Relic monitors your code. The question is, how can you use this information?

Monitoring Custom Frameworks & Background Jobs

You’re highly unlikely to find yourself in a situation where you’re using a web framework that New Relic does not already have instrumentation for (in the Ruby world), but let’s say you did. Knowing what we know now, we can easily, manually instrument the controller actions of this custom framework. If we have a controller like this:

class CustomController
  def custom_action
    ...
  end
end

We can instrument it like so:

class CustomController
  include NewRelic::Agent::Instrumentation::ControllerInstrumentation

  def custom_action
    ...
  end
  add_transaction_tracer :custom_action
end

Now your controller method will be tracked like a transaction in the same way Rails actions get tracked. Of course, if you’ve rolled your own web framework which includes database handling code, you will need to do extra work to instrument much of that code, to allow New Relic to monitor more than just controller actions. But the general idea remains sound.

The above pattern becomes more useful when you want New Relic to track background jobs in your application. You’re a lot more likely to have rolled some custom background job handling code than you are to have written your own web framework. In fact, we did just this with Tuts+ initially, although we’re now migrating to Sidekiq. If you’re using one of the well-known background jobs systems like Sidekiq, Resque or Delayed Job, New Relic already has instrumentation built in, but if you did roll your own, the above pattern is all you need to monitor your tasks.

For example, our custom Tuts+ background jobs were regular ruby classes that responded to the execute method, so all we have to do is this:

class SomeBackgroundJob
  include NewRelic::Agent::Instrumentation::ControllerInstrumentation

  def execute
    ...
  end
  add_transaction_tracer :execute, category: :task
end

The last bit, category: :task, is to ensure that New Relic does not track it as a web transaction, but instead treats it as a background task and makes it appear under the background tasks tab in the New Relic UI. If we create a base class for all our jobs, we can put the instrumentation there and child classes will inherit it, so we don’t have to worry about doing the above in every job class.
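One subtlety: add_transaction_tracer works by wrapping the named method, so a child class that redefines execute would clobber the instrumented version. A template-method base class avoids this (the class names here are hypothetical):

```ruby
class BaseJob
  include NewRelic::Agent::Instrumentation::ControllerInstrumentation

  # The traced method stays in the base class...
  def execute
    work
  end
  add_transaction_tracer :execute, category: :task

  private

  # ...and each child overrides this hook instead
  def work
    raise NotImplementedError
  end
end

class CleanupJob < BaseJob
  private

  def work
    # ... actual job work ...
  end
end
```

Every job now goes through the single instrumented execute, and new job classes get tracking with no extra work.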

Customizing a Transaction Even More

Interestingly enough, even the web transactions that New Relic automatically monitors are not sacrosanct. You can, for example, add some custom parameters to be sent to New Relic for the transaction that is currently executing (if you’ve switched on capturing parameters).

[Screenshot: enabling request parameter capture in New Relic]

You can do this at any point during the transaction. All you have to do is call ::NewRelic::Agent.add_custom_parameters(:key => 'value') at any time and the parameters you pass in will get added to the parameter data you see inside New Relic. For example, if we had a controller that looked like this:

class HelloController < ApplicationController
  def index
    ::NewRelic::Agent.add_custom_parameters(:hello => 'world')
  end
end

Slow transactions would give us the following:

[Screenshot: the custom parameter displayed in a slow transaction trace]

This isn’t all we can do. We can segment a transaction while it’s running by renaming it. Let’s say we want to treat a transaction as special when it is performed by a particular user. You may do something like this:

class HelloController < ApplicationController
  def index
    new_relic_name = NewRelic::Agent.get_transaction_name
    if current_user.name == 'Joe Customer'
      NewRelic::Agent.set_transaction_name("#{new_relic_name} - Joe Customer")
    end
  end
end

Now this transaction will be treated as a separate transaction in the New Relic UI:

[Screenshot: the renamed transaction listed separately in the New Relic UI]

Even default New Relic instrumentation has some room for customization, but sometimes, just like Captain Kirk, you just need more power. This is where custom metrics come in.


New Relic Custom Metrics & How They Are Useful

Back in the day, you would have used custom metrics to monitor things like external service communication and use of various common tools like Redis. These days, New Relic has better ways of monitoring that stuff, so what do we need custom metrics for? I’ve found custom metrics to be useful in four situations:

  • monitoring code that New Relic can’t see
  • monitoring code you don’t control
  • monitoring scripts
  • monitoring totally custom events

Let’s take a quick look at each of them.

Monitoring Code New Relic Can’t See

New Relic is pretty good at breaking down the performance of your various application methods in a transaction trace, but sometimes you will see something like this in a trace:

[Screenshot: a transaction trace with a chunk of uninstrumented application code]

It seems there is application code that New Relic couldn’t instrument for some reason. What we can do is help New Relic (and ourselves) out with some custom metrics. We need to figure out which method New Relic had trouble monitoring and hook in some custom metrics to track how long this method took to execute. This will then appear in all subsequent traces. Let’s say we have a class with a method we want to monitor via custom metrics:

class Order
  def amount
    ...
  end
end

We can begin to track the amount method like so:

require 'new_relic/agent/method_tracer'

class Order
  include ::NewRelic::Agent::MethodTracer

  def amount
    ...
  end
  add_method_tracer :amount, 'Custom/amount'
end

The second parameter to add_method_tracer is the name that this custom metric will get in the New Relic UI. Metric names are slash separated strings and all custom metrics should start with ‘Custom/’. You could, for example, name your custom metric as ‘Custom/<class_name>/<method_name>’. At this point you will begin seeing the amount method in your transaction traces, in the New Relic UI. But, what if our amount method is highly complex and we want to monitor parts of it which we suspect are slow? My advice is that you should refactor your method—it’s too big, but if you can’t do that, you can instrument random code like so:

class Order
  extend ::NewRelic::Agent::MethodTracer

  def amount
    ...
    self.class.trace_execution_scoped(['Custom/amount/complex_code']) do
      ...complex code...
    end
    ...
  end
end

Now the instrumented part of the method will be reported separately in your transaction traces. If you’ve previously instrumented the method itself, your new ‘inner’ metric will be grouped under the previous one.

This is by far the most common way you’ll end up using custom metrics in your code, but let’s look at the others anyway.

Monitoring Code You Don’t Control

Often you’re using a library which, you suspect, is slowing down your application. New Relic won’t instrument any random gem for you by default, so what can you do? You can fork the gem and add some instrumentation using the method we saw above, but an even simpler solution exists – use initializers. Let’s say you’re using the foobar library which has a class Foo with a method bar that you suspect has some slow code in it. All you need to do is create an initializer foobar_instrumentation.rb, and put the following in it:

require 'new_relic/agent/method_tracer'

Foo.class_eval do
  include ::NewRelic::Agent::MethodTracer

  add_method_tracer :bar
end

As you can see, the code is very similar to what we had above. New Relic will work out a sensible name for your new custom metric based on the class and method name, and you will begin to see it in your transaction traces. Use this to work out if that suspicious library is actually making your code perform badly, but don’t keep this instrumentation around permanently. It clutters your Rails app with unnecessary initializers and pollutes your New Relic UI with custom metrics that you don’t really need to track permanently.

Monitoring Scripts

One part of web applications that is often neglected is scripts. To paraphrase a presentation I did recently – it’s still production code and should be treated as such. You don’t want production code performing badly, especially if you execute it on an ongoing basis via cron jobs (or some similar mechanism outside your background job system), so we can use New Relic to figure out if your scripts are slow.

You may instrument your script code using custom metrics as described above. They won’t show up in transaction traces, as they won’t be part of a transaction. What you will be able to do, though, is create a custom dashboard from the metrics you collect, which should give you an idea of whether your script is performing badly.
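For example, a standalone script can pick up a custom metric via the same method tracer we used earlier; here is a minimal sketch, where the ReportGenerator class and the metric name are made up for illustration (the instrumentation is applied only if the newrelic_rpm gem is available, so the script still runs without it):

```ruby
# Hypothetical script class; the real work would go inside #generate.
class ReportGenerator
  def generate
    # ... expensive work we want timed under a custom metric ...
    :ok
  end
end

begin
  require 'new_relic/agent/method_tracer'
  ReportGenerator.class_eval do
    include ::NewRelic::Agent::MethodTracer
    # Time every call to #generate under an explicit custom metric name
    add_method_tracer :generate, 'Custom/ReportGenerator/generate'
  end
rescue LoadError
  # newrelic_rpm not installed – the script runs uninstrumented
end

ReportGenerator.new.generate
```

The explicit 'Custom/...' metric name is what you would then chart on a custom dashboard.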

The other thing you can do is treat your script as a type of background job and instrument it accordingly (include NewRelic::Agent::Instrumentation::ControllerInstrumentation etc.). It will get lumped together with the other background jobs in the UI, but you won’t have to worry about custom dashboards.
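A sketch of that second approach, with a made-up NightlyCleanup class standing in for your script (again guarded so it runs with or without the gem installed):

```ruby
# Hypothetical one-off script class; the real work goes in #perform.
class NightlyCleanup
  def perform
    # ... the actual cleanup work ...
    :done
  end
end

begin
  require 'newrelic_rpm'
  NightlyCleanup.class_eval do
    include ::NewRelic::Agent::Instrumentation::ControllerInstrumentation
    # Report each run as a background task transaction
    add_transaction_tracer :perform, category: :task
  end
rescue LoadError
  # newrelic_rpm not installed – run uninstrumented
end

NightlyCleanup.new.perform
```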

The only caveat with scripts is this: New Relic only sends data over the wire periodically, so with a one-off script that executes quickly you need to ensure that the collected data actually gets sent, which may mean shutting down the New Relic agent manually. A good rule of thumb is to manually start the agent at the beginning of each script and shut it down at the end:

require 'newrelic_rpm'

::NewRelic::Agent.manual_start

... codez ...

::NewRelic::Agent.shutdown

This way you won’t ever need to wonder why your data is not appearing in the UI.

Monitoring Totally Custom Events

One of the interesting things about New Relic is that it lets you take advantage of its UI and data aggregation facilities for metrics that have nothing to do with performance (in theory). For example, you may want some visibility on how often users sign up to your application, the frequency of sales, the total amount users pay when they make purchases etc. These are more business metrics than performance ones, but if it’s too much trouble to track them separately, you can use New Relic to do so.

New Relic lets you record custom metrics directly via two API calls:

  • record_metric
  • increment_metric

You can use record_metric to track any metric that has an amount, and increment_metric is pretty self-explanatory. So we may, for example, do this:

...
def purchase(amount)
...
  ::NewRelic::Agent.record_metric('Custom/purchase_amount', amount)
  ::NewRelic::Agent.increment_metric('Custom/purchase_count')
...
end
...

The only way for you to see these metrics in the UI is to create some custom dashboards. I have to mention that this would be a somewhat “creative” use of the New Relic API, as it’s designed with performance data in mind, but it is certainly a handy thing to know when you need to throw a quick dashboard together and don’t want to set up a bunch of extra infrastructure.


The Dangers of Monitoring Too Much

Of course, all this power has a cost. If you collect too many custom metrics, it can begin to slow down your application. It can also slow down the New Relic UI and make it difficult to interpret the data as New Relic will collapse similar metrics into a summary one. New Relic recommends keeping the number of custom metrics you collect below 2000. I have found that custom metrics are best used periodically. Instrument the code you need, use the instrumentation to solve the problem you’re having and then remove the instrumentation. This way you get to solve your performance issues and the number of custom metrics you use is unlikely to grow too high.


Conclusion

We’ve dug into the internals of the newrelic_rpm gem and saw how to tell New Relic about code that you consider a web transaction. We saw how to alter transactions on the fly, how to monitor background jobs and looked at the various situations where it makes sense to use custom metrics. There is much you can do with New Relic beyond the functionality it provides out of the box and we’re now much more able to utilize it to its full potential. But, there is always more to learn, like how to build custom dashboards out of the metrics that we capture or how to monitor infrastructure with plugins. We’ll be covering those topics and more in subsequent articles, so make sure to check back often. And as always if you have a question, want to share your own New Relic story or just want to say hi, don’t forget to leave a comment.


October 22 2013

06:30

Bento: One-Stop Shop for Online-Courses on Web Development

It was merely a few weeks ago that we brought you a comprehensive overview of 22 of the online learning resources the planet has to offer. Just today we stumbled upon another such effort for web designers, developers and programmers. An offering by the name of Bento curates the best learning resources in its field. The collection is community-based and as such more helpful than any Google search result could ever be ;-)

September 27 2013

15:05

5 Reasons Why New Relic Is a Developer’s Best Friend

Once you start digging around New Relic you begin to realise just how many interesting features the service has to help monitor the performance and health of your application. It was truly difficult to pick just five things to talk about, so rather than focusing on the obvious features let’s look at some of the less hyped functionality that New Relic provides and how we can use it in interesting and sometimes unorthodox ways.

When we left you last time, we had a basic ‘Hello World’ Rails application (called newrelic_rails1, living in ~/projects/tmp/newrelic). We will continue using this app, extending it and seeing if we can use it to demonstrate the features of New Relic that we’ll be looking at.


Availability Monitoring

This is one New Relic feature that usually doesn’t make the front page of the marketing material. There is not a lot to it, but if you think about it, what’s more important than making sure your app is actually up and running and accessible to your users?

Firstly, when you set up availability monitoring, your application gets a nice asterisk on your main applications dashboard:

newrelic_availability_asterisk

It’s a nice visual reminder, so you can see which apps still need availability monitoring switched on.

Let’s now look at how we can set up availability monitoring and what we can get out of it. Firstly, you need to jump into your application and then go into Settings->Availability Monitoring. You will see something like this:

newrelic_availability_monitoring

You need to provide a URL you want New Relic to ping, tick the box, save your changes and you’re good to go. New Relic will begin hitting your URL every 30 seconds. But the fun doesn’t stop there. New Relic will ping your URL via an HTTP HEAD request (and deem everything OK if it receives a 200 response code), but you can supply a response string that you want New Relic to look for in which case it will perform a GET request and examine the response for the string that you provided. This can be very handy if you have a custom ‘Health Check’ page that you want to hit.
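Such a custom ‘Health Check’ page can be as simple as a tiny Rack endpoint that returns a known marker string for New Relic to match; here's a minimal sketch (the HealthCheck class name and the HEALTH_OK marker string are our own inventions):

```ruby
# Bare Rack app: responds 200 with a marker string when dependencies look
# healthy, 503 otherwise. Mount it in your app and point New Relic at it,
# supplying 'HEALTH_OK' as the response string to look for.
class HealthCheck
  def call(env)
    if dependencies_ok?
      [200, { 'Content-Type' => 'text/plain' }, ['HEALTH_OK']]
    else
      [503, { 'Content-Type' => 'text/plain' }, ['HEALTH_FAIL']]
    end
  end

  private

  def dependencies_ok?
    # In a real app: check the database, caches, external services etc.
    true
  end
end
```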

You can also set up email notification if downtime occurs:

newrelic_availability_notifications

Now that you’re monitoring availability, you will have access to a nice report which will visually show you when any downtime has occurred:

newrelic_availability_report

In fact, many of your charts (e.g. the application overview) will have this visual indication:

newrelic_availability_overview

You have to admit that’s some pretty nice functionality for so little effort.

You can, of course, disable and re-enable monitoring (via the New Relic REST API) when you’re performing deploys, to make sure you don’t get spurious downtime events.

Another interesting side-effect of this is that if you’re deploying your pet project to Heroku on a single dyno, you can use this ping functionality to prevent your dyno from sleeping, which can make your site annoyingly slow if you don’t have a lot of traffic.


Custom Error Recording

If unexpected errors occur in your application, New Relic will record these for you and give you a nice graph. Our little ‘Hello World’ app has performed admirably for the moment, so there is nothing for us to see on that front. But, we can purposely break our app and see what New Relic gives us.

Let’s modify our HelloController to raise an error randomly approximately 50% of the time:

class HelloController < ApplicationController
  def index
    if rand(2) == 0
      raise 'Random error'
    end
  end
end

We will now make a few hundred calls to our app and see what happens:

ab -n 300 -c 10 http://127.0.0.1:3000/

Our New Relic error graph now looks much more interesting:

newrelic_error_overview

And we can drill down to get some specifics:

newrelic_error_detailed

As you can see we can sort our errors and filter them as well as look at errors from web requests and background tasks separately. This is some incredibly powerful stuff to help you diagnose and fix problems with your application. You can of course also see the stack trace for each error:

newrelic_error_trace

There are services specifically dedicated to capturing errors from your application; some of the most well known ones are Airbrake and Bugsnag. These are paid services used by many applications, but the functionality that New Relic provides just about makes these services redundant. In fact, if we could send custom errors to New Relic (rather than letting it capture errors that we hadn’t rescued) we could make a compelling case for not using a separate error collection service (and save some money and get rid of an extra gem in the process).

While New Relic doesn’t document any way of doing this, we can always go to the source to see if what we want to do is hard. It looks to me like it should be pretty trivial for us to send custom errors to New Relic, so let’s give it a try. We’ll modify our controller action again to rescue all errors and send a custom error to New Relic:

class HelloController < ApplicationController
  def index
    if rand(2) == 0
      raise 'Random error'
    end
  rescue
    NewRelic::Agent.notice_error(StandardError.new("I caught and reraised an error"))
  end
end

After we make a few more calls and wait for the data to come through we see the following:

newrelic_error_custom

It worked – our custom error is coming through! New Relic can definitely act as our error collection service. We are of course using a private interface here, which is not very nice, but we can put the notice_error call behind a facade, which will make things a bit easier for us if the interface changes.
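Such a facade could be as small as this; a sketch under our own naming (ErrorReporter is not a New Relic API), with a guard so it degrades to a no-op when the agent isn’t loaded:

```ruby
# Thin wrapper so the undocumented notice_error call lives in one place;
# if New Relic changes the interface, only this module needs updating.
module ErrorReporter
  def self.notify(error, options = {})
    return unless defined?(::NewRelic::Agent)
    ::NewRelic::Agent.notice_error(error, options)
  end
end

# Usage (e.g. in a controller rescue block):
ErrorReporter.notify(StandardError.new('I caught and reraised an error'))
```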

An even better approach might be to not treat custom errors like regular errors at all, but instead create a custom metric to track and then build a custom dashboard to visualise it. This way we’re not using any undocumented functionality and would still get all the benefits – brilliant!


Key Transaction Tracking

New Relic will normally track your transactions for you:

newrelic_transaction_tracking

You will be able to see where your application is spending most of its time (e.g. in the controller, model, database etc.). However, New Relic will not capture a detailed trace unless the transaction takes longer than Apdex T * 4 seconds. Normally this is OK, but sometimes you have transactions that are much more important to your application or to your business. Perhaps these transactions are extremely high volume or deal with important events like payments. Suffice to say, you need to make sure this type of transaction always performs extremely well.
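As an aside, that global trace threshold can be tuned via the transaction_tracer settings in newrelic.yml; a sketch, where the 0.5 second value is an arbitrary example of our own:

```yaml
# Hypothetical override: capture a detailed trace for any transaction
# slower than 0.5 seconds, instead of the 'apdex_f' default (4 x Apdex T).
production:
  <<: *default_settings
  transaction_tracer:
    enabled: true
    transaction_threshold: 0.5
```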

The thing is though, when a transaction is this important it has probably received quite a lot of love from you already and may be performing fairly well. Let’s say you have a transaction with an extremely high throughput (it occurs many times per minute). If this transaction is performing optimally everything is fine, but if its performance were to degrade slightly, the sheer volume of traffic could have a disproportionately detrimental effect on your application. What you want is something like:

  • a separate Apdex T value just for this transaction
  • the ability to receive alerts when the performance of this transaction degrades
  • a detailed trace every time this transaction performs even slightly non-optimally

This is exactly what Key Transactions give you!

Before we set up a key transaction for our ‘Hello World’ app, we need to create a more interesting transaction which will usually perform well, but will sometimes perform somewhat badly. We will build the ability to look at car makes and models and get a particular car make to slow the transaction down. Firstly the route:

NewrelicRails1::Application.routes.draw do
  get 'random_car', to: 'cars#show_random'

  root 'hello#index'
end

We want to be able to get a random car; this will map to the CarsController:

class CarsController < ApplicationController
  def show_random
    @car = Car.offset(rand(Car.count)).first
    if @car.make == 'Ford'
      sleep(2)
    end
  end
end

We get a random car from the database and if the car make is ‘Ford’ we will have a slow transaction on our hands. Of course we need a Car model:

class Car < ActiveRecord::Base
end

We’ll need to configure our database to use MySQL in development (I did this, but you can stick with SQLite):

base: &BASE
  adapter: mysql2
  encoding: utf8
  host: "localhost"
  username: "root"
  max_connections: 10
  timeout: 5000

development: &DEV
  <<: *BASE
  database: "newrelic_rails1_development"
  sql_log_level: debug

We need a migration to create a cars table:

class Cars < ActiveRecord::Migration
  def change
    create_table :cars, force: true do |t|
      t.string :make
      t.string :model
    end
  end
end

And we need some seed data which we’ll put in our db/seeds.rb file:

Car.create(make: 'Ford', model: 'Mondeo')
Car.create(make: 'Honda', model: 'Accord')
Car.create(make: 'Audi', model: 'A4')
Car.create(make: 'Lamborghini', model: 'Murcielago')
Car.create(make: 'Toyota', model: 'Prius')

Lastly we should probably have a view cars/show_random.html.erb:

<h1>Make: <%= @car.make %></h1>
<h2>Model: <%= @car.model %></h2>

You’ll also need to add the mysql2 gem to the Gemfile if you’ve gone with MySQL. After this, we just need to create and populate the database, restart our server and we’re good to go:

bundle
rake db:create && rake db:migrate && rake db:seed
rails s

You’ll need to hit the URL to make sure New Relic recognises that this transaction exists:

curl localhost:3000/random_car

We’re now ready to monitor this transaction as a key transaction. Firstly, jump into the transaction tab:

newrelic_transaction_tab

Click the ‘Track a Key Transaction’ button and pick our newly created transaction:

newrelic_transaction_create

We can give our new key transaction a name, pick the Apdex T that we’re happy with as well as set up some alerts. When our transaction takes longer than the Apdex that we’ve chosen, New Relic will capture a detailed trace which we’ll be able to use to figure out where the performance issue is coming from. Let’s make a few calls against our new URL and see what data we get:

ab -n 300 -c 20 http://127.0.0.1:3000/random_car

Hmm, it seems some of our transactions are frustrating our users:

newrelic_transaction_frustrating

Let’s see if New Relic has captured some transaction traces for us:

newrelic_transaction_slow_traces

Let’s look at one of these traces. It took around 2 seconds to respond, but only 10 milliseconds of that was spent on the CPU:

newrelic_transaction_cpu_burn

All our SQL statements were fast, so the database is not the issue:

newrelic_transaction_sql_trace

It looks like most of the time is spent in the controller action:

newrelic_transaction_controller_action

Let’s dig into the trace a little bit. It looks like the SQL SELECT was fast, a Car.find was also fast. Then we lose about 2 seconds which is followed by some very fast template rendering:

newrelic_transaction_trace_detail

New Relic has kindly highlighted for us where we lost those two seconds. We need to look at our controller code after a Car.find call:

class CarsController < ApplicationController
  def show_random
    @car = Car.offset(rand(Car.count)).first
    if @car.make == 'Ford'
      sleep(2)
    end
  end
end

Hmm, the initial SELECT must be the Car.count call, and the Car.find must be due to the Car.offset call. Our big delay is right after this though. Ahh, look at this: some silly person has put a 2 second delay in our code when the make of the car is ‘Ford’. That would explain why our 2 second delay happens only some of the time. I better do a git blame on our repository to find out who put that horrible code in there! On second thoughts, I better not, cause it might say that it was me.


External Service Call Recording

Whenever you make calls to other services from within your app (e.g. an HTTP request to an API like Twitter), New Relic will monitor these as external calls. These days a serious application may integrate with a number of external APIs, and these external services can often significantly degrade the performance of your app, especially if you make the calls in-process. New Relic can show you which of your external calls are slowest, which ones you make the most, and which have the slowest average response time. You can also look at the performance of each external service you use individually. Let’s give it a try.

We’ll create an external service of our very own, by building a small Sinatra app. Firstly we install the gem:

gem install sinatra

Create a new file for our service:

touch external_service.rb

And put the following code in there:

require 'sinatra'

get '/hello' do
  sleep_time = rand(2000)/1000.0
  sleep(sleep_time)
  "Hello External World #{sleep_time}!"
end

This service will sleep for a random time (between 0 and 2000 milliseconds) and then return a ‘Hello’ response with the time it slept for. Now all we have to do is start it:

ruby external_service.rb

Back in our Rails app we’ll build a new controller to call our external service. We’ll use this route:

NewrelicRails1::Application.routes.draw do
...
  get 'external_call', to: 'external_calls#external_call'
...
end

Our controller will call our Sinatra service via HTTP:

require 'net/http'

class ExternalCallsController < ApplicationController
  def external_call
    url = URI.parse('http://localhost:4567/hello')
    external_request = Net::HTTP::Get.new(url.to_s)
    external_response = Net::HTTP.start(url.host, url.port) do |http|
      http.request(external_request)
    end
    @result = external_response.body
  end
end

And we need a view to display the results:

<h1><%= @result %></h1>

All we have to do now is make a few calls to our new endpoint:

ab -n 100 -c 10 http://127.0.0.1:3000/external_call

Let’s see what New Relic has produced for us.

newrelic_transaction_external_service

New Relic has indeed picked up our new external call. We’ve got the total calls per minute we’re making to the external endpoint, and the total time the external service spent responding. Of course our chart looks a little sparse, since we only have one external service, which means we don’t have anything to compare against.

We can also get more detailed data about the specific external call as well as where in our app this call is being made from:

newrelic_transaction_external_call

We can see when the calls were made, the throughput and the average response time. This may seem simple, but when you have an app with a lot of external services this feature can give you a very nice overview of how these external services are performing, as well as when and where they are being used. This can allow you to make decisions regarding caching certain external service responses if possible, or even dropping particular external services if their performance is not up to scratch. And you no longer have to argue these things based on gut-feel and home-baked metrics, you’ll have hard data to prove your point for you.
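If the data does point at a slow external service, one of the options mentioned above is caching its responses. A minimal sketch of the idea using a hand-rolled TTL cache (ResponseCache is our own invention; in a Rails app you would more likely reach for Rails.cache):

```ruby
# Tiny in-memory TTL cache: the block is only called when the key is
# missing or expired; otherwise the cached value is returned.
class ResponseCache
  def initialize(ttl: 60)
    @ttl = ttl
    @store = {}  # { key => [expires_at, value] }
  end

  def fetch(key)
    expires_at, value = @store[key]
    return value if value && Time.now < expires_at
    value = yield
    @store[key] = [Time.now + @ttl, value]
    value
  end
end

# Usage: wrap the external HTTP call so repeat requests within the TTL
# are served from memory instead of hitting the slow service.
cache = ResponseCache.new(ttl: 60)
result = cache.fetch('hello') { "Hello External World!" }
```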

Scalability and Capacity Analysis

There is nothing more frustrating for a developer than having your application fall over due to a traffic spike. Everything was running smoothly until those extra few hundred users came along and your application exploded. You had a feeling this might happen, but couldn’t be sure – the wait and see attitude seemed to be the most pragmatic approach. Well, with New Relic’s capacity and scalability reports, you no longer have to ‘wait and see’. You can tell straight away how well your app is scaling, and you can do load tests and instantly see whether your application can handle the load. You can observe your application’s response time trends as your user base grows and predict when you’ll need to add capacity. All of these are truly wonderful things.

First, let’s look at the capacity reports:

newrelic_capacity_instance_busy

Hmm, this one shows a big spike, but otherwise nothing. Well we’re running in development mode, so this is understandable. That spike is for when we did a bunch of requests concurrently just a little while ago. As you can see when we did those concurrent requests, we maxed out our poor lonely Webrick instance. If this was production and that load was constant, our instance would always be 100% busy, which would probably indicate that we need another instance.

The instance analysis report is slightly different:

newrelic_capacity_instance_analysis

In our case we don’t get much out of it, but it normally shows us the number of instances that are running, and the number of instances we would actually need to handle the load if all instances were 100% busy. So if we were running 10 instances and the concurrent instance load was 2, we could easily halve (or even more than halve) the number of running instances and not degrade performance at all. For a small app that runs only a few instances this is no big deal, but for a large application with dozens or hundreds of instances, this can translate to significant cost savings.

And then there are the scalability reports. The response time report is probably the most interesting/important one:

newrelic_scalability_response

Once again, our graph is very distorted because it’s a development app that we’ve been playing around with randomly. The idea with this report is that as the throughput for your application increases (more requests per minute), the response time should remain close to constant (i.e. performance does not degrade when there is more traffic). This means you should always be seeing something resembling a flat line here. If your line is sloping upwards significantly, your app is probably struggling to handle the traffic and you may need to look at adding more capacity. Where to add capacity is another question entirely (e.g. database capacity, more servers etc.), but the other two scalability reports can help you answer it. First, there is the database report:

newrelic_scalability_database

You can’t expect your database to be unaffected by higher load, so what you should be seeing here is a line that slowly goes up as the throughput of your application increases. It is up to you to decide when the database response time becomes unacceptable (i.e. is affecting the response time of the application too much), but when you do decide that the database responses are too slow, you know it is time to add database capacity. The other report is the CPU:

newrelic_scalability_cpu

Once again, you can’t really expect higher throughput not to affect your CPU load, so you should be seeing a line that slowly goes up with increased throughput. This, together with the capacity reports we talked about earlier, can help you decide when to add more Rails processes/servers to ensure your performance remains decent.


Conclusion

If one or all of these features have raised an eyebrow (or two) for you, the good news is that we’ve only just scratched the surface. Each of these features more than deserves an in-depth article of its own. But New Relic also has a number of other features that are potentially even more powerful, including Real User Monitoring, the New Relic Platform, the Thread Profiler, Alert Thresholds and Notifications, and many others. We will try to cover some or maybe even all of these in later tutorials.

For now, try New Relic out, deploy an agent in your favourite language and see if you can discover an out-of-the-box way of using some of the functionality that New Relic provides. And if you do have some innovative ways to use New Relic, be sure to let everyone know by leaving a comment.

September 25 2013

14:00

Getting Started With NewRelic in 30 Minutes

I remember working on a Rails app a few years ago when someone floated the idea of using this new service that had appeared on the scene. It was called NewRelic and they were promising to give you more insight into the performance of your Rails app than you could ever get before. We gave it a try and it was impressive; more importantly, it was something the Ruby web development ecosystem truly needed.

Fast forward to now and you’d be hard-pressed to find a Ruby web application that doesn’t have NewRelic hooked in. NewRelic as a company has continued to provide tools to monitor your Ruby apps, but they’ve also branched out into a number of other languages such as Java, Python and even .Net. But of course, as the number of features grows, so does the complexity and the amount of documentation, and it becomes hard to figure out where to start, especially if you’re not yet an expert.

Today I thought we could go back to the roots of NewRelic and look at how we can get started with the service to monitor a Rails application.


A Basic Rails App

In order to use NewRelic we need something to monitor, so let’s set up a basic ‘Hello World’ Rails app.

The app we create will live under ~/projects/tmp/newrelic, and will be called newrelic_rails1. I assume you already have Rails installed:

cd ~/projects/tmp/newrelic
rails new newrelic_rails1
cd newrelic_rails1

There isn’t much for us to do to create our ‘Hello World’ app. We need a new controller:

rails g controller hello

Now we just need a route; we will point the root route of the application at our controller. We also need a view with the words ‘Hello World’. Given all this, our config/routes.rb should look like this:

NewrelicRails1::Application.routes.draw do
  root 'hello#index'
end

Our controller (app/controller/hello_controller.rb), will be as follows:

class HelloController < ApplicationController
  def index
  end
end

And our view (app/views/hello/index.html.erb), will be similar to:

<h1>Hello World!</h1>

We can now start up our development server:

rails s

When we curl localhost:3000, we get:

<!DOCTYPE html>
<html>
...
<body>
<h1>Hello World!</h1>
</body>
</html>

Everything is working!


Hooking in NewRelic

With Ruby it’s very simple. We add a gem to our Gemfile, run a bundle install, drop a config file into the config folder and we have all we need. In fact, NewRelic is pretty good at guiding you through this. All you need to do is log in to your account and if you haven’t deployed a NewRelic agent before, it’s pretty obvious what to do:

newrelic_start

Firstly, we install the NewRelic agent gem by adding it to our Gemfile, as per the instructions:

newrelic_install_gem

Our Gemfile will now look like this:

source 'https://rubygems.org'

gem 'rails', '4.0.0'
gem 'sqlite3'
gem 'sass-rails', '~> 4.0.0'
gem 'uglifier', '>= 1.3.0'
gem 'coffee-rails', '~> 4.0.0'
gem 'jquery-rails'
gem 'turbolinks'
gem 'jbuilder', '~> 1.2'

group :doc do
  gem 'sdoc', require: false
end

gem 'newrelic_rpm'

Whenever we add anything to the Gemfile we need to run:

bundle install

We also need a newrelic.yml, which you can download from NewRelic:

newrelic_config_download

It will come pre-configured with your license key. We need to put this file under config/newrelic.yml.

At this point if we ran our application in staging or production mode, we would already get data in our NewRelic account. So let us do so:

RAILS_ENV=production rails s

This time when we curl localhost:3000, we get:

<!DOCTYPE html>
<html>
<head><script type="text/javascript">var NREUMQ=NREUMQ||[];NREUMQ.push(["mark","firstbyte",new Date().getTime()]);</script>
  <title>NewrelicRails1</title>
  <link data-turbolinks-track="true" href="/stylesheets/application.css" media="all" rel="stylesheet" />
  <script data-turbolinks-track="true" src="/javascripts/application.js"></script>
  <meta content="authenticity_token" name="csrf-param" />
<meta content="i5rBPaG52bzM5Kn0SJwIbq6Qz0dG0KsIlcd8tb9vMV8=" name="csrf-token" />
</head>
<body>

<h1>Hello World!</h1>

<script type="text/javascript">if (typeof NREUMQ !== "undefined") { if (!NREUMQ.f) { NREUMQ.f=function() {
NREUMQ.push(["load",new Date().getTime()]);
var e=document.createElement("script");
e.type="text/javascript";
e.src=(("http:"===document.location.protocol)?"http:":"https:") + "//" +
  "js-agent.newrelic.com/nr-100.js";
document.body.appendChild(e);
if(NREUMQ.a)NREUMQ.a();
};
NREUMQ.a=window.onload;window.onload=NREUMQ.f;
};
NREUMQ.push(["nrfj","beacon-3.newrelic.com","b9119aa82e","2507356","cglYTRENCF4ERBtZB10KWRYKDABXGQ==",0,21,new Date().getTime(),"","","","",""]);}</script>

<p></body>
</html>

There is a bunch of JavaScript that got inserted into our pages so that NewRelic can monitor browser time. This is one way we can tell that our NewRelic integration is working, but it is not the only way; NewRelic also creates a log file:

% cat log/newrelic_agent.log

Logfile created on 2013-09-22 16:23:13 +1000 by logger.rb/36483
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Starting the New Relic agent in "production" environment.
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : To prevent agent startup add a NEWRELIC_ENABLE=false environment variable or modify the "production" section of your newrelic.yml.
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Reading configuration from config/newrelic.yml
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Enabling the Request Sampler.
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Environment: production
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Dispatcher: webrick
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Application: My Application
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Installing ActiveRecord 4 instrumentation
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Installing Net instrumentation
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Installing deferred Rack instrumentation
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Installing Rails 4 Controller instrumentation
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Installing Rails 4 view instrumentation
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Installing Rails4 Error instrumentation
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Finished instrumentation
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Doing deferred dependency-detection before Rack startup
[09/22/13 16:23:16 +1000 skorks-envato (12424)] INFO : Reporting to: https://rpm.newrelic.com/accounts/303380/applications/2507356

We can also check our NewRelic account to make sure a new application has appeared for monitoring:

newrelic_monitored_application

There are however a few things that are not so nice:

  • Our application is named ‘My Application’
  • We accepted all the default configuration values, which may not suit our app
  • We had to launch our server in production mode (which is only possible because it’s a brand new app that doesn’t rely on any external infrastructure)

So let us look at our newrelic.yml file in a little bit more detail to see how we can monitor our app performance exactly the way we want it.


Diving in to NewRelic Configuration

First of all, the NewRelic configuration file is extremely well commented and I encourage you to read the comments for the various configuration parameters to understand what all of them do.

Secondly, NewRelic configuration is environment aware, and configuration for all environments is defined in the single newrelic.yml file, very much like the Rails database.yml file. We define a bunch of common configuration values and then override the relevant ones in the specific environment blocks, e.g.:

common: &default_settings
  license_key: '<your licence key>'
  app_name: My Application
  monitor_mode: true

  ...

development:
  <<: *default_settings
  monitor_mode: false

test:
  <<: *default_settings
  monitor_mode: false

production:
  <<: *default_settings
  monitor_mode: true

staging:
  <<: *default_settings
  monitor_mode: true

We can instantly begin to see how we can fix some of the points that we raised above. If we don’t want to have to launch our app in production mode while we’re tweaking our configuration, all we have to do is enable monitoring in development mode (we will need to remember to switch this off when we’re happy with our configuration as we don’t want development data cluttering up our NewRelic account).

development:
  <<: *default_settings
  monitor_mode: true

We should also override our application name for every environment that we have, to make sure they’re monitored separately and the application name makes sense:

common: &default_settings
  license_key: '<your licence key>'
  app_name: newrelic_rails1
  monitor_mode: true

  ...

development:
  <<: *default_settings
  monitor_mode: true
  app_name: newrelic_rails1 (Development)

test:
  <<: *default_settings
  monitor_mode: false
  app_name: newrelic_rails1 (Test)

production:
  <<: *default_settings
  monitor_mode: true
  app_name: newrelic_rails1 (Production)

staging:
  <<: *default_settings
  monitor_mode: true
  app_name: newrelic_rails1 (Staging)

With just those configuration tweaks, when we start our server in development mode and curl localhost:3000:

newrelic_monitor_development

We’re now monitoring our application in development mode and our app name is what we expect. If your application is saying that it’s not receiving any data, give it a minute, it takes a little while for the data to start coming through.

The next most interesting (and often the most confusing) configuration value is the Apdex T-value. Unlike most of the other configuration parameters, this value does not live in the newrelic.yml file, but is instead found in the settings for the application within NewRelic:

newrelic_apdex

If you want to tweak your Apdex T-value you have to do it here, but what is this parameter and what is the right value to put in it? Well, NewRelic explains it in the following way:

Your application’s Apdex T-value is set to 0.5 seconds. That means requests responding in less than 0.5 seconds are satisfying (s), responding between 0.5 seconds and 2.0 seconds are tolerating (t), and responding in more than 2.0 seconds are frustrating (f).

Essentially, NewRelic uses the Apdex value to gauge the health of your application as far as performance is concerned. If many of the requests that NewRelic monitors take longer than your Apdex value, NewRelic will consider your application to be performing poorly and, if you’ve set up alerts, will notify you of the fact. Basically, you have to figure out how fast you want each server request to be fulfilled by your application: if you’re OK with a backend request taking two seconds, you can set your Apdex value to 2.0, but if you need a response to be returned within 100ms then you should set your Apdex value to 0.1.

If you have a new application, you can set the Apdex value to the performance you desire from your application. If your app is an existing one, you may have some metrics regarding how fast it is (or should be) performing, and you can be guided by those. All requests fulfilled by the server in less than the Apdex T-value will be considered satisfying. All requests fulfilled within Apdex * 4 seconds will be considered tolerating (i.e. users can tolerate it). All responses that take longer than Apdex * 4 will be considered frustrating (and frustrated users don’t tend to stick around). So set your Apdex T-value in such a way that you actually get useful information out of it; the right value depends on your domain and what you want to achieve in terms of performance, and there is no single correct answer.
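As a hypothetical illustration of the bucketing described above (not NewRelic's code), classifying response times against a T-value and computing the resulting Apdex score could look like this; the standard Apdex formula counts tolerating requests at half weight:

```ruby
# Classify a response time (in seconds) against an Apdex T-value.
def apdex_bucket(seconds, t)
  if seconds <= t
    :satisfied
  elsif seconds <= t * 4
    :tolerating
  else
    :frustrated
  end
end

# The standard Apdex score: (satisfied + tolerating / 2) / total samples.
def apdex_score(times, t)
  buckets    = times.map { |s| apdex_bucket(s, t) }
  satisfied  = buckets.count(:satisfied)
  tolerating = buckets.count(:tolerating)
  (satisfied + tolerating / 2.0) / times.size
end

times = [0.3, 0.4, 1.2, 3.0, 0.1] # sample response times in seconds
puts apdex_bucket(1.2, 0.5)       # tolerating (between T and 4T)
puts apdex_score(times, 0.5)      # 0.7
```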

We will set our Apdex T-value to 100ms (0.1), since all we have is a ‘Hello World’ app, and it should be able to return a response very quickly (even in development mode).


Even More NewRelic Configuration

It was always a little odd that most of the configuration lived in the newrelic.yml file while the Apdex T-value lived in the application settings, so NewRelic now allows you to move all the configuration values from the YAML file into NewRelic itself:

newrelic_move_configuration

The advantage of this is that you don’t have to redeploy every time you want to tweak your configuration values, so it is definitely something worth considering. We will stick with the YAML file for now.

So what are some of the other useful NewRelic parameters we should know about?

Well, there is a set of parameters dealing with the NewRelic agent log file:

log_level: info
log_file_path: 'log'
log_file_name: 'newrelic_agent.log'

These have sensible defaults, but if we want the log file to go to a specific place or if we want to see more or less info in the file, we can easily control this. Since we’re just setting up NewRelic we will set the log level to debug, to make sure we don’t miss any important information (when we deploy we may want to set it to warn, or even error).

We now get a wealth of information in the log file, which (if read carefully) can give us a lot of insights into how NewRelic works:

% cat log/newrelic_agent.log

[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Starting the New Relic agent in "development" environment.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : To prevent agent startup add a NEWRELIC_ENABLE=false environment variable or modify the "development" section of your newrelic.yml.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Reading configuration from config/newrelic.yml
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Not in Rake environment so skipping blacklisted_rake_tasks check: uninitialized constant Rake
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Updating config (add) from NewRelic::Agent::Configuration::YamlSource. Results:
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : {...}
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Not in Rake environment so skipping blacklisted_rake_tasks check: uninitialized constant Rake
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Updating config (add) from NewRelic::Agent::Configuration::ManualSource. Results:
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : {...}
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Not in Rake environment so skipping blacklisted_rake_tasks check: uninitialized constant Rake
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Installed New Relic Browser Monitoring middleware
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Installed New Relic Agent Hooks middleware
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Agent is configured to use SSL
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Using JSON marshaller
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Transaction tracing threshold is 2.0 seconds.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Ignoring errors of type 'ActionController::RoutingError'
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Ignoring errors of type 'Sinatra::NotFound'
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Errors will be sent to the New Relic service.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Ignoring errors of type 'ActionController::RoutingError'
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Ignoring errors of type 'Sinatra::NotFound'
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : RequestSampler max_samples set to 1200
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Resetting RequestSampler
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Enabling the Request Sampler.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Environment: development
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Dispatcher: webrick
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Application: newrelic_rails1 (Development)
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : EnvironmentReport failed to retrieve value for "Plugin List": undefined method `plugins' for #<Rails::Application::Configuration:0x007fb232401a00>
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : EnvironmentReport failed to retrieve value for "JRuby version": uninitialized constant NewRelic::EnvironmentReport::JRUBY_VERSION
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : EnvironmentReport failed to retrieve value for "Java VM version": uninitialized constant NewRelic::EnvironmentReport::ENV_JAVA
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : EnvironmentReport ignoring value for "Rails threadsafe" which came back falsey: nil
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Creating Ruby Agent worker thread.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Creating New Relic thread: Worker Loop
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : New Relic Ruby Agent 3.6.7.152 Initialized: pid = 12925
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Connecting Process to New Relic: bin/rails
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Not in Rake environment so skipping blacklisted_rake_tasks check: uninitialized constant Rake
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Created net/http handle to collector.newrelic.com:443
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Sending request to collector.newrelic.com:443/agent_listener/12/1f69cbd2a641bde79bdb5eb4c86a0ab32360e1f8/get_redirect_host?marshal_format=json
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Installing ActiveRecord 4 instrumentation
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Installing Net instrumentation
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Installing deferred Rack instrumentation
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Installing Rails 4 Controller instrumentation
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Installing Rails 4 view instrumentation
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Installing Rails4 Error instrumentation
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Finished instrumentation
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Registered NewRelic::Agent::Samplers::CpuSampler for harvest time sampling.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Registered NewRelic::Agent::Samplers::MemorySampler for harvest time sampling.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : NewRelic::Agent::Samplers::ObjectSampler not supported on this platform.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : NewRelic::Agent::Samplers::DelayedJobSampler not supported on this platform.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Doing deferred dependency-detection before Rack startup
[09/22/13 17:23:40 +1000 skorks-envato (12925)] DEBUG : Uncompressed content returned
[09/22/13 17:23:40 +1000 skorks-envato (12925)] DEBUG : Created net/http handle to collector-1.newrelic.com:443
[09/22/13 17:23:40 +1000 skorks-envato (12925)] DEBUG : Sending request to collector-1.newrelic.com:443/agent_listener/12/1f69cbd2a641bde79bdb5eb4c86a0ab32360e1f8/connect?marshal_format=json
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Uncompressed content returned
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Server provided config: {...}
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Not in Rake environment so skipping blacklisted_rake_tasks check: uninitialized constant Rake
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Updating config (add) from NewRelic::Agent::Configuration::ServerSource. Results:
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : {...}
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Wiring up Cross Application Tracing to events after finished configuring
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Connected to NewRelic Service at collector-1.newrelic.com
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Agent Run       = 575257565.
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Connection data = {...}
[09/22/13 17:23:42 +1000 skorks-envato (12925)] INFO : Reporting to: https://rpm.newrelic.com/accounts/303380/applications/2507376
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Browser timing header: "<script type=\"text/javascript\">var NREUMQ=NREUMQ||[];NREUMQ.push([\"mark\",\"firstbyte\",new Date().getTime()]);</script>"
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Browser timing static footer: "if (!NREUMQ.f) { NREUMQ.f=function() {\nNREUMQ.push([\"load\",new Date().getTime()]);\nvar e=document.createElement(\"script\");\ne.type=\"text/javascript\";\ne.src=((\"http:\"===document.location.protocol)?\"http:\":\"https:\") + \"//\" +\n  \"js-agent.newrelic.com/nr-100.js\";\ndocument.body.appendChild(e);\nif(NREUMQ.a)NREUMQ.a();\n};\nNREUMQ.a=window.onload;window.onload=NREUMQ.f;\n};\n"
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Real User Monitoring is using JSONP protocol
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Reporting performance data every 60 seconds.
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Running worker loop
[09/22/13 17:23:50 +1000 skorks-envato (12925)] DEBUG : Attempting to insert RUM header at beginning of head.

For example we can see that:

  • We can switch off monitoring even if it’s switched on in the configuration file, by setting an environment variable NEWRELIC_ENABLE=false
  • We can see that NewRelic inserts a bunch of Rack middleware
  • We’re using Webrick as our server, which is obviously in development mode, but in production it would be good to confirm that NewRelic recognises the server that we’re using
  • NewRelic is sending data to collector.newrelic.com:443
  • NewRelic is sending data every 60 seconds
  • Real user monitoring is done via JSONP

Very useful information when you’re trying to figure out how things hang together.

Most of the other configuration parameters are pretty self-explanatory, e.g.:

browser_monitoring:
  auto_instrument: true
capture_params: false

The only other one to possibly be aware of is:

transaction_tracer:
  transaction_threshold: apdex_f

The transaction tracer captures detailed data about requests that take too long. The transaction threshold is normally a multiple (4x) of the Apdex T-value, but it is often useful to decouple the two. You might be happy with an Apdex T-value of one second, but you may want to capture detailed data about requests that take 1.5 seconds or longer (instead of the four seconds or longer which would happen by default). So you can set this parameter separately:

transaction_tracer:
  transaction_threshold: 1.5

The NewRelic Developer Mode

One of the configuration values you may have noticed was:

developer_mode: true

This should only be switched on in development (if at all). In developer mode, the NewRelic agent stores performance data about the last 100 requests in memory. You can look at this data at any time by hitting the /newrelic endpoint of your running application:

newrelic_developer_mode

I hardly ever use it, but it’s there if you need it.


Notifying NewRelic of Deployments

Whenever you’re working on the performance of your application, it’s always good to know if a particular deploy has had a positive or negative effect on performance. For this purpose, you can notify NewRelic every time you perform a deploy. This way if performance degrades or improves, you’ll be able to see which deploy was the culprit. NewRelic provides Capistrano hooks to do this, but I prefer the command line way:

% newrelic deployments -a 'newrelic_rails1 (Development)' -e 'development' -u 'skorks' -r 'abc123'
Recorded deployment to 'newrelic_rails1 (Development)' (2013-09-22 18:19:13 +1000)

The key thing is to correctly supply the application name as configured in the newrelic.yml file.
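If you'd rather use the Capistrano hooks mentioned above, the wiring looks roughly like this (a sketch based on the Capistrano 2-era newrelic_rpm recipes; check the gem's documentation for the exact task names in your version):

```ruby
# config/deploy.rb -- hypothetical sketch, not verified against every
# newrelic_rpm release.
require 'new_relic/recipes'

# Record a deployment after a regular deploy and after a deploy with
# migrations, so the markers show up on the NewRelic graphs.
after "deploy",            "newrelic:notice_deployment"
after "deploy:migrations", "newrelic:notice_deployment"
```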

We will get nice lines on the relevant NewRelic graphs to indicate when a deployment occurred.

newrelic_deployment

Conclusion

You now know a whole lot about how NewRelic works and how to start using it to monitor a Rails application. But configuring things properly is only half the battle: what kind of metrics will NewRelic actually capture for you, and how can you use them to improve the performance of your application? We will look at some of these questions in a subsequent article. For now, have a go at configuring NewRelic for your Rails application (you’ll get a free T-shirt) and if you have any questions don’t forget to leave a comment.

July 01 2013

23:09

Exploring Rack

If you’re a Ruby programmer who has done any kind of web development, you’ve almost certainly used Rack, whether you know it or not, as it’s the foundation which most Ruby web frameworks (Rails, Sinatra, etc.) are built upon. Let’s dig into some of the basic concepts of Rack and even build a small app or two.


What Is Rack, Exactly?

Rack is several things, actually:

  • a web server interface
  • a protocol for building and composing web applications
  • a collection of middleware utilities

A Web Server Interface

Part of what’s nice about Rack is that it provides a standardized way for Ruby applications to talk to web servers, abstracting away the server details (listening on a port, accepting connections, parsing HTTP requests and responses, etc.) so that you can focus on what your application does.


The Protocol

The Rack protocol is very simple: a Rack application is simply a Ruby object with a call method. That method should accept an environment hash describing the incoming request and return a three-element array in the form of: [status, headers, body], where:

  • status is the HTTP status code.
  • headers is a hash of HTTP headers for the response.
  • body is the actual body of the response (e.g. the HTML you want to
    display). The body must also respond to each.

The easiest way to understand Rack’s protocol is to take a look at some code.

First, get the rack gem and set up a directory:

$ gem install rack
$ mkdir hellorack
$ cd hellorack

Now create a file named config.ru and fill it in with the following:

class Hello
  def self.call(env)
    [ 200,                              # 200 indicates success
      {"Content-Type" => "text/plain"}, # the hash of headers
      ["Hello from Rack!"]              # we wrap the body in an Array so
                                        # that it responds to `each`
    ]
  end
end

# Tell Rack to run our app
run Hello

Save the file, open up your terminal and run the rackup command:

$ rackup
[2012-12-21 17:48:38] INFO  WEBrick 1.3.1
[2012-12-21 17:48:38] INFO  ruby 1.9.2 (2011-07-09) [x86_64-darwin11.0.1]
[2012-12-21 17:48:38] INFO  WEBrick::HTTPServer#start: pid=1597 port=9292

The bottom few lines of the output let you know that Rack is running your app on port 9292, using Ruby’s built-in WEBrick web server. Point your browser to http://localhost:9292 to see a happy welcome message from Rack.

Kill the app (ctrl-c) and let’s talk about what is going on here.

When you run the rackup command, Rack looks for a rackup config file (conventionally named config.ru, but you can name it whatever you want). It then starts a web server (WEBrick by default) and runs your app.

This protocol is the foundation on which popular frameworks like Rails and Sinatra are built. What they do is layer functionality like template rendering, route dispatching, managing database connections, content negotiation, etc. on top of this fundamental abstraction.

How they do this is what brings us to the concept of middleware.


Middleware

Middleware gives you a way to compose Rack applications together.

A middleware (yes, it’s both singular and plural) is simply a Rack application that gets initialized with another Rack application. You can define different middleware to do different jobs and then stack them together to do useful things.

For example, if you have a Rails app lying around (chances are, if you’re a Ruby developer, that you do), you can cd into the app and run the command rake middleware to see what middleware Rails is using:

$ cd my-rails-app
$ rake middleware
use ActionDispatch::Static
use Rack::Lock
use #<ActiveSupport::Cache::Strategy::LocalCache::Middleware:0x007fcc4481ae08>
use Rack::Runtime
use Rack::MethodOverride
use ActionDispatch::RequestId
use Rails::Rack::Logger
use ActionDispatch::ShowExceptions
use ActionDispatch::DebugExceptions
use ActionDispatch::RemoteIp
use ActionDispatch::Reloader
use ActionDispatch::Callbacks
use ActiveRecord::ConnectionAdapters::ConnectionManagement
use ActiveRecord::QueryCache
use ActionDispatch::Cookies
use ActionDispatch::Session::CookieStore
use ActionDispatch::Flash
use ActionDispatch::ParamsParser
use ActionDispatch::Head
use Rack::ConditionalGet
use Rack::ETag
use ActionDispatch::BestStandardsSupport
run MyRailsApp::Application.routes

Every request that comes into this app starts at the top of this stack, bubbles its way down, and hits the router at the bottom, which dispatches to a controller that generates some kind of response (usually some HTML); the response then bubbles back up through the stack before being sent to the browser.


A Middleware Example

Nothing fosters understanding of a new concept like code does, so let’s build a very simple middleware that just converts the response body to uppercase. Open up the config.ru file from before and change it to the following:

class ToUpper
  # Our class will be initialized with another Rack app
  def initialize(app)
    @app = app
  end

  def call(env)
    # First, call `@app`
    status, headers, body  = @app.call(env)

    # Iterate through the body, upcasing each chunk
    upcased_body = body.map { |chunk| chunk.upcase }

    # Pass our new body on through
    [status, headers, upcased_body]
  end
end

# This is the same Hello app from before, just without all the comments
class Hello
  def self.call(env)
    [ 200, {"Content-Type" => "text/plain"}, ["Hello from Rack!"] ]
  end
end

use ToUpper # Tell Rack to use our newly-minted middleware
run Hello

Run the rackup command again and visit http://localhost:9292 to see our new middleware in action.

What Rack did here was build a Rack application that was the composition of the ToUpper and Hello applications. Internal to Rack, there’s a Builder class that effectively constructed a new app by doing the equivalent of:

app = ToUpper.new(Hello)
run app

If there were more middleware present (like in the Rails stack), it would just nest them all the way down:

use Middleware1
use Middleware2
use Middleware3
run MyApp

#=> Boils down to Middleware1.new(Middleware2.new(Middleware3.new(MyApp)))
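To see this composition outside of Rack, here's a dependency-free sketch (the class names are made up for illustration) where plain Ruby objects play the middleware roles, exactly as the nesting above describes:

```ruby
# Each "middleware" wraps the next app and may transform the response
# on the way back up the stack.

class Exclaim
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    [status, headers, body.map { |chunk| chunk + "!" }]
  end
end

class Upcase
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    [status, headers, body.map(&:upcase)]
  end
end

# The innermost app: a lambda is a valid Rack app, since it responds to call.
inner_app = ->(env) { [200, { "Content-Type" => "text/plain" }, ["hello"]] }

# Equivalent of: use Upcase; use Exclaim; run inner_app
app = Upcase.new(Exclaim.new(inner_app))

status, _headers, body = app.call({})
puts body.join # HELLO!
```

The request enters the outermost wrapper first, and each layer gets a chance to modify the response on the way back out, which is why the body comes back as "HELLO!" rather than "hello".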

Request and Response Classes

When you start writing Rack applications and middleware, manipulating the [status, headers, body] array quickly becomes tedious.

Rack provides a couple of convenience classes, Rack::Request and Rack::Response, to make life a little bit easier.

Rack::Request wraps an env hash and provides you with convenience methods for pulling out the information you might need:

def call(env)
  req = Rack::Request.new(env)
  req.request_method #=> GET, POST, PUT, etc.
  req.get?           # is this a GET request?
  req.path_info      # the path this request came in on
  req.session        # access to the session object, if using the
  # Rack::Session middleware
  req.params         # a hash of merged GET and POST params, useful for
  # pulling values out of a query string

  # ... and many more
end
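To make the env hash concrete, here's a toy stand-in for Rack::Request written in plain Ruby (no gems); the env keys are the real CGI-style keys Rack passes in, but TinyRequest itself is purely illustrative and covers only a sliver of what the real class does:

```ruby
require 'cgi'

# A hypothetical, minimal request wrapper -- NOT the real Rack::Request.
class TinyRequest
  def initialize(env)
    @env = env
  end

  def request_method
    @env["REQUEST_METHOD"]
  end

  def get?
    request_method == "GET"
  end

  def path_info
    @env["PATH_INFO"]
  end

  # Rack::Request also merges POST body params; we only parse the query string.
  def params
    CGI.parse(@env["QUERY_STRING"].to_s).transform_values(&:first)
  end
end

env = {
  "REQUEST_METHOD" => "GET",
  "PATH_INFO"      => "/tweets",
  "QUERY_STRING"   => "handle=nettuts"
}

req = TinyRequest.new(env)
puts req.get?             # true
puts req.path_info        # /tweets
puts req.params["handle"] # nettuts
```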

Rack::Response is complementary to Rack::Request, and gives you a more convenient way to construct a response. For example, our Hello app could be rewritten as follows:

class Hello
  def self.call(env)
    res = Rack::Response.new

    # This will automatically set the Content-Length header for you
    res.write "Hello from Rack!"

    # returns the standard [status, headers, body] array
    res.finish

    # You can get/set headers with square bracket syntax:
    #   res["Content-Type"] = "text/plain"

    # You can set and delete cookies
    #   res.set_cookie("user_id", 1)
    #   res.delete_cookie("user_id")
  end
end

Conclusion

In this article, we’ve covered the basic concepts of Rack, which should be enough for you to get a better understanding of what’s under the hood of the many popular frameworks out there and also help you get your feet wet if you’re interested in developing directly with Rack.

Code is an excellent teacher, and so if you’re interested in Rack, I highly recommend looking at its source. It comes with a lot of very useful baked-in utilities and middleware (and plenty more at rack-contrib) that you can use and learn from.

April 15 2013

15:00

Digging Into Rails 4

Rails 4 is rapidly approaching. In this article, let’s take a look at some of the new features that it offers, as well as the changes that may affect your current applications.


Some Bookkeeping

There are several configuration and structural changes that come with Rails 4.

Ruby >= 1.9.3

Rails 4 will only support Ruby 1.9.3+. Get ready for an upgrade if you haven’t yet done so.

Threadsafe by Default

Rails 4 will be thread-safe by default, removing overhead and improving performance on threaded servers, like thin and puma. You need to ensure that your application (and its dependencies) are thread-safe, which typically means avoiding global state (e.g. class or global variables).

Aaron Patterson wrote and spoke about this subject. Definitely check those out!

No More vendor/plugins

Rails 3 embraced the idea of using gems to add custom functionality to Rails, and deprecated the use of plugins. Rails 4 completes this transition by removing the vendor/plugins directory altogether.

New Testing Directories

The default test directory naming scheme is clearer than in Rails 3.

The following directories will now be generated: test/models, test/helpers, test/controllers, test/mailers, and test/integration.

Executables

The script directory has been removed in favor of a new bin directory. This is where your app’s executables will live, and running rake rails:update:bin will put bundle, rake, and rails binstubs into your app’s bin directory.

This change can be useful in development, especially on a machine with multiple Ruby versions and gems. You can use bin/rails instead of bundle exec rails to ensure you run your executables in the correct environment.


Strong Parameters

Rails 4 tackles the mass assignment problem with the new Strong Parameters gem. A Rails 3 application might have a create action similar to the following example:

class UsersController < ApplicationController
  def create
    @user = User.create(params[:user])
    # ... check validity, redirect, etc.
  end
end

You can protect against unexpected input with declarations in the model:

class User < ActiveRecord::Base
  # Only allow the following attributes to be mass-assigned
  attr_accessible :name, :email
end

Using Rails 4's Strong Parameters gem moves user input handling into the controller:

class UsersController < ApplicationController
  def create
    @user = User.create(user_params)
    # ... check validity, redirect, etc.
  end

  def user_params
    params.require(:user).permit(:name, :email)
  end
end

As you can see, the params hash in your controller is not a normal hash. It’s actually an instance of ActionController::Parameters, which exposes the require and permit methods.

The require method ensures that the specified key is available in the params hash, and raises an ActionController::ParameterMissing exception if the key doesn’t exist.

The permit method protects you from unexpected mass assignment.

The call User.create(params[:user]) raises an ActiveModel::ForbiddenAttributesError exception, but using User.create(params.require(:user).permit(:name, :email)) makes it work without complaint.
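To see the semantics of require and permit in isolation, here's a hypothetical plain-Ruby approximation (not the real ActionController::Parameters, just an illustration of the behaviour described above):

```ruby
class ParameterMissing < StandardError; end

class TinyParams
  def initialize(hash)
    @hash = hash
  end

  # require: raise if the key is absent, otherwise scope to that sub-hash.
  def require(key)
    value = @hash[key]
    raise ParameterMissing, key.to_s if value.nil?
    TinyParams.new(value)
  end

  # permit: keep only the whitelisted keys, silently dropping the rest.
  def permit(*keys)
    @hash.select { |k, _| keys.include?(k) }
  end
end

params = TinyParams.new(user: { name: "Ada", email: "ada@example.com", admin: true })

safe = params.require(:user).permit(:name, :email)
puts safe.inspect # the :admin key has been silently dropped
```

Note how a malicious :admin parameter never reaches the model, while a missing :user key fails loudly rather than silently.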

The Rails 3 mass-assignment functionality is not only disabled in Rails 4, but has been extracted to a gem, in case you require that functionality.


Turbolinks

A controversial new feature in Rails 4 is Turbolinks, a JavaScript plugin designed to make app navigation faster in the browser.

In browsers with pushState support, clicking a link causes the Turbolinks plugin to kick in. It makes an Ajax request, updates the URL with pushState (so your back button works) and uses JavaScript to update the <title> and <body> in the DOM. The speed gains come from not having to download and reparse JavaScript and CSS assets.

Turbolinks degrades gracefully in browsers which do not support pushState. In these situations, the page’s links behave as normal, causing a full page refresh.

Events and Cache

It’s common in applications to wait for a page to completely load before executing any JavaScript. For example:

$(document).ready(function() {
  // some function to run
});

// or even just: $(function() { /* some function to run */ });

With Turbolinks, the page load events won’t fire when users navigate from “page” to “page” because the DOM never actually reloads. The library, therefore, adds new events that you can listen for, in order to perform any subsequent initializations that your app might need:

  • page:fetch – starting to fetch a page from the server
  • page:change – a page has been loaded
  • page:load – a page has been loaded from a server fetch
  • page:restore – a page has been loaded from a cache fetch

The page:change event always fires when Turbolinks loads a page, followed by page:load or page:restore, depending on whether the load came from the server or the cache.

Potential Issues

Turbolinks has a few issues that you might need to address:

  • Memory leaks: Turbolinks does not clear or reload your JavaScript when the page changes. You could potentially see the effects of memory leaks in your applications, especially if you use a lot of JavaScript.
  • Event Bindings: You have to take older browsers into consideration. Make sure you listen for page:* events, as well as DOMContentLoaded.
  • Client-side frameworks: Turbolinks may not play nicely with other client-side frameworks like Backbone, Angular, Knockout, Ember, etc.

Opting Out

You may opt out of Turbolinks by:

  1. removing turbolinks from your Gemfile, and
  2. removing the //= require turbolinks line from application.js

Caching

Rails 4 brings an overhauled caching strategy. First, action and page caching, as you may know them from previous versions of Rails, have been removed from the framework and extracted into gems: actionpack-action_caching and actionpack-page_caching, respectively.

Russian Dolls

The new kid on the block is Russian doll caching, or nested fragment caching. The easiest way to understand this system is to look at some code. Suppose that you have a project management application. You may have the following models:

class Milestone < ActiveRecord::Base
  has_many :todos
end

class Todo < ActiveRecord::Base
  belongs_to :milestone, :touch => true
end

The :touch option is required for this caching strategy to work properly. If a todo is added to a milestone, we need to break cache on the milestone to avoid serving stale views.

We now have finely-grained caches in our views. Consider this file as an example (app/views/milestones/show.html.erb):

<% cache @milestone do %>
  <h1><%= @milestone.name %></h1>
  <div class="description"><%= @milestone.description %></div>

  <ul class="todos">
    <%= render @milestone.todos %>
  </ul>
<% end %>

And in app/views/todos/_todo.html.erb:

<% cache todo do %>
  <li class="todo">
    <%= todo.description %>
    <span class="status"><%= todo.status %></span>
  </li>
<% end %>

Now, suppose that you have a milestone with ten todos. Editing only one todo causes the milestone’s cache to break, but when generating the HTML, all but one of the todo partials can be fetched from the cache, thus improving render times.

You’re trading time for space, as this generates a lot of cruft in your cache. But, as DHH points out, cache stores like Memcached just chuck out old data to make space for new data. So this isn’t an issue in most cases.

Cache Digests

Cache digests are Rails 4’s solution for tracking the changes of aggressively cached templates. Rails 4 tracks templates and their dependencies, and it suffixes fragment cache keys with the MD5 digest of the template (and its dependencies). When you edit one of your templates, its cache key receives the update, and you won’t have to manually version your templates.
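The digest itself is just an MD5 of the template source, so any edit to the template produces a new cache key. A rough plain-Ruby sketch of the idea (the key format below is illustrative, not Rails’ exact scheme):

```ruby
require 'digest/md5'

# Two versions of the same template: any edit changes the digest,
# and therefore the fragment cache key built from it.
template_v1 = "<h1><%= @milestone.name %></h1>"
template_v2 = "<h2><%= @milestone.name %></h2>"

digest_v1 = Digest::MD5.hexdigest(template_v1)
digest_v2 = Digest::MD5.hexdigest(template_v2)

# Illustrative cache key: record key suffixed with the template digest
cache_key = "views/milestones/1-20130101/#{digest_v1}"
```

Because the digest is derived from the template (and its dependencies), stale fragments are simply never looked up again after a deploy that changes the view.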

For more information (and for use in Rails 3), check out the README for the cache digests gem.


Streaming, via ActionController::Live

The new ActionController::Live module provides the ability to stream data to clients. Simply include the module in a controller to enable your app to send arbitrary streamed data. You’ll have to use a threaded server, such as Thin or Puma, in order to stream data; actions in streaming controllers run in a separate thread.

Here’s an example from the Rails 4 documentation:

class MyController < ActionController::Base
  include ActionController::Live

  def stream
    response.headers['Content-Type'] = 'text/event-stream'
    100.times {
      response.stream.write "hello world\n"
      sleep 1
    }
    response.stream.close
  end
end

As the docs note, there are three things to keep in mind:

  • You must write any headers before you call write or close on the response stream.
  • You have to call close on the response stream when you’re finished writing data.
  • Ensure that your actions are thread-safe, as they will run in a separate thread.
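One more detail worth knowing: the text/event-stream content type implies the Server-Sent Events wire format, which the “hello world” example above doesn’t strictly follow. Each SSE message should consist of a data: line (optionally preceded by an event: line) terminated by a blank line. A small hypothetical helper sketches the format (Rails 4 also ships an ActionController::Live::SSE wrapper that does this for you):

```ruby
# Hypothetical helper: format a payload as a Server-Sent Events message.
# Each message is "data: ..." terminated by a blank line; an optional
# "event: ..." line names the event type for client-side listeners.
def sse_message(data, event: nil)
  msg = +""
  msg << "event: #{event}\n" if event
  msg << "data: #{data}\n\n" # the blank line terminates the message
  msg
end

sse_message("hello world")       # => "data: hello world\n\n"
sse_message("42", event: "tick") # => "event: tick\ndata: 42\n\n"
```

Inside the streaming action you would then call response.stream.write sse_message("hello world") instead of writing the raw string.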

Niceties and Other Things

We’ve talked about the “headline” features in Rails 4. But this release is a big one, and includes a number of smaller changes to be aware of.

PATCH

As described in the Rails blog, PATCH is now the HTTP verb for updating resources.

This change will typically be transparent to developers, as PUT requests will still route to the update action for RESTful-style routes.

But it is a change that you should be aware of; PUT routing may change in the future.

Custom Flash Types

This small feature may help clean up some code. You can register your own flash types to use in redirect_to calls and in templates. For example:

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  add_flash_types :error, :catastrophe
end

# app/controllers/things_controller.rb
class ThingsController < ApplicationController
  def create
    # ... create a thing
  rescue Error => e
    redirect_to some_path, :error => e.message
  rescue Catastrophe => e
    redirect_to another_path, :catastrophe => e.message
  end
end

# app/views/layouts/application.html.erb
<div class="error"><%= error %></div>
<div class="catastrophe"><%= catastrophe %></div>

Deprecated Finders

Rails 4 deprecates the old-style finder option hashes, as well as all dynamic finder methods (with the exception of find_by_... and find_by_...!). Instead, you’ll use where:

  • find_all_by_... can be rewritten using where(...).
  • find_last_by_... can be rewritten using where(...).last.
  • scoped_by_... can be rewritten using where(...).
  • find_or_initialize_by_... can be rewritten using where(...).first_or_initialize.
  • find_or_create_by_... can be rewritten using find_or_create_by(...) or where(...).first_or_create.
  • find_or_create_by_...! can be rewritten using find_or_create_by!(...) or where(...).first_or_create!.

The deprecated finders gem will be included as a dependency in 4.0, and removed in 4.1. The gem, however, will be around and maintained until 5.0.

Routing Concerns

Routing Concerns is an attempt to DRY up your config/routes.rb. The basic idea is to define common sub-resources (like comments) as concerns and include them in other resources/routes. Here’s the obvious example:

concern :commentable do
  resources :comments
end

concern :remarkable do
  resources :remarks
end

resources :posts, :concerns => :commentable  
resources :articles, :concerns => [:commentable, :remarkable] # can include several  

The above is equivalent to the following Rails 3 code:

resources :posts do
  resources :comments
end

resources :articles do
  resources :comments
  resources :remarks
end

Personally, I’m not sure this adds much value; perhaps it makes sense for large applications with hundreds of routes.

Renamed Callbacks

Action callbacks in controllers have been renamed from *_filter to *_action. For example:

class UsersController < ApplicationController
  before_action :set_user, :except => [:index, :new, :create]
  before_action :require_the_president, :only => [:fire_the_missiles]

  private

  def set_user
    @user = somehow_find_and_set_the_user(params[:id])
  end

  def require_the_president
    @user.is_the_president?
  end
end

The old *_filter callbacks still work and are not deprecated; so, you can still use them if you wish. DHH’s reason for the change was:

“To avoid the misconception that these callbacks are only suited for transforming or halting the response. With the new style, it’s more inviting to use them as they were intended, such as setting shared ivars for views.”


Wrapping Up

Rails 4 is coming, bringing with it a slew of changes. I hope that this article has given you a sense of what to expect, and perhaps a launching point into investigating what this new version has to offer.

If you really want to wade into the deep end, check out our Tuts+ Premium course on Rails 4!

Tags: Ruby rails

August 23 2012

20:59

Automatic Testing for TDD with PHP

Traditional test-driven development can, at times, be cumbersome. You have to stop writing code in order to run your tests. Luckily, there are solutions, which provide the ability to automatically run your tests as you code. In this tutorial, you will learn how to use a Ruby gem, called watchr, to monitor your code and automatically run the appropriate tests whenever you save your work.


Step 1: Software Requirements

This tutorial uses PHP for the code example; however, the techniques are applicable to any language that offers a CLI utility for unit testing. Ruby is required because we will use the watchr gem. So, make sure you have a working installation of Ruby, and PHP with PHPUnit.

Next, ensure that you have libnotify installed, if you’re on Linux; Windows and Mac OSX users need “Growl.” This tutorial is directly applicable on Linux, but I will suggest alternative commands and settings where possible.

Now, it’s time to install the watchr gem. Open a console, and make sure you are in the folder where you can directly run gem. Type the following command:

gem install watchr

Step 2: Technical Background

The watchr gem is an executable program written in Ruby, and it wraps around features found in an operating system’s file system to provide the ability to watch for changes made to a specific file or folder. Naturally, these file system features differ for each operating system and file system.

watchr provides a unified application programming interface (API) for all operating systems. On Linux, it uses inotify, the kernel’s file system event library; on other operating systems, it uses the appropriate alternative. If, for some reason, the operating system does not have an available event service, watchr periodically polls the watched file or folder.

When a file or folder is modified, watchr can trigger a callback function. We will use this function to run our tests.
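To make the polling fallback concrete, here is a toy mtime-based watcher in plain Ruby. It is illustrative only; watchr’s real API and implementation differ, and on most systems it uses native events rather than polling:

```ruby
require 'tempfile'

# Fire the callback if the file's mtime differs from what we last saw,
# and return the current mtime (the new "last seen" value).
def watch_once(path, last_mtime)
  current = File.mtime(path)
  yield path if current != last_mtime
  current
end

file = Tempfile.new('watched')
last = File.mtime(file.path)

# Simulate an edit by bumping the file's mtime explicitly.
File.utime(Time.now + 10, Time.now + 10, file.path)

changed = []
last = watch_once(file.path, last) { |p| changed << p }
```

A real poller would run watch_once in a loop with a short sleep; the event-based backends skip the loop entirely and let the kernel report changes.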


Step 3: Create a PHP Project

Our project is rather simple. Replicate the simple directory structure shown in the following image:

New PHP Project

In the Nettuts.php file, add the following code:

<?php

class Nettuts {

}

?>

Next, add the following code to NettutsTest.php:

<?php

require_once dirname(__FILE__) . '/../Classes/Nettuts.php';

class NettutsTest extends PHPUnit_Framework_TestCase {

  protected $object;

  protected function setUp() {
    $this->object = new Nettuts;
  }

  protected function tearDown() {

  }
}

?>

At this point, the test file is simply a skeleton, and, as you can see in the image above, the tests pass.


Step 4: Create the First watchr Script

Now, we need to create a Ruby file in our project’s folder; let’s call it autotest_watchr.rb. Next, add the following code to the file:

watch("Classes/(.*).php") do |match|
  run_test %{Tests/#{match[1]}Test.php}
end

This code uses the watch method to watch all the .php files in our project’s Classes folder. When a .php file changes, the operating system issues an event, and our watch callback is triggered. The name of the .php file (minus the extension) is captured at position 1 of the match array. As with any regular expression, parentheses define a capture group, and here we use one in the matching pattern to extract the file name. We then call the run_test method with the path of the composed test file name.
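You can check what that pattern captures in plain Ruby (the dot is escaped here for precision; watchr compiles the string you pass to watch into a regular expression):

```ruby
# What the watch("Classes/(.*).php") pattern captures for a changed file:
match = "Classes/Nettuts.php".match(%r{Classes/(.*)\.php})

match[0] # the whole match: "Classes/Nettuts.php"
match[1] # the first capture group: "Nettuts"

# The test file path composed from the capture:
test_file = "Tests/#{match[1]}Test.php"
```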

We should also watch our test files; so, add the following code to the Ruby file:

watch("Tests/.*Test.php") do |match|
  run_test match[0]
end

Note that the match array contains the full file name at position 0, and we pass it directly to the run_test method.


Step 5: Make the Script Run the Tests

The Ruby script is set up to watch our .php files, and now we need to implement the run_test method. In our case, we want to run PHPUnit for the specific file.

def run_test(file)
  unless File.exist?(file)
    puts "#{file} does not exist"
    return
  end

  puts "Running #{file}"
  result = `phpunit #{file}`
  puts result
end

We first ensure that the file exists, and simply return if it doesn’t. Next, we run the test with PHPUnit and send the result to the console. Let’s run our watchr script. Open your console, navigate to your project’s directory, and then run:

watchr ./autotest_watchr.rb

Windows users should omit “./” from the above command.

Now modify one of the .php files (just add an empty line at the end of the file), save it, and observe the output in the console. You should see something similar to what’s shown below:

Running Tests/NettutsTest.php
PHPUnit 3.6.0 by Sebastian Bergmann.

F

Time: 0 seconds, Memory: 3.75Mb

There was 1 failure:

1) Warning
No tests found in class "NettutsTest".

/usr/bin/phpunit:46

FAILURES!
Tests: 1, Assertions: 0, Failures: 1.

Yep, we don’t yet have a test to run; so let’s put in a dummy test. Add the following code to the test PHP file:

function testDummyPassingTest() {
  $this->assertTrue(true);
}

Run the Ruby script again, and you should see:

Running Tests/NettutsTest.php
PHPUnit 3.6.0 by Sebastian Bergmann.

.

Time: 0 seconds, Memory: 3.75Mb

OK (1 test, 1 assertion)

Step 6: Parse the Test Output

Let’s notify the user, via the system’s notification mechanism, about the test results. We’ll modify the run_test method to trigger a method, called notify. Below is the modified run_test:

def run_test(file)
  unless File.exist?(file)
    puts "#{file} does not exist"
    return
  end

  puts "Running #{file}"
  result = `phpunit #{file}`
  puts result

  if result.match(/OK/)
    notify "#{file}", "Tests Passed Successfully", "success.png", 2000
  end
end

The name of the image file, success.png, points to the image you want to display in the notification area. This image is not provided in this tutorial; so you will need to find your own. Now, let’s write the notify method:

def notify title, msg, img, show_time
  images_dir='~/.autotest/images'
  system "notify-send '#{title}' '#{msg}' -i #{images_dir}/#{img} -t #{show_time}"
end

Mac OSX and Windows users: replace the notify-send command with the appropriate Growl alternative. Modify something in either your test or code file so that the test still passes. Save the modified PHP file, and watch the magic happen. Below is an image of the result on my system:

Tests Passed Successfully

Next, we need to catch the failures. The following code adds a couple of lines to run_test:

def run_test(file)
  unless File.exist?(file)
    puts "#{file} does not exist"
    return
  end

  puts "Running #{file}"
  result = `phpunit #{file}`
  puts result

  if result.match(/OK/)
    notify "#{file}", "Tests Passed Successfully", "success.png", 2000
  elsif result.match(/FAILURES!/)
    notify_failed file, result
  end
end

Also, let’s add the notify_failed method to the file:

def notify_failed cmd, result
  failed_examples = result.scan(/failure:\n\n(.*)\n/)
  notify "#{cmd}", failed_examples[0], "failure.png", 6000
end

Modify either of your PHP files to make the test fail; save the modified file. Observe the notification message. It contains the name of the first failing test. This name is selected by the regular expression in the method notify_failed, which parses the PHPUnit output.

Tests Failed
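You can see exactly what that scan extracts by running the regular expression against a captured PHPUnit failure report in plain Ruby:

```ruby
# Abridged PHPUnit failure output, as captured from the phpunit command.
output = "There was 1 failure:\n\n1) Warning\nNo tests found in class \"NettutsTest\".\n"

# The same scan used by notify_failed: everything between the blank line
# after "failure:" and the next newline, i.e. the failing test's heading.
failed_examples = output.scan(/failure:\n\n(.*)\n/)
failed_examples[0] # the first failing test's heading: ["1) Warning"]
```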

Step 7: Clear the Console Before Each Test Run

Add the following method to your Ruby script, and be sure to call it in the run_test method. The code should work in Linux and Mac OSX, though you might need to do some research for Windows.

def clear_console
  puts "\e[H\e[2J" # clear console
end

Conclusion

Whenever you program using TDD, any tool that helps you obtain quicker feedback is a valuable asset. My coworkers use similar scripts with watchr or alternatives (some are written around fs_event on MacOS). Needless to say, we’re spoiled now, and can’t imagine developing anything without automatically running tests.

Automated tests are IDE independent – a big plus in my book. Too many IDEs force you to use a specific testing framework, and don’t get me started on remote testing. I prefer to use scripts like this daily, and surely recommend them to any agile software developer.


August 08 2012

16:47

Troubleshooting Ruby Performance Problems

In this video tutorial from New Relic, you’ll see a full performance troubleshooting cycle: identifying and diagnosing the problem, fixing the problem code, then testing the optimized code and verifying the performance improvements. You will learn the fastest way to identify when your Ruby app is suffering from performance problems, how to narrow your troubleshooting down to the specific app component that is the source of the problem, and how to look inside that component to find exactly which code is causing your app to slow down, and why.


April 28 2012

03:03

Rails cache sweeper redux

Michael Mahemoff writes: to be effective, Rails cache sweepers need to be more fully understood; they follow no standard, so you must employ some art. Sweepers observe both your models and your controllers, but most workarounds focus on their controller nature. Importantly, the sweeper must be explicitly added as an observer. Read all about it in Rails cache sweeper redux.

March 15 2012

21:00

Intelligent ActiveRecord Models

ActiveRecord models in Rails already do a lot of the heavy lifting, in terms of database access and model relationships, but with a bit of work, they can do more things automatically. Let’s find out how!


Step 1 - Create a Base Rails App

This idea works for any sort of ActiveRecord project; however, since Rails is the most common, we’ll be using that for our example app. The app we’ll be using has lots of Users, each of whom can perform a number of actions on Projects.

If you’ve never created a Rails app before, then read this tutorial, or syllabus, first. Otherwise, fire up the old console and type rails new example_app to create the app and then change directories to your new app with cd example_app.


Step 2 - Create Your Models and Relationships

First, we generate the User model that will own the projects:


 rails generate scaffold User name:text email:string password_hash:text

Likely, in a real world project, we’d have a few more fields, but this will do for now. Let’s next generate our project model:


 rails generate scaffold Project name:text started_at:datetime started_by_id:integer completed_at:datetime completed_by_id:integer

We then edit the generated project.rb file to describe the relationship between users and projects:


 class Project < ActiveRecord::Base
  belongs_to :starter, :class_name =>"User", :foreign_key =>"started_by_id"
  belongs_to :completer, :class_name =>"User", :foreign_key =>"completed_by_id"
 end

and the reverse relationship in user.rb:


 class User < ActiveRecord::Base
  has_many :started_projects, :foreign_key =>"started_by_id"
  has_many :completed_projects, :foreign_key =>"completed_by_id"
 end

Next, run a quick rake db:migrate, and we’re ready to begin getting intelligent with these models. If only building relationships were as easy in the real world! Now, if you’ve ever used the Rails framework before, you’ve probably learned nothing… yet!


Step 3 - Faux Attributes Are Cooler Than Faux Leather

The first thing we’re going to do is use some auto generating fields. You’ll have noticed that when we created the model, we created a password hash and not a password field. We’re going to create a faux attribute for a password that will convert it to a hash if it’s present.

So, in your model, we’ll add a definition for this new password field.


 def password=(new_password)
  write_attribute(:password_hash, SHA1::hexdigest(new_password))
 end

 def password
  ""
 end

We only store a hash against the user so we’re not giving out the passwords without a bit of a fight.

The second method means we return something for forms to use.

We also need to ensure that we have the SHA1 hashing library loaded; add require 'sha1' to your application.rb file after line 40: config.filter_parameters += [:password].
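The hashing itself is straightforward. In modern Ruby the same digest is available through the standard digest library as Digest::SHA1 (the standalone sha1 library used above is long deprecated); a quick sketch of what the setter stores:

```ruby
require 'digest/sha1'

# A SHA1 hex digest is a fixed-length, one-way fingerprint of the input.
hash = Digest::SHA1.hexdigest("my secret password")

hash.length # 40 hex characters, regardless of input size

# Deterministic: the same input always produces the same digest,
# which is how we can later compare a submitted password to the stored hash.
Digest::SHA1.hexdigest("my secret password") == hash # => true

# Any change to the input, even case, produces a different digest.
Digest::SHA1.hexdigest("My secret password") == hash # => false
```

Note that an unsalted SHA1 digest is considered too weak for production password storage today; this tutorial uses it for simplicity.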

As we’ve changed the app at the configuration level, reload it with a quick touch tmp/restart.txt in your console.

Now, let’s change the default form to use this instead of password_hash. Open _form.html.erb in the app/views/users folder:


 <div class="field">
  <%= f.label :password_hash %><br />
  <%= f.text_area :password_hash %>
 </div>

becomes


 <div>
  <%= f.label :password %><br/>
  <%= f.text_field :password %>
 </div>

We’ll make it an actual password field when we’re happy with it.

Now, load http://localhost/users and have a play with adding users. It should look a bit like the image below; great, isn’t it!

User Form

Wait, what’s that? It overwrites your password hash every time you edit a user? Let’s fix that.

Open up user.rb again, and change it like so:


 write_attribute(:password_hash, SHA1::hexdigest(new_password)) if new_password.present?

This way, only when you supply a password does the field get updated.


Step 4 - Automatic Data Guarantees Accuracy or Your Money Back

The last section was all about changing the data that your model gets, but what about adding more information based on things already known without having to specify them? Let’s have a look at that with the project model. Begin by having a look at http://localhost/projects.

Make the following changes quickly.

*app/controllers/projects_controller.rb* line 24


 # GET /projects/new
 # GET /projects/new.json
 def new
  @project = Project.new
   @users = [["--",nil]] + User.all.collect { |u| [u.name,u.id] }

  respond_to do |format|
   format.html # new.html.erb
   format.json { render :json =>@project }
  end
 end

 # GET /projects/1/edit
 def edit
  @project = Project.find(params[:id])
   @users = [["--",nil]] + User.all.collect { |u| [u.name,u.id] }
 end

*app/views/projects/_form.html.erb* line 24


 <%= f.select :started_by_id, @users %>

*app/views/projects/_form.html.erb* line 24


 <%= f.select :completed_by_id, @users %>

In MVC frameworks, the roles are clearly defined. Models represent the data. Views display the data. Controllers get data and pass them to the view.

Who Enjoys Filling Out Date/time Fields?

We now have a fully functioning form, but it bugs me that I have to set the started_at time manually. I’d like to have it set when I assign a started_by user. We could put it in the controller; however, if you’ve ever heard the phrase “fat models, skinny controllers,” you’ll know that makes for bad code. If we do this in the model, it’ll work anywhere we set the starter or completer. Let’s do that.

First edit app/models/project.rb, and add the following method:


 def started_by=(user)
  if(user.present?)
   user = user.id if user.class == User
   write_attribute(:started_by_id,user)
   write_attribute(:started_at,Time.now)
  end
 end

This code ensures that something has actually been passed. Then, if it’s a user, it retrieves its ID and finally writes both the user *and* the time it happened – holy smokes! Let’s add the same for the completed_by field.


 def completed_by=(user)
  if(user.present?)
   user = user.id if user.class == User
   write_attribute(:completed_by_id,user)
   write_attribute(:started_at,Time.now)
  end
 end

Now edit the form view so we don’t have those time selects. In app/views/projects/_form.html.erb, remove lines 26-29 and 18-21.

Open up http://localhost/projects and have a go!

Spot the Deliberate Mistake

Whoooops! Someone (I’ll take the heat since it’s my code) cut and paste, and forgot to change the :started_at to :completed_at in the second largely identical (hint) attribute method. No biggie, change that and everything is go… right?


Step 5 - Help Your Future Self by Making Additions Easier

So apart from a little cut-and-paste confusion, I think we did a fairly good job, but that slip-up and the code around it bothers me a bit. Why? Well, let’s have a think:

  • It’s cut and paste duplication: DRY (Don’t repeat yourself) is a principle to follow.
  • What if someone wants to add another somethingd_at and somethingd_by to our project, like, say, authorised_at and authorised_by?
  • I can imagine quite a few of these fields being added.

Lo and behold, along comes a pointy-haired boss and asks for (drumroll) an authorised_at/by field and a suggested_at/by field! Right then; let’s get those cut-and-paste fingers ready… or is there a better way?

The Scary Art of Meta-programming!

That’s right! The holy grail; the scary stuff your mothers warned you about. It seems complicated, but actually can be pretty simple – especially what we’re going to attempt. We’re going to take an array of the names of stages we have, and then auto build these methods on the fly. Excited? Great.

Of course, we’ll need to add the fields; so let’s add a migration rails generate migration additional_workflow_stages and add those fields inside the newly generated db/migrate/TODAYSTIMESTAMP_additional_workflow_stages.rb.


class AdditionalWorkflowStages < ActiveRecord::Migration
 def up
  add_column :projects, :authorised_by_id, :integer
  add_column :projects, :authorised_at, :timestamp
  add_column :projects, :suggested_by_id, :integer
  add_column :projects, :suggested_at, :timestamp
 end

 def down
  remove_column :projects, :authorised_by_id
  remove_column :projects, :authorised_at
  remove_column :projects, :suggested_by_id
  remove_column :projects, :suggested_at
 end
end

Migrate your database with rake db:migrate, and replace the projects class with:


 class Project < ActiveRecord::Base
 # belongs_to :starter, :class_name =>"User"

 # def started_by=(user)
 # if(user.present?)
 #   user = user.id if user.class == User
 #   write_attribute(:started_by_id,user)
 #   write_attribute(:started_at,Time.now)
 # end
 # end
 #
 # def started_by
 # read_attribute(:completed_by_id)
 # end

 end

I’ve left the started_by in there so you can see how the code was before.


 [:starte,:complete,:authorise,:suggeste].each do |arg|
  ..MORE..
 end

Nice and gentle – goes through the names(ish) of the methods we wish to create:


 [:starte,:complete,:authorise,:suggeste].each do |arg|

  attr_by = "#{arg}d_by_id".to_sym
  attr_at = "#{arg}d_at".to_sym
  object_method_name = "#{arg}r".to_sym	  

  ...MORE...
 end

For each of those names, we work out the two model attributes we’re setting (e.g. started_by_id and started_at) and the name of the association (e.g. starter).
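These string manipulations can be verified on their own in plain Ruby, outside of any Rails model:

```ruby
arg = :starte

# The same name-building the model loop performs for each stage:
attr_by            = "#{arg}d_by_id".to_sym # the foreign key column
attr_at            = "#{arg}d_at".to_sym    # the timestamp column
object_method_name = "#{arg}r".to_sym       # the association name
```

So :starte yields :started_by_id, :started_at, and :starter, which is why the slightly odd-looking stage names (:starte, :suggeste) are spelled without their final letters.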


 [:starte,:complete,:authorise,:suggeste].each do |arg|

  attr_by = "#{arg}d_by_id".to_sym
  attr_at = "#{arg}d_at".to_sym
  object_method_name = "#{arg}r".to_sym	  

  belongs_to object_method_name, :class_name =>"User", :foreign_key =>attr_by

 end

This seems pretty familiar; belongs_to is itself a bit of Rails metaprogramming that defines a bunch of methods for us.


 [:starte,:complete,:authorise,:suggeste].each do |arg|

  attr_by = "#{arg}d_by_id".to_sym
  attr_at = "#{arg}d_at".to_sym
  object_method_name = "#{arg}r".to_sym	  

  belongs_to object_method_name, :class_name =>"User", :foreign_key =>attr_by

  get_method_name = "#{arg}d_by".to_sym

  define_method(get_method_name) { read_attribute(attr_by) }

 end

OK, we come to some real metaprogramming now. This calculates the ‘get method’ name – e.g. started_by – and then creates that method, just as we do when we write def method, but in a different form.


 [:starte,:complete,:authorise,:suggeste].each do |arg|

  attr_by = "#{arg}d_by_id".to_sym
  attr_at = "#{arg}d_at".to_sym
  object_method_name = "#{arg}r".to_sym	  

  belongs_to object_method_name, :class_name =>"User", :foreign_key =>attr_by

  get_method_name = "#{arg}d_by".to_sym

  define_method(get_method_name) { read_attribute(attr_by) }

  set_method_name = "#{arg}d_by=".to_sym

  define_method(set_method_name) do |user|
   if user.present?
    user = user.id if user.class == User
    write_attribute(attr_by,user)
    write_attribute(attr_at,Time.now)
   end
  end

 end

A little bit more complicated now. We do the same as before, but this time for the set method name. We define that method using define_method(method_name) do |param| ... end, rather than def method_name=(param).

That wasn’t so bad, was it?
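The same define_method pattern works on any plain Ruby class, not just ActiveRecord models. Here is a stripped-down sketch that mirrors the model’s loop, backed by a plain hash instead of database columns:

```ruby
# Build getter/setter pairs from a list of stage names, the same way the
# Project model does, but storing values in an in-memory hash.
class Workflow
  def initialize
    @attrs = {}
  end

  [:started, :completed].each do |stage|
    define_method("#{stage}_by") { @attrs["#{stage}_by"] }

    define_method("#{stage}_by=") do |user|
      @attrs["#{stage}_by"] = user
      @attrs["#{stage}_at"] = Time.now # set the timestamp automatically
    end

    define_method("#{stage}_at") { @attrs["#{stage}_at"] }
  end
end

w = Workflow.new
w.started_by = "alice"
w.started_by # => "alice"
w.started_at # a Time, set automatically by the generated setter
```

Each pass through the loop defines three methods, so assigning started_by also records started_at, exactly as in the model.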

Try it Out in the Form

Let’s see if we can still edit projects as before. It turns out that we can! So we’ll add the additional fields to the form, and, hey, presto!

app/views/projects/_form.html.erb line 20


 <div class="field">
  <%= f.label :suggested_by %><br/>
  <%= f.select :suggested_by, @users %>
 </div>

 <div class="field">
  <%= f.label :authorised_by %><br/>
  <%= f.select :authorised_by, @users %>
 </div>

And to the show view… so we can see it working.

*app/views/projects/show.html.erb* line 8


 <p>
  <b>Suggested at:</b> <%= @project.suggested_at %>
 </p>

 <p>
  <b>Suggested by:</b> <%= @project.suggested_by_id %>
 </p>

 <p>
  <b>Authorised at:</b> <%= @project.authorised_at %>
 </p>

 <p>
  <b>Authorised by:</b> <%= @project.authorised_by_id %>
 </p>

Have another play with http://localhost/projects, and you can see we have a winner! No need to fear if someone asks for another workflow step; simply add the migration for the database, and put it in the array of methods… and it gets created. Time for a rest? Maybe, but I’ve just two more things to make note of.


Step 6 - Automate the Automation

That array of methods seems quite useful to me. Could we do more with it?

First, let’s make the list of method names a constant so we can access it from outside.


 WORKFLOW_METHODS = [:starte,:complete,:authorise,:suggeste]
 WORKFLOW_METHODS.each do |arg|....

Now, we can use them to auto-create forms and views. Open up the _form.html.erb for projects, and let’s try it by replacing lines 19-37 with the snippet below:


 <% Project::WORKFLOW_METHODS.each do |workflow| %>
  <div class="field">
  <%= f.label "#{workflow}d_by" %><br/>
  <%= f.select "#{workflow}d_by", @users %>
  </div>
 <% end %> 

But app/views/projects/show.html.erb is where the real magic is:


 <p id="notice"><%= notice %></p>

 <p>
  <b>Name:</b> <%= @project.name %>
 </p>

 <% Project::WORKFLOW_METHODS.each do |workflow|
  at_method = "#{workflow}d_at"
  by_method = "#{workflow}d_by_id"
  who_method = "#{workflow}r"
 %>

 <p>
  <b><%= at_method.humanize %>:</b> <%= @project.send(at_method) %>
 </p>

 <p>
  <b><%= who_method.humanize %>:</b> <%= @project.send(who_method) %>
 </p>

 <p>
  <b><%= by_method.humanize %>:</b> <%= @project.send(by_method) %>
 </p>

 <% end %>

 <%= link_to 'Edit', edit_project_path(@project) %> |
 <%= link_to 'Back', projects_path %>

This should be fairly clear, although, if you’re not familiar with send(), it’s another way to call a method. So object.send("name_of_method") is the same as object.name_of_method.
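A quick plain-Ruby illustration of send, which is what lets the view loop over method names built from strings:

```ruby
# send invokes a method by name, given as a symbol or a string.
"hello".send(:upcase)  # => "HELLO"
"hello".send("length") # => 5

# So in the view, with at_method = "suggested_at",
# @project.send(at_method) is equivalent to @project.suggested_at.
at_method = "suggested_at"
```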

Final Sprint

We’re almost done, but I’ve noticed two bugs: one is formatting, and the other is a bit more serious.

The first is that, when I view a project, the user association renders as an ugly Ruby object dump. Rather than adding a method call to the end, like this


 @project.send(who_method).name

Let’s modify User to have a to_s method. Keep things in the model if you can, and add this to the top of the user.rb, and do the same for project.rb as well. It always makes sense to have a default representation for a model as a string:


 def to_s
  name
 end
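Defining to_s works because both string interpolation and ERB’s <%= %> output call to_s on the object; a plain-Ruby sketch (a stand-in for the ActiveRecord model):

```ruby
# Plain-Ruby stand-in for the User model, with a default string form.
class User
  def initialize(name)
    @name = name
  end

  # Called implicitly by string interpolation and by ERB's <%= %> output.
  def to_s
    @name
  end
end

user = User.new("Ada")
"Assigned to #{user}" # => "Assigned to Ada", not "#<User:0x...>"
```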

Feels a bit mundane writing methods the easy way now, eh? No? Anyhow, on to more serious things.

An Actual Bug

When we update a project, we resubmit all of the workflow stages that were assigned previously, so all of our timestamps get overwritten. Fortunately, because all our code is in one place, a single change will fix them all.


 define_method(set_method_name) do |user|
  if user.present?
   user = user.id if user.class == User

   # ADDITION HERE
   # This ensures it's changed from the stored value before setting it
   if read_attribute(attr_by).to_i != user.to_i
    write_attribute(attr_by,user)
    write_attribute(attr_at,Time.now)
   end
  end
 end

Conclusion

What have we learned?

  • Adding functionality to the model can seriously improve the rest of your code
  • Meta programming isn’t impossible
  • Suggesting a project might get logged
  • Writing smart in the first place means less work later
  • No one enjoys cutting, pasting, and editing, and it causes bugs
  • Smart Models are sexy in all walks of life

Thank you so much for reading, and let me know if you have any questions.


February 16 2012

22:39

How to Upload Files with Ease Using DragonFly: New on Premium

File uploads are generally a tricky area in web development. In this Tuts+ Premium tutorial, we will learn how to use Dragonfly, a powerful Ruby gem that makes it easy and efficient to add any kind of upload functionality to a Rails project.

Become a Premium member to read this tutorial, as well as hundreds of other advanced tutorials and screencasts from the Tuts+ network.


What We’re Going to Build

Our sample application will display a list of users, and for each one of them, we will be able to upload an avatar and have it stored. Additionally, Dragonfly will allow us to:

  • Dynamically manipulate images without saving additional copies
  • Leverage HTTP caching to optimize our application load

Additionally, in this lesson, we will follow a BDD [Behavior Driven Development] approach, using Cucumber and RSpec.


Tuts+ Premium

The recently re-launched Tuts+ Premium is a service that provides top-tier training in a variety of creative fields. Whether you prefer books, visual training, or in depth tutorials, we have you covered. While we unfortunately can’t afford to provide the service for free, it’s only $19 a month – less than you’d spend on dinner.

I hope you’ll consider checking it out! In addition to learning a huge variety of new skills, it’s also a fantastic way to say thank you to Nettuts+.


January 28 2012

23:18

Writing an API Wrapper in Ruby with TDD

Sooner or later, all developers are required to interact with an API. The most difficult part is reliably testing the code we write; since we want to make sure that everything works properly, we end up continuously running code that queries the API itself. This process is slow and inefficient, as we can experience network issues and data inconsistencies (the API results may change). Let’s review how we can avoid all of this effort with Ruby.


Our Goal

“Flow is essential: write the tests, run them and see them fail, then write the minimal implementation code to make them pass. Once they all do, refactor if needed.”

Our goal is simple: write a small wrapper around the Dribbble API to retrieve information about a user (called ‘player’ in the Dribbble world).
As we will be using Ruby, we will also follow a TDD approach: if you’re not familiar with this technique, Nettuts+ has a good primer on RSpec you can read. In a nutshell, we will write tests before writing our code implementation, making it easier to spot bugs and to achieve a high code quality. Flow is essential: write the tests, run them and see them fail, then write the minimal implementation code to make them pass. Once they all do, refactor if needed.

The API

The Dribbble API is fairly straightforward. At the time of this writing, it supports only GET requests and doesn’t require authentication: an ideal candidate for our tutorial. Moreover, it enforces a limit of 60 calls per minute, a restriction that perfectly shows why working with APIs requires a smart approach.


Key Concepts

This tutorial needs to assume that you have some familiarity with testing concepts: fixtures, mocks, expectations. Testing is an important topic (especially in the Ruby community) and even if you are not a Rubyist, I’d encourage you to dig deeper into the matter and to search for equivalent tools for your everyday language. You may want to read “The RSpec book” by David Chelimsky et al., an excellent primer on Behavior Driven Development.

To summarize, here are three key concepts you must know:

  • Mock: also called a double, a mock is “an object that stands in for another object in an example”. This means that if we want to test the interaction between one object and another, we can mock the second one. In this tutorial, we will mock the Dribbble API, as to test our code we don’t need the API itself, but something that behaves like it and exposes the same interface.
  • Fixture: a dataset that recreates a specific state in the system. A fixture can be used to create the needed data to test a piece of logic.
  • Expectation: a test example written from the point of view of the result we want to achieve.
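As a concrete illustration of the first and third concepts (this example is mine, not part of the tutorial’s code), Minitest ships with a minimal mock object:

```ruby
require "minitest/mock"

# A mock stands in for a real collaborator (here, something API-like);
# the expectation declares the call we expect and a canned return value.
api = Minitest::Mock.new
api.expect(:profile, { "username" => "simplebits" })

api.profile["username"] # => "simplebits"
api.verify              # raises MockExpectationError if :profile was never called
```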

Our Tools

“As a general practice, run tests every time you update them.”

WebMock is a Ruby mocking library that is used to mock (or stub) HTTP requests. In other words, it allows you to simulate any HTTP request without actually making one. The primary advantage is being able to develop and test against any HTTP service without needing the service itself and without running into related issues (like API limits, IP restrictions and such).
VCR is a complementary tool that records any real http request and creates a fixture, a file that contains all the needed data to replicate that request without performing it again. We will configure it to use WebMock to do that. In other words, our tests will interact with the real Dribbble API just once: after that, WebMock will stub all the requests thanks to the data recorded by VCR. We will have a perfect replica of the Dribbble API responses recorded locally. In addition, WebMock will let us test edge cases (like the request timing out) easily and consistently. A wonderful consequence of our setup is that everything will be extremely fast.

As for unit testing, we will be using Minitest. It’s a fast and simple unit testing library that also supports expectations in the RSpec fashion. It offers a smaller feature set, but I find that this actually encourages and pushes you to separate your logic into small, testable methods. Minitest is part of Ruby 1.9, so if you’re using it (I hope so) you don’t need to install it. On Ruby 1.8, it’s only a matter of gem install minitest.

I will be using Ruby 1.9.3: if you aren’t, you will probably encounter some issues related to require_relative, but I’ve included fallback code in a comment right below it. As a general practice, you should run tests every time you update them, even though I won’t be mentioning this step explicitly throughout the tutorial.


Setup


We will use the conventional /lib and /spec folder structure to organize our code. As for the name of our library, we’ll call it Dish, following the Dribbble convention of using basketball related terms.

The Gemfile will contain all our dependencies, although the list is quite short.

source :rubygems

gem 'httparty'

group :test do
  gem 'webmock'
  gem 'vcr'
  gem 'turn'
  gem 'rake'
end

Httparty is an easy to use gem to handle HTTP requests; it will be the core of our library. In the test group, we will also add Turn to change the output of our tests to be more descriptive and to support color.

The /lib and /spec folders have a symmetrical structure: for every file contained in the /lib/dish folder, there should be a file inside /spec/dish with the same name and the ‘_spec’ suffix.

Let’s start by creating a /lib/dish.rb file and add the following code:

require "httparty"
Dir[File.dirname(__FILE__) + '/dish/*.rb'].each do |file|
  require file
end

It doesn’t do much: it requires ‘httparty’ and then iterates over every .rb file inside /lib/dish to require it. With this file in place, we will be able to add any functionality inside separate files in /lib/dish and have it automatically loaded just by requiring this single file.

Let’s move to the /spec folder. Here’s the content of the spec_helper.rb file.

#we need the actual library file
require_relative '../lib/dish'
# For Ruby < 1.9.3, use this instead of require_relative
# require(File.expand_path('../../lib/dish', __FILE__))

#dependencies
require 'minitest/autorun'
require 'webmock/minitest'
require 'vcr'
require 'turn'

Turn.config do |c|
 # :outline  - turn's original case/test outline mode [default]
 c.format  = :outline
 # turn on invoke/execute tracing, enable full backtrace
 c.trace   = true
 # use humanized test names (works only with :outline format)
 c.natural = true
end

#VCR config
VCR.config do |c|
  c.cassette_library_dir = 'spec/fixtures/dish_cassettes'
  c.stub_with :webmock
end

There are quite a few things here worth noting, so let’s break it down piece by piece:

  • At first, we require the main lib file for our app, making the code we want to test available to the test suite. The require_relative statement is a Ruby 1.9.3 addition.
  • We then require all the library dependencies: minitest/autorun includes all the expectations we will be using, webmock/minitest adds the needed bindings between the two libraries, while vcr and turn are pretty self-explanatory.
  • The Turn config block merely tweaks our test output. We will use the outline format, where we can see the description of our specs.
  • The VCR config block tells VCR to store the requests in a fixture folder (note the relative path) and to use WebMock as a stubbing library (VCR supports others as well).

Last, but not least, the Rakefile that contains some support code:

require 'rake/testtask'

Rake::TestTask.new do |t|
  t.test_files = FileList['spec/lib/dish/*_spec.rb']
  t.verbose = true
end

task :default => :test

The rake/testtask library includes a TestTask class that is useful to set the location of our test files. From now on, to run our specs, we will only type rake from the library root directory.

As a way to test our configuration, let’s add the following code to /lib/dish/player.rb:

module Dish
  class Player
  end
end

Then /spec/lib/dish/player_spec.rb:

require_relative '../../spec_helper'
# For Ruby < 1.9.3, use this instead of require_relative
# require (File.expand_path('./../../../spec_helper', __FILE__))

describe Dish::Player do

  it "must work" do
    "Yay!".must_be_instance_of String
  end

end

Running rake should give you one test passing and no errors. This test is by no means useful for our project, yet it implicitly verifies that our library file structure is in place (the describe block would throw an error if the Dish::Player module was not loaded).


First Specs

To work properly, Dish requires the Httparty modules and the correct base_uri, i.e. the base url of the Dribbble API. Let’s write the relevant tests for these requirements in player_spec.rb:

...
describe Dish::Player do

  describe "default attributes" do

    it "must include httparty methods" do
      Dish::Player.must_include HTTParty
    end

    it "must have the base url set to the Dribble API endpoint" do
      Dish::Player.base_uri.must_equal 'http://api.dribbble.com'
    end

  end

end

As you can see, Minitest expectations are self-explanatory, especially if you are an RSpec user: the biggest difference is wording, where Minitest prefers “must/wont” to “should/should_not”.

Running these tests will show one error and one failure. To have them pass, let’s add our first lines of implementation code to player.rb:

module Dish

  class Player

    include HTTParty

    base_uri 'http://api.dribbble.com'

  end

end

Running rake again should show the two specs passing. Now our Player class has access to all Httparty class methods, like get or post.


Recording our First Request

As we will be working on the Player class, we will need to have API data for a player. The Dribbble API documentation page shows that the endpoint to get data about a specific player is http://api.dribbble.com/players/:id

In typical Rails fashion, :id is either the id or the username of a specific player. We will be using simplebits, the username of Dan Cederholm, one of the Dribbble founders.

To record the request with VCR, let’s update our player_spec.rb file by adding the following describe block to the spec, right after the first one:

  ...

  describe "GET profile" do

    before do
      VCR.insert_cassette 'player', :record => :new_episodes
    end

    after do
      VCR.eject_cassette
    end

    it "records the fixture" do
      Dish::Player.get('/players/simplebits')
    end

  end

end

The before block is used to execute a specific portion of code before every expectation: we use it to add the VCR macro used to record a fixture that we will call ‘player’. This will create a player.yml file under spec/fixtures/dish_cassettes. The :record option is set to record all new requests once and replay them on every subsequent, identical request. As a proof of concept, we can add a spec whose only aim is to record a fixture for simplebits’s profile. The after directive tells VCR to remove the cassette after the tests, making sure that everything is properly isolated. The get method on the Player class is made available, thanks to the inclusion of the Httparty module.

After running rake, you can verify that the fixture has been created. From now on, all our tests will be completely network independent.


Getting the Player Profile

Every Dribbble user has a profile that contains a pretty extensive amount of data. Let’s think about how we would like our library to be used: this is a useful way to flesh out how our DSL will work. Here’s what we want to achieve:

simplebits = Dish::Player.new('simplebits')
simplebits.profile
  => #returns a hash with all the data from the API
simplebits.username
  => 'simplebits'
simplebits.id
  => 1
simplebits.shots_count
  => 157

Simple and effective: we want to instantiate a Player by using its username and then get access to its data by calling methods on the instance that map to the attributes returned by the API. We need to be consistent with the API itself.

Let’s tackle one thing at a time and write some tests related to getting the player data from the API. We can modify our "GET profile" block to have:

describe "GET profile" do

  let(:player) { Dish::Player.new }

  before do
    VCR.insert_cassette 'player', :record => :new_episodes
  end

  after do
    VCR.eject_cassette
  end

  it "must have a profile method" do
    player.must_respond_to :profile
  end

  it "must parse the api response from JSON to Hash" do
    player.profile.must_be_instance_of Hash
  end

  it "must perform the request and get the data" do
    player.profile["username"].must_equal 'simplebits'
  end

end

The let directive at the top creates a Dish::Player instance available in the expectations. Next, we want to make sure that our player has got a profile method whose value is a hash representing the data from the API. As a last step, we test a sample key (the username) to make sure that we actually perform the request.

Note that we’re not yet handling how to set the username, as this is a further step. The minimal implementation required is the following:

...
class Player

  include HTTParty

  base_uri 'http://api.dribbble.com'

  def profile
    self.class.get '/players/simplebits'
  end

end
...

Very little code: we’re just wrapping a get call in the profile method. We then pass the hardcoded path to retrieve simplebits’s data, data that we had already stored thanks to VCR.

All our tests should be passing.


Setting the Username

Now that we have a working profile function, we can take care of the username. Here are the relevant specs:

describe "default instance attributes" do

  let(:player) { Dish::Player.new('simplebits') }

  it "must have a username attribute" do
    player.must_respond_to :username
  end

  it "must have the right username" do
    player.username.must_equal 'simplebits'
  end

end

describe "GET profile" do

  let(:player) { Dish::Player.new('simplebits') }

  before do
    VCR.insert_cassette 'base', :record => :new_episodes
  end

  after do
    VCR.eject_cassette
  end

  it "must have a profile method" do
    player.must_respond_to :profile
  end

  it "must parse the api response from JSON to Hash" do
    player.profile.must_be_instance_of Hash
  end

  it "must get the right profile" do
    player.profile["username"].must_equal "simplebits"
  end

end

We’ve added a new describe block to check the username we’re going to add and simply amended the player initialization in the GET profile block to reflect the DSL we want to have. Running the specs now will reveal many errors, as our Player class doesn’t accept arguments when initialized (for now).

Implementation is very straightforward:

...
class Player

  attr_accessor :username

  include HTTParty

  base_uri 'http://api.dribbble.com'

  def initialize(username)
    self.username = username
  end

  def profile
    self.class.get "/players/#{self.username}"
  end

end
...

The initialize method takes a username that is stored on the instance thanks to the attr_accessor declared above. We then change the profile method to interpolate the username attribute.

We should get all our tests passing once again.


Dynamic Attributes

At a basic level, our lib is in pretty good shape. As profile is a Hash, we could stop here and already use it by passing the key of the attribute we want to get the value for. Our goal, however, is to create an easy to use DSL that has a method for each attribute.

Let’s think about what we need to achieve. Let’s assume we have a player instance and stub how it would work:

player.username
  => 'simplebits'
player.shots_count
  => 157
player.foo_attribute
  => NoMethodError

Let’s translate this into specs and add them to the GET profile block:

...
describe "dynamic attributes" do

  before do
    player.profile
  end

  it "must return the attribute value if present in profile" do
    player.id.must_equal 1
  end

  it "must raise method missing if attribute is not present" do
    lambda { player.foo_attribute }.must_raise NoMethodError
  end

end
...

We already have a spec for username, so we don’t need to add another one. Note a few things:

  • we explicitly call player.profile in a before block, otherwise it will be nil when we try to get the attribute value.
  • to test that foo_attribute raises an exception, we need to wrap it in a lambda and check that it raises the expected error.
  • we test that id equals 1, as we know that that is the expected value (this is a purely data-dependent test).

Implementation-wise, we could define a series of methods to access the profile hash, yet this would create a lot of duplicated logic. Moreover, they would rely on the API result always having the same keys.

“We will rely on method_missing to handle these cases and ‘generate’ all those methods on the fly.”

Instead, we will rely on method_missing to handle these cases and ‘generate’ all those methods on the fly. But what does this mean? Without going into too much metaprogramming, we can simply say that every time we call a method that is not present on an object, Ruby invokes method_missing, whose default implementation raises a NoMethodError. By redefining this method inside our class, we can modify its behaviour.

In our case, we will intercept the method_missing call, verify that the method name that has been called is a key in the profile hash and, if so, return the hash value for that key. If not, we will call super to raise a standard NoMethodError: this is needed to make sure that our library behaves exactly the way any other library would. In other words, we want to follow the principle of least surprise.

Let’s add the following code to the Player class:

def method_missing(name)
  if profile.has_key?(name.to_s)
    profile[name.to_s]
  else
    super
  end
end

The code does exactly what was described above. If you now run the specs, you should have them all pass. I’d encourage you to add some more to the spec files for some other attributes, like shots_count.

This implementation, however, is not really idiomatic Ruby. It works, but it can be streamlined into a ternary operator, a condensed form of an if-else conditional. It can be rewritten as:

def method_missing(name, *args, &block)
  profile.has_key?(name.to_s) ? profile[name.to_s] : super
end

It’s not just a matter of length, but also a matter of consistency and shared conventions between developers. Browsing source code of Ruby gems and libraries is a good way to get accustomed to these conventions.
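One caveat worth adding (my addition, not covered in the original tutorial): idiomatic Ruby usually pairs method_missing with respond_to_missing?, so that respond_to? stays truthful about the dynamic methods. A self-contained sketch, using a hypothetical class rather than the tutorial’s Player:

```ruby
# Hypothetical stand-in (not the tutorial's Player): without
# respond_to_missing?, reader.respond_to?(:username) would return false
# even though reader.username works.
class ProfileReader
  def initialize(profile)
    @profile = profile
  end

  def method_missing(name, *args, &block)
    @profile.key?(name.to_s) ? @profile[name.to_s] : super
  end

  def respond_to_missing?(name, include_private = false)
    @profile.key?(name.to_s) || super
  end
end

reader = ProfileReader.new("username" => "simplebits")
reader.username               # => "simplebits"
reader.respond_to?(:username) # => true
```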


Caching

As a final step, we want to make sure that our library is efficient. It should not make any more requests than needed and possibly cache data internally. Once again, let’s think about how we could use it:

player.profile
  => performs the request and returns a Hash
player.profile
  => returns the same hash
player.profile(true)
  => forces the reload of the http request and then returns the hash (with data changes if necessary)

How can we test this? By using WebMock to enable and disable network connections to the API endpoint. Even though we’re using VCR fixtures, WebMock can simulate a network timeout or a different response from the server. In our case, we can test caching by getting the profile once and then disabling the network. Calling player.profile again should return the same data, while calling player.profile(true) should raise a Timeout::Error, as the library would try to connect to the (disabled) API endpoint.

Let’s add another block to the player_spec.rb file, right after dynamic attribute generation:

describe "caching" do

  # we use Webmock to disable the network connection after
  # fetching the profile
  before do
    player.profile
    stub_request(:any, /api.dribbble.com/).to_timeout
  end

  it "must cache the profile" do
    player.profile.must_be_instance_of Hash
  end

  it "must refresh the profile if forced" do
    lambda { player.profile(true) }.must_raise Timeout::Error
  end

end

The stub_request method intercepts all calls to the API endpoint and simulates a timeout, raising the expected Timeout::Error. As we did before, we test the presence of this error in a lambda.

Implementation can be tricky, so we’ll split it into two steps. Firstly, let’s move the actual http request to a private method:

...
def profile
  get_profile
end

...

private

def get_profile
  self.class.get("/players/#{self.username}")
end
...

This will not get our specs passing, as we’re not caching the result of get_profile. To do that, let’s change the profile method:

...
def profile
  @profile ||= get_profile
end
...

We will store the result hash into an instance variable. Also note the ||= operator, whose presence makes sure that get_profile is run only if @profile returns a falsy value (like nil).

Next we can add the forced reload directive:

...
def profile(force = false)
  force ? @profile = get_profile : @profile ||= get_profile
end
...

We’re using a ternary again: if force is true, we perform get_profile and cache the fresh result; if not, we fall back to the logic of the previous version of this method (i.e. performing the request only if we don’t already have a hash).
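The same pattern can be verified in isolation (hypothetical Fetcher class, my sketch, not the tutorial’s code): the expensive call runs once, unless a reload is forced.

```ruby
# Standalone sketch of the caching-with-forced-reload pattern above
# (hypothetical class, illustrative only): fetch counts how many times
# the "request" actually runs.
class Fetcher
  def data(force = false)
    force ? @data = fetch : @data ||= fetch
  end

  private

  def fetch
    @calls = (@calls || 0) + 1
    { "fetched" => @calls }
  end
end

f = Fetcher.new
f.data        # => {"fetched"=>1}  (performs the "request")
f.data        # => {"fetched"=>1}  (cached, no new call)
f.data(true)  # => {"fetched"=>2}  (forced reload)
```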

Our specs should be green now and this is also the end of our tutorial.


Wrapping Up

Our purpose in this tutorial was to write a small and efficient library to interact with the Dribbble API, and we’ve laid the foundation for this to happen. Most of the logic we’ve written can be abstracted and reused to access all the other endpoints. Minitest, WebMock and VCR have proven to be valuable tools to help us shape our code.

We do, however, need to be aware of a small caveat: VCR can become a double-edged sword, as our tests can become too data-dependent. If, for any reason, the API we’re building against changes without any visible sign (like a version number bump), we risk having our tests pass against a dataset that is no longer relevant. In that case, removing and recreating the fixtures is the best way to make sure that our code still works as expected.


Tags: Ruby api TDD

January 26 2012

21:38

24 Extremely Useful Ruby Gems for Web Development

One of the nicer things about developing on the Ruby platform is the sheer amount of meticulously categorized, highly reusable code wrapped up in the form of aptly named ‘gems’.

I’m sure you’ve heard of popular frameworks like Sinatra or the super popular Rails that ship as gems, but you may be missing an entire spectrum of others that handle issues at a much lower level. Start using these and watch your productivity shoot through the roof!


A Quick Note

I’m well aware that some of the gems listed here have Rails, or parts of Rails, as a dependency. That doesn’t mean that they are any less useful or need to be sneered at.


CarrierWave

Upload files in your Ruby applications, map them to a range of ORMs, store them on different backends. It works well with Rack based web applications, such as Ruby on Rails.

Related reading


Kaminari

Kaminari is a Scope & Engine based, clean, powerful, customizable and sophisticated paginator. Kaminari supports multiple ORMs (ActiveRecord, Mongoid, MongoMapper) multiple web frameworks (Rails, Sinatra), and multiple template engines (ERB, Haml).

Related reading


HAML

Haml (HTML Abstraction Markup Language) is a layer on top of XHTML or XML that’s designed to express the structure of XHTML or XML documents in a non-repetitive, elegant, easy way, using indentation rather than closing tags and allowing Ruby to be embedded with ease. It was originally envisioned as a plugin for Ruby on Rails, but it can function as a stand-alone templating engine.

Related reading


Authlogic

A simple, unobtrusive model based Ruby authentication solution. Authlogic is very flexible, it has a strong public API and a plethora of hooks to allow you to modify behavior and extend it.

Related reading


Shoulda

Shoulda is a gem that allows you to create more understandable tests for your Ruby application. Shoulda allows you to provide context to your tests enabling you to categorize tests according to a specific feature or scenario you’re testing.

Related reading


factory_girl

factory_girl provides a framework and DSL for defining and using factories – less error-prone, more explicit, and all-around easier to work with than fixtures. It has straightforward definition syntax, support for multiple build strategies (saved instances, unsaved instances, attribute hashes, and stubbed objects), and support for multiple factories for the same class (user, admin_user, and so on), including factory inheritance.

Related reading


RMagick

RMagick is an interface between the Ruby programming language and the ImageMagick and GraphicsMagick image processing libraries.

Related reading


Cancan

CanCan is an authorization library for Ruby on Rails which restricts what resources a given user is allowed to access and is decoupled from user roles. All permissions are stored in a single location and not duplicated across controllers, views, and database queries.

Related reading


Nokogiri

Nokogiri is an HTML, XML, SAX, and Reader parser. Among Nokogiri’s many features is the ability to search documents via XPath or CSS3 selectors. Nokogiri parses and searches XML/HTML very quickly, and also has correctly implemented CSS3 selector support as well as XPath support.

Related reading


SASS

Sass makes CSS fun again. Sass is an extension of CSS3, adding nested rules, variables, mixins, selector inheritance, and more. It’s translated to well-formatted, standard CSS using the command line tool or a web-framework plugin.

Related reading


Formtastic

Formtastic is a Rails FormBuilder DSL (with some other goodies) to make it far easier to create beautiful, semantically rich, syntactically awesome, readily stylable and wonderfully accessible HTML forms in your Rails applications.

Related reading


Capistrano

Capistrano is a utility and framework for executing commands in parallel on multiple remote machines, via SSH. It uses a simple DSL (borrowed in part from Rake) that allows you to define tasks, which may be applied to machines in certain roles. It also supports tunneling connections via some gateway machine to allow operations to be performed behind VPN’s and firewalls.

Related reading


Omniauth

OmniAuth is a Ruby authentication framework that provides a standardized interface to many different authentication providers such as Facebook, OpenID, and even traditional username and password.

Related reading


Bundler

Bundler is a tool that manages gem dependencies for your ruby application. It takes a gem manifest file and is able to fetch, download, and install the gems and all child dependencies specified in this manifest. It can manage any update to the gem manifest file and update the bundle’s gems accordingly. It also lets you run any ruby code in context of the bundle’s gem environment.

Related reading


resque

Resque (pronounced like “rescue”) is a Redis-backed library for creating background jobs, placing those jobs on multiple queues, and processing them later. Resque is heavily inspired by DelayedJob.

Related reading


Jammit

Jammit is an industrial strength asset packaging library for Rails, providing both the CSS and JavaScript concatenation and compression that you’d expect, as well as YUI Compressor and Closure Compiler compatibility, ahead-of-time gzipping, built-in JavaScript template support, and optional Data-URI / MHTML image embedding.

Related reading


capybara

Capybara helps you test Rails and Rack applications by simulating how a real user would interact with your app. It is agnostic about the driver running your tests and comes with Rack::Test and Selenium support built in.

Related reading


Active Merchant

Active Merchant is an extraction from the e-commerce system Shopify. Shopify’s requirement for a simple and unified API to access dozens of different payment gateways with very different internal APIs was the chief principle in designing the library. It was developed for usage in Ruby on Rails web applications and integrates seamlessly as a plugin, but it also works excellently as a stand-alone library.

Related reading


eventmachine

EventMachine implements a fast, single-threaded engine for arbitrary network communications. It’s extremely easy to use in Ruby. EventMachine wraps all interactions with IP sockets, allowing programs to concentrate on the implementation of network protocols. It can be used to create both network servers and clients.

Related reading


mustache

Inspired by ctemplate, Mustache is a framework-agnostic way to render logic-free views. As ctemplate says, “It emphasizes separating logic from presentation: it is impossible to embed application logic in this template language.”

Related reading


Passenger

Phusion Passenger™ — a.k.a. mod_rails or mod_rack — makes deployment of Ruby web applications, such as those built on the revolutionary Ruby on Rails web framework, a breeze.

Related reading


Chef

Chef is a system integration framework designed to bring the benefits of configuration management to your entire infrastructure. With Chef, you can manage your servers by writing code, not by running commands.

Related reading


Thinking Sphinx

A concise and easy-to-use Ruby library that connects ActiveRecord to the Sphinx search daemon, managing configuration, indexing and searching.

Related reading


Wrapping Up

So those were some of the awesome gems I’ve found extremely useful when I’m whipping up a web app in Ruby. I’m 100% sure I’m missing a metric butt load of others, though. Let me know about your favorite gems in the comments below, and thank you so much for reading!


October 11 2011

19:33

Building Static Sites with Jekyll

A full-blown CMS is rarely necessary. Sometimes, you only need to create a light, static website … but you have just enough pages to make the process of copying template files and making cross-site markup changes a chore. Today, I’ll demonstrate a simple solution—Jekyll—that will make creating small websites a breeze.


Step 0: Meeting Jekyll

Jekyll is a simple, blog aware, static site generator.

Jekyll is a simple, blog aware, static site generator. That’s what the site says. But, what exactly does this mean? A static site generator is a program that takes a set of files and generates your site with them. As you’ll see, we’ll be able to use a set of templates, create the content files separately, and then use Jekyll to generate our site. The “blog aware” part means that we could use this to create a blog, or any website that has a series of post-like entries (such as a portfolio). Let’s give it a try!


Step 1: Installing Jekyll

Refer here for more information on Ruby Gems.

We’ll begin by installing Jekyll; it’s a Ruby gem, so doing so should be pretty straightforward.

gem install jekyll # use `sudo` if your setup requires it

Yep: it’s that easy. There are a few more pieces we could install if we were planning on a more complex set-up; since we’re not, this will do the trick.


Step 2: Creating our First Template

Every file or folder that does not begin with an underscore will be copied to the generated site.

Next, let’s set up the folders for Jekyll. Create a folder, called example-app for this tutorial; we’ll be creating a little portfolio site for, say, a photographer. This is a great example of where Jekyll shines: it’s a small site that won’t be updated too frequently, but is large enough that you don’t want to open every page when you need to make a markup change.

Inside example-app, create a folder called _layouts. Notice the underscore at the beginning of this folder: any folder or file that begins with an underscore will not be part of the site that Jekyll generates. If they have a name that Jekyll recognizes (such as _config.yml or _layouts), their contents will be used in the generation of the site, but the files themselves won’t show up in the site. Remember this: every file or folder that does not begin with an underscore will be copied to the generated site (which, by the way, defaults to the _site sub-folder).

So, let’s create a layout. We’ll start with a general site layout that includes all the “chrome” for our site. Create a new file, called default.html inside the _layouts folder (the name doesn’t matter), and add the following code to it:

<!DOCTYPE html>
<html>
<head>
  <meta charset=utf-8 />
  <title> {% if page.title %} {{ page.title }} | {% endif %} John Doe, Photographer </title>
  <link rel="stylesheet" href="/css/styles.css" />
</head>
<body>

  <div id="main">

    <header>
      <h1> John Doe Photography </h1>
    </header>

    <nav role="navigation">
      <ul>
        <li><a href="/">Home</a></li>
        <li><a href="/portfolio/">Portfolio</a></li>
        <li><a href="/about">About</a></li>
        <li><a href="/contact">Contact</a></li>
      </ul>
    </nav>

    {{ content }}

    <footer>
      <p>&copy; John Doe Photography 2011 | All Rights Reserved. </p>
    </footer>

  </div>
</body>
</html>

A couple of things here to keep in mind…

Firstly, Jekyll uses the Liquid template system (by default). This means, anything you can do with Liquid, you can do in a template in Jekyll. For example, in the <title> tag, we’re using both types of Liquid markup: output markup and tag markup. Output markup may output text (if the variable referenced exists), while tag markup doesn’t. Output markup is delimited by double curly-braces, while tag markup is delimited by the curly-brace / percent-sign pair.
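Here’s the distinction side by side (page.title is one of the variables Jekyll provides):

```html
<!-- Tag markup ({% ... %}) controls logic and prints nothing itself;
     output markup ({{ ... }}) prints the value of a variable. -->
{% if page.title %}
  <h2>{{ page.title }}</h2>
{% endif %}
```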

The next thing to notice above is what is inside the Liquid tags: things like page.title and content. These are variables provided by Jekyll; you can see the list of available template data in the docs. We can also create custom template data, as we’ll review shortly.

Lastly, notice the CSS we’re linking to: create a css folder in the root of your project and throw this bit of styling into a styles.css file:

body {
  font: 16px/1.5 verdana, "helvetica neue", helvetica, arial, sans-serif;
  background: black;
  color: #ececec;
  padding: 0;
  margin: 0;
}
ul {
  margin: 0;
  padding: 0;
}
a {
  color: #ccc;
  text-decoration: none;
}

a:hover {
  color: #ececec;
  text-decoration: underline;
}

#main {
  width: 960px;
  margin: 0 auto;
  background: rgba(255, 255, 255, 0.4);
}
header {
  padding: 0 10px;
  overflow: hidden;
}
h1 {
  margin: 0;
}

nav ul, ul.entries {
  list-style-type: none;
}
nav li a {
  float: left;
  margin: 5px;
}
.content {
  padding: 10px;
}

ul.entries li {
  position: relative;
  margin: 20px auto;
  padding: 20px;
  background: #ececec;
  width: 600px;
}

ul.entries img {
  width: 600px;
}

ul.entries li h3 {
  position: absolute;
  bottom: -18px;
  left: 17px;
  font-size: 2em;
}
ul.entries a {
  color: #ececec;
}
ul.entries a:hover {
  color: #fff;
}

footer {
  font-size: 0.65em;
  text-align: center;
}

Also, create an img folder and add an image, named banner.jpg; we’ll be using it shortly. Any image will do; just crop it to 960px by 300px.

You might be wondering why we’re using the if statement above, when the page.title variable simply won’t be output if it doesn’t exist. Well, if it does exist, I want to include the vertical bar after it; another way to write that would be like this:

{{ page.title }}{% if page.title %} | {% endif %}

So, how do we use this template? Well, we need to create a page that will use this template. In the root directory of our project, create an index.html file. Here’s the content:

---
layout: default
---
<section role="banner">
  <img src="/img/banner.jpg" />
</section>

<section class="content">
  <p>
  Welcome to John Doe Photography! Please, check out my <a href="/portfolio/">Portfolio</a> to see my work.
  </p>
</section>

Notice what’s at the top of the file: Jekyll calls this YAML front matter. Any file (that doesn’t start with an underscore) that has YAML front matter will be processed by Jekyll before being put in the _site folder; if a file has no underscore and no front matter, it will just be copied to _site as-is. In this case, the YAML front matter simply tells Jekyll which template we want it to use.

Now, open a terminal, cd into your project directory, and run jekyll. You should see something like this:

WARNING: Could not read configuration. Using defaults (and options).
	No such file or directory - /Users/andrew/Desktop/example-app/_config.yml
Building site: /Users/andrew/Desktop/example-app -> /Users/andrew/Desktop/example-app/_site
Successfully generated site: /Users/andrew/Desktop/example-app -> /Users/andrew/Desktop/example-app/_site

Ignore the warning; we’ll come to that shortly. For now, you can see that the site has been built in a freshly-created _site directory. If you open the _site/index.html file in your browser of choice, you should see … a failure. The problem is that our paths (URLs and stylesheet) begin with a forward slash, so we can’t just view the pages as local files; we need to serve them. Sure, you could fire up W/MAMP, but why go to the trouble? Jekyll has a built-in server. So, run jekyll --server, and go to localhost:4000 to see something like the image below:

Tutorial Image

If the image above isn’t enough, look at the code of _site/index.html. You’ll see that the template we specified was blended with the content we provided and—voila!—we have our page.

I want to remind you that it’s the YAML front matter that makes this magic happen; if a file doesn’t start with three dashes, one or more lines of properties, and another line of three dashes, the file will just be copied to the _site folder, no generation taking place.


Step 3: Creating a Portfolio Template

Now that we’re comfortable with the basics, let’s create a portfolio for our fictional photographer. Remember how I noted that Jekyll is “blog aware”? Well, we’re going to use this blog-awareness feature to our advantage: instead of posts, we’ll have portfolio entries.

Posts belong in a folder, called _posts, so create that now. The file name pattern for posts must be specific as well: year-month-day-title.ext. Posts — well, any file in your Jekyll site, really — can be either Markdown or HTML.
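To make the convention concrete, here’s a small Ruby sketch (illustrative only; Jekyll does this parsing for you) of how such a file name breaks down into post metadata:

```ruby
# Illustrative: how the year-month-day-title.ext naming convention maps
# to a post's date and URL slug. Jekyll performs this parsing internally.
POST_NAME = /\A(\d{4})-(\d{2})-(\d{2})-(.+)\.(md|markdown|html)\z/

def parse_post_name(filename)
  match = POST_NAME.match(filename)
  return nil unless match                       # not a valid post file name
  { year: match[1], month: match[2], day: match[3], slug: match[4] }
end

parse_post_name("2010-03-04-bikes.md")[:slug]   # => "bikes"
```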

So let’s make a few posts: remember, these will actually be entries in our portfolio:

_posts/2010-03-04-bikes.md

---
layout: portfolio_entry
image: /img/bikes.jpg
title: Bikes, Black and White
---
Bikes are used by almost everyone in downtown Amsterdam. These ones are chained to a bike rack.

_posts/2010-10-01-wing.md

---
layout: portfolio_entry
title: Wing and a Prayer
image: /img/wing.jpg
---
The wing of the AirBus I rode to England.

_posts/2011-06-05-bridge.md

---
layout: portfolio_entry
title: Stone Bridge
image: /img/bridge.jpg
---
An old stone bridge in London.

_posts/2011-07-09-road.md

---
layout: portfolio_entry
title: Road and Curb
image: /img/road.jpg
---
Bike lanes here are terribly thin.

Pretty simple, eh? Notice how we’re creating a custom YAML front matter field: image. This is the URL to the image for that entry. Sure, we could build the whole entry HTML here in this file, but what if we want to change that? We’d have to return and change it in every entry. This way, we can instead use our portfolio_entry template to render them. What’s that template look like? It’s pretty simple too:

_layouts/portfolio_entry.html

---
layout: default
---

<h2 class="content">{{page.title}}</h2>

<img src="{{ page.image }}" />

{{ content }}

If you’ve looked at the template data page, you’ll know that any custom front matter we add will be available under page; so, here, we can access page.image. We’re also using page.title and content (everything after the last three-dash line).

I should mention here that, while the post title is supposed to be available on the post object, I’ve only been able to get it to work on the page object. Whatever works!

Also, notice that we have this template using our default layout. You can nest templates like that, and make your job even easier.

This gives us our entry (post) pages, but what about the main portfolio page? When writing our navigation in our default layout, I noted that we want it as /portfolio/. So, create a folder, called portfolio in the root directory, and open an index.html file within it.

---
layout: default
title: Portfolio
---

<section class="content">
  <h2>Portfolio</h2>

  <p>Check out my images below!</p>
</section>

<ul class="entries">
  {% for post in site.posts %}

  <li>
    <a href="{{ post.url }}">
      <img src="{{ post.image }}" />
      <h3>{{ post.title }}</h3>
    </a>
  </li>

  {% endfor %}
</ul>

This is our most complicated piece yet. Remember, this isn’t a template: it’s a “normal” file, but it can still include Liquid tags. We start by setting layout to default, and title to “Portfolio.”

Notice that, in the HTML, we have a Liquid for-in loop. We retrieve all the posts with site.posts; then, we loop over those posts with for post in site.posts / endfor. If you’ve worked with WordPress, or any other blogging system, you should be familiar with the concept of a loop. That’s all this is! Inside, as you can see, we can get the standard properties, as well as any front matter we defined (like image).

Now if we run jekyll --server to re-generate the site and start the server, localhost:4000/portfolio/ should show this:

Tutorial Image

And here’s an entry page:

Tutorial Image

Great! You’ve created a portfolio. I’m sure you see, as well, how this works for a blog. Let’s now move on to look at some configuration options for Jekyll.


Step 4: Writing a Config File

There’s a plethora of options for Jekyll. It’s great that all of them have really sensible defaults, but if you want to change them, it’s not hard at all.

There are two ways to set options.

  • First, when you run the program on the command line, you can pass parameters. We’ve already seen the --server parameter, which starts a server after generating the site.
  • A different way, and the way we’ll use here, is in a config file, called _config.yml; this is a YAML file, so each line is a key: value pair, just like in the YAML front matter. Jekyll will look for this file before generating the site.

So, make a _config.yml file, and let’s check out some of the most common options.

For a complete list of options, review the configuration documentation.

  • auto: Adding auto: true to your config file will keep Jekyll running, watching your project folder for changes and regenerating the site on the fly.
  • source: If your source files are in a different directory than the one you’re running Jekyll from, you’ll want to set that directory with the source property.
  • destination: By default, the destination for your generated site is ./_site. If you’d like something different, set it here.
  • permalink: The permalink is the path to your posts. By default, that’s /year/month/day/title.html. However, you can customize that if you want. Among others, you can use the variables :year, :month, :day, :title, and :categories. :categories comes from the front matter; all the others come from the post file name. Then, you can set permalink to things like /:year/:month/:title/ or /:categories/:title.html. Bonus tip: if you have a permalink property in the post front matter, it will override the site-wide default.
  • exclude: Like I said above, Jekyll won’t generate files in directories starting with an underscore. But, if you have folders that you want it to ignore, but that don’t start with an underscore, you can do it with exclude in your config file.
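Pulling a few of those options together, a small _config.yml might look like this (all values here are examples, not requirements):

```yaml
auto: true                        # regenerate whenever a file changes
destination: ./_site              # this is already the default
permalink: /:year/:month/:title/  # custom post URLs
exclude: ["drafts", "scratch"]    # skip these folders without renaming them
```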

Step 5: Deploying the Site

So, let’s say you’ve created the site, and want to set it free, out on the world wide web. How do you do that?

There are several ways to accomplish this. Of course, if it’s a small site that you won’t be updating too often, then simply FTP it up to your server; this might be your only option, if you’re using shared hosting.

If you’ve got a VPS or dedicated hosting setup, you can automate things further. Check out the deployment documentation for a list of good ideas. If you aren’t sure what to do, try following the directions for using the git post-receive hook; I’ve tried that, and it’s pretty cool.


Step 6: Taking it Further

This is just the tip of the iceberg with Jekyll.


Conclusion

Well, that’s your introduction to Jekyll – the simple, blog aware, static site generator. The next time you’re building a brochure-style, business-card-y, micro-portfolio site, think you’ll give Jekyll a try? Let me know in the comments and thank you so much for reading!



October 10 2011

22:20

The Intro to Rails Screencast I Wish I Had

Isn’t it funny how most “introduction to Ruby on Rails” screencasts are overly simplistic, and rely on generators like scaffolding? The teacher typically follows up the tutorial by stating that most Rails developers don’t use scaffolding generators. Well, that’s not much help then! I’d like to give you the tutorial I wish I had. Along the way, we’ll also rely heavily on test-driven development to build a simple app.

Choose 720p for the clearest picture.

Covered in this Screencast…

  • Create models and generators
  • Use test-driven development to plan and test an application’s features
  • Work with ActiveRecord
  • Autotest with Guard
  • Use RSpec and Capybara to simulate the user
  • Create partials
  • Take advantage of Flash notices
  • …and plenty more

Conclusion

If you watched the entire screencast, I hope you enjoyed it! There’s certainly much more to cover, but we crammed a great deal into thirty minutes or so! What other tricks and techniques have you picked up, if you’re just digging into Rails?


October 03 2011

20:22

How to Use Omniauth to Authenticate your Users

I hate signing up for websites. I’ve already signed up for so many, using different usernames, that going back to one of them and trying to remember my credentials is sometimes impossible. These days, most sites have begun offering alternative ways to sign up, by allowing you to use your Facebook, Twitter or even your Google account. Creating such an integration sometimes feels like a long and arduous task. But fear not, Omniauth is here to help.

Omniauth allows you to easily integrate more than sixty authentication providers, including Facebook, Google, Twitter and GitHub. In this tutorial, I’m going to explain how to integrate these authentication providers into your app.


Step 1: Preparing your Application

Let’s create a new Rails application and add the necessary gems. I’m going to assume you’ve already installed Ruby and Ruby on Rails 3.1 using RubyGems.

rails new omniauth-tutorial

Now open your Gemfile and reference the omniauth gem.

gem 'omniauth'

Next, per usual, run the bundle install command to install the gem.


Step 2: Creating a Provider

In order to add a provider to Omniauth, you will need to sign up as a developer on the provider’s site. Once you’ve signed up, you’ll be given two strings (sort of like a username and a password) that need to be passed on to Omniauth. If you’re using an OpenID provider, then all you need is the OpenID URL.

If you want to use Facebook authentication, head over to developers.facebook.com/apps and click on “Create New App”.

Facebook New App

Fill in all necessary information, and once finished, copy your App’s ID and Secret.

Facebook Secret

Configuring Twitter is a bit more complicated on a development machine, since they don’t allow you to use “localhost” as a domain for callbacks. Configuring your development environment for this kind of thing is outside of the scope of this tutorial, however, I recommend you use Pow if you’re on a Mac.


Step 3: Add your Providers to the App

Create a new file under config/initializers called omniauth.rb. We’re going to configure our authentication providers through this file.

Paste the following code into the file we created earlier:

Rails.application.config.middleware.use OmniAuth::Builder do
  provider :facebook, YOUR_APP_ID, YOUR_APP_SECRET
end

This is honestly all the configuration you need to get this going. The rest is taken care of by Omniauth, as we’re going to find in the next step.
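One note: rather than hard-coding the ID and secret, a common practice is to read them from environment variables so they stay out of version control (the variable names below are illustrative):

```ruby
# Same initializer, with credentials pulled from the environment.
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :facebook, ENV['FACEBOOK_APP_ID'], ENV['FACEBOOK_APP_SECRET']
end
```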


Step 4: Creating the Login Page

Let’s create our sessions controller. Run the following command in your terminal to create a sessions controller with new, create, and failure actions.

rails generate controller sessions new create failure

Next, open your config/routes.rb file and add this:

get   '/login', :to => 'sessions#new', :as => :login
match '/auth/:provider/callback', :to => 'sessions#create'
match '/auth/failure', :to => 'sessions#failure'

Let’s break this down:

  • The first line is used to create a simple login form where the user will see a simple “Connect with Facebook” link.
  • The second line is to catch the provider’s callback. After a user authorizes your app, the provider redirects the user to this url, so we can make use of their data.
  • The last one will be used when there’s a problem, or if the user didn’t authorize our application.

Make sure you delete the routes that were created automatically when you ran the rails generate command. They aren’t necessary for our little project.

Open your app/controllers/sessions_controller.rb file and write the create method, like so:

def create
  auth_hash = request.env['omniauth.auth']

  render :text => auth_hash.inspect
end

This is used to make sure everything is working. Point your browser to localhost:3000/auth/facebook and you’ll be redirected to Facebook so you can authorize your app (pretty cool, huh?). Authorize it, and you will be redirected back to your app, where you’ll see a hash with some information. Among other things, it includes your name, your Facebook user id, and your email.
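To make the later steps easier to follow, here is a trimmed-down sketch of the shape of that hash as this tutorial uses it (real hashes carry far more data, and key names vary between Omniauth versions):

```ruby
# Illustrative only: the subset of Omniauth's auth hash this tutorial reads.
auth_hash = {
  "provider"  => "facebook",
  "uid"       => "1234567890",
  "user_info" => {
    "name"  => "John Doe",
    "email" => "john@example.com"
  }
}

auth_hash["provider"]            # which provider authenticated the user
auth_hash["user_info"]["name"]   # used later to build the User record
```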


Step 5: Creating the User Model

The next step is to create a user model so users may sign up using their Facebook accounts. In your terminal, generate the new model.

rails generate model User name:string email:string

For now, our user model will only have a name and an email. With that out of the way, we need a way to recognize the user the next time they log in. Keep in mind that we don’t have any fields on our user’s model for this purpose.

The idea behind an application like the one we are trying to build is that a user can choose between using Facebook or Twitter (or any other provider) to sign up, so we need another model to store that information. Let’s create it:

rails generate model Authorization provider:string uid:string user_id:integer

A user will have one or more authorizations, and when someone tries to log in using a provider, we simply look through the authorizations in the database for one that matches the uid and provider fields. This way, we also enable users to have many providers, so they can later log in using Facebook, or Twitter, or any other provider they have configured!

Add the following code to your app/models/user.rb file:

has_many :authorizations
validates :name, :email, :presence => true

This specifies that a user may have multiple authorizations, and that the name and email fields in the database are required.

Next, to your app/models/authorization.rb file, add:

belongs_to :user
validates :provider, :uid, :presence => true

Within this model, we designate that each authorization is bound to a specific user. We also set some validation as well.


Step 6: Adding a Bit of Logic to our Sessions Controller

Let’s add some code to our sessions controller so that it logs a user in or signs them up, depending on the case. Open app/controllers/sessions_controller.rb and modify the create method, like so:

def create
  auth_hash = request.env['omniauth.auth']

  @authorization = Authorization.find_by_provider_and_uid(auth_hash["provider"], auth_hash["uid"])
  if @authorization
    render :text => "Welcome back #{@authorization.user.name}! You have already signed up."
  else
    user = User.new :name => auth_hash["user_info"]["name"], :email => auth_hash["user_info"]["email"]
    user.authorizations.build :provider => auth_hash["provider"], :uid => auth_hash["uid"]
    user.save

    render :text => "Hi #{user.name}! You've signed up."
  end
end

This code clearly needs some refactoring, but we’ll deal with that later. Let’s review it first:

  • We check whether an authorization exists for that provider and that uid. If one exists, we welcome our user back.
  • If no authorization exists, we sign the user up. We create a new user with the name and email that the provider (Facebook in this case) gives us, and we associate an authorization with the provider and the uid we’re given.

Give it a test! Go to localhost:3000/auth/facebook and you should see “You’ve signed up”. If you refresh the page, you should now see “Welcome back”.


Step 7: Enabling Multiple Providers

The ideal scenario would be to allow a user to sign up using one provider, and later add another provider so he can have multiple options to login with. Our app doesn’t allow that for now. We need to refactor our code a bit. Change your sessions_controller.rb’s create method to look like this:

def create
  auth_hash = request.env['omniauth.auth']

  if session[:user_id]
    # Means our user is signed in. Add the authorization to the user
    User.find(session[:user_id]).add_provider(auth_hash)

    render :text => "You can now login using #{auth_hash["provider"].capitalize} too!"
  else
    # Log him in or sign him up
    auth = Authorization.find_or_create(auth_hash)

    # Create the session
    session[:user_id] = auth.user.id

    render :text => "Welcome #{auth.user.name}!"
  end
end

Let’s review this:

  • If the user is already logged in, we’re going to add the provider they’re using to their account.
  • If they’re not logged in, we’re going to try and find a user with that provider, or create a new one if it’s necessary.

In order for the above code to work, we need to add some methods to our User and Authorization models. Open user.rb and add the following method:

def add_provider(auth_hash)
  # Check if the provider already exists, so we don't add it twice
  unless authorizations.find_by_provider_and_uid(auth_hash["provider"], auth_hash["uid"])
    Authorization.create :user => self, :provider => auth_hash["provider"], :uid => auth_hash["uid"]
  end
end

If the user doesn’t already have this provider associated with their account, we’ll go ahead and add it — simple. Now, add this method to your authorization.rb file:

def self.find_or_create(auth_hash)
  unless auth = find_by_provider_and_uid(auth_hash["provider"], auth_hash["uid"])
    user = User.create :name => auth_hash["user_info"]["name"], :email => auth_hash["user_info"]["email"]
    auth = create :user => user, :provider => auth_hash["provider"], :uid => auth_hash["uid"]
  end

  auth
end

In the code above, we attempt to find an authorization that matches the request, and if unsuccessful, we create a new user.

If you want to try this out locally, you’ll need a second authentication provider. You could use Twitter’s OAuth system, but, as I pointed out before, you’re going to need to use a different approach, since Twitter doesn’t allow using “localhost” as the callback URL’s domain (at least it doesn’t work for me). You could also try hosting your code on Heroku, which is perfect for a simple site like the one we’re creating.


Step 8: Some Extra Tweaks

Lastly, we need to, of course, allow users to log out. Add this piece of code to your sessions controller:

def destroy
  session[:user_id] = nil
  render :text => "You've logged out!"
end

We also need to create the applicable route (in routes.rb).

get '/logout', :to => 'sessions#destroy'

It’s as simple as that! If you browse to localhost:3000/logout, your session should be cleared, and you’ll be logged out. This will make it easier to try multiple accounts and providers. We also need to add a message that displays when users deny access to our app. If you remember, we added this route near the beginning of the tutorial. Now, we only need to add the method in the sessions controller:

def failure
  render :text => "Sorry, but you didn't allow access to our app!"
end

And last but not least, create the login page, where the user can click on the “Connect With Facebook” link. Open app/views/sessions/new.html.erb and add:

<%= link_to "Connect With Facebook", "/auth/facebook" %>

If you go to localhost:3000/login you’ll see a link that will redirect you to the Facebook authentication page.


Conclusion

I hope this article has provided you with a brief example of how Omniauth works. It’s a considerably powerful gem, and allows you to create websites that don’t require users to sign up, which is always a plus! You can learn about Omniauth on GitHub.

Let us know if you have any questions!


September 30 2011

18:41

How to Deploy your First Rails App: New on Premium

In this Premium video tutorial, we’ll teach you how to deploy your first Rails web application with Heroku. Particularly if you’re somewhat new to Rails, you may find that you’re met with a long string of errors when pushing your app and database to their servers. Don’t worry, though; this screencast will take you through the entire process from scratch.

Become a Premium member to watch this video, as well as hundreds of other advanced tutorials and screencasts from the Tuts+ network.


You’ll Learn How To…

  • Work with models and controllers
  • Adjust routes
  • Source control with Git
  • Install and work with Heroku
  • Convert a Sqlite3 database to PostgreSQL
  • Debug Heroku errors


September 15 2011

21:04

The Best Way to Learn Ruby on Rails

I come from a PHP background, but these days, I’m a full-time Rails developer. The difficulty for most people who make the switch lies in the learning curve that’s involved. Once you’ve become proficient in a language or framework, switching to a new one feels like an unnecessary (and time-consuming) challenge.

However, learning Ruby and Rails is actually pretty easy! It’s an incredibly powerful framework, and has a huge community pushing it along. So that raises the question: what’s the best way to learn Rails? Here’s a lesson plan.

This article details a full lesson plan that will get you up and running in no time! Follow each assignment below in order.


Assignment 1 - Work Through the “Try Ruby” Exercises

You might think learning Ruby is the most important step, and this is probably the part where everyone stops learning and just sticks with their current framework. Don’t do that! Don’t be scared to learn this language (or any language, really). Ruby is a fun, beautiful, and easy-to-learn language. Plus, you don’t really need to master the language 100% to be able to learn Rails. You just need the basics to begin.

Tryruby

Work through the Try Ruby exercises.

The most recommended tool for dipping into Ruby’s syntax is the Try Ruby website. It provides you with an interactive, web-based shell to play with the language. If you type help, you’ll be greeted with an interactive, 15 minute tutorial that’ll teach you the very basics of the language. Typing help 2 will take you to the second chapter of the tutorial.

Once you’ve worked through these exercises a couple of times, you’ll have a solid base. Personally, completing those two tutorials is all I initially did. The rest I’ve learned by creating web sites with Rails, Googling, and playing in IRB.

Assignment 2 - Install Ruby And Ruby on Rails

Run gem install rails to install Rails.

If you want to learn Rails, you’ll of course need to install it on your machine. There are multiple ways to do this, depending on your platform. If you’re on a Mac or a Linux-based computer, I recommend you use RVM. It’s a great tool to install Ruby. Refer to this article for installation instructions. If you’re on Windows, refer to RubyInstaller for Windows.

Next, you need to install Rails. To do so you should have RubyGems installed on your computer. If you used RVM to install Ruby, congratulations, you already have it! If you’re on Windows, refer here. To install Rails, run gem install rails, and, tada, you’re all set up!

Bonus Credit

rbenv is another increasingly popular method for installing Ruby. Research that as well.


Assignment 3 - Take The Rails For Zombies Course

Zombies

I’ve always thought that learning by example is the best (and only) way to learn a language and a framework. A free and incredible course that you must take is Rails For Zombies, from the folks at EnvyLabs. It’s interactive, which means that, after watching each video, you’ll work through a handful of exercises.


Assignment 4 - Read A Book

Agile Rails

Purchase Agile Web Development with Rails.

Up until now, every tool I’ve given you has been free and interactive. Now, you need to purchase the book on Rails: Agile Web Development with Rails. It teaches you Rails by means of building a website from scratch. It goes through the very basics, creating controllers, models, scaffolding and even unit and functional testing, plus some AJAX! Be sure to buy the latest edition though!

Bonus Credit

Here’s some more popular Rails books that you might consider.


Assignment 5 - Build A Simple Blog

I know it sounds boring (it does to me), but it’s a great example and is used everywhere for a reason: it’s simple, you can finish that kind of tutorial in under an hour, and you learn the basics very fast. Now, if you read the book from assignment four, you should already have enough knowledge to build something like this. I encourage you to try and build one by yourself, from memory, without checking any tutorials on the subject. You can check some methods, look at how to validate some fields, etc., but the idea is that you need to put your knowledge to the test.

Blog Post

Build a simple blog that has support for comments. You don’t need to implement a fancy authentication system; just something simple to get started.

If You Do Get Stuck…


Assignment 6 - Add Some Complexity To Your Blog

Great! You’ve built yourself a basic blog. It certainly lacks some basic functionality one would expect from a blogging platform, but it’s a start. Let’s put more work into it.

Auth

Add a basic authentication system to your blog. This might sound somewhat complicated, but I won’t force you to do this completely on your own. There are a number of tools/gems available that will make the job a lot simpler, but they’re too advanced for your current skill level. Plus, we need you to keep learning the inner workings of a Rails application before relying on third-party tools.

Watch this screencast by Ryan Bates, where he covers the entire process of creating a basic authentication system from scratch. Apply that to your blog and make sure only you are able to create and edit posts, and delete comments (but make sure everyone can create comments!).

Bonus Credit

  • Form validation is equally important, and isn’t difficult to implement. Refer to this screencast to make the task easier.
  • Refer to this article for more information on model associations in Rails. You can use the techniques outlined to add tagging to your blog.

Assignment 7 - Create Something On Your Own

This is where you need to get creative. For me, learning a language or framework by following tutorials works well up to a point. The first thing I created after I finished my sample blogging applications was an invoicing application. I found a simple design template online and started working on that. That allowed me to build something useful and according to my needs, while having actual fun!

If you want to follow in my footsteps, download the Twitter Bootstrap web app template (if you wish), so that you may spend more time programming than designing, and build yourself an invoicing application. Try to include authentication, clients, invoices (which should let the user apply a discount and taxes), and invoice items (which should include a description, quantity and price). This will teach you various important parts of a Rails application, such as nested forms.
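On the model side, “nested forms” rest on accepts_nested_attributes_for. Here’s a sketch of how the invoice models might look (field names are illustrative, not prescribed):

```ruby
# app/models/invoice.rb -- an invoice is made up of several line items
class Invoice < ActiveRecord::Base
  belongs_to :client
  has_many :invoice_items, :dependent => :destroy
  # Lets the invoice form create/update its items in a single submit.
  accepts_nested_attributes_for :invoice_items, :allow_destroy => true
end

# app/models/invoice_item.rb -- description, quantity and price per line
class InvoiceItem < ActiveRecord::Base
  belongs_to :invoice
  validates :description, :presence => true
  validates :quantity, :price, :numericality => { :greater_than => 0 }
end
```

The invoice form then uses fields_for to render the nested items, which is exactly what the Nested Model Form screencasts listed below walk through.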

Resources to Get you Started

  • Nested Model Form Parts One and Two: You’ll need this the moment you create an invoice, since invoice items will be “nested” inside the invoice form. Watch the videos to learn more.
  • Simple Invoicing App: This is the invoicing app I created to learn Rails. It’s built with Rails 3.0.4 though, so keep that in mind. It might be handy to check the models, associations, and JavaScript files.

Assignment 8 - Stay Up To Date …

Learning to program well is a never-ending process. That’s why you should subscribe to various blogs and Twitter accounts on the subject. Here’s a list of influential Rails people for you to follow on Twitter:

  • @dhh: The creator of Ruby on Rails. Posts insight on new releases and various tips.
  • @rbates: Producer of Railscasts, posts tips and tricks very often.
  • @rails: The official Ruby on Rails Twitter account. Follow them if you want insight on release dates, and development.
  • @rails_apps: They post new Rails example apps pretty often; quite useful.

And here are some websites you should subscribe to:

  • Railscasts: A must! Awesome screencasts about Ruby on Rails, best practices, tips, new gems, etc.
  • Asciicasts: If you’re not into watching videos, this site mimics Railscasts’ tutorials in text. It’s not as up-to-date as the first one, but it’s excellent.
  • Nettuts+: While we’re not 100% focused on Ruby and Rails, we’re posting more and more applicable content these days!
  • The Rails Way: Cool blog with various articles and tutorials.
  • Riding Rails: The official Ruby on Rails blog. They don’t post many tips or tutorials, but they announce official releases, so it’s good to read it often.
  • Planet Ruby on Rails: A Ruby on Rails blog aggregator. It’s not as filtered as reading one of the blogs I listed above, but it’s a good resource to check every once in a while.

Conclusion

This lesson plan will turn you into a Rails developer in no time. As you’ll find, Rails is an amazing framework, and it continues to get better every single year. I learned Rails using the same tools I recommended above; you should too!

If you have any questions, feel free to ask them in the comments or tweet me. And feel free to share your experience! Tell us how you learned Rails, what tools you used, and how you’re using it today.


Tags: Ruby basix rails

September 12 2011

13:00

Create Beautiful Administration Interfaces with Active Admin

Every web developer knows that creating an administration interface for their projects is an incredibly tedious task. Luckily, there are tools that make this task considerably simpler. In this tutorial, I’ll show you how to use Active Admin, a recently launched administration framework for Ruby on Rails applications.

You can use Active Admin to add an administration interface to your current project, or you can even use it to create a complete web application from scratch – quickly and easily.

Today, we’ll be doing the latter, by creating a fairly simple project management system. It might sound like quite a bit of work, but Active Admin will do the bulk of the work for us!


Step 1 – Set up the Development Environment

I’m going to assume you have some previous Ruby on Rails knowledge, especially involving model validations, since the rest of the application interface is going to be taken care of by Active Admin. Apart from that, you should have a development environment for Ruby on Rails 3.1 already set up, including Ruby 1.9.2.

Refer to this article if you require assistance installing Ruby and Rails.

Create the application we’ll be working on, by running the following command in your Terminal:

rails new active_admin

Next, open your Gemfile and add the following lines:

gem 'activeadmin'
gem 'meta_search', '>= 1.1.0.pre'

The last gem is required for Active Admin to work with Rails 3.1, so don’t forget it. After that’s done, run the bundle install command to install the gems. Now, we need to finish installing Active Admin, by running the following command:

rails generate active_admin:install

This will generate all needed initializers and migrations for Active Admin to work. It will also create an AdminUser model for authentication, so run rake db:migrate to create all the needed database tables. Apart from that, you need to add one line to your config/environments/development.rb file, so sending emails works:

config.action_mailer.default_url_options = { :host => 'localhost:3000' }

Once that’s done, run rails server and point your browser to localhost:3000/admin. You’ll be greeted with a nice login form. Just type “admin@example.com” as the email and “password” as the password, and hit “Login”. You should now see a nice administration interface.


Step 2 – Configuring our User Model

As you can see from the webpage you just generated, there’s not much you can do, yet. We’re going to need a way to edit our users, and we can do that using Active Admin. The framework uses what it calls “Resources”. Resources map models to administration panels. You need to generate them with a command in your terminal so that Active Admin knows they exist, so go ahead and run:

rails generate active_admin:resource AdminUser

The syntax for that command is simple: just write the database model’s name at the end. This will generate a file inside the app/admin folder, called admin_users.rb. Now, if you refresh your browser you’ll see a new link at the top bar, called “Admin Users”. Clicking that will take you to the Admin User administration panel. Now, it’ll probably look a little too cluttered, since by default, Active Admin shows all of the model’s columns, and considering that the framework uses Devise for authentication, you’ll see a bunch of columns that are not really necessary. This takes us to the first part of our customization: the index page.


Customizing Active Admin resources is fairly easy (and fun if you ask me). Open app/admin/admin_users.rb on your favorite text editor and make it look like this:

ActiveAdmin.register AdminUser do
  index do
    column :email
    column :current_sign_in_at
    column :last_sign_in_at
    column :sign_in_count
    default_actions
  end
end

Let’s review the code:

  • The first line is created by Active Admin, and, like it says, it registers a new resource. This created the menu link at the top bar and all of the default actions, like the table you just saw.
  • The index method allows us to customize the index view, which is the table that shows all rows.
  • Inside of the block you pass to the index method, you specify which columns should appear on the table, e.g. writing column :email will have Active Admin show that column in the view.
  • default_actions is a convenience method that creates one last column with links to view, edit and delete the row.

One final step for this view is to customize the form. If you click the “New Admin User” link on the top right, you’ll see that the form also contains all of the columns on the model, which is obviously not very useful. Since Active Admin uses Devise, we only need to enter an email address to create a user, and the rest should be taken care of by the authentication gem. To customize the forms that Active Admin displays, there’s a method, called (you guessed it) form:

ActiveAdmin.register AdminUser do
  index do
    # ...
  end

  form do |f|
    f.inputs "Admin Details" do
      f.input :email
    end
    f.buttons
  end
end

If the code looks familiar to you, you’ve probably used the Formtastic gem before. Let’s take a look at the code:

  • You specify the form’s view by calling the form method and passing it a block with an argument (f in this case).
  • f.inputs creates a fieldset. Word of advice: you have to add at least one fieldset. Fields outside of one will simply not appear on the view.
  • To create a field, you simply call f.input and pass a symbol with the name of the model’s column, in this case, “email”.
  • f.buttons creates the “Submit” and “Cancel” buttons.

You can further customize the forms using the DSL (Domain Specific Language) provided by Formtastic, so take a look at tutorials about this gem.

One last step for this form to work: since we’re not providing a password, Devise is not going to let us create the record, so we need to add some code to the AdminUser model:

after_create { |admin| admin.send_reset_password_instructions }

def password_required?
  new_record? ? false : super
end

The after_create callback makes sure Devise sends the user a link to create a new password, and the password_required? method will allow us to create a user without providing a password.

Go try it out. Create a user, and then check your email for a link, which should let you create a new password and log you into the system.


Step 3 – Projects

We are going to create a simple project management system. Nothing too complicated: just something that will let us manage projects and their tasks, and assign tasks to certain users. The first thing to do is create a project model:

rails generate model Project title:string

Active Admin relies on Rails’ models for validation, and we don’t want projects with no title, so let’s add some validations to the generated model:

# app/models/project.rb
validates :title, :presence => true

Now, we need to generate an Active Admin resource, by running:

rails generate active_admin:resource Project

For now, that’s all we need for projects. After migrating your database, take a look at the interface that you just created. Creating a project with no title fails, which is what we expected. See how much you accomplished with just a few lines of code?


Step 4 – Tasks

Projects aren’t very useful without tasks, right? Let’s add that:

rails generate model Task project_id:integer admin_user_id:integer title:string is_done:boolean due_date:date

This creates a task model that we can associate with projects and users. The idea is that a task is assigned to someone and belongs to a project. We need to set those relations and validations in the model:

class Task < ActiveRecord::Base
  belongs_to :project
  belongs_to :admin_user

  validates :title, :project_id, :admin_user_id, :presence => true
  validates :is_done, :inclusion => { :in => [true, false] }
end

Remember to add the relations to the Project and AdminUser models as well:

class AdminUser < ActiveRecord::Base
  has_many :tasks

  # ...
end
class Project < ActiveRecord::Base
  has_many :tasks

  # ...
end

Migrate the database, and register the task model with Active Admin:

rails generate active_admin:resource Task

Now go and take a look at the tasks panel in your browser. You just created a project management system! Good job.


Step 5 – Making It Even Better

The system we just created isn’t too sophisticated. Luckily, Active Admin is not just about creating a nice scaffolding system, it gives you far more power than that. Let’s start with the Projects section. We don’t really need the id, created and updated columns there, and we certainly don’t need to be able to search using those columns. Let’s make that happen:

index do
  column :title do |project|
    link_to project.title, admin_project_path(project)
  end

  default_actions
end

# Filter only by title
filter :title

A few notes here:

  • When you specify columns, you can customize what is printed on every row. Simply pass a block with an argument to it, and return whatever you want in there. In this case, we are printing a link to the project’s detail page, which is easier than clicking the “View” link on the right.
  • The filters on the right are also customizable. Just add a call to filter for every column you want to be able to filter with.

The project’s detail page is a little boring, don’t you think? We don’t need the date columns and the id here, and we could show the list of tasks more directly. Customizing the detail page is accomplished by using the show method in the app/admin/projects.rb file:

show :title => :title do
  panel "Tasks" do
    table_for project.tasks do |t|
    t.column("Status") { |task| status_tag(task.is_done ? "Done" : "Pending", task.is_done ? :ok : :error) }
      t.column("Title") { |task| link_to task.title, admin_task_path(task) }
      t.column("Assigned To") { |task| task.admin_user.email }
      t.column("Due Date") { |task| task.due_date? ? l(task.due_date, :format => :long) : '-' }
    end
  end
end

Let’s walk through the code:

  • show :title => :title specifies the title the page will have. The second :title specifies the model’s column that will be used.
  • By calling panel "Tasks" we create a panel with the given title. Within it, we create a custom table for the project’s tasks, using table_for.
  • We then specify each column and the contents it should have for each row.
    • The “Status” column will contain “Done” or “Pending”, depending on whether the task is done or not. status_tag is a method that renders the word passed with a very nice style, and you can define the color by passing a second argument: :ok, :warning or :error, for the colors green, orange and red respectively.
    • The “Title” column will contain a link to the task so we can edit it.
    • The “Assigned To” column just contains the email of the person responsible.
    • The “Due Date” will contain the date the task is due, or just a “-” if there’s no date set.

Step 6 – Some Tweaks for the Tasks

How about an easy way to filter tasks that are due this week? Or tasks that are late? That’s easy! Just use a scope. In the tasks.rb file, add this:

scope :all, :default => true
scope :due_this_week do |tasks|
  tasks.where('due_date > ? and due_date < ?', Time.now, 1.week.from_now)
end
scope :late do |tasks|
  tasks.where('due_date < ?', Time.now)
end
scope :mine do |tasks|
  tasks.where(:admin_user_id => current_admin_user.id)
end

Let’s review that code:

  • scope :all defines the default scope, showing all rows.
  • scope accepts a symbol for the name, and you can pass a block with an argument. Inside the block you can refine a search according to what you need. You can also define the scope inside the model and simply name it the same as in this file.
  • As you can see, you can access the current logged in user’s object using current_admin_user.
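In plain Ruby, the date boundaries those scopes express in SQL look like this (a hypothetical helper of mine, just to make the cut-off points explicit):

```ruby
# Classify a due date relative to "now", using the same boundaries as the
# :late and :due_this_week scopes above.
def due_status(due_date, now = Time.now)
  return :late          if due_date < now
  return :due_this_week if due_date < now + 7 * 24 * 3600 # within one week
  :upcoming
end

now = Time.now
due_status(now - 60)             # => :late
due_status(now + 24 * 3600)      # => :due_this_week
due_status(now + 30 * 24 * 3600) # => :upcoming
```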

Check it out! Just above the table, you’ll see some links, which show how many tasks there are per scope and let you quickly filter the list. You should further customize the table and search filters, but I’ll leave that task to you.

We’re now going to tweak the task’s detail view a bit, since it looks rather cluttered:

show do
  panel "Task Details" do
    attributes_table_for task do
      row("Status") { status_tag(task.is_done ? "Done" : "Pending", task.is_done ? :ok : :error) }
      row("Title") { task.title }
      row("Project") { link_to task.project.title, admin_project_path(task.project) }
      row("Assigned To") { link_to task.admin_user.email, admin_admin_user_path(task.admin_user) }
      row("Due Date") { task.due_date? ? l(task.due_date, :format => :long) : '-' }
    end
  end

  active_admin_comments
end

This will show a table for the attributes of the model (hence the method’s name, attributes_table_for). You specify the model, in this case task, and in the block passed, you define the rows you want to show. It’s roughly the same as what we defined for the project’s detail page, only for the task. You may be asking yourself: what’s that active_admin_comments method call for? Well, Active Admin provides a simple commenting system for each model. I enabled it here because commenting on a task could be very useful to discuss functionality, or something similar. If you don’t call that method, comments will be hidden.


There’s another thing I’d like to show when viewing a task’s detail, and that’s the rest of the assignee’s tasks for that project. That’s easily done using sidebars!

sidebar "Other Tasks For This User", :only => :show do
  table_for current_admin_user.tasks.where(:project_id => task.project) do |t|
    t.column("Status") { |task| status_tag(task.is_done ? "Done" : "Pending", task.is_done ? :ok : :error) }
    t.column("Title") { |task| link_to task.title, admin_task_path(task) }
  end
end

This creates a sidebar panel, titled “Other Tasks For This User”, which is shown only on the “show” page. It displays a table of the current_admin_user’s tasks whose project matches the one being shown (task here refers to the task on display, since this is the detail page for a single task). The rest is more or less the same as before: some columns with task details.


Step 7 – The Dashboard

You may have noticed, when you first launched your browser and logged into your app, that there was a “Dashboard” section. This is a fully customizable page where you can show nearly anything: tables, statistics, whatever. We’re just going to add the user’s task list as an example. Open up the dashboards.rb file and revise it, like so:

ActiveAdmin::Dashboards.build do
  section "Your tasks for this week" do
    table_for current_admin_user.tasks.where('due_date > ? and due_date < ?', Time.now, 1.week.from_now) do |t|
      t.column("Status") { |task| status_tag(task.is_done ? "Done" : "Pending", task.is_done ? :ok : :error) }
      t.column("Title") { |task| link_to task.title, admin_task_path(task) }
      t.column("Assigned To") { |task| task.admin_user.email }
      t.column("Due Date") { |task| task.due_date? ? l(task.due_date, :format => :long) : '-' }
    end
  end

  section "Tasks that are late" do
    table_for current_admin_user.tasks.where('due_date < ?', Time.now) do |t|
      t.column("Status") { |task| status_tag(task.is_done ? "Done" : "Pending", task.is_done ? :ok : :error) }
      t.column("Title") { |task| link_to task.title, admin_task_path(task) }
      t.column("Assigned To") { |task| task.admin_user.email }
      t.column("Due Date") { |task| task.due_date? ? l(task.due_date, :format => :long) : '-' }
    end
  end
end

The code should be fairly familiar to you. It essentially creates two sections (using the section method and a title), with one table each, which displays current and late tasks, respectively.


Conclusion

We’ve created an extensive application in very few steps. You may be surprised to know that Active Admin has plenty more features to offer, but it’s certainly not possible to cover them all in just one tutorial. If you’re interested in learning more about this gem, visit activeadmin.info.

You also might like to check out my project, called active_invoices on GitHub, which is a complete invoicing application made entirely with Active Admin. If you have any questions, feel free to ask them in the comments, or send me a tweet.

