
January 27 2014


How to Use New Relic Custom Dashboards & Why You’d Want To

Today we’re going to look at New Relic custom dashboards. Specifically, I will show you three ways that I tend to use custom dashboards:

  • creating an overview dashboard from existing charts
  • creating your own charts from existing data captured by New Relic
  • creating dashboards for your custom metrics

But, before we get into any of that, we must first ask ourselves a question: why should we use custom dashboards at all? New Relic does a pretty good job of presenting your application performance data. Just using the regular UI, you can glean way more information about the performance of your app than you ever could have before you started using New Relic. So, before I show you how to use custom dashboards, I’ll explain why I think anyone using New Relic should look at them sooner rather than later.

Why Use Custom Dashboards at All?

It’s true, most people happily use New Relic without ever looking at the custom dashboard functionality that it provides. It’s not until you’re a fairly advanced user that you may decide to look at custom dashboards and see what they have to offer you. I think this is a shame; playing around with custom dashboards can not only allow you to slice and dice your app performance data in different ways, but can also:

  • teach you quite a bit about how New Relic captures metrics
  • allow you to learn exactly what kind of data gets stored for the metrics that get captured
  • teach you about the limitations of New Relic charts

You can learn all those things by reading the documentation, but playing around with custom dashboards allows us to begin to understand some of these things by example (on a more intuitive level), rather than just knowing them as a bunch of bullet points. As is often the case with technology, tinkering with an unrelated area of a tool will sometimes give you more insight into how the more commonly used areas of the same tool work. Using custom dashboards will make you a more savvy New Relic user, and if you’ve read the other New Relic posts that I’ve written, you’ll know how I feel about knowing your tools.

Creating an Overview Dashboard from Existing Charts

The one custom dashboard I always like to build is what I call ’24 hours at a glance’. I take a bunch of existing charts that I consider important for a single application, lock the time period to the last 24 hours and put them all together on one dashboard. Whenever I am looking at a specific application in New Relic, this will be the first screen I look at to see if there is anything particularly bad that jumps out at me from the last 24 hours. Let’s see how we can build this kind of dashboard.

Firstly, we need to create a new custom dashboard. In the New Relic UI click on Dashboards->Create custom dashboard. On the next screen, we give our dashboard a name (you could use “24 hours at a glance”, or your own name) and pick the grid layout. Essentially, grid layout is a collection of charts all the same size and overview layout is one big chart surrounded by a bunch of smaller charts (we will use overview layout in the next section).


Now we need to select the app for which we want to create our dashboard and then find some relevant charts to add. I like to add the following:

  • server average response time
  • historical server average response time
  • browser average response time
  • error rate
  • throughput
  • historical throughput
  • application CPU usage by host
  • application memory usage by host
  • top five web transactions by wall clock time
  • downtime
  • top countries by throughput
  • top five database operations by wall clock time

Depending on your application you may want to add some others, but I find this gives me a fairly good snapshot of what’s happening with the application and what to investigate if anything is wrong. Let’s find one of these charts and add it to our new dashboard. The server average response time is an easy one, as it’s the big chart in the Overview section of the Monitoring tab for an application. Each chart in the New Relic UI has a little New Relic logo in the bottom right corner; when you hover your mouse over this logo it turns into a plus sign, and clicking on the plus will allow you to add the chart to a dashboard:


But before we add our chart, we need to change the time window for the New Relic UI to be 24 hours. We need to do this in order to give ourselves the ability to ‘lock’ the chart to the last 24 hours when we actually add it to the dashboard (this is awkward UX in my opinion, but at least we have a way to do what we need):


We can now go ahead and add the chart:


Don’t forget to tick the Lock to span checkbox. When we now visit our new dashboard, the chart we’ve just added should be there:


We can rinse and repeat the above process until we’ve added all the charts that we want. In the end, it should look something like this:


You can click the Edit dashboard button in the top right corner which will let you drag the charts around and arrange them in the order you want. The only thing to note is that you can’t modify the individual charts in any way (for example, you may have wanted to have a more descriptive chart title, but you can’t change it) since they are standard New Relic charts.

The other custom dashboard I always like to build from existing charts is the ‘All applications at a glance’. This is only applicable if you have multiple applications you’re looking after. Here we pick one or two of the most important charts for every relevant application and put them together. It’s usually a safe bet to use the ‘Response Time’ chart from each application. The actual process of putting the dashboards together is the same as described above; you’ll just need to switch applications to get the relevant charts from each. In the end, you should end up with something like this:


This is the screen I’ll tend to look at first when I log in to New Relic. It may be useful to lock the time of each chart to 24 hours just like we did for our ’24 hours at a glance’ dashboard, but that’s up to you. Of course, this is only relevant if you’re supporting multiple applications. Having said that, if you have multiple pre-production environments for your application (for staging or load testing), you may want to put them all together into a dashboard similar to this one; it may help you catch changes that degrade performance before the code ends up in production.

Creating Your Own Charts from Existing Data

The New Relic UI suffers from some necessary limitations. It needs to be all things to all people, so it can only group together charts and tables that would make sense for all web applications. Most of the time the UI screens will limit you to looking at one transaction at a time and one or two sets of metrics at a time; if you need to access others, you have to click around. The good news is, with custom dashboards, this limitation no longer applies. We know which transactions are related within our domain, and we know which metrics are important to us on a per transaction basis. We can build a dashboard that groups together several related transactions with all the important metrics for each and look at it on one screen.

Let’s say we have a particularly important transaction in our application; it might make sense to have a dashboard where we can see most of the important information about this transaction at a glance. Here on Tuts+ we have a concept of articles (obviously) and articles are pretty important to us, so let’s build a dashboard to keep an eye on them.

Once again we need to create a new dashboard just like before; we’ll call it ‘Article overview’, but this time we’ll use an overview layout. We don’t have to go hunting for charts as we’ll be creating our own custom charts, so click the big button to create the main chart for our dashboard:


It will ask you if you want to add a chart or a table; we will add a table later, so for now choose chart. You will see a screen that looks like this:


The main thing to look at here is the metric that you want to display. When you click inside the metrics text box, it will drop down a list of top level metrics that you can select. The metrics in New Relic are named as prefix/category/label. In the case of a Rails app, the prefix might be Controller or ActiveRecord (if you’re not using Rails, the prefix for transactions will be WebTransactions). For Controller, the category would be the name of the controller and the label will be the action name. If you explore some of the metrics while you’re playing around with your first chart, you will start to get a feel for the kind of metrics you have access to and where to find them. If you don’t see the metrics you expect, make sure that you have the right application selected within New Relic; this always trips me up.
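To make the naming scheme concrete, a metric name is just a slash-separated string. Here is a tiny, hypothetical helper (not part of the New Relic API) that splits a name into the three parts described above:

```ruby
# Hypothetical helper, purely for illustration: splits a New Relic
# style metric name into its prefix/category/label parts.
def parse_metric_name(name)
  prefix, category, label = name.split('/', 3)
  { prefix: prefix, category: category, label: label }
end

parse_metric_name('Controller/articles/show')
# => { prefix: "Controller", category: "articles", label: "show" }
```

For Controller/articles/show, the prefix is Controller, the category is the articles controller and the label is the show action.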

Back to our main chart. The metric we’re after will be tied to our ArticlesController, so its name is Controller/articles/show. Once we’ve picked the metric, the contents of the Value drop-down will change to contain all the values that make sense for this metric. It’s worth once again exploring all the various values and seeing what the resulting chart actually contains. In our case, “Average response time” seems like a good thing to have as our main chart.

At this point, if we give our chart a title and click the preview button we can see what it looks like:


This looks OK, but I want the Y axis to be in milliseconds and I want the units on the axis as well. So, let’s drop down the advanced options for the chart and change the number format to be ‘To Milliseconds‘, we will also set the Y-axis unit label as ‘ms‘:


Our chart now looks good in the preview. The only thing of note that we haven’t talked about is the Chart click-through drop-down. This essentially allows your chart to act as a link to another custom dashboard; when you click on the chart, that dashboard will be displayed. We don’t need this functionality, so we leave the drop-down alone. We’ll now go ahead and save our chart.


We can now add the various sub-charts. In our case, I happen to know that Tuts+ has a concept of tutorials (also, obviously) which is closely related to articles, so if I am going to keep an eye on article average response time, it’s probably a good idea to have the tutorial average response time close by as a comparison, so I will create a chart for that. We’ll follow the same steps as above; in the end our dashboard will look like this:


Hmm, it looks like the average response time for articles is much higher than for tutorials, but I happen to know that both of them share a significant amount of code; strange. But it’s also an indication that our custom dashboard is already paying dividends and we haven’t even finished constructing it yet. I could have found this information by looking it up in the regular New Relic UI, but having the charts side by side like this helps to really bring home the fact that there might be an issue.

It might also be good to see where our ArticlesController sits compared to other controllers, as far as their maximum response time goes; this is a job for a table. We add another chart just like before, but this time pick table instead of chart. To create tables with multiple rows, we need to use wildcards in our metric name. In our case, I will set the metric to be Controller/*, which will select all the metrics under the Controller prefix. I will then set the limit text box to 10, which will do exactly as you expect and set the number of rows in our table to ten. In the end, we should have something that looks like this, right before we save:


Our custom dashboard will now be:


It seems that ArticlesController#show has the longest maximum response time out of all the controller actions, including TutorialsController#show; this is very curious and I should probably make a note to look into it.

We can continue adding a bunch of other charts such as end user average response time or calls per minute. However, some things you just can’t construct using a custom chart, such as historical throughput or response time. Luckily we can always fall back on finding the charts that we want somewhere else in New Relic and simply adding those ones to our custom dashboard.

The only limitation with using a dashboard which has custom charts is that you have to have the correct app selected within New Relic, otherwise all the custom charts on your dashboard will be blank.

Creating Dashboards for Your Custom Metrics

If you read my last article on custom metrics, you might remember me mentioning that the only way for you to view the custom metric data that you’ve collected is to create a custom dashboard in New Relic; this is the third reason to use custom dashboards. If you collect a lot of custom metrics, this might be the best reason of all.

On Tuts+ we have a concept of categories (yet again, obviously), and I happen to know that we have a few custom metrics floating around for categories. Let’s see if we can put these on a dashboard and actually get an idea of what’s happening. We’ll create another dashboard and call it ‘Categories Custom Metrics’. All custom metrics in New Relic should live under the Custom prefix, and this is where we find the metrics we’re looking for:


We’ll create a couple of charts, one to see how long building presenters in CategoriesController takes and the other to see how long it takes to get a link hash from the presenters. The key thing to know with custom metrics is what kind of data you’re actually sending to New Relic. In this case I happen to know that we’re measuring time, so I can pick ‘Average value’ as my metric value and set the number format to milliseconds to get a reasonable looking chart. After creating both charts, our custom dashboard looks like this:


It looks like getting the link hash from the presenters is very quick and doesn’t fluctuate too much; there is no need to optimize anything here, and I can probably stop collecting this metric completely (no need to capture unnecessary data). However, building the presenters takes significantly more time, so we can look into this further to see if it can be optimized. We can also keep an eye on the chart (by looking at the custom dashboard occasionally) to make sure the performance doesn’t degrade as we continue to work on the application.


Custom dashboards are not a panacea. Aside from creating charts for custom metrics, you can do everything that custom dashboards can do with the regular New Relic UI. However, playing with custom dashboards will definitely help you become more of a power user of New Relic, with a deeper understanding of how it works under the hood. In addition, the ability to look at your performance metrics in different ways can be an invaluable tool to help you surface potential performance problems before they have a significant impact on your application.

If you have any questions about New Relic custom dashboards, don’t be afraid to leave a comment and I’ll do my best to answer. Also, if you’ve used custom dashboards to good effect in the past, please share any tips you might have, it’s always interesting to see how other people use their tools.

December 30 2013


Using New Relic Custom Metrics to Monitor All the Things

When you first get introduced to New Relic, it’s easy to be overwhelmed by all the features. But like with most tools, as you slowly climb the learning curve and get familiar with the features provided out of the box, you begin to wonder how it all hangs together under the hood and if there is more you can do with what you have at hand.

Today I will look at how New Relic actually monitors transactions and how you can hook into the process. We will take a quick glance at the support provided by New Relic for monitoring background jobs and we’ll also dig into New Relic custom metrics, how you would use them and the benefits you get from doing so. By the end you will have a deeper understanding of how New Relic works and be able to utilize it more fully. In other words, we will have a deeper knowledge of our tools—something every developer should strive for.

Let’s begin by taking a quick look at how New Relic actually hooks-in to track the performance of your application.

How New Relic Tracks Transactions and How You Can Do the Same

It may seem a little like magic, you include a gem in your Gemfile:

gem 'newrelic_rpm'

And somehow New Relic monitors all your code. Of course it is just code, so let’s see how New Relic actually instruments your app so it can begin monitoring it when the agent gem is required. We will do this in the context of a Rails 4 app.

The first place to look is newrelic_rpm.rb, which has the following relevant code in it:

if Rails::VERSION::MAJOR.to_i >= 3
  module NewRelic
    class Railtie < Rails::Railtie

      initializer "newrelic_rpm.start_plugin" do |app|
        NewRelic::Control.instance.init_plugin(:config => app.config)
      end
    end
  end
end

So a Railtie gets created when the Rails major version is 3 or above; this gets a NewRelic::Control singleton instance (when it is initialized) and calls init_plugin. When the NewRelic::Control instance is created, it figures out which framework is running (Rails 4 in our case) and loads some relevant code; we can see this in new_relic/control/class_methods#load_framework_class. The init_plugin method that gets executed lives in new_relic/control/instance_methods. The interesting code here is:

if Agent.config[:agent_enabled] && !NewRelic::Agent.instance.started?
  start_agent
  install_instrumentation
  load_samplers unless Agent.config[:disable_samplers]
end

The install_instrumentation call is the important one. The implementation lives in new_relic/control/instrumentation. Skipping over the boilerplate bits, this works out which instrumentation files it needs to load and then requires them one by one. In our case, it will load files under new_relic/agent/instrumentation/rails4. One of the files here is action_controller.rb; when this gets required, it eventually executes the following code via some metaprogramming magic:

executes do
  class ActionController::Base
    include NewRelic::Agent::Instrumentation::ControllerInstrumentation
    include NewRelic::Agent::Instrumentation::Rails4::ActionController
  end

  NewRelic::Agent::Instrumentation::ActionControllerSubscriber \
    .subscribe(/^process_action.action_controller$/)
end

And here we get to the crux of it: ActionController::Base, from which all your controllers inherit, gets a couple of modules included in it, the most important being NewRelic::Agent::Instrumentation::ControllerInstrumentation. This is the start of how New Relic begins to monitor all your controller actions as ‘transactions’. Of course, it’s a somewhat simplified view and we’re glossing over a lot of detail, but it gives you an idea of how New Relic monitors your code. The question is, how can you use this information?

Monitoring Custom Frameworks & Background Jobs

You’re highly unlikely to find yourself in a situation where you’re using a web framework that New Relic does not already have instrumentation for (in the Ruby world), but let’s say you did. Knowing what we know now, we can easily, manually instrument the controller actions of this custom framework. If we have a controller like this:

class CustomController
  def custom_action
    # ... action logic ...
  end
end

We can instrument it like so:

class CustomController
  include NewRelic::Agent::Instrumentation::ControllerInstrumentation

  def custom_action
    # ... action logic ...
  end
  add_transaction_tracer :custom_action
end

Now your controller method will be tracked like a transaction in the same way Rails actions get tracked. Of course, if you’ve rolled your own web framework which includes database handling code, you will need to do extra work to instrument much of that code, to allow New Relic to monitor more than just controller actions. But the general idea remains sound.

The above pattern becomes more useful when you want New Relic to track background jobs in your application. You’re a lot more likely to have rolled some custom background job handling code than you are to have written your own web framework. In fact, we did just this with Tuts+ initially, although we’re now migrating to Sidekiq. If you’re using one of the well-known background jobs systems like Sidekiq, Resque or Delayed Job, New Relic already has instrumentation built in, but if you did roll your own, the above pattern is all you need to monitor your tasks.

For example, our custom Tuts+ background jobs were regular Ruby classes that responded to the execute method, so all we had to do was this:

class SomeBackgroundJob
  include NewRelic::Agent::Instrumentation::ControllerInstrumentation

  def execute
    # ... job logic ...
  end
  add_transaction_tracer :execute, category: :task
end

The last bit, category: :task, is to ensure that New Relic does not track it as a web transaction, but instead treats it as a background task and makes it appear under the background tasks tab in the New Relic UI. If we create a base class for all our jobs, we can put the instrumentation there and child classes will inherit it, so we don’t have to worry about doing the above in every job class.
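To illustrate the base class idea, here is a runnable sketch. TaskTracer below is a hypothetical stand-in for New Relic’s ControllerInstrumentation module, so the example runs without the agent; with the real gem you would include NewRelic::Agent::Instrumentation::ControllerInstrumentation in the base class instead.

```ruby
# TaskTracer is a hypothetical stand-in for New Relic's
# ControllerInstrumentation module, purely so this sketch runs standalone.
module TaskTracer
  def self.included(base)
    base.extend(ClassMethods)
  end

  # Records which traced methods have been executed.
  def self.traced
    @traced ||= []
  end

  module ClassMethods
    # Stand-in for add_transaction_tracer: wraps the named method so each
    # call is recorded along with its category.
    def add_transaction_tracer(method_name, category: :controller)
      original = instance_method(method_name)
      define_method(method_name) do |*args|
        TaskTracer.traced << [self.class.name, method_name, category]
        original.bind(self).call(*args)
      end
    end
  end
end

# Including the module once in a base class means every job subclass
# gets the tracer macro for free.
class BaseJob
  include TaskTracer
end

class SomeBackgroundJob < BaseJob
  def execute
    :done
  end
  add_transaction_tracer :execute, category: :task
end
```

Each job subclass still calls add_transaction_tracer after defining execute, since the tracer wraps a specific method, but the include only needs to happen once in the base class.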

Customizing a Transaction Even More

Interestingly enough, even the web transactions that New Relic automatically monitors are not sacrosanct. You can, for example, add some custom parameters to be sent to New Relic for the transaction that is currently executing (if you’ve switched on capturing parameters).


You can do this at any point during the transaction. All you have to do is call ::NewRelic::Agent.add_custom_parameters(:key => 'value') at any time and the parameters you pass in will get added to the parameter data you see inside New Relic. For example, if we had a controller that looked like this:

class HelloController < ApplicationController
  def index
    ::NewRelic::Agent.add_custom_parameters(:hello => 'world')
  end
end

Slow transactions would give us the following:


This isn’t all we can do. We can segment a transaction while it’s running by renaming it. Let’s say we want to treat a transaction as special when it is performed by a particular user. You may do something like this:

class HelloController < ApplicationController
  def index
    new_relic_name = NewRelic::Agent.get_transaction_name
    if current_user.name == 'Joe Customer' # however you identify your user
      NewRelic::Agent.set_transaction_name("#{new_relic_name} - Joe Customer")
    end
  end
end

Now this transaction will be treated as a separate transaction in the New Relic UI:


Even default New Relic instrumentation has some room for customization, but sometimes, just like Captain Kirk, you just need more power. This is where custom metrics come in.

New Relic Custom Metrics & How They Are Useful

Back in the day, you would have used custom metrics to monitor things like external service communication and use of various common tools like Redis. These days, New Relic has better ways of monitoring that stuff, so what do we need custom metrics for? I’ve found custom metrics to be useful in four situations:

  • monitoring code that New Relic can’t see
  • monitoring code you don’t control
  • monitoring scripts
  • monitoring totally custom events

Let’s take a quick look at each of them.

Monitoring Code New Relic Can’t See

New Relic is pretty good at breaking down the performance of your various application methods in a transaction trace, but sometimes you will see something like this in a trace:


It seems there is application code that New Relic couldn’t instrument for some reason. What we can do is help New Relic (and ourselves) out with some custom metrics. We need to figure out which method New Relic had trouble monitoring and hook in some custom metrics to track how long this method took to execute. This will then appear in all subsequent traces. Let’s say we have a class with a method we want to monitor via custom metrics:

class Order
  def amount
    # ... compute the order amount ...
  end
end

We can begin to track the amount method like so:

require 'new_relic/agent/method_tracer'

class Order
  include ::NewRelic::Agent::MethodTracer

  def amount
    # ... compute the order amount ...
  end
  add_method_tracer :amount, 'Custom/amount'
end

The second parameter to add_method_tracer is the name that this custom metric will get in the New Relic UI. Metric names are slash separated strings and all custom metrics should start with ‘Custom/’. You could, for example, name your custom metric ‘Custom/<class_name>/<method_name>’. At this point you will begin seeing the amount method in your transaction traces in the New Relic UI. But, what if our amount method is highly complex and we want to monitor the parts of it which we suspect are slow? My advice is that you should refactor your method (it’s too big), but if you can’t do that, you can instrument arbitrary chunks of code like so:

class Order
  extend ::NewRelic::Agent::MethodTracer

  def amount
    self.class.trace_execution_scoped(['Custom/amount/complex_code']) do
      # ... complex code ...
    end
  end
end

Now the instrumented part of the method will be reported separately in your transaction traces. If you’ve previously instrumented the method itself, your new ‘inner’ metric will be grouped under the previous one.
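Incidentally, the alias-and-wrap technique that method tracers like these rely on can be sketched in a few lines of plain Ruby. TinyTracer below is an invention for illustration (it is not New Relic code), but it captures the essence of what add_method_tracer does: alias the original method away, then redefine it to time the call and record a metric under the given name.

```ruby
# TinyTracer is a hypothetical, minimal illustration of the
# alias-and-wrap technique; it is not New Relic code.
module TinyTracer
  # A simple in-memory metric store: metric name => last timing in ms.
  def self.metrics
    @metrics ||= {}
  end

  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def trace_method(method_name, metric_name)
      original = "#{method_name}_without_trace".to_sym
      # Keep the original implementation under a new name...
      alias_method original, method_name
      # ...then redefine the method to time the call and record a metric.
      define_method(method_name) do |*args, &block|
        start = Time.now
        result = send(original, *args, &block)
        TinyTracer.metrics[metric_name] = (Time.now - start) * 1000.0
        result
      end
    end
  end
end

class Order
  include TinyTracer

  def amount
    42
  end
  trace_method :amount, 'Custom/amount'
end
```

The real agent records timings into its metric store and handles scoping, thread safety and reporting; this sketch only shows the wrapping idea.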

This is by far the most common way you’ll end up using custom metrics in your code, but let’s look at the others anyway.

Monitoring Code You Don’t Control

Often you’re using a library which, you suspect, is slowing down your application. New Relic won’t instrument any random gem for you by default, so what can you do? You can fork the gem and add some instrumentation using the method we saw above, but an even simpler solution exists – use initializers. Let’s say you’re using the foobar library which has a class Foo with a method bar that you suspect has some slow code in it. All you need to do is create an initializer foobar_instrumentation.rb, and put the following in it:

require 'new_relic/agent/method_tracer'

Foo.class_eval do
  include ::NewRelic::Agent::MethodTracer

  add_method_tracer :bar
end

As you can see, the code is very similar to what we had above; New Relic will work out a sensible name for your new custom metric based on the class and method name, and you will begin to see it in your transaction traces. Use this to work out if that suspicious library is actually making your code perform badly, but don’t keep this instrumentation around permanently. It spams unnecessary initializers in your Rails app and pollutes your New Relic UI with custom metrics that you don’t really need to track permanently.

Monitoring Scripts

One part of web applications that is often neglected is scripts. To paraphrase a presentation I did recently: it’s still production code and should be treated as such. You don’t want production code performing badly, especially if you execute it on an ongoing basis via cron jobs (or a similar method that is not a background job in your system), so we can use New Relic to figure out if your scripts are slow.

You may instrument your script code using custom metrics as described above. It won’t show up in transaction traces, as it won’t be part of a transaction; what you will be able to do, though, is create a custom dashboard from the metrics you collect, which should give you an idea of whether your script is performing badly.

The other thing you can do is treat your script as a type of background job and instrument it accordingly (include NewRelic::Agent::Instrumentation::ControllerInstrumentation etc.); it will get lumped together with other background jobs in the UI, but you don’t have to worry about custom dashboards.

The only caveat with scripts is this: New Relic only sends data over the wire periodically, so with a one-off script that executes quickly you will need to make sure that the collected data actually gets sent, which may mean shutting down the New Relic agent manually. A good rule of thumb is to manually start the agent at the beginning of each script and to shut it down at the end:

require 'newrelic_rpm'

::NewRelic::Agent.manual_start

# ... codez ...

::NewRelic::Agent.shutdown
This way you won’t ever need to wonder why your data is not appearing in the UI.

Monitoring Totally Custom Events

One of the interesting things about New Relic, is that it lets you take advantage of its UI and data aggregation facilities for metrics that have nothing to do with performance (in theory). For example, you may want to have some visibility on how often users sign up to your application, the frequency of sales, the total amount users pay when they make purchases etc. These are more business metrics than performance ones, but if it’s too much trouble to track these separately, you may use New Relic to do so.

New Relic lets you record custom metrics directly via two API calls:

  • record_metric
  • increment_metric

You can use record_metric to track any metric that has an amount, and increment_metric is pretty self-explanatory. So we may, for example, do this:

def purchase(amount)
  ::NewRelic::Agent.record_metric('Custom/purchase_amount', amount)
end

The only way for you to see these metrics in the UI would be to create some custom dashboards. I have to mention that this would be a somewhat “creative” use of the New Relic API as it’s designed with performance data in mind, but it is certainly a handy thing to know when you need to throw a quick dashboard together and don’t want to set up a bunch of extra infrastructure.
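As a runnable sketch of that “creative” use, here is the purchase example fleshed out with a hypothetical in-memory stand-in for the agent. FakeAgent is not New Relic API; with the real gem you would call ::NewRelic::Agent.record_metric and ::NewRelic::Agent.increment_metric exactly as shown above.

```ruby
# FakeAgent is a hypothetical in-memory stand-in for the New Relic
# agent, so this business-metrics sketch runs on its own.
module FakeAgent
  def self.data
    @data ||= Hash.new(0)
  end

  # Mirrors the shape of ::NewRelic::Agent.record_metric.
  def self.record_metric(name, amount)
    data[name] += amount
  end

  # Mirrors the shape of ::NewRelic::Agent.increment_metric.
  def self.increment_metric(name, count = 1)
    data[name] += count
  end
end

def purchase(amount)
  # Track both the number of sales and the total amount taken.
  FakeAgent.increment_metric('Custom/sales')
  FakeAgent.record_metric('Custom/purchase_amount', amount)
end

purchase(10)
purchase(15)
```

A custom dashboard chart over Custom/purchase_amount would then show sales revenue over time, alongside a sales-count chart over Custom/sales.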

The Dangers of Monitoring Too Much

Of course, all this power has a cost. If you collect too many custom metrics, it can begin to slow down your application. It can also slow down the New Relic UI and make it difficult to interpret the data as New Relic will collapse similar metrics into a summary one. New Relic recommends keeping the number of custom metrics you collect below 2000. I have found that custom metrics are best used periodically. Instrument the code you need, use the instrumentation to solve the problem you’re having and then remove the instrumentation. This way you get to solve your performance issues and the number of custom metrics you use is unlikely to grow too high.


We’ve dug into the internals of the newrelic_rpm gem and seen how to tell New Relic about code that you consider a web transaction. We saw how to alter transactions on the fly, how to monitor background jobs, and looked at the various situations where it makes sense to use custom metrics. There is much you can do with New Relic beyond the functionality it provides out of the box, and we’re now much better able to utilize it to its full potential. But there is always more to learn, like how to build custom dashboards out of the metrics that we capture or how to monitor infrastructure with plugins. We’ll be covering those topics and more in subsequent articles, so make sure to check back often. And as always, if you have a question, want to share your own New Relic story or just want to say hi, don’t forget to leave a comment.

September 30 2013


New Relic & JMeter – Perfect Performance Testing

Following on from the great introductory articles featured recently on Nettuts+, this article looks to show how you can take New Relic to the next level. As a performance monitoring tool New Relic is fantastic, but what about performance testing, before you go live? That’s where JMeter comes into play. In this tutorial, you will see how we can stress test our application under realistic load, and combine the output of JMeter and New Relic to give you confidence in your application’s performance, before releasing into a production environment.

Why wait until deployment to see how your application is going to fare against real world traffic? If there is a bottleneck in your code that degrades the user experience, do you really want that to go live? What if we could find these bottlenecks early, improve performance and deliver a great application to our end users the first time, and maintain that going forward with regular benchmarking? JMeter and New Relic together can give you this perfect performance testing suite.

Demo Application

Before we can begin using New Relic and JMeter, we need a simple app to do some performance testing on! So, let’s write a simple Ruby Sinatra app that has a service we can test. I won’t go into the creation of this application too much, as you can read up on Sinatra in other articles on Nettuts+.

The application will be faked a little, to allow us to see some interesting results along the lines of what we may see in various applications. We will write a service that takes an id and, depending on that id, returns a value either straight away or with a delay. This will show us what can happen if requests are handled quickly or slowly, and the impact this has on your app’s overall performance as many users make requests.

Here is the code that defines the services:

require 'sinatra'
require 'puma'
require 'newrelic_rpm'

module Example
  class App < Sinatra::Base
    get '/example/:id' do |id|
      result = id
      if id == '1'
        result = "This is our id: #{id}"
      elsif id == '2'
        sleep 3
        result = "We waited for id: #{id}"
      end
      result
    end
  end
end

As you can see, this is clearly a contrived example, but the idea is that we have some fast responding services and one with a slight delay. We can now use this app and start to write our performance test plan in JMeter. Let’s first get JMeter installed on our machine.

Hooking Into New Relic

Getting your application reporting to New Relic is a very simple process. New Relic supports Ruby, Python, PHP, Java and other platforms, featuring easy to follow guides for all. In the case of Ruby and Sinatra, it is literally a four step process:

  • Add the ‘newrelic_rpm’ gem to your Gemfile and run ‘bundle install’.
  • In your main ‘app.rb’, where we defined the service route above, add a “require ‘newrelic_rpm’” line.
  • Download the ‘newrelic.ini’ file from your account in New Relic and place it in a config folder in your app.
    (Ensure Monitor Mode is set to ‘true’ for Development if running locally.)
  • Rackup your application and see it listed in New Relic!
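For the last step, a minimal config.ru along these lines would do — this file's exact contents are an assumption based on the steps above (the app file name and constant come from the Sinatra example earlier):

```ruby
# config.ru — hypothetical rackup file for the Sinatra app above.
# Requiring newrelic_rpm is what starts the agent when you `rackup`.
require './app'
require 'newrelic_rpm'

run Example::App
```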

Once you have followed these simple steps, you should start to see some data coming through to New Relic as you hit your app with some traffic. You’ll know it’s working when the app is listed and turns green.


For the sake of completeness, here is a brief overview of the main view New Relic provides for your applications. New Relic is designed mainly to monitor applications in production environments with live traffic. The overview screen provides an at a glance look at the current status of your application and how it is responding to customers’ requests.

The screen can be broken down as follows:

  1. Response Time – the average response time of calls across your application.
  2. Apdex – New Relic’s metric for customer experience. A score towards 1 indicates that the vast majority of users’
    requests fall within a reasonable time. The Apdex can be useful for alerting when it falls below a set number.
  3. Throughput – the requests per minute (RPM) being made to your application.
  4. Web Transactions – the different routes being accessed in your application, ordered by the most time consuming requests.
  5. Error rate – the percentage of requests causing an error. You can click through and debug individual errors here.
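The arithmetic behind point 2 can be sketched in a few lines of Ruby. This is the standard Apdex formula: requests at or under the threshold T are “satisfied”, those up to 4T are “tolerating”, and the rest are “frustrated”:

```ruby
# Apdex score = (satisfied + tolerating / 2) / total samples.
def apdex(response_times, t)
  satisfied  = response_times.count { |r| r <= t }
  tolerating = response_times.count { |r| r > t && r <= 4 * t }
  (satisfied + tolerating / 2.0) / response_times.size
end

# With T = 0.5s: two satisfied, one tolerating, one frustrated request.
apdex([0.2, 0.3, 1.1, 3.0], 0.5) # => 0.625
```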

What Is JMeter?


JMeter is a Java application that allows you to build up test plans to stress test your application. You can set everything from the number of simultaneous users of the service, to the number of requests they make a second. You can even ramp up the requests to see how your app deals with changing load, just as it could in real world deployment.

As part of this tutorial, I will show the basics of getting a test plan running against your applications, but with a wealth of plugins and documentation there are plenty of tools to handle any kind of performance testing you may need.

Installation and Usage

Installation is fairly straightforward and here we will list instructions for Mac and Linux.

Mac OS X

On a Mac, JMeter can be installed very easily via Homebrew. Once you have Homebrew, try the following command:

        brew install jmeter


Linux

On a Linux machine, download the archive from the JMeter downloads page and follow the instructions provided.

All Platforms

Once you have the main JMeter package, we also need to install the standard set of plugins. We will be making use of one plugin in particular later on, so we need to add these in to be able to use it. The standard plugin set can be obtained from the JMeter Plugins site. Once downloaded, extract it into the JMeter package, which is located at “/usr/local/Cellar/jmeter/” on a Mac, and wherever you installed it on Linux.

Analysis In New Relic – First We Need a JMeter Test Plan!

So now that we have JMeter installed and our simple application, let’s test this app and see how it behaves. When you fire up JMeter you will get this screen:

Now, let’s set the base URL for our requests. Right click on “Test Plan” in the left pane, and choose ‘Add -> Config Element -> HTTP Request Defaults’. We can now enter our base URL in here, like so:


We can now add the number of threads, or “users”, of our system. To do this, right click on “Test Plan” again and choose ‘Add -> Threads (Users) -> Thread Group’. We can then enter the number of users, in this case 20. Make sure to choose the loop count “forever” option, as this will allow us to control the duration and number of requests via a plugin later.


Once we have the thread group, we can define the requests we want to make to the application we are going to performance test. To do this, we will add an “HTTP Request” to our test plan. This can be found by right clicking on the “Thread Group” and choosing “Add -> Sampler -> HTTP Request”. We can then define the request to make in the pane, as below.


You can see how we don’t need to define the base URL, as we did that earlier on; instead we just need to add the path for the request. In this case the path is to our ‘example/1’ response. You will also notice I have gone ahead and added the other two requests, along with the result and graphing panes which we will use to analyse the results of the tests. By now you should have gotten the hang of adding elements; they can be easily found in the menu by their names. The main two of interest are the “Throughput Shaping Timer” and the “Composite Graph”.

The Shaping Timer enables us to map out how we want the requests to be made to our application over time. For example, we can configure one request per second for 60 seconds, then ramp up to five requests a second for 60 seconds, and see the effect this has on our response times. Let’s take a look at how we configure that in the Shaping Timer pane.
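As a sanity check on a schedule like that, it’s easy to compute the total number of requests JMeter should issue. This is a hypothetical helper of our own, simply mirroring the Shaping Timer rows described above:

```ruby
# Each row: [requests_per_second, duration_in_seconds], as configured
# in the Throughput Shaping Timer.
schedule = [[1, 60], [5, 60]]

total_requests = schedule.sum { |rps, secs| rps * secs }
puts total_requests # => 360 requests over the two-minute run
```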


So, by going in and adding each row, you can define the number of requests to make and how long to make them for. We can then view our results using the “Composite Graph”, which shows the transactions made per second against the response time of our requests. This requires minimal configuration: simply add the two graphs we will combine, then in the settings for the composite graph, add in the graphs we require, like so:


That’s it! We can now run our test plan and start seeing some results. Hit play towards the top of the screen and then click on the composite graph. It will start to plot out the results as they come in, and you can get a picture of how your application is responding. Let’s look at our results.


We can clearly see that the jump in requests at one minute has a fairly sizeable impact on our application. For the first minute the requests are stable at one per second and give response times of around two to three milliseconds. However, when we increase to five per second, the response times increase slightly, hitting four and five milliseconds. Obviously these are very quick response times in the real world, but we are just showing here how we can increase the load and see the effect, if any, this has.

Let’s compare these results with the service that has a delay of three seconds. How will that cope with the increase in load? To switch to example two, right click on example one and choose toggle; this will disable that request. Then toggle example two to enable it. Be sure to click the “Clear All” (sweeping brush) icon at the top to clear the last run’s results, then hit play.


Even with the three second delay, the server managed the requests quite well and we see much the same in the way of results for this service: only a few milliseconds’ increase as the requests increase. With such a simple service, this is to be expected.

New Relic Analytics

The real power comes from combining this data with New Relic. We could, for example, set JMeter to run for half an hour with different variations of load, then use New Relic to analyse the results and use its drill down functionality to look for bottlenecks in the application. These can then be fine-tuned, increasing your performance before delivering to your customers.

Again, I won’t go into the setup of New Relic, as this is covered in other recent articles on Nettuts+. But once your application is connected, it’s simply a case of generating the load through JMeter and logging into New Relic to see the results. For this run, I have set up the Shaping Timer to run our load for 30 minutes, ramping up the requests from five to 10 and then 15 per second. This should give us some reasonable traffic to look at in New Relic.


Once the JMeter test has run, we can take a look at New Relic, which now has stats on the traffic flowing through the app.


This clearly shows the ramping up of the requests, at its peak hitting around 400 requests per minute (RPM), with the response times remaining stable at three seconds. We can delve deeper into the stats and look into the transactions we are making. If we click through to the Web Transactions view, we can see the analysis New Relic has done on just this part of the application. If the code that handled the request had more layers to it, such as methods calling other systems to get data before presenting back to the user, we would see more of a breakdown.

For example, on the left it shows we spent 100% of the request time in that call. If we had multiple stages, such as a call to a database, we might see a high percentage there, and we would know to optimise the database query to increase performance.


New Relic also provides a great reporting view on your application’s data, called Scalability. This report can be really useful for monitoring your application’s ability to handle increasing load. The graph shows your response time against the requests per minute, and you can clearly see if there is any degradation in the response time as they increase. This is a great tool, and one you should refer to often, both in performance testing like this and in the performance monitoring of your production application.

In our example below, it is clear that the application is capable of maintaining a three second response time even as the RPM increases.


New Relic also provides another view: Capacity. This allows us to look at how much of the available resources our application is making use of. It indicates to the developer whether the number of instances serving your application is enough to handle the kind of load you are getting. This is vital to ensure you are not running near capacity and have the ability to handle any spikes in traffic that may occur outside your normal traffic flow. New Relic summarises the page well, next to the analysis of our application here, which we can see is faring well even on this single instance.



The aim of this tutorial was to show you how to quickly set up JMeter test plans for your application, so you can test drive its performance before delivering to your customers. This approach can be used in new projects, ensuring the application you are going to deliver is ready for real world traffic. It can also be used on legacy applications, giving you a baseline performance indicator so that, as you make changes going forward, you can see whether your application’s performance is improving or decreasing.

By leveraging the great tools provided by New Relic, you can both monitor your application online in real time, and take its toolset and apply it to your own offline analysis. This will give you, the developer, confidence in your product, both as it is being developed and when it is released into the wild.

September 27 2013


5 Reasons Why New Relic Is a Developer’s Best Friend

Once you start digging around New Relic you begin to realise just how many interesting features the service has to help monitor the performance and health of your application. It was truly difficult to pick just five things to talk about, so rather than focusing on the obvious features let’s look at some of the less hyped functionality that New Relic provides and how we can use it in interesting and sometimes unorthodox ways.

When we left you last time, we had a basic ‘Hello World’ Rails application (called newrelic_rails1, living in ~/projects/tmp/newrelic). We will continue using this app, extend it, and see if we can use it to demonstrate the features of New Relic that we’ll be looking at.

Availability Monitoring

This is one New Relic feature that usually doesn’t make the front page of the marketing material. There is not a lot to it, but if you think about it, what’s more important than making sure your app is actually up, running and accessible by your users?

Firstly, when you set up availability monitoring, your application gets a nice asterisk on your main applications dashboard:


It’s a nice visual reminder, so you can see which apps still need availability monitoring switched on.

Let’s now look at how we can set up availability monitoring and what we can get out of it. Firstly, you need to jump into your application and then go into Settings->Availability Monitoring. You will see something like this:


You need to provide a URL you want New Relic to ping, tick the box, save your changes and you’re good to go. New Relic will begin hitting your URL every 30 seconds. But the fun doesn’t stop there. New Relic will ping your URL via an HTTP HEAD request (and deem everything OK if it receives a 200 response code), but you can supply a response string that you want New Relic to look for in which case it will perform a GET request and examine the response for the string that you provided. This can be very handy if you have a custom ‘Health Check’ page that you want to hit.
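The decision logic just described can be mimicked with a tiny helper. This is purely illustrative — our own sketch of the rules above, not New Relic’s code: a 200 means “up”, and if a response string was configured, the body must also contain it:

```ruby
# Hypothetical sketch of the availability rules described above.
# code:     integer HTTP status from the HEAD (or GET) request
# body:     response body, only fetched when expected is given
# expected: optional response string configured in New Relic
def up?(code, body = nil, expected = nil)
  return false unless code == 200
  expected.nil? || body.to_s.include?(expected)
end

up?(200)                               # => true  (plain HEAD check)
up?(200, 'status: healthy', 'healthy') # => true  (GET + string check)
up?(200, 'status: down', 'healthy')    # => false (string missing)
```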

You can also set up email notification if downtime occurs:


Now that you’re monitoring availability, you will have access to a nice report which will visually show you when any downtime has occurred:


In fact, many of your charts (e.g. the application overview) will have this visual indication:


You have to admit that’s some pretty nice functionality for so little effort.

You can, of course, disable and re-enable monitoring (via the New Relic REST API) when you’re performing deploys, to make sure you don’t get spurious downtime events.

Another interesting side-effect of this is that if you’re deploying your pet project to Heroku on a single dyno, you can use this ping functionality to prevent your dyno from sleeping, which can make your site annoyingly slow if you don’t have a lot of traffic.

Custom Error Recording

If unexpected errors occur in your application, New Relic will record these for you and give you a nice graph. Our little ‘Hello World’ app has performed admirably for the moment, so there is nothing for us to see on that front. But, we can purposely break our app and see what New Relic gives us.

Let’s modify our HelloController to raise an error randomly approximately 50% of the time:

class HelloController < ApplicationController
  def index
    if rand(2) == 0
      raise 'Random error'
    end
    render text: 'hello world' # assuming the action rendered a greeting before
  end
end

We will now make a few hundred calls to our app and see what happens:

ab -n 300 -c 10 http://localhost:3000/

Our New Relic error graph now looks much more interesting:


And we can drill down to get some specifics:


As you can see we can sort our errors and filter them as well as look at errors from web requests and background tasks separately. This is some incredibly powerful stuff to help you diagnose and fix problems with your application. You can of course also see the stack trace for each error:


There are services specifically dedicated to capturing errors from your application; some of the most well known are Airbrake and Bugsnag. These are paid services used by many applications, but the functionality that New Relic provides just about makes them redundant. In fact, if we could send custom errors to New Relic (rather than letting it capture errors that we hadn’t rescued), we could make a compelling case for not using a separate error collection service (and save some money and get rid of an extra gem in the process).

While New Relic doesn’t document any way of doing this, we can always go to the source to see if what we want to do is hard. It looks to me like it should be pretty trivial for us to send custom errors to New Relic, so let’s give it a try. We’ll modify our controller action again to rescue all errors and send a custom error to New Relic:

class HelloController < ApplicationController
  def index
    if rand(2) == 0
      raise 'Random error'
    end
  rescue
    NewRelic::Agent.notice_error('I caught and reraised an error')
  end
end

After we make a few more calls and wait for the data to come through we see the following:


It worked, our custom error is coming through! New Relic can definitely act as our error collection service. We are of course using a private interface here which is not very nice, but we can put the notice_error call behind a facade which will make things a bit easier for us if the interface changes.
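A facade along these lines would do — a sketch with names of our own choosing, where the injectable backend also makes the facade trivial to stub in tests:

```ruby
# Hypothetical facade: the rest of the app calls ErrorReporter.notify and
# never touches the private New Relic interface directly.
module ErrorReporter
  class << self
    # Swappable backend; defaults to the (undocumented) notice_error call.
    attr_writer :backend

    def backend
      @backend ||= ->(error) { NewRelic::Agent.notice_error(error) }
    end

    def notify(error)
      backend.call(error)
    end
  end
end
```

If New Relic ever changes (or documents) this interface, only the default backend lambda needs updating.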

An even better approach might be to not treat custom errors like regular errors at all, but instead create a custom metric to track, and then build a custom dashboard to visualise it. This way we’re not using any undocumented functionality and would still get all the benefits – brilliant!
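As a sketch of that idea — the metric name is our own invention (custom metric names must start with “Custom/”), and the `record_metric` call assumes newrelic_rpm is loaded, hence the guard:

```ruby
# Hypothetical: count caught errors as a custom metric instead of
# sending them through the error collector.
def record_caught_error(controller_name)
  metric = "Custom/CaughtErrors/#{controller_name}"
  NewRelic::Agent.record_metric(metric, 1) if defined?(NewRelic::Agent)
  metric
end

record_caught_error('HelloController') # => "Custom/CaughtErrors/HelloController"
```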

Key Transaction Tracking

New Relic will normally track your transactions for you:


You will be able to see where your application is spending most of its time (e.g. in the controller, model, database, etc.). However, New Relic will not capture a detailed trace unless the transaction takes longer than Apdex T * 4 seconds. Normally this is OK, but sometimes you have transactions that are much more important to your application or to your business. Perhaps these transactions are extremely high volume or deal with important events like payments. Suffice to say, you need to make sure this type of transaction always performs extremely well.

The thing is, though, when a transaction is this important it has probably received quite a lot of love from you already and may be performing fairly well. Let’s say you have a transaction with an extremely high throughput (it occurs many times per minute). If this transaction is performing optimally, everything is fine, but if the performance were to degrade slightly, due to the volume of traffic it may have a disproportionately detrimental effect on your application. What you want is something like:

  • a separate Apdex T value just for this transaction
  • the ability to receive alerts when the performance of this transaction degrades
  • a detailed trace every time this transaction performs even slightly non-optimally

This is exactly what Key Transactions give you!

Before we set up a key transaction for our ‘Hello World’ app, we need to create a more interesting transaction which will usually perform well, but will sometimes perform somewhat badly. We will build the ability to look at car makes and models, and get a particular car make to slow the transaction down. Firstly, the route:

NewRelicRails1::Application.routes.draw do
  get 'random_car', to: 'cars#show_random'

  root 'hello#index'
end

We want to be able to get a random car, this will map to the CarsController:

class CarsController < ApplicationController
  def show_random
    @car = Car.offset(rand(Car.count)).first
    if @car.make == 'Ford'
      sleep 2
    end
  end
end

We get a random car from the database, and if the car’s make is ‘Ford’ we will have a slow transaction on our hands. Of course, we need a Car model:

class Car < ActiveRecord::Base
end

We’ll need to configure our database to use MySQL in development (I did this, but you can stick with SQLite):

base: &BASE
  adapter: mysql2
  encoding: utf8
  host: "localhost"
  username: "root"
  max_connections: 10
  timeout: 5000

development: &DEV
  <<: *BASE
  database: "newrelic_rails1_development"
  sql_log_level: debug

We need a migration to create a cars table:

class Cars < ActiveRecord::Migration
  def change
    create_table :cars, force: true do |t|
      t.string :make
      t.string :model
    end
  end
end

And we need some seed data which we’ll put in our db/seeds.rb file:

Car.create(make: 'Ford', model: 'Mondeo')
Car.create(make: 'Honda', model: 'Accord')
Car.create(make: 'Audi', model: 'A4')
Car.create(make: 'Lamborghini', model: 'Murcielago')
Car.create(make: 'Toyota', model: 'Prius')

Lastly we should probably have a view cars/show_random.html.erb:

<h1>Make: <%= @car.make %></h1>
<h2>Model: <%= @car.model %></h2>

You’ll also need to add the mysql2 gem to the Gemfile if you’ve gone with MySQL. After this, we just need to create and populate the database, restart our server and we’re good to go:

rake db:create && rake db:migrate && rake db:seed
rails s

You’ll need to hit the URL to make sure New Relic recognises that this transaction exists:

curl localhost:3000/random_car

We’re now ready to monitor this transaction as a key transaction. Firstly, jump into the transaction tab:


Click the ‘Track a Key Transaction’ button and pick our newly created transaction:


We can give our new key transaction a name, pick the Apdex T that we’re happy with as well as set up some alerts. When our transaction takes longer than the Apdex that we’ve chosen, New Relic will capture a detailed trace which we’ll be able to use to figure out where the performance issue is coming from. Let’s make a few calls against our new URL and see what data we get:

ab -n 300 -c 20 http://localhost:3000/random_car

Hmm, it seems some of our transactions are frustrating our users:


Let’s see if New Relic has captured some transaction traces for us:


Let’s look at one of these traces. This one took around two seconds to respond, but only 10 milliseconds were spent using the CPU:


All our SQL statements were fast, so the database is not the issue:


It looks like most of the time is spent in the controller action:


Let’s dig into the trace a little. The SQL SELECT was fast, and a Car.find was also fast. Then we lose about two seconds, which is followed by some very fast template rendering:


New Relic has kindly highlighted for us where we lost those two seconds. We need to look at our controller code after a Car.find call:

class CarsController < ApplicationController
  def show_random
    @car = Car.offset(rand(Car.count)).first
    if @car.make == 'Ford'
      sleep 2
    end
  end
end

Hmm, the initial SELECT must be the Car.count call, and the Car.find must be due to the Car.offset call. Our big delay is right after this, though. Ahh, look at this: some silly person has put a two second delay in our code when the make of the car is ‘Ford’. That would explain why our two second delay happens only some of the time. I had better do a git blame on our repository to find out who put that horrible code in there! On second thought, I’d better not, because it might say that it was me.

External Service Call Recording

Whenever you make calls to other services from within your app (e.g. an HTTP request to an API like Twitter), New Relic will monitor these as external calls. These days, a serious application may integrate with a number of external APIs. Often these external services can significantly degrade the performance of your app, especially if the calls are made in-process. New Relic can show you which of your external calls are slowest, which ones you call the most, and which have the slowest average response time. You can also look at the performance of each external service you use individually. Let’s give it a try.

We’ll create an external service of our very own, by building a small Sinatra app. Firstly we install the gem:

gem install sinatra

Create a new file for our service:

touch external_service.rb

And put the following code in there:

require 'sinatra'

get '/hello' do
  sleep_time = rand(2000) / 1000.0
  sleep(sleep_time)
  "Hello External World #{sleep_time}!"
end

This service will sleep for a random time (between 0 and 2000 milliseconds) and then return a ‘Hello’ response with the time it slept for. Now all we have to do is start it:

ruby external_service.rb

Back in our Rails app we’ll build a new controller to call our external service. We’ll use this route:

NewRelicRails1::Application.routes.draw do
  get 'external_call', to: 'external_calls#external_call'
end
Our controller will call our Sinatra service via HTTP:

require 'net/http'

class ExternalCallsController < ApplicationController
  def external_call
    url = URI.parse('http://localhost:4567/hello')
    external_request = Net::HTTP::Get.new(url.path)
    external_response = Net::HTTP.start(url.host, url.port) do |http|
      http.request(external_request)
    end
    @result = external_response.body
  end
end

And we need a view to display the results:

<h1><%= @result %></h1>

All we have to do now is make a few calls to our new endpoint:

ab -n 100 -c 10 http://localhost:3000/external_call

Let’s see what New Relic has produced for us.


New Relic has indeed picked up our new external call. We’ve got the total calls per minute we’re making to the external endpoint, and the total time the external service spent responding. Of course, our chart looks a little sparse, since we only have one external service, which means we don’t have anything to compare against.

We can also get more detailed data about the specific external call as well as where in our app this call is being made from:


We can see when the calls were made, the throughput and the average response time. This may seem simple, but when you have an app with many external services, this feature can give you a very nice overview of how those services are performing, as well as when and where they are being used. This can allow you to make decisions about caching certain external service responses where possible, or even dropping particular external services if their performance is not up to scratch. And you no longer have to argue these things based on gut feel and home-baked metrics; you’ll have hard data to prove your point for you.
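For instance, a tiny time-to-live cache could wrap the external call so we only pay for it once a minute. This is our own sketch — in a real Rails app you would more likely reach for Rails.cache.fetch:

```ruby
# Minimal time-to-live cache, standing in for Rails.cache.fetch.
class TtlCache
  def initialize
    @store = {}
  end

  # Returns the cached value for key if it is younger than expires_in
  # seconds; otherwise runs the block and caches its result.
  def fetch(key, expires_in:)
    value, stored_at = @store[key]
    return value if stored_at && Time.now - stored_at < expires_in
    value = yield
    @store[key] = [value, Time.now]
    value
  end
end

CACHE = TtlCache.new
# In the controller action, the external call would then become:
# @result = CACHE.fetch('external/hello', expires_in: 60) do
#   Net::HTTP.get(URI('http://localhost:4567/hello'))
# end
```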

Scalability and Capacity Analysis

There is nothing more frustrating for a developer than having your application fall over due to a traffic spike. Everything was running smoothly until those extra few hundred users came along and your application exploded. You had a feeling this might happen, but couldn’t be sure – the wait and see attitude seemed like the most pragmatic approach. Well, with New Relic’s capacity and scalability reports, you no longer have to wait and see. You can tell straight away how well your app is scaling; you can do load tests and instantly see whether your application can handle the load. You can observe your application’s response time trends as your user base grows and predict when you’ll need to add capacity. All of these are truly wonderful things.

First, let’s look at the capacity reports:


Hmm, this one shows a big spike, but otherwise nothing. We’re running in development mode, so this is understandable. The spike is from when we made a bunch of concurrent requests a little while ago. As you can see, those concurrent requests maxed out our poor lonely WEBrick instance. If this were production and that load was constant, our instance would always be 100% busy, which would probably indicate that we need another instance.

The instance analysis report is slightly different:


In our case we don’t get much out of it, but it normally shows the number of instances that are running, and the number of instances we would actually need to handle the load if all instances were 100% busy. So if we were running 10 instances and the concurrent instance load was 2, we could easily halve (or even more than halve) the number of running instances without degrading performance at all. For a small app that runs only a few instances this is no big deal, but for a large application with dozens or hundreds of instances, it can translate into significant cost savings.
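That back-of-the-envelope maths is simple enough to sketch, using the numbers from the paragraph above (the helper is hypothetical, not a New Relic API):

```ruby
# Average utilization: measured concurrent instance load spread
# across the running instances, as a percentage.
def utilization_pct(concurrent_load, instances)
  (concurrent_load.to_f / instances * 100).round
end

utilization_pct(2, 10) # => 20 (10 instances are barely breaking a sweat)
utilization_pct(2, 4)  # => 50 (even 4 instances would leave headroom)
```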

And then there are the scalability reports. The response time report is probably the most interesting/important one:


Once again, our graph is very distorted because it’s a development app that we’ve been playing around with randomly. The idea with this report is that as the throughput of your application increases (more requests per minute), the response time should remain close to constant (i.e. performance does not degrade when there is more traffic). This means you should always be seeing something resembling a flat line here. If your line is sloping upwards significantly, your app is probably struggling to handle the traffic and you may need to look at adding more capacity. Where to add capacity is another question entirely (e.g. database capacity, more servers, etc.). The other two scalability reports can help you answer it. First, there is the database report:


You can’t expect your database to be unaffected by higher load, so what you should be seeing here is a line that rises slowly as the throughput of your application increases. It is up to you to decide when the database response time becomes unacceptable (i.e. affects the response of the application too much), but when you do decide the database responses are too slow, you know it’s time to add database capacity. The other report is the CPU:


Once again, you can’t really expect higher throughput not to affect your CPU load; you should be seeing a line that rises slowly with increased throughput. This, together with the capacity reports we talked about earlier, can allow you to decide when to add more Rails processes/servers to ensure your performance remains decent.


If one or all of these features have raised an eyebrow (or two) for you, the good news is that we’ve only just scratched the surface. Each of these features more than deserves an in-depth article of its own. But New Relic also has a number of other features that are potentially even more powerful, including Real User Monitoring, the New Relic Platform, the Thread Profiler, alert thresholds and notifications, and many others. We will try to cover some, or maybe even all, of these in later tutorials.

For now, try New Relic out: deploy an agent in your favourite language and see if you can discover an out-of-the-box way of using some of the functionality that New Relic provides. And if you do have some innovative ways to use New Relic, be sure to let everyone know by leaving a comment.

May 03 2013


No Matter of Luck: What To Consider In Mobile App Development [Infographic]


There are gazillions of mobile apps out there. Whether you are looking for a specific use case or not, you won’t see any shortage in what you can find. Thus, releasing a mobile app is not without risks. You need to create something outstanding to attract users. But how do you know? How can you improve? Sure, you can always wait for a shitstorm to wipe away your Facebook page, or rely on those nice one-star ratings in the app stores. Did you know that 60% of all apps in Apple’s App Store have never been downloaded? If I were you, I’d try to find alternatives to hoping and waiting (and failing) ;-)

August 08 2012


Troubleshooting Ruby Performance Problems

In this video tutorial from New Relic, you’ll see a full performance troubleshooting cycle: from problem identification and diagnosis, to fixing the problem code, then testing the optimized code and verifying the performance improvements. You will learn the fastest way to identify when your Ruby app is suffering from performance problems, how to narrow down your troubleshooting to the specific app component that is the source of the problem, and how to see into that component to find exactly which code is causing your app to slow down and why.
