Profiling with Grok

Author: Uli Fouquet

When it comes to a web framework, profiling is not as easy as with command line or desktop applications. You normally want to trigger certain requests and see in which parts of your code how much time or memory is spent.

Here is how you can do this.

Prerequisites

Before we start, we obviously need something to profile: your application. So if you haven't done it yet, create a typical Grok project:

$ grokproject Sample

This will create our project in the Sample/ folder. We assume that the created project is based on paster, which is the default as of grokproject v1.0a2.

If you have an older version of grokproject installed, update it by running:

$ easy_install -U grokproject

You should be able to start/stop the created project.
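
For a paster-based project this is typically done with the paster script that buildout generates. As a sketch (assuming the default deploy.ini created by grokproject; the exact file name and location may differ between versions):

$ bin/paster serve etc/deploy.ini

Stop the server with Ctrl-C, or pass --daemon to run it in the background and --stop-daemon to shut it down again.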

Installing a profiler

There are some profiling tools available with every Python installation. With web frameworks, however, we often want to check only certain requests. This is difficult with the regular profiling tools, but with paster we luckily have a pipeline mechanism into which we can put a profiler that is even configurable via a web frontend: repoze.profile.

Install repoze.profile

In the buildout.cfg of your project add the repoze.profile egg to the list of eggs of your application. Look out for a section named [app], which could read like this:

...
[app]
recipe = zc.recipe.egg
eggs = sample
       z3c.evalexception>=2.0
       Paste
       PasteScript
       PasteDeploy
       repoze.profile
interpreter = python-console
...

The added repoze.profile line is the important part here.

Now run:

$ bin/buildout

to fetch the egg from the net if it is not already available and to make it known to the generated scripts.

Create a profiler.ini

To make use of the new egg we must tell paster about it. This is done with an appropriate configuration file, which we create now:

# profiler.ini
[pipeline:main]
pipeline =
         egg:repoze.profile#profile
         egg:sample

[server:main]
use = egg:Paste#http
host = 127.0.0.1
port = 8080

[DEFAULT]
# set the name of the zope.conf file
zope_conf = %(here)s/zope.conf

It is crucial that you use the name of your project's egg in the pipeline. As we created a project named Sample, our egg is named sample.

Put this new file in the same directory where your zope.conf lives (not: zope.conf.in). For projects created with grokproject >= v1.0a2 this is etc/; newer projects might use parts/etc/.
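
Optionally, the profiler middleware can be configured further via a named filter section. The following is only a sketch based on the options listed in the repoze.profile documentation (log_filename, discard_first_request, path, flush_at_shutdown); check the documentation of your version for the exact set of supported options:

# profiler.ini (variant with a configured filter)
[filter:profile]
use = egg:repoze.profile#profile
# file the profiling data is written to
log_filename = wsgi.prof
# ignore the first request (often dominated by one-time setup costs)
discard_first_request = true
# URL path of the profiler's web frontend
path = /__profile__
# keep the collected data when the server shuts down
flush_at_shutdown = false

[pipeline:main]
pipeline =
         profile
         egg:sample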

Start Profiling

With the given setup we can start profiling by:

$ bin/paster serve etc/profiler.ini

If your profiler.ini file resides elsewhere, you of course must use a different location.

The server will start as usual and you can do everything you like with it.

Browsing the Profiler

To get to the profiler, enter the following URL, which is the default path under which repoze.profile makes its frontend available:

http://localhost:8080/__profile__

This brings us to the profiler web frontend. If you have browsed your instance before, you will get some values about the timings of the last requests. If not, browse a bit to collect some data. The data is collected 'in the background' during each request and added to the values already collected.

Profiling a certain view

Say we want to profile the performance of the index view created by the default application. To do this, we first have to install an instance of our Sample application.

So go to the admin interface (http://localhost:8080/applications) and add an instance of your application under the name app (you can actually use any name you like, of course).

Now we can access

http://localhost:8080/app

and the usual index page will appear.

If we go back to the profiler, however, we will see the summed-up values of all the requests we did up to now, including all the actions in the admin interface etc. that we are not interested in.

We therefore clear the current data by clicking on clear.

Now we access the page we want to examine by going directly to the application URL above.

When we now go back to the profiler, we only see the values of the last request. That’s the data we are interested in.

Profiling mass requests

Very often a single request to a view does not give us reliable data: too many factors can influence an individual request, so its values are not very representative. What we often want is to make many requests and look at the averaged values.

For our view this means: we want to make several hundred requests to the same view. But as we are lazy, we don't want to press the reload button several hundred or even a thousand times. Luckily there are tools available which can do that for us.

One of these tools is the Apache benchmarking tool ab from the Apache project. On Ubuntu systems it is installed automatically if you have the Apache webserver installed.

We can trigger 1,000 requests to our index page now with one command:

$ ab -n1000 -c4 http://127.0.0.1:8080/app/@@index

This will give us 1,000 requests, of which at most four are triggered concurrently, to the URL http://127.0.0.1:8080/app/@@index. Please don't do this on machines that are not your own.

The result might look like this:

Benchmarking 127.0.0.1 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Finished 1000 requests


Server Software:        PasteWSGIServer/0.5
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /app/@@index
Document Length:        198 bytes

Concurrency Level:      4
Time taken for tests:   38.297797 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      448000 bytes
HTML transferred:       198000 bytes
Requests per second:    26.11 [#/sec] (mean)
Time per request:       153.191 [ms] (mean)
Time per request:       38.298 [ms] (mean, across all concurrent requests)
Transfer rate:          11.41 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    94  152  17.3    151     232
Waiting:       86  151  17.3    150     231
Total:         94  152  17.3    151     232

Percentage of the requests served within a certain time (ms)
  50%    151
  66%    153
  75%    156
  80%    158
  90%    176
  95%    189
  98%    203
  99%    215
 100%    232 (longest request)

These benchmarking results can be interesting in themselves. But we want to know more about the functions called during this mass request and how much time was spent in each of them. This can be seen if we now go back to the browser and open

http://localhost:8080/__profile__

again.

Turning the Data into a Graph

We now want to turn the data into a graph. To do this, we first have to ‘export’ the data from the web framework.

Getting the Data out of the Web

The web frontend provided by repoze.profile is very comfortable and nice for ad-hoc analysis. But sometimes we want to 'export' the data, to process it further with other tools or simply to archive the results.

Luckily we can do so by grabbing the file wsgi.prof which contains all the data presented in the web interface. This file is created in the project directory (here: Sample/).

Be careful: when you click clear in the web interface, the file will vanish. So copy it to some safe location where we can process the data further.
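
For example (the target path is arbitrary):

$ cp wsgi.prof /tmp/wsgi.prof.run1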

Because repoze.profile makes use of the standard Python profiler from the profile or cProfile module, the data in the wsgi.prof file conforms to the output generated by these profilers.
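
That also means the file can be inspected with the standard pstats module, independently of the web frontend. A minimal sketch (the sort key and number of printed lines are just examples):

$ python
>>> import pstats
>>> stats = pstats.Stats('wsgi.prof')
>>> stats.sort_stats('cumulative').print_stats(10)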

Converting the Data into dot-format

One of the more advanced tools to create graphs from profiling information is dot. To make use of it, we first have to convert the data in wsgi.prof into something dot-compatible.

There is a tool that can do the job for us: a Python script named gprof2dot. It is available on PyPI and can be downloaded from:

https://github.com/jrfonseca/gprof2dot

We can now turn our profiling data into a dot graph by doing:

$ python gprof2dot.py -f pstats -o wsgi.prof.dot wsgi.prof

This will turn our input file wsgi.prof, which is in pstats format (the format of the Python profiler statistics), into a dot file named wsgi.prof.dot.

Converting the dot file into Graphics

Now we can do the last step and turn our dot file into a nice graphics file. For this we of course need the dot programme, which is part of the Graphviz package. On Ubuntu systems it can be easily installed with:

$ sudo apt-get install graphviz

Afterwards we do the final transformation by:

$ dot -Tpng -omygraph.png wsgi.prof.dot

This will generate a PNG file.
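
Both conversion steps can also be combined into a single command, since gprof2dot writes to standard output when no output file is given:

$ python gprof2dot.py -f pstats wsgi.prof | dot -Tpng -o mygraph.png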

All the tools used here (ab, dot, gprof2dot) provide a huge number of options you might want to explore further. This way we can generate more or less complete graphs (leaving out functions with little impact), use colours, etc.
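
For example, gprof2dot can leave out nodes and edges below a given percentage of the total time. The option names below (-n for the node threshold, -e for the edge threshold) are taken from gprof2dot's help output and may differ between versions:

$ python gprof2dot.py -f pstats -n 2.0 -e 1.0 -o wsgi.prof.dot wsgi.prof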

In the end you hopefully know more about your application and where it spends its time.