Written by Krzysztof Pado
Published January 5, 2017

Tracing back Scala Future chains

In the era of asynchronous programming, it’s important to know that function invocations that form a logical chain in the source code are not confined to one thread. In this article I’ll show you how to trace such chains back when using Scala’s Futures, and how to benefit from that knowledge.

Preface: synchronous vs asynchronous

In the era of synchronous programming, every software developer could use stack traces to trace a chain of function invocations from the place an error occurred back to the root cause of the problem. I suspect that most of us can relate to the process of debugging code by burrowing through backtraces in log files. In a production environment, it was often the only way of finding the root cause of a bug.

The rules of the game have changed, however, with asynchronous programming becoming more popular. In asynchronous code, function invocations that form a logical chain in the source code are not confined to one thread, so stack traces are not of much help in tracing back the execution. In this blog post I’ll cover some ways of tackling this problem when using Scala’s Futures. Futures are not the only way to write asynchronous code, but this article focuses on them exclusively.

The problem

To make this article somewhat more complete, I feel obliged to describe what a stack trace really is and what is so different about Futures. If you feel comfortable with this topic, jump right to the solutions.

On the JVM, each thread has its own data area called a stack [1]. Every time a method is invoked, a frame for it is pushed onto the top of the stack. The frame is removed from the stack (popped) when the method returns. Nested method invocations therefore leave the stack containing frames for all methods that are part of the invocation chain. By default, every time an Exception is instantiated, it captures the current thread’s stack trace, which can then be inspected when the exception is caught [a].

Have a look at this small example:
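The original listing is not reproduced in this text, so below is a minimal sketch of what such a program could look like, assuming three nested methods m1, m2 and m3, where the innermost one prints the stack trace captured by a freshly created exception:

```scala
object SyncExample extends App {
  def m1(): Unit = m2()
  def m2(): Unit = m3()
  def m3(): Unit = {
    // Instantiating the exception captures the current thread's stack trace
    new Exception("failure in m3").printStackTrace()
  }

  m1()
}
```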

In the above program, after method m1 is invoked, when m3 is executing, the stack contains frames for m1, m2 and m3, and the following is printed:
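With the sketch above, the output would look roughly like this (exact line numbers depend on the source file, and the trailing frames of the runtime are elided):

```
java.lang.Exception: failure in m3
    at SyncExample$.m3(SyncExample.scala:6)
    at SyncExample$.m2(SyncExample.scala:3)
    at SyncExample$.m1(SyncExample.scala:2)
    ...
```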

On the contrary, when using Futures, a logical invocation chain is not executed by a single thread. Every time you use map, flatMap or any other Future method that accepts an ExecutionContext, what you really do is register a callback that is executed on a thread provided by the ExecutionContext. When an exception is thrown in the callback code, its stack trace cannot cover the full logical chain that includes all the map and flatMap invocations.
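For comparison, here is an asynchronous variant of the same chain, sketched with plain Futures (the structure follows the description above; the exact original listing is not reproduced):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Failure

object AsyncExample extends App {
  def m1(): Future[Int] = Future(1).flatMap(_ => m2())
  def m2(): Future[Int] = Future(2).flatMap(_ => m3())
  def m3(): Future[Int] = Future(3).map(_ => throw new Exception("failure in m3"))

  // Block only so that the demo prints the trace before the JVM exits
  Await.ready(m1(), 5.seconds).value.foreach {
    case Failure(ex) => ex.printStackTrace()
    case _           => ()
  }
}
```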

Above, after method m1 is invoked, when the exception is thrown in m3, the stack trace contains neither a reference to m1 nor to m2, because the invocations of those methods took place on different threads. In this case, the following is printed:
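Again with the sketch above, the printed trace would look roughly like this; note that only the anonymous callback inside m3 and the thread-pool machinery show up:

```
java.lang.Exception: failure in m3
    at AsyncExample$.$anonfun$m3$1(AsyncExample.scala:9)
    ... frames of the Future/ExecutionContext machinery, but no m1 or m2 ...
```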

The conclusion is that stack traces have an inherent limitation of tracing invocations of just one thread. Chapters below describe how to overcome this limitation.

Compile-time approach

One of the solutions to the problem is to create a subtype of Future, say TracingFuture, and provide overloaded definitions of map, flatMap, etc. In each of these methods a “stack” frame can be created and saved inside the TracingFuture, where it can be accessed on failure. To get the source code location needed to create such a frame, we’ll use the excellent sourcecode library, which uses Scala macros under the hood.

A very similar approach was proposed by Johannes Rudolph in this GitHub repository.
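The snippet discussed below is not reproduced in this text, so here is a simplified sketch of the idea, assuming the sourcecode library is on the classpath. To keep it short it is written as a wrapper around a plain Future rather than a true Future subtype, but it shows the essential parts: each combinator records a FutureTraceElement using sourcecode’s implicit values, and onComplete appends those recorded frames to a failed exception’s stack trace.

```scala
import scala.concurrent.{ExecutionContext, Future}
import scala.util.Try

// One "frame" of the logical Future chain: where a combinator was invoked.
final case class FutureTraceElement(enclosing: String, file: String, line: Int)

final class TracingFuture[+T](val underlying: Future[T],
                              trace: Vector[FutureTraceElement]) {

  private def extended(implicit enc: sourcecode.Enclosing,
                       file: sourcecode.File,
                       line: sourcecode.Line): Vector[FutureTraceElement] =
    trace :+ FutureTraceElement(enc.value, file.value, line.value)

  def map[S](f: T => S)(implicit ec: ExecutionContext,
                        enc: sourcecode.Enclosing,
                        file: sourcecode.File,
                        line: sourcecode.Line): TracingFuture[S] =
    new TracingFuture(underlying.map(f), extended)

  def flatMap[S](f: T => TracingFuture[S])(implicit ec: ExecutionContext,
                                           enc: sourcecode.Enclosing,
                                           file: sourcecode.File,
                                           line: sourcecode.Line): TracingFuture[S] =
    new TracingFuture(underlying.flatMap(t => f(t).underlying), extended)

  // On failure, enrich the exception's stack trace with the recorded "future trace".
  def onComplete[U](f: Try[T] => U)(implicit ec: ExecutionContext): Unit =
    underlying.onComplete { result =>
      result.failed.foreach { ex =>
        val futureFrames = trace.map(e =>
          new StackTraceElement(e.enclosing, "<futureChain>", e.file, e.line))
        ex.setStackTrace(ex.getStackTrace ++ futureFrames)
      }
      f(result)
    }
}

object TracingFuture {
  def apply[T](body: => T)(implicit ec: ExecutionContext,
                           enc: sourcecode.Enclosing,
                           file: sourcecode.File,
                           line: sourcecode.Line): TracingFuture[T] =
    new TracingFuture(Future(body),
      Vector(FutureTraceElement(enc.value, file.value, line.value)))
}
```

A full implementation, like the ones referenced in this article, would of course also have to expose the rest of the Future API.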

In onComplete, the original stack trace of the exception is enriched with the “future trace” that contains all the methods that took part in chaining the future. A FutureTraceElement, containing the enclosing class name and a source location, is created using implicit values provided by the sourcecode library.

If we replace all Futures with TracingFutures in the previous snippet, the following trace will be printed:
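The exact output depends on the code; with the sketches above it would have roughly this shape, with the recorded combinator call sites appended after the regular frames:

```
java.lang.Exception: failure in m3
    at AsyncExample$.$anonfun$m3$1(AsyncExample.scala:9)
    ... frames of the Future/ExecutionContext machinery ...
    at AsyncExample.m3.<futureChain>(AsyncExample.scala:9)
    at AsyncExample.m2.<futureChain>(AsyncExample.scala:8)
    at AsyncExample.m1.<futureChain>(AsyncExample.scala:7)
```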

First, the original stack trace is printed, and then the “future trace” frames follow.

This solution is nice because it works mostly at compile time and thus has close to zero performance overhead. It forces the programmer to use TracingFuture instead of the standard Future, though, and introducing that into a large codebase may be tedious.

You can find a complete example in the demo project.

Runtime bytecode instrumentation

The second solution is to perform runtime bytecode instrumentation via the JVM agent API. The JVM allows you to register agents that modify classes at runtime. With such an agent, you can change the behavior of the Future trait’s methods, like map and flatMap, to do something similar to our compile-time solution. Implementing such an agent is a much more complex task than creating the rather simple TracingFuture class from the previous solution, and the code is too long to walk through in its entirety in this blog post. I’ll explain how to use this approach, though.

Ruslan Shevchenko implemented such an agent in his GitHub repository. The project may not be 100% complete, but it works just fine for most applications. I’ll use it as an example.

The project works as follows: the bytecode is modified so that every Future method call that accepts an ExecutionContext is intercepted, and a custom implementation is invoked instead of the original. The custom implementation obtains the current stack trace and stores it in a Scala DynamicVariable bound to the thread that is going to execute the callback. The snippet below presents such an intercepting function (in this case, for flatMap).
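The agent’s actual code is not reproduced here; the sketch below only illustrates the mechanism just described, using illustrative names (ChainTrace, InterceptedFuture) that are not part of the real project:

```scala
import scala.concurrent.{ExecutionContext, Future}
import scala.util.DynamicVariable

// Holds the stack trace captured on the calling thread so that the thread
// running the callback can still see it.
object ChainTrace {
  val current: DynamicVariable[Array[StackTraceElement]] =
    new DynamicVariable[Array[StackTraceElement]](Array.empty)
}

object InterceptedFuture {

  // Stands in for the intercepted Future.flatMap: capture the caller's stack
  // trace here and re-bind it around the callback that runs on an
  // ExecutionContext thread.
  def flatMap[T, S](fut: Future[T])(f: T => Future[S])
                   (implicit ec: ExecutionContext): Future[S] = {
    val callerTrace = Thread.currentThread().getStackTrace
    fut.flatMap { t =>
      ChainTrace.current.withValue(callerTrace) {
        f(t)
      }
    }
  }
}
```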

To enable the agent you must first obtain the JAR file containing it. I prebuilt it so that you don’t have to compile it yourself; it’s available here. Please note that the linked JAR file is built for Scala 2.12. To register the agent, pass the -javaagent:/path/to/the/agent/agent_2.12.jar argument to the java command executing your program.
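If you launch your application through sbt rather than with a bare java command, one way to pass that flag is to fork the JVM and set javaOptions; a minimal build.sbt sketch, assuming the agent JAR was downloaded to the path shown:

```scala
// build.sbt: run the program in a forked JVM with the tracing agent attached.
// The path below is a placeholder; point it at the downloaded agent JAR.
fork := true
javaOptions += "-javaagent:/path/to/the/agent/agent_2.12.jar"
```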

If you ran our previous example with the agent enabled, you’d get the following trace:
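Only the shape of the trace is shown here; the exact frames depend on the agent version and the thread pool in use:

```
java.lang.Exception: failure in m3
    at AsyncExample$.$anonfun$m3$1(AsyncExample.scala:9)
    ... frames of the Future/ExecutionContext machinery ...
    at AsyncExample$.m2(AsyncExample.scala:8)
    ... more Future/ExecutionContext frames ...
    at AsyncExample$.m1(AsyncExample.scala:7)
    ...
```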

As you can see, in this solution the Future combinator calls are interleaved with the “normal” stack trace.

Bytecode instrumentation has some advantages over the compile-time solution. You don’t have to change the code in any way, and you can opt in and out just by enabling or disabling the agent. However, this solution comes at a price: it has a major performance impact. Every wrapped method call requests a whole thread stack trace from the JVM [2], which some might find unacceptable for production use. Also, more things can break, since bytecode instrumentation is a tricky business and has to be used with care.

Again, see the demo project for a complete example.

Demo project

I implemented a very simple asynchronous “calculator” to present the solutions and also to compare them. It fails for some inputs, generating a stack trace. The code is available here.

To try it out, first clone the repository, start sbt’s interactive mode, and run a chosen project. The build contains a project without any tracing, one using the compile-time solution, and one using the agent-based solution; a schematic session is sketched below.
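The placeholders below stand for the repository URL and the project names defined in the demo build; substitute the real ones from the repository:

```
git clone <repository-url>
cd <repository-directory>
sbt
> <no-tracing-project>/run
> <compile-time-project>/run
> <agent-based-project>/run
```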

Each execution will fail, but with differing stack traces.

Summary

We are all in need of async to make the systems we design scalable. With the introduction of such clever mechanisms as Future combinators, asynchronous programming became a lot more convenient than in its beginnings (does anybody remember callback hell?). It can get even more convenient with easy error tracing using the solutions presented in this article; I hope you’ll find them useful.

Notes

a. This behavior can be overridden; see this Stack Overflow answer.

References

  1. Lindholm, Yellin, Bracha, Buckley. (2015) Java Virtual Machine Specification: Java Virtual Machine Stacks
  2. Aleksey Shipilev. (2014) The Exceptional Performance of Lil’ Exception (chapter on exception instantiation)