May 13 2008

Where was this class loaded from?!

Tags: , , Rajiv @ 9:54 am GMT-0700

There are many ways to figure out where a class is being loaded from. Naveen uses the JVM's -verbose (i.e. -verbose:class) flag. If you are using Pramati Server, you can use the command who_load_me to find out which classloader loaded the class, the classloader hierarchy, and where the class was loaded from.

Another way to find out where a class was loaded from is by using the getCodeSource method, like so:

public static void which(Class aClass) throws Exception {
    //Note: getCodeSource() returns null for classes loaded by the bootstrap classloader,
    //so this will throw a NullPointerException for core classes like java.lang.String.
    System.out.println(aClass.getProtectionDomain().getCodeSource().getLocation());
}
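For example, a hypothetical call (the class name here is made up; what gets printed is the URL of the JAR or directory the class was actually loaded from):

which(com.example.CustomerService.class);
//prints something like: file:/C:/myapp/lib/services.jar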

I used to use this so much that Ravi finally decided to check it into CVS. Thanks buddy! A lot of people wondered why I always called it which; it is in memory of the Unix command which, which tells you the directory in the PATH variable from which the shell is picking up an executable.

And if you are coming for an interview and give this as the answer to my question, you had better be able to explain what a protection domain is and when, and by whom, the codeSource is set.

Update: I was inspired to blog about this after I spoke with Deepak about this. I didn’t know he had a blog and he had blogged about the same topic too!


Sep 22 2005

Why is finalize method protected?

Tags: , Rajiv @ 11:30 am GMT-0700

Rakesh sent me a mail the other day asking why the finalize method is protected. Well, here’s what I think:

The finalize method is invoked by the JVM/garbage collector on objects which are no longer referenced. So, if I were the guy who designed the finalize method, ideally I would want it to be private, just like the writeObject method of a Serializable class is private. These methods are not meant to be called by user code; they are invoked only by the JVM/runtime classes. So it would make sense to make them private and have special handling in the VM to invoke them. This is fine in the case of the writeObject method; however, making the finalize method private would have other implications.
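For reference, the serialization hook that analogy refers to is declared private on the class that implements Serializable; it is never called by user code but is invoked reflectively by the serialization machinery. A minimal sketch (the Account class is made up):

import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class Account
    implements Serializable
{
    private String accountId;

    //Never called by user code; invoked reflectively during serialization.
    private void writeObject(ObjectOutputStream out)
        throws IOException
    {
        out.defaultWriteObject();
    }
}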

Assume we had decided to go ahead and allow the finalize method to be private. Consider the following classes

public class SuperClass
{
    private HeavyResource resource = new HeavyResource();
 
    private void finalize()
        throws Throwable
    {
        resource.shutdown();
    }
}

public class SubClass
    extends SuperClass
{
    private AnotherResource another = new AnotherResource();
 
    public SubClass(){
        super(); //Added for clarity
    }

    private void finalize()
        throws Throwable
    {
        another.shutdown();
    }
}

Since SubClass extends SuperClass, when we create an instance of SubClass we would also have created an instance of HeavyResource and an instance of AnotherResource. However, when this instance of SubClass is being finalized, we shut down only the instance of AnotherResource. The shutdown method of HeavyResource would not be called.

The recommended practice for finalize methods is to call the finalize method of the superclass from the finally block of your own finalize method. (That's right: in a constructor the first statement has to be the call to super(), so for the destructor/finalize the sequence should be reversed and super should be called last.) So our SubClass would look something like:

public class SubClass
    extends SuperClass
{
    private AnotherResource another = new AnotherResource();
 
    public SubClass(){
        super(); //Added for clarity
    }

    private void finalize()
        throws Throwable
    {
        try{
            another.shutdown();
        }finally{
            super.finalize();//Compilation error here
        }
    }
}

This would have worked, but unfortunately it won't even compile. You cannot call a private method of the super class. There are a couple of ways I can think of to solve this problem.

One solution, of course, would be to relax our requirements, make the access modifier of the finalize method protected, and hope people are sensible enough not to call it!
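That is, in fact, what the JDK does: Object.finalize is declared protected, and subclasses are expected to chain to super.finalize() from a finally block. With the finalize methods in both SuperClass and SubClass made protected, the example compiles and both resources get shut down:

public class SubClass
    extends SuperClass
{
    private AnotherResource another = new AnotherResource();

    protected void finalize()
        throws Throwable
    {
        try{
            another.shutdown();
        }finally{
            super.finalize(); //Compiles now that SuperClass.finalize is protected
        }
    }
}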

Another solution could have been to special-case the Java Language Specification (JLS) to allow calls to super.finalize() even when the finalize method of the super class has the private access modifier.

My preferred solution would be to have compilers automatically add a try-finally block and insert a call to super.finalize() in the finally block. (Modifying byte code to add a try-finally block is a nightmare compared to adding a call to super(), but that's a separate discussion!) This would be similar to, and consistent with, the way compilers add the call to super() as the first statement of a constructor if it does not already exist. (You can use javap -c ClassName to look at the byte code generated by the compiler, but I prefer to use JClasslib.)

The Java Language Specification (3rd ed., Section 12.6) does mention that the call to super.finalize is not injected automatically and provides a hint as to why:

The fact that class Object declares a finalize method means that the finalize method for any class can always invoke the finalize method for its superclass. This should always be done, unless it is the programmer’s intent to nullify the actions of the finalizer in the superclass. (Unlike constructors, finalizers do not automatically invoke the finalizer for the superclass; such an invocation must be coded explicitly.)

It appears this was done "so that the programmer can nullify the actions of the finalizer in the superclass." But thanks to this choice, developers today use tools like PMD which warn them about empty finalize methods and about finalize methods that do not call super.finalize().

Update (16 Jan 2007): It appears that the designers of C# learnt from Java's mistakes and decided to make constructors and destructors symmetric. In C#, the destructor of the super-class is called whether or not the destructor of the sub-class completed successfully. From "Section 16.3: How exceptions are handled" of the C# language specification 1.2:

Exceptions that occur during destructor execution are worth special mention. If an exception occurs during destructor execution, and that exception is not caught, then the execution of that destructor is terminated and the destructor of the base class (if any) is called. If there is no base class (as in the case of the object type) or if there is no base class destructor, then the exception is discarded.


Jul 05 2004

An update to: Memory leaks with non-static ThreadLocals …

Tags: , Rajiv @ 7:34 pm GMT-0700

Rejeev made some interesting comments on my previous post at Javalobby. The memory leak we had faced was with JDK 1.3_02. The Java bug database has a couple of bugs reported for the same: Bug 4414045 and Bug 4455134. As per the JDK 1.4 beta release notes, they are now fixed. Looking at the JDK 1.4 ThreadLocal sources, it appears that the threadLocals map now holds weak references, which was as per my expectation:

The ThreadLocal should have taken care of cleaning itself up when no user code is referring to it. The way to achieve this is to make the threadLocals map of the Thread class a WeakHashMap.
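As a quick illustration of the weak-key behaviour I had in mind (a toy example, not the actual Thread internals):

import java.util.Map;
import java.util.WeakHashMap;

public class WeakKeyDemo
{
    public static void main(String[] args) throws Exception
    {
        Map map = new WeakHashMap();
        Object key = new Object();
        map.put(key, "value");

        key = null;        //drop the only strong reference to the key
        System.gc();       //a hint only; not guaranteed, but usually enough for this demo
        Thread.sleep(100);
        System.out.println(map.size()); //typically prints 0: the entry went away with the key
    }
}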

Second, consider the case where the object being put in the Map [in this case a CachedObject] has overridden the equals and hashCode methods. Say the hashCode method returns a different hashcode based on the current fields of the CachedObject class. Now, in the suggested implementation, this CachedObject is being put in a Map. What if the hashcode of the object changes [because the value of some field changed] while it is in the Map? We will not be able to remove the object from the Map. The memory leak will remain.
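A tiny, self-contained example of that trap (the CachedObject here is a made-up stand-in whose hashCode depends on a mutable field):

import java.util.HashMap;
import java.util.Map;

public class MutableKeyLeak
{
    static class CachedObject
    {
        private String state;
        CachedObject(String state) { this.state = state; }
        void setState(String state) { this.state = state; }
        public int hashCode() { return state.hashCode(); }
        public boolean equals(Object o) {
            return (o instanceof CachedObject) && ((CachedObject) o).state.equals(state);
        }
    }

    public static void main(String[] args)
    {
        Map map = new HashMap();
        CachedObject key = new CachedObject("initial");
        map.put(key, "the context");

        key.setState("changed");        //hashCode is now different
        map.remove(key);                //looks in the wrong bucket; the entry is not removed
        System.out.println(map.size()); //still 1: the entry (and its value) is stranded in the map
    }
}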

When we decided to make the ThreadLocal a static field instead of a non-static field, we changed the value of the ThreadLocal to a Map which holds the instances of CachedObject vs Context. The assumption was that lookups in the Map would be based on the instance (identity) of the key. However, making the Map a HashMap broke this assumption, as the lookups are based on the hashcode of the key. We need a Map implementation that does instance-based lookups. One way to achieve this would be to change the following HashMap methods:

static int hash(Object x) { 
    /* 
    int h = x.hashCode(); 

    h += ~(h << 9); 
    h ^=  (h >>> 14); 
    h +=  (h << 4); 
    h ^=  (h >>> 10); 
    return h; 
    */ 
    return System.identityHashCode(x); 
}

static boolean eq(Object x, Object y) { 
    //return x == y || x.equals(y); 
    return x==y; 
}

We change the hash method to return the identityHashCode of the object; the identityHashCode of an Object is constant. We change the eq method to do an instance check. However, since these are static methods, we cannot override them to make these changes. So to achieve this we would have to make a copy of the HashMap and make the changes in it.

Another alternative is to wrap the key being added to the Map. This wrapper would return the identityHashCode of the key as its hashcode and check instance equality of keys in its equals. So we write an InstanceMap and use an InstanceMap in the ThreadLocalMap instead of a HashMap.

import java.util.HashMap; 
 
public class InstanceMap 
        extends HashMap 
{ 
    private IdentityWrapper lookup = new IdentityWrapper();
 
    public Object get(Object key) 
    { 
        lookup.key=key; 
        return super.get(lookup); //look up via the wrapper so identity semantics apply
    }
 
    public Object put(Object key, Object value) 
    { 
        return super.put(new IdentityWrapper(key), value); 
    }
 
    public boolean containsKey(Object key) 
    { 
        lookup.key=key; 
        return super.containsKey(lookup); 
    }
    //other overridden methods not listed for brevity ...
    private static class IdentityWrapper 
    { 
        private Object key;
 
        public IdentityWrapper() 
        { 
        }
 
        public IdentityWrapper(Object key) 
        { 
            this.key = key; 
        }
 
        public int hashCode() 
        { 
            return System.identityHashCode(key); 
        }
 
        public boolean equals(Object obj) 
        { 
            return (obj instanceof IdentityWrapper) ? ((IdentityWrapper) obj).key == key : false; 
        } 
    } 
}

Since this class is used only in our ThreadLocalMap, it will never be accessed by two threads. We have tried to minimize the number of wrapper instances created by reusing a single wrapper [the field called lookup] for all the lookup methods like get and containsKey. However, every call to the put method will create a new instance, and every remove call will make one GC'able.


Jun 23 2004

Of non-static ThreadLocals and memory leaks …

Tags: , , Rajiv @ 8:36 pm GMT-0700

My recent experience has made me realize that the ThreadLocal class was never really designed to be used as a non-static field. However, the implications of making it non-static are not highlighted enough in the JDK API documentation or in the many ThreadLocal tutorials you find on the net.

If you want to know more about ThreadLocals, I would recommend reading Threading lightly by Brian Goetz. Brian talks about why and how to use ThreadLocals, along with some examples. And if you are a performance buff, you will surely have an "Aha!" moment when you read the section on the performance bottleneck in the JDK 1.2 implementation and how it was resolved in JDK 1.3.

Coming back to my "Non-static ThreadLocals considered harmful" [pun intended! ;)], a couple of months ago Prasad called:

  • Prasad: Hey, I need to store some information per thread per object. Can I do that?
  • Me: What?!
  • Prasad: See, when we used a ThreadLocal for the transaction id, it was a static singleton
    across the VM. In my case I need the value of my ThreadLocal to be per instance of a cached object.
  • Me: Yeah, so you can have a non-static ThreadLocal field in your cached object… right?!
  • Prasad: Yeah .. that was my question. Would that work?

What he wanted was something like:

Initial reference diagram

So I went about stepping through the code/logic to prove how/why it would work if you made the ThreadLocal non-static. Well, he tried the whole thing and it did work. Only months later did we realize that there was a memory leak in the application! Running the app through OptimizeIt and looking for non-GC'ed instances, we realized that the number of Context objects was larger than the cache size. Likely someone was still referring to a Context even after the corresponding CachedObject had been removed from the Cache. OptimizeIt's reduced reference graph showed that the ThreadLocals were holding references to these Context instances. Aha! Now, there were multiple places where CachedObjects were being evicted from the Cache. We had to set the ThreadLocal to null at all these places.

The simplest option seemed to be to set the thread local to null in the finalize of the cached object. *buzzer* Well, unfortunately that wouldn't work. The finalize method would get called in the Finalizer thread of the VM. So it would try to set the value to null in the Finalizer thread … but we wanted it to be set to null in the Thread that accessed it!

So we refactored the code a bit, made sure all evictions happen in one place, and made sure to call set(null) on the ThreadLocal of the CachedObject that was about to be evicted. Unfortunately, that wasn't the end of the story. We still had OutOfMemory issues … only they took a little longer to show up now. OptimizeIt's reduced reference graph showed that the number of CachedObjects was now the same as the cache size, so our solution did work. The remaining leak was due to the number of ThreadLocals. Logically, the number of ThreadLocals has to be the same as the number of CachedObjects. So even after a CachedObject had been GC'ed, someone was holding a reference to its ThreadLocal. Initially, while stepping through the logic, we had stopped the moment we concluded that any thread would find the right value. We never went further to analyze what happens to all the ThreadLocals we created. We just assumed that GC would take care of it. [Which I still think was fair enough!]

When we invoke ThreadLocal.set(), the ThreadLocal adds itself as the key and the Context as the value to the threadLocals map of the current Thread. [This is in JDK 1.3 and above; the implementation is slightly different in JDK 1.2, but the end result is the same … there is a memory leak!] By setting the value of the ThreadLocal to null, we have made the value of the map entry null. This makes the Context available for garbage collection. However, the threadLocals map of the Thread still holds a reference to the key, which in our case is the ThreadLocal. We actually need to remove the ThreadLocal, not just set its value to null. If you are having trouble following this, consider opening ThreadLocal.java and Thread.java from <jdk_1.3++_home>/src.zip and stepping through the ThreadLocal.set and ThreadLocal.get methods. The memory leak was aggravated in this scenario because the web container reuses threads across requests while the cached objects keep getting created and garbage collected. However, the ThreadLocals never get GC'ed.

Unfortunately, it appears that until JDK 1.5 came out, ThreadLocals did not have a proper lifecycle. They were expected to be created as static singletons or in limited numbers. For a proper lifecycle I would have expected:

  1. The ThreadLocal should have taken care of cleaning itself up when no user
    code is referring to it. The way to achieve this is to make the threadLocals
    map of the Thread class a WeakHashMap. I wonder why this was not done …
    performance issues?! … or ThreadLocal was just not designed for non-static usage??
  2. An explicit destroy method should have been provided on ThreadLocal. In
    JDK 1.5, a new method remove has been added to ThreadLocal's API to complete
    its lifecycle management. Users can call the remove method and avoid such
    memory leaks (see the sketch after this list).
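A minimal sketch of that lifecycle (assumes JDK 1.5; the RequestHandler and Context names are made up for illustration):

public class RequestHandler
{
    //One static ThreadLocal for the whole VM.
    private static final ThreadLocal CONTEXT = new ThreadLocal();

    //Stand-in for whatever per-thread state you keep.
    static class Context
    {
        final Object request;
        Context(Object request) { this.request = request; }
    }

    public void handle(Object request)
    {
        CONTEXT.set(new Context(request));
        try {
            process(request);
        } finally {
            CONTEXT.remove(); //JDK 1.5+: removes the entry (key and value) from this thread's map
        }
    }

    private void process(Object request)
    {
        Context ctx = (Context) CONTEXT.get(); //any code running on this thread sees the context
        System.out.println("processing " + ctx.request);
    }
}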

But most of us can't make JDK 1.5 the minimum requirement for our application, so we have to find some other way out.

One simple option seemed to be to write a CustomThreadLocal based on the JDK 1.2 implementation of ThreadLocal: basically, keep a synchronized Map of Thread vs threadLocalValue and expose a remove method which removes the Thread from the map. However, this has serious performance issues, which is why ThreadLocal was redesigned in JDK 1.3! So the only realistic option is to make the ThreadLocal a static field.

Currently, each CachedObject instance has its own ThreadLocal instance, and each ThreadLocal instance holds a Context instance. If we make the ThreadLocal static, then all the instances of CachedObject will refer to the same ThreadLocal instance. So, in order to retain the original semantics, we make the value of the ThreadLocal a map of CachedObject vs Context. Since multiple threads will never access this map [it is local to the thread], the map does not need to be synchronized. So: no performance bottlenecks, and since a remove method is exposed, no memory issues. The new reference graph looks like: New reference diagram

and the source for the ThreadLocalMap would look like:

import java.util.*; 
 
public class ThreadLocalMap 
        implements Map 
{ 
    private ThreadLocal threadLocal = new ThreadLocal(); 
 
    private Map getThreadLocalMap(){ 
        Map map = (Map) threadLocal.get(); 
        if(map==null){ 
            map = new HashMap(); 
            threadLocal.set(map); 
        } 
        return map; 
    } 
 
    public Object put(Object key, Object value) 
    { 
        return getThreadLocalMap().put(key, value); 
    } 
 
    public Object get(Object key) 
    { 
        return getThreadLocalMap().get(key); 
    } 
 
    public Object remove(Object key) 
    { 
        return getThreadLocalMap().remove(key); 
    } 
 
    //code snipped for brevity ... 
}

Jun 17 2004

Of Thread dumps and stack traces …

Tags: , , , Rajiv @ 10:45 am GMT-0700

Thread dumps and stack traces are probably some of the least understood features of Java. Why else would I come across developers who have no clue what to do after looking at an exception stack trace?

Street Side Programmer?!

An ex-colleague of mine, Manoj "The Anger" Acharya, had coined the phrase Street Side Programmer [a la Server Side Programmer], and he would dole out this title to all those who came to him with annoying questions. Nothing annoyed him more than having someone come and ask him "I am getting some exception when I do *blah* *blah*". His typical answer: *bleep*'ing Street Side Programmers … what is "some exception" supposed to mean?! Doesn't it have a name? Doesn't it have a stack trace??

I was reminded of him the other day, when a trainee learning Java came to me saying "My program is not running … there seems to be some problem … can you come and take a look?". The kid is quite sweet, so instead of telling him about Anger, I just went to his seat. The command prompt had something like this:

C:\learn\classes>java Test
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 5
        at Test.run(Test.java:11)
        at Test.<init>(Test.java:4)
        at Test.main(Test.java:19)

I wonder if the book he was reading had any section on reading stack traces. [Monsieur Bruce Eckel … are you listening?!] I really think any introduction-to-Java book should have this as one of the earliest chapters … right after defining a class and a method! Someone makes an error while trying out samples, or is tinkering around with the code, which typically results in an exception … what is one supposed to do next?!

Anatomy of a Stack Trace

Well, I explained that an exception stack trace is Java's way of telling you exactly what went wrong and where it went wrong. The first line gives you the exception name and the exception message, and what follows is the "stack trace". The stack trace is read from top to bottom, line by line. Each line has the name of the class and the name of the method being executed, followed by the file name and line number in parentheses.

In this case a java.lang.ArrayIndexOutOfBoundsException with the message "5" was raised. To know where it was raised, we look at the next line. It tells us that the exception was raised while executing the run method of the Test class, at line number 11 of the Test.java file. The next line tells us that the run method was called by the constructor [the stack trace shows constructors as <init> and static initializer blocks as <clinit>] of the Test class, at line number 4 of Test.java. The next line tells us that the constructor was called by the main method of the Test class, at line number 19 of Test.java.

So the stack trace, read out in English, would go something like this:

You accessed an array with an index of 5; however, the array does not have 6 elements [thanks to the zero-based index]. This happened while I was executing the run method of the Test class, at line number 11 of the Test.java file. The run method was called by the constructor of the Test class at line number 4 of Test.java. The constructor was called by the main method of the Test class at line number 19 of Test.java.

Well… there is a wealth of information here. It tells you exactly what the VM was doing when the exception was raised. Let us see how to debug the issue given all this information. The Test.java file looks like this: 

1    public class Test
2    { 
3        public Test(int[] nums){
4            run(nums); 
5        }
6     
7        private void run(int[] nums) 
8        { 
9            int n = nums.length; 
10           for (int i = 0; i < nums.length; i++) { 
11               int num = nums[n]; 
12               System.out.println(num); 
13           } 
14       }
15    
16       public static void main(String argv[]) 
17               throws Exception 
18       { 
19           new Test(new int[]{1,3,5,7,9}); 
20       } 
21   } 

Stepping through the code [as per the stack trace], we called the main method, which invoked the constructor at line number 19, which in turn called the run method at line number 4. Hey, the stack trace was correct after all! Now we look at line number 11, where the exception was raised. The exception says that we accessed an array with an incorrect index. The only array being accessed at line number 11 is the nums array, and the index being used to access it is n. So what the VM is trying to tell you is that n is larger than the last valid index of the array nums. Which is in fact true: n happens to be the length of the array, so it IS greater than the last index of the array. What the user really wanted was to use i, the loop counter, as the index, not n.
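With that fix, the run method would look like this:

private void run(int[] nums) 
{ 
    for (int i = 0; i < nums.length; i++) { 
        int num = nums[i]; //index with the loop counter i, not the array length
        System.out.println(num); 
    } 
}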

Another common exception raised is the java.lang.NullPointerException. The NPE!  A NullPointerException is Java’s way of telling a user that a null object reference was being used. Take a look at the following lines from a stack trace [snipped for brevity]:

java.lang.NullPointerException:
        at foo.bar.MyServlet.doGet(MyServlet.java:36)

So now we know that a null object reference was being dereferenced at line number 36 of MyServlet.java. The code for the servlet looks something like:

35    String userNameParam = request.getParameter("username");
36    if(userNameParam.equals("root"))
37    { 

The only object reference being used in line number 36 happens to be userNameParam. So it was null when the VM was executing that line. Now we track down what values were assigned to userNameParam. Line number 35 happens to be the only assignment in this case; it assigns the value of request.getParameter("username") to userNameParam. Since the VM told us that userNameParam was null, it means the call request.getParameter("username") returned null. Looking at the documentation of the method, we know that it may indeed return null, so users of the method need to code taking that into consideration. In this case we would change the condition like so:

35    String userNameParam = request.getParameter("username");
36    if(userNameParam!=null && userNameParam.equals("root"))
37    { 

Thanks to stack traces, someone who is not even aware of the code can pinpoint the exact location of the error. In most cases a stack trace is definitely a good starting point for debugging erroneous behavior. Who wants messy core dumps anyway when you have readable stack traces?!

Innovative uses of stack traces

Once you know what a stack trace provides, there are a lot of innovative ways to use it. Basically, to answer questions like "how did I get here?", or to record the location of an event.

Recently a customer noticed that the VM was performing Full GCs very frequently. This would happen even when the application was completely idle. Looking at java -verbose:gc -XX:+PrintGCTimeStamps ..., we realized that a Full GC would occur every minute … on the dot. We then tried adding the -XX:+DisableExplicitGC option and voila, no more Full GCs! So it looked like someone was calling System.gc somewhere, once every minute.

So how do we find out who is calling it?! You would extract the System.java file from <jdk-home>/src.zip!/java/lang/System.java and edit it like so:

736    public static void gc() {
737        new Exception("Some one triggered Full GC from here").printStackTrace();
738        Runtime.getRuntime().gc();
739    }
740

Compile the modified file and prepend it to your bootclasspath using the option -Xbootclasspath/p:outputDir. Next time we ran the application, we got the stack trace:

java.lang.Exception: Some one triggered Full GC from here
        at java.lang.System.gc(System.java:737)
	at sun.misc.GC$Daemon.run(GC.java:92)

Adding one more stack trace to GC.java [you will not find sources for the com.sun.* and sun.* packages in the src.zip that comes with your JDK; you will have to download them from Sun's Community Source site], we get to know that sun.rmi.transport.ObjectTable is triggering the Full GC based on an interval specified by the system property sun.rmi.dgc.server.gcInterval. The default value for the property happens to be one minute.

So, using the printStackTrace method, we could debug where the Full GC was being triggered explicitly. You could of course do the same by setting a method breakpoint on the System.gc method. Or you could be a smart googler and stumble upon the "Other considerations" section of the GC options page!

Instead of doing a new Exception(…).printStackTrace(), you could alternatively call Thread.dumpStack(), which internally does the same thing. The only disadvantage is that Thread.dumpStack() does not take a message as its parameter.

Sometimes it makes sense to create an exception object and hold a reference to it until a later point in time. Suppose you have a class which looks like:

1    import java.io.IOException; 
2     
3    /** 
4     * A class that represents a heavy weight resource. 
5     */ 
6    public class Resource 
7    { 
8        private boolean closed; 
9     
10       public void close() throws IOException{ 
11           if(closed) 
12               throw new IOException("Resource already closed."); 
13           //resource cleanup 
14           closed=true; 
15       }
16       //code snipped for brevity ...

The class throws an exception when a user invokes close on an already closed resource. The stack trace of the IOException is going to tell you where in the code you tried to close the already closed resource. For example, the following output tells you that when you called close on the Resource at line 37 of ResourceTest.java, it was already closed.

C:\learn\classes>java ResourceTest
java.io.IOException: Resource already closed.
        at Resource.close(Resource.java:12)
        at ResourceTest.closeResource(ResourceTest.java:37)
        at ResourceTest.run(ResourceTest.java:26)
        at ResourceTest.main(ResourceTest.java:50)

But now, what if you want to know where you closed it the first time?! You would change the code like so:

1    import java.io.IOException; 
2     
3    /** 
4     * A class that represents a heavy weight resource. 
5     */ 
6    public class Resource 
7    { 
8        private boolean closed; 
9     
10       private Exception closedAt; 
11
12       public void close() throws IOException{ 
13           if(closed) { 
14               closedAt.printStackTrace();
15               throw new IOException("Resource already closed."); 
16           }
17           //resource cleanup 
18           closed=true; 
19           closedAt=new Exception("Resource closed here the first time."); 
20       }
21       //code snipped for brevity ...

The output after making the changes would look like this:

C:\learn\classes>java ResourceTest
java.lang.Exception: Resource closed here the first time.
        at Resource.close(Resource.java:19)
        at ResourceTest.useResource(ResourceTest.java:32)
        at ResourceTest.run(ResourceTest.java:25)
        at ResourceTest.main(ResourceTest.java:50)
java.io.IOException: Resource already closed.
        at Resource.close(Resource.java:15)
        at ResourceTest.closeResource(ResourceTest.java:41)
        at ResourceTest.run(ResourceTest.java:26)
        at ResourceTest.main(ResourceTest.java:50)

So now, from the stack traces, we know that close was first called at line 32 of ResourceTest.java, and that later, at line 41, we called close on the same resource for the second time.

There are a lot of multi-threaded problems [NullPointers] which we were not able to debug with a debugger, because the whole application would become too slow to simulate the problem scenario. However, by using Exception objects to track which threads were setting the fields to null, we were able to resolve the issues. A word of caution though … creating exception objects is resource intensive. Creating too many exception objects takes a lot of CPU, and if you are holding references to all the objects, it requires memory too!
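A minimal sketch of that technique (the class and field names are made up; the point is simply to record a stack trace, and the thread name, at the moment the field is assigned):

public class TrackedHolder
{
    private Object value;
    private Exception lastSetAt; //records who set the field, and from which thread

    public void setValue(Object value)
    {
        this.value = value;
        this.lastSetAt = new Exception("value set to " + value
                + " by thread " + Thread.currentThread().getName());
    }

    public Object getValue()
    {
        if (value == null && lastSetAt != null) {
            lastSetAt.printStackTrace(); //shows exactly which code path nulled the field out
        }
        return value;
    }
}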

Thread dump 101

If a stack trace, which tells us what one thread was doing at a given moment, can help us in so many ways, just imagine the possibilities if you could find out what every single thread in the Java VM is doing at any given moment! A Full Thread Dump, or thread dump for short, gives us exactly that information. Consider the following source:

1    public class Test 
2    { 
3        public Test(char[] chars){ 
4            System.out.println("New line at "+findNewLine(chars)); 
5        }
6     
7        private int findNewLine(char[] chars) 
8        { 
9            int i = 0; 
10           char aChar; 
11           do{ 
12               aChar = chars[i]; 
13           }while(aChar!='\n'); 
14           return i; 
15       }
16    
17       public static void main(String argv[]) 
18               throws Exception 
19       { 
20           new Test("Hello World!\nHowz goin?!".toCharArray()); 
21       } 
22   }

The method findNewLine is supposed to return the first index of a newline character in a given char array. [Purists, please don't mail me with the list of reasons why this approach is not right … the idea here is not really to write the best way to find a newline character!] Now, when you run the program, it just won't print the result. One look at top on Unix, or the Task Manager on Windows, and we get to know that the VM has taken the CPU for a spin … 100% CPU consumption forever! Now wouldn't you want to know what the VM is doing? Why is it taking all this CPU and not printing the output it is supposed to?

One way to find out would be to rerun the program in debug mode and step through it with a debugger. However, many a time you come across such a situation on a live system, after the app has been running for a long duration. Since it is a live system, and since we hit the issue only after running the application for a long time, we cannot leave it in debug mode forever. The first line of defense under such circumstances is the thread dump.

Run the program from the command prompt, and when the CPU peaks, take a thread dump. You can get a thread dump by pressing Ctrl+\ on Unices or Ctrl+Break on Windows machines at the command prompt. If you are running your application as a background process on Unix, you can execute kill -SIGQUIT <pid> from another command prompt. Either way, this signals the VM to generate a full thread dump. Sun's VM prints the dump on the error stream, while IBM's JDK generates a new file with the thread dump every time you send the signal. In our case the thread dump would look something like this:

C:\learn\classes>java Test
Full thread dump Java HotSpot(TM) Client VM (1.4.2_04-b05 mixed mode):

"Signal Dispatcher" daemon prio=10 tid=0x0091db28 nid=0x744 waiting on condition [0..0]

"Finalizer" daemon prio=9 tid=0x0091ab78 nid=0x73c in Object.wait() [1816f000..1816fd88]
        at java.lang.Object.wait(Native Method)
        - waiting on <0x10010498> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(Unknown Source)
        - locked <0x10010498> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(Unknown Source)
        at java.lang.ref.Finalizer$FinalizerThread.run(Unknown Source)

"Reference Handler" daemon prio=10 tid=0x009196f0 nid=0x738 in Object.wait() [1812f000..1812fd88]
        at java.lang.Object.wait(Native Method)
        - waiting on <0x10010388> (a java.lang.ref.Reference$Lock)
        at java.lang.Object.wait(Unknown Source)
        at java.lang.ref.Reference$ReferenceHandler.run(Unknown Source)
        - locked <0x10010388> (a java.lang.ref.Reference$Lock)


"main" prio=5 tid=0x00234998 nid=0x4c8 runnable [6f000..6fc3c]
        at Test.findNewLine(Test.java:13)
        at Test.<init>(Test.java:4)
        at Test.main(Test.java:20)

"VM Thread" prio=5 tid=0x00959370 nid=0x6e8 runnable

"VM Periodic Task Thread" prio=10 tid=0x0023e718 nid=0x74c waiting on condition
"Suspend Checker Thread" prio=10 tid=0x0091cd58 nid=0x740 runnable

The thread dump shown here was generated on Sun's JDK 1.4.2. Though the output differs from version to version and from vendor to vendor, the basic structure is the same. The output is somewhat like going over all the threads and doing a Thread.dumpStack in each of them. In this case we can see that, at the time we took the thread dump, there were seven threads:

  1. Signal Dispatcher
  2. Finalizer
  3. Reference Handler
  4. main
  5. VM Thread
  6. VM Periodic Task Thread
  7. Suspend Checker Thread 

Each thread name is followed by whether the thread is a daemon thread or not. Then comes prio, the priority of the thread [e.g. prio=5]. I am not sure what the tid and nid are; my best guess is that they are the Java thread id and the native thread id. Would love it if someone could comment on that. Then follows the state of the thread, which is one of:

  • Runnable [marked as R in some VMs]: This state indicates that the thread
    is either running currently or is ready to run the next time the OS thread
    scheduler schedules it.
  • Suspended [marked as S in some VMs]: I presume this indicates that the
    thread is not in a runnable state. Can someone please confirm?!
  • Object.wait() [marked as CW in some VMs]: Indicates that the thread is
    waiting on an object using Object.wait().
  • waiting for monitor entry [marked as MW in some VMs]: Indicates that the
    thread is waiting to enter a synchronized block.

What follows the thread description line is a regular stack trace. 

Debugging runaway CPU

When we are trying to debug runaway CPU consumption, as in this case, what we need to look at is the set of Runnable threads in the thread dump. The question to ask is: what was the thread that was consuming CPU doing? At the instant we took the above thread dump, the thread was at line 13 of Test.java. Well … looks like it was checking the condition of the while loop. But eventually it should have returned, right?! So we take a few more thread dumps. Each time, they show the thread still inside the while loop. This definitely indicates that, from the first dump to the last, the thread never got out of the loop. The problem is narrowed down to that loop. Putting the loop under the magnifying glass, we realize that the counter i is never being incremented.

Well … if you have a single class in your application, it is no big deal! But when you have gazillions of classes, narrowing down the problem to a single loop within a single class is a big time saver! I have found this a useful tool even when I am using a debugger; it helps me choose a good location to set my first breakpoint!

Debugging performance issues

It's the night before the release and your application is not performing well enough. You really don't have enough time to run the app through a profiler. Take heart! Like Ramesh says … there are always some low-hanging fruits! The way a Java profiler works is that it takes snapshots of what the CPU was doing at frequent intervals and generates a statistical report of where most of the CPU time was spent during the run. If your application is performing so poorly that you can take, say, 10-12 thread dumps before an operation completes, you will get a rough idea of the distribution of CPU time. Some of the easy kills I can think of:

  • Symptom: High CPU consumption and poor response time

    Thread dump profile: Most of the dumps show the same thread in the same method or the same class

    Solution: That method/class is definitely taking a lot of CPU; see if you can optimize those calls. One of the REALLY easy kills we have had in this category was a Collection.remove(Object) call where the backing collection was a List; changing the backing collection to a HashSet fixed it (a sketch of that change follows after this list). A word of caution though: there have been times when the runnable threads are innocent and the GC is the one consuming the CPU.
  • Symptom: Low CPU consumption, most of which is kernel time, and poor response time
    Thread dump profile: Most thread dumps have the runnable threads performing some IO operations
    Solution: Most likely your application is IO bound. If you are reading a lot of files from the disk, see if you can implement the Producer-Consumer pattern: the producer performs the IO operations and the consumers process the data which has been read by the producer. If you notice that most IO operations come from the database driver, see if you can reduce the number of queries to the database, or cache the results of the queries locally.
  • Symptom: Medium/Low CPU consumption in a highly multithreaded application
    Thread dump profile: Most threads in most thread dumps are waiting for a monitor on the same object

    Solution: The thread dump profile says it all. See if you can eliminate the need for synchronization [using ThreadLocal/session-scoped objects] or reduce the amount of code being executed within the synchronized block.
  • Symptom: Medium/Low CPU consumption in a highly multithreaded application
    Thread dump profile: Most threads in most thread dumps are waiting for a resource
    Solution: If all the threads are starved for resources, say waiting on the pool to create EJB bean objects/DB Connection objects, see if you can increase the pool size.
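A minimal sketch of that List-to-HashSet change (the class and field names are made up):

import java.util.Collection;
import java.util.HashSet;

public class SessionRegistry
{
    //Before: Collection sessions = new ArrayList();
    //  remove(Object) on a List scans every element and calls equals() on each one.
    //After: a HashSet locates the element by its hashCode, so remove is roughly constant time.
    private Collection sessions = new HashSet();

    public void register(Object session)   { sessions.add(session); }
    public void unregister(Object session) { sessions.remove(session); }
}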

Debugging “hang” problems

A textbook case of deadlock is the easiest to debug with the newer JDKs. At the end of the thread dump you will find something like this:

Found one Java-level deadlock:
=============================
"Thread-1":
  waiting to lock monitor 0x0091a27c (object 0x140fa790, a java.lang.Class),
  which is held by "Thread-0"

"Thread-0":
  waiting to lock monitor 0x0091a25c (object 0x14026800, a java.lang.Class),
  which is held by "Thread-1"

Java stack information for the threads listed above:
===================================================
"Thread-1":
        at Deadlock$2.run(Deadlock.java:48)
        - waiting to lock <0x140fa790> (a java.lang.Class)
        - locked <0x14026800> (a java.lang.Class)
"Thread-0":
        at Deadlock$1.run(Deadlock.java:33)
        - waiting to lock <0x14026800> (a java.lang.Class)
        - locked <0x140fa790> (a java.lang.Class)

Found 1 deadlock.
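For reference, here is a hypothetical sketch of the kind of code that produces a dump like the one above: two threads acquiring the same two class-object locks in opposite order (the classes used as locks and the sleeps are made up to force the interleaving):

public class Deadlock
{
    public static void main(String[] args)
    {
        new Thread(new Runnable() {          //becomes "Thread-0"
            public void run() {
                synchronized (String.class) {
                    sleepQuietly(100);
                    synchronized (Integer.class) { }
                }
            }
        }).start();
        new Thread(new Runnable() {          //becomes "Thread-1"
            public void run() {
                synchronized (Integer.class) {
                    sleepQuietly(100);
                    synchronized (String.class) { }
                }
            }
        }).start();
    }

    private static void sleepQuietly(long millis)
    {
        try { Thread.sleep(millis); } catch (InterruptedException ignored) { }
    }
}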

But many a time we come across hangs which are not deadlocks. One cause that easily comes to mind is a resource limit. For example, in an EJB container you have set the maximum bean pool size to 1000. Now say two threads have started executing a finder, each returning a collection of 1000-odd beans. Assuming a decent distribution of CPU time slices, it could happen that the first thread iterates over 500 beans and the second thread iterates over the other 500. At this moment both threads need more beans to proceed further. However, the container will not create new beans, as the bean pool limit has been reached. So both threads wait for some beans to be released to the pool … which is not going to happen. We have a hung app here … however, it is not a Java-level deadlock. It is an artificial deadlock introduced by a resource limitation.

When your app is not responding and your CPU consumption is 0%, take a thread dump. If it does not show a Java-level deadlock, take multiple thread dumps. If all of them show that the threads are waiting for resources [EJBs or DB Connections], see if you can increase the pool limit or decrease the number of resources required within a transaction.

Finally

Thread dumps and stack traces are really good tools … they may not replace debugging/profiling tools, but they are definitely good starting points and huge time savers. Unfortunately, I think they are undersold. Classes don't teach you about them, books don't talk about them, and tools don't support them. I mean, I can run any class from my IDE. It has buttons to Start/Pause and Stop the app from within the IDE. But why can't I have a button for "Generate full thread dump"? Every time I need to generate a thread dump, I have to rerun the application from the command line.

Well … maybe things are not so bad after all. What if the IDEs don't support generation of a thread dump?! Most of them now open up the file and line number if you double-click on a line in an exception stack trace obtained on running a program! And what if the books don't talk about it? People like Ashman make sure anyone joining the support team gets their dope on thread dumps from me! 😉


May 22 2004

Of perf degradation with try-finallies and poor VM option docs …

Tags: , Rajiv @ 2:21 am GMT-0700

Recently we had a customer issue where the application was responding very slowly. After a lot of struggle we found out that Sun’s JDK had issues when handling methods with large number of try-finally blocks. Sachin has discussed the problem in greater detail in his post The Usual Suspects?.

To simulate the problem, I wrote a sample class with a method containing a lot of try-finally blocks:

long t = System.currentTimeMillis(); 
//Number of blocks = 5 
//Number of iterations = (600/5)*1000 = 120*1000 
//(l is a collection, e.g. a java.util.List, declared earlier in the method) 
for (int i = 0; i < 120 * 1000; i++) { 
    //Start try-finally blocks 
    try {
        l.add("Test"+i);
    } finally {
        l.clear();
    }
    try {
        l.add("Test"+i);
    } finally {
        l.clear();
    }
    try {
        l.add("Test"+i);
    } finally {
        l.clear();
    }
    try {
        l.add("Test"+i);
    } finally {
        l.clear();
    }
    try {
        l.add("Test"+i);
    } finally {
        l.clear();
    }
    //Stop try-finally blocks 
} 
System.out.println("Time taken: "+(System.currentTimeMillis()-t));

And some classes with regular blocks instead of try-finally blocks

long t = System.currentTimeMillis(); 
//Number of blocks = 5 
//Number of iterations = (600/5)*1000 = 120*1000 
for (int i = 0; i < 120 * 1000; i++) { 
    //Start regular blocks 
    {
        l.add("Test"+i);
    }{
        l.clear();
    }
    {
        l.add("Test"+i);
    }{
        l.clear();
    }
    {
        l.add("Test"+i);
    }{
        l.clear();
    }
    {
        l.add("Test"+i);
    }{
        l.clear();
    }
    {
        l.add("Test"+i);
    }{
        l.clear();
    }
    //Stop regular blocks 
} 
System.out.println("Time taken: "+(System.currentTimeMillis()-t));

Keeping the number of Java operations constant [by making the number of iterations = 600,000/numBlocksPerMethod], you would expect a constant response time as you increase the number of blocks per method. However, the graph looks like this [click for full-size image]:

Plot of perf degradation with increasing number of try finally blocks in a method. [Click for enlarged plot]

In the case of the Sun JDK, the performance degrades so much that I had to plot the y-axis on a logarithmic scale! [Talking about logarithmic scales … for a long time I never realized that Yahoo stock quotes show prices on a logarithmic scale … thanks to the .com bubble!] The IBM JDK, on the other hand, seems to handle this situation much better: it shows a much lower degradation in response time, and you actually don't need the log scale in its case!

This issue is being tracked as Bug 5049261 in Sun's JDK bug database [you will need to create an account if you don't already have one]. The bug has been marked as related to Bug 4493074. Interestingly, the related bug's description mentions a whole lot of flags I had never heard of before: -XX:-Inline -XX:+PrintOptoBailouts -XX:+TraceOptoParse -XX:+TraceDeoptimization -XX:CompileThreshold -Xcomp -XX:CompileOnly. The only flag I could relate to here was -XX:+PrintCompilation. It is another matter that I have no clue how to interpret the output of that flag either! I vaguely remember reading about it in some JavaOne presentation: the presenter had mentioned that it prints the methods which could not be compiled, and that the user can list the methods which he doesn't want compiled in a .hotspot_compiler file. I really wish Sun had documented these flags and their output better, instead of just mentioning "traces methods as compiled" in the VM Options document.