Monday, December 10, 2012

Load Balancers-3

Okay, the final post in this series; let's talk about the big daddy. The last load balancer I explored was HAProxy, and I fell in love with it because of its light weight, high reliability and awesome performance.


HAProxy is a very light, fast and highly reliable load balancer and proxy solution for TCP-based applications (it handles any TCP communication, not just HTTP). It is based on an event-driven model and runs as a single-process system, which enables it to handle heavy load. It is a pure proxy: unlike Apache and nginx it doesn't serve any files; remember, it is not a web server. One really good feature it has is a status page with details like how many requests went to which server, bytes transferred, etc., which helps a lot in understanding what exactly is happening.


You can download the tarball from their official download page.
On Linux you can install it with:

$> sudo apt-get install haproxy

Note : If you want SSL support, use a version >= 1.5dev12 (you will have to compile and build it yourself).

Configure :

In my case I needed SSL support with HAProxy (the authentication server was talking to the app over SSL), so I tried to install and configure version 1.5dev12, but I couldn't figure out where to put the SSL certs and how to enable the SSL port, and failed to configure it. So I decided to put an SSL offloader in front of HAProxy, which offloads the SSL and then passes the request down to HAProxy. Stunnel is a popular option for this kind of scenario, but I really didn't have time to learn how to install and configure Stunnel, so once again I went ahead with my beloved Apache :).
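For reference, HAProxy 1.5 dev builds from 1.5-dev12 onward do support native SSL termination; assuming a certificate and private key concatenated into a single PEM file, the relevant piece of config looks roughly like this (a sketch of the 1.5 syntax, not the setup I ended up using):

frontend app_ssl
        # 'crt' expects the certificate and private key concatenated in one PEM file
        bind *:8443 ssl crt /etc/haproxy/certs/server.pem
        default_backend app_servers

Apache-in-front remained my choice here simply because I already knew how to configure it.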

So the final setup was something like this :

Okay enough talk, lets configure both apache and haproxy and start the whole system.
For this configuration, suppose HAProxy and Apache sit on one machine, with the app servers on their own machines.

Apache Config :

Created a virtual host which listens on the SSL port (the ProxyPass target is a placeholder for wherever HAProxy is listening):

<IfModule mod_ssl.c>
Listen 8443
<VirtualHost *:8443>
        ProxyRequests off
        SSLEngine on
        SSLProxyEngine on
        SSLCertificateFile    /home/apache_certs/server.crt
        SSLCertificateKeyFile /home/apache_certs/server.key

        # passing it down to haproxy
        ProxyPass        / http://<haproxy_host>:8080/
        ProxyPassReverse / http://<haproxy_host>:8080/
</VirtualHost>
</IfModule>

Here I am listening on port 8443, and after offloading the SSL I am sending the request to HAProxy.

Haproxy config :

On the HAProxy side I am listening on two ports: one for direct HTTP communication and one for the requests being forwarded by Apache; HAProxy then forwards them down to one of the application servers.

global
        # 127.0.0.1 is a placeholder for your syslog server
        log   127.0.0.1 local0
        log   127.0.0.1 local1 notice
        #log loghost    local0 info
        maxconn 4096
        #chroot /usr/share/haproxy
        #user haproxy
        #group haproxy

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        option redispatch
        maxconn 2000
        contimeout      5000
        clitimeout      50000
        srvtimeout      50000

listen ha_stats 0.0.0.0:8081
          # 8081 is a placeholder; use any free port for the stats page
          balance roundrobin
          mode http
          timeout client 30000ms
          stats enable
          stats uri /lb?stats

listen app_non_ssl 0.0.0.0:80
        mode http
        option httplog
        balance roundrobin
        option httpclose
        option redispatch
        maxconn 1000
        reqadd X-Forwarded-Proto:\ http
        # <app1>/<app2> are placeholders for your app servers' addresses
        server webserver1 <app1>:<app_port> maxconn 100 weight 100
        server webserver2 <app2>:<app_port> maxconn 100 weight 100

listen app_from_apache 0.0.0.0:8080
        # 8080 is a placeholder; it must match the port Apache's ProxyPass points at
        mode http
        option httplog
        balance roundrobin
        option httpclose
        option redispatch
        maxconn 1000
        reqadd X-Forwarded-Proto:\ https
        server webserver1 <app1>:<app_port> maxconn 100 weight 100
        server webserver2 <app2>:<app_port> maxconn 100 weight 100

HAProxy configuration basically has three kinds of sections: global, defaults and listen. The global section contains settings for the HAProxy instance as a whole, like the log server location, max connections, etc. The defaults section has the default settings for each listen port you open (let's just say each server instance you start). A listen block is where you mention which port to listen on (you can have multiple listen blocks). In the listen blocks I have mentioned my backend servers, where HAProxy forwards requests (see the server definitions). I suggest going through the HAProxy documentation to see all the options available. Most of the options in a listen block are pretty straightforward, but I'll discuss these:

1. balance :  The algorithm used to distribute the load across the backends.
2. maxconn : The maximum number of concurrent connections it will accept.
3. server : A backend server it can forward requests to.

And you are done!!

This was the final setup I used for my Performance testing. :-)

Saturday, November 24, 2012

Load Balancers - 2

The last post was about using nginx as the load balancer; this post is about using the Apache HTTP server as a load balancer. Let's get started with Apache (my favorite).


The Apache HTTP server needs no introduction; it's like the backbone of the WWW. According to Wikipedia:

The Apache HTTP Server, commonly referred to as Apache (/əˈpætʃiː/ ə-PATCH-ee), is a web server software notable for playing a key role in the initial growth of the World Wide Web. In 2009 it became the first web server software to surpass the 100 million website milestone.

Intro done; now let's install and configure it.

Installation :

You can download apache http server from apache download site 
On linux you can install the package using :

sudo apt-get install apache2

Configure :

Apache provides modules to use it as a load balancer, but by default they are not enabled, so the first step is to enable the load balancer and proxy modules. Let's enable them:

1. Enable Modules

  • sudo a2enmod proxy_balancer
  • sudo a2enmod proxy_connect
  • sudo a2enmod proxy_http

2. Restart apache 

  • sudo /etc/init.d/apache2 restart

3. Now we need to configure one virtual host. Let's take the last post's example, where we had two app servers and one load balancer machine.
We will direct the load from the load balancer to the app servers. Create one new file /etc/apache2/sites-enabled/my_load_balancer and enter:

Listen 80
<VirtualHost *:80>
        ProxyRequests off
        ProxyPreserveHost On

        <Proxy balancer://my_app_servers>
                # <app1>/<app2> are placeholders for your app server addresses
                BalancerMember http://<app1>:8080 loadfactor=1
                BalancerMember http://<app2>:8080 loadfactor=2
                #Order deny,allow
                Allow from all
                ProxySet lbmethod=byrequests
        </Proxy>

        <Location /balancer-manager>
                SetHandler balancer-manager
                Order deny,allow
                Allow from all
        </Location>

        ProxyPass /balancer-manager !
        ProxyPass / balancer://my_app_servers/
</VirtualHost>

Here we are creating a virtual host which is listening on port 80.

4. Restart apache.

  • sudo /etc/init.d/apache2 restart

Note : You need to comment out the default NameVirtualHost (/etc/apache2/ports.conf) in case you are configuring your load balancer to listen on port 80.

Discuss : 

So, as you can see, we mention the backend servers using BalancerMember and can configure how much load is directed to each member using loadfactor. You can also choose the algorithm used to distribute the load using lbmethod. These settings are the bare minimum to start your load balancer.
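For the record, mod_proxy_balancer ships with more than one scheduler algorithm, and any of them can be picked with ProxySet (the member URLs below are placeholders):

<Proxy balancer://my_app_servers>
        # <app1>/<app2> are placeholders for your app server addresses
        BalancerMember http://<app1>:8080 loadfactor=1
        BalancerMember http://<app2>:8080 loadfactor=2
        # byrequests (default), bytraffic or bybusyness
        ProxySet lbmethod=bytraffic
</Proxy>

byrequests balances on request counts, bytraffic on bytes transferred, and bybusyness on the number of requests currently in flight.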

If you want to know all the options available with the proxy modules, please check the mod_proxy documentation, as there are many options and it's not possible to discuss all of them here.

Problems with apache :

The only problem with Apache is that as you increase the load its performance starts degrading. In my case I had a decent load, and it performed well under about 5000 requests an hour.

Thursday, November 15, 2012

Load Balancers

"Scalability" First time I heard this word, I never thought one day it's gonna haunt me so much that I will have few sleepless nights.  It all started when I was asked to do a horizontal scale testing of our backend system. But don't worry I won't lecture you on scalability test rather I want to share few new things that I learned while doing it (interesting ones).

So the scenario was something like this :
Our webapp (a supply chain system) is distributed across 12 VMs (test env), and I had to see how the system behaves if we add one more similar setup and use load balancers on top to distribute the load. Can it handle more load? Can it scale?? I said let's do it; the only problem was I had no clue about which load balancer I could use, how a load balancer works or which one was best for me.

I googled and found three names which people use as load balancers: Apache web server, nginx and HAProxy. I did some small research on these three and tried them one by one. This post series is all about the pros and cons of these three software load balancers (I am not doing any benchmarking here, just sharing how to configure and use them and what problems I faced).

So let's start with the easiest to configure, and a really good load balancer: nginx.


Nginx is a web server and a reverse proxy server for the HTTP, SMTP, IMAP and POP3 protocols, plus it can work as a software load balancer. Nginx is really fast when it comes to serving static content; it can scale up to 10,000 req/sec. What makes it so fast is its event-driven architecture: it doesn't have an Apache-style process or thread model, and because of this it has very small memory requirements.


Linux  :   sudo apt-get install nginx


Suppose you have two backend app servers and you installed nginx on a third machine as the load balancer.
Create a file /etc/nginx/sites-enabled/myloadbalancer.cfg

upstream myservers {
               # placeholders: substitute your app servers' addresses
               server <app1>:8080;
               server <app2>:8080;
}

server {
              listen 80;
              server_name localhost;
              access_log /var/log/nginx/access.log;
              location / {
                        proxy_pass http://myservers;
                        proxy_set_header Host $host;
              }
}

And you are done. One important thing: if your application needs the hostname, you will have to explicitly set the Host header (I needed it, and it took me two hours to figure out why our application suddenly started throwing bad-hostname exceptions).

Problems with Nginx

After configuring the load balancer I was happy; everything looked fine, the only problem being that some particular REST calls started failing, which was unexpected. After two days of debugging I finally found that some of the headers which our application was setting before making calls were missing. Nginx was stripping off all the headers starting with X_. I googled and finally found that as a security measure nginx strips off certain types of headers. So if your application needs headers starting with X_, or any header whose name has _ in it (it converts _ to -), then nginx is probably not a good idea for load balancing. There is a patch which prevents the _ (underscore) to - (dash) conversion, but in my case the headers were simply being stripped, so it didn't help me.
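For what it's worth, current nginx versions expose directives to relax this behaviour; assuming your build supports them, something like this in the http (or server) block tells nginx to pass underscore headers through instead of dropping them (a sketch; I haven't verified it against the version I was using back then):

http {
        # accept client header names containing underscores instead of dropping them
        underscores_in_headers on;
        # optionally stop nginx from discarding otherwise "invalid" header names
        ignore_invalid_headers off;
}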

And my three days of work went down the gutter because I needed those headers, and it forced me to move from nginx to Apache, the next easiest one to configure. Let's configure Apache in the next post.


Monday, August 20, 2012

Singleton Class in Java

It was in my first interview that I got this question: can you design a class for which you can create only one object?? At that time it went over my head; afterwards I searched what this was all about and got to know about the singleton pattern. In one of my latest interviews this question was asked once again, but this time it was something like: "Can you design a singleton class?? How will you test it?? What if I serialize and then deserialize the object, won't it create two different objects?? What if we have multiple classloaders??" Let's see.

My first Singleton Class

class Singleton {
    private static Singleton sg = null;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (sg == null) {
            sg = new Singleton();
        }
        return sg;
    }
}
So far so good!! This example works fine in single-threaded programs. Now take the case of a multi-threaded program: I have two threads, t1 and t2, which call the getInstance() method. t1 comes in and checks that sg is null; before it can instantiate, the JVM suspends t1 and starts t2. t2 comes in, checks that sg is null, creates a new object and returns. The JVM resumes t1, and since t1 has already passed the check, it goes on to create a new object and return it. You are doomed: your JVM has two instances of a singleton class. So we need synchronization; let's modify our class.

class Singleton {
    private static Singleton sg = null;

    private Singleton() {
    }

    public static synchronized Singleton getInstance() {
        if (sg == null) {
            sg = new Singleton();
        }
        return sg;
    }
}

So the multiple-threads problem is solved, but is everything okay with this class?? Isn't it too expensive to synchronize the getInstance() method, given that you only need the synchronization the first time?? Let's see one more conservative way:

class Singleton {
    public final static Singleton sg = new Singleton();

    private Singleton() {
    }
}

So we have two options: either synchronize getInstance() or use the above eager implementation. More on multiple classloaders and serialization in the next post.
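Besides synchronizing getInstance() and eager initialization, a common middle ground is double-checked locking, which only takes the lock until the instance exists. Note the field must be volatile for this to be safe on the Java 5+ memory model (a sketch for reference, not something the interviewers asked for):

```java
class Singleton {
    // volatile stops other threads from observing a half-constructed instance
    private static volatile Singleton sg = null;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (sg == null) {                        // first check, without the lock
            synchronized (Singleton.class) {
                if (sg == null) {                // second check, under the lock
                    sg = new Singleton();
                }
            }
        }
        return sg;
    }

    public static void main(String[] args) {
        // both calls must hand back the exact same object
        System.out.println(Singleton.getInstance() == Singleton.getInstance());
    }
}
```

Once the instance exists, readers never enter the synchronized block, so the steady-state cost is just a volatile read.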

Wednesday, August 1, 2012


One of the most important kinds of testing in the software field is load testing; everyone wants to know how much his application can scale, what the breaking point is, whether there is any memory leak or any deadlock happening,
and how much CPU is being utilized. There are lots of ways to generate load and test, but we are not talking about testing methodologies or testing tools here; rather we will talk about monitoring. How do you monitor a running Java application to determine its performance??

One very nice monitoring tool provided by the JDK is JConsole. It's a GUI tool which monitors a JVM; all you need is to start your application with the JMX management agent and connect JConsole to it. Let's have a look.

How to start JMX Management Agent on an Application

To start the JMX agent you need to set the following Java options before starting the application:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=<some port>
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false

If you want authentication and SSL, you need to set the following options instead:

-Dcom.sun.management.jmxremote.port=<some port>
-Dcom.sun.management.jmxremote.ssl=true
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.access.file=<path to access file>
-Dcom.sun.management.jmxremote.password.file=<path to password>

Start JConsole

To start JConsole, type jconsole on the command line; make sure JAVA_HOME is set on your machine. You will see something like this on your monitor.


If you have any JVM running locally you can see it under the local process tab; you can connect by directly double-clicking the process.

You can even connect to a remote JVM using a host:port combination, or using the following complete URL: service:jmx:rmi:///jndi/rmi://hostName:portNum/jmxrmi

Start Monitoring

Once you are connected to a JVM you will see four blank graphs, and you are done. Now sit back and relax and let your application run (make some dummy requests and calls to your app). After some time you will see the graphs being plotted.

At the top you can find tabs (Overview, Memory, Threads, Classes, VM Summary, MBeans). These are basically the different resources you can monitor.

Memory Monitoring

The Memory tab shows you the heap utilization; you can even select the type of memory pool you want to monitor, i.e. eden space, survivor space, permanent space. If, over a long period, a memory graph's baseline keeps going up, it means your application has a memory leak and will crash after some time. Usually a healthy memory graph goes up and down around a flat baseline. See the graph below.

Similarly you can monitor threads, CPU utilization, VM summary, etc.
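The numbers JConsole plots are also available to your own code through the standard java.lang.management API; here is a minimal sketch that reads the current heap usage of the running JVM:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapProbe {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        // used <= committed; max may be -1 if the limit is undefined
        System.out.println("heap used      : " + heap.getUsed() / 1024 + " KB");
        System.out.println("heap committed : " + heap.getCommitted() / 1024 + " KB");
        System.out.println("heap max       : " + heap.getMax() / 1024 + " KB");
    }
}
```

This is handy when you want to log heap numbers from inside the application instead of keeping a JConsole window open.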


JConsole is not only about monitoring; you can even manage your VM. You can pause your service, perform GC, kill threads. For all the details, I suggest playing with it. You will enjoy it.

When you are done playing with JConsole, try playing with YourKit.

Tuesday, July 24, 2012


Last week I got a mail from my lead: "Hey!! We need all the SQL queries that are executed when we make any API call." I was like, really?? That much copy-paste I'll have to do (poor me). Anyway, I had all the REST calls and the queries for them in JSON format; all I had to do was write one awk script. But even with awk I knew I would have to do lots of formatting, so I searched online for a command-line JSON parser or utility so that I wouldn't have to do it manually, and then I found one cool tool: "JSAWK" (awk with JavaScript power).

It's pretty cool, and if you know awk it's pretty easy to use. I am not an expert; I am just sharing what I learnt and what I did.

Lets take a simple JSON data example :
{ "A" :[{ "B":"b","C":"c"},{"D":"d","E":"e"}]}

The above is one simple JSON document; now I want to extract each key-value pair. Here is a valid jsawk command,
supposing the above JSON is in the file json.txt:

cat json.txt | jsawk 'return this.A'
response := [{"B":"b","C":"c"},{"D":"d","E":"e"}]

Here we are getting an array as the response. What if I want to get the elements of the array?? Let's try.

cat json.txt | jsawk 'return this.A[0]'
response := {"B":"b","C":"c"}

You can even use | (pipes) with jsawk:
cat json.txt | jsawk 'return this.A[0]' | jsawk 'return this.B'
response := b

What jsawk does is use JavaScript to filter the JSON input into a result array and print it to stdout.
Since it gives you an array as the result, traversing and fetching data becomes easy. These are the ways to access elements; now what if we want to modify our JSON, or make some selection on elements?? Can we do that??

YES!!!! We have a couple of options with jsawk to help us; they are below:

Options              Usage
-n                   Suppress the printing of the result set
-q <query>           Filter the JSON through a JSONQuery query
-v <name=value>      Set global variable <name> to <value>
-f <file>            Load and run the specified JavaScript file before processing the JSON
-a <script>          Run the specified snippet of JavaScript after processing the JSON input
-b <script>          Run the specified snippet of JavaScript before processing the JSON input

jsawk uses the SpiderMonkey JavaScript interpreter, and hence has all its power, plus some extra methods of its own. For more info, check the jsawk project page.

Next time you have to parse some JSON, you know what you can use.

Thursday, July 5, 2012

JAVA Garbage Collection and JVM Arguments

This post is about the various JVM arguments available to tune garbage collection. It's not an exhaustive list, but it contains almost all the important parameters you will ever need to tune your application. Once we are done with the arguments, we will see how to analyze the GC logs.

 I will take what I want !!

  We can choose which garbage collector we want for our application. The following are available:
  1. -XX:+UseSerialGC                         -Use the serial GC
  2. -XX:+UseParallelGC                      -Use the parallel GC
  3. -XX:+UseParallelOldGC                -Use the parallel compacting GC
  4. -XX:+UseConcMarkSweepGC      -Use concurrent mark sweep (CMS)

  Whats Going On??

  Choosing the garbage collector is one thing, but the question is what it is doing behind the scenes. Let's
  hear it from the collector himself: we can enable printing of garbage collection details every time a
  collection happens using the following parameters.

  1. -XX:+PrintGC                                   -Print basic details every time garbage collection happens
  2. -XX:+PrintGCDetails                        -Print more detailed information for each GC
  3. -XX:+PrintGCTimeStamps              -Print a timestamp at the start of each GC
  4. -Xloggc:<filepath>                             -Print the details to the given log file

   Distribute(Be Generous but don't waste)

   Okay!!! So we have selected our collector and we have enabled the logs, but we haven't done anything
   about the memory yet, the heart of the GC. Let's tune it.
  1. -Xms<n> : The initial heap size to be allocated, where n is the size, e.g. -Xms512m (m for MB)
  2. -Xmx<n> : The maximum size of the heap, e.g. -Xmx1024m
  3. -XX:MinHeapFreeRatio : Minimum percentage of free space in the total heap. Suppose the minimum is fixed at 20%; when the percentage of free space in any generation drops to 20%, the size of the generation is expanded to maintain the ratio, e.g. -XX:MinHeapFreeRatio=40
  4. -XX:MaxHeapFreeRatio : Maximum percentage of free space in the total heap. If free space in any generation exceeds this value, the heap is shrunk to maintain it.
  5. -XX:NewSize : Default initial size of the young generation, e.g. -XX:NewSize=128m
  6. -XX:NewRatio : Ratio between the old and young generations, e.g. -XX:NewRatio=3
  7. -XX:SurvivorRatio : Ratio between the eden space and a survivor space, e.g. -XX:SurvivorRatio=7
  8. -XX:MaxPermSize : Maximum size of the permanent generation.

Other than these parameters there are some which are specific to the type of collector; we are not talking about them right now. The next thing is how and when we set these parameters: the answer is that you need to set them before starting your application.

On Unix machines you can use:
        export JAVA_OPTS="<argument> <value> ..."
        eg :  export JAVA_OPTS="-Xms1024m -Xmx2048m -XX:NewSize=256m"

Usually every Java application or server has a start script; you can also put the JAVA_OPTS values there.

!!!!!!!!!!!!!!!!!!!!!!!!!!!ENOUGH THEORY, LETS DO SOME PRACTICAL !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

If you have any Java application, you can use it. I am using the Apache Geronimo app server, which is hosting a sample DayTrader application (developed by IBM, later donated to Apache; used for benchmarking the Geronimo app server). I am generating load through Apache JMeter. I used the following
JAVA_OPTS to start my app server.

JAVA_OPTS="-Xms128m -Xmx256m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/home/administrator/gclog.log -XX:MaxPermSize=128m"

I generated a small load on server, this was logged in the gclog file.

19.542: [GC [PSYoungGen: 36486K->5213K(53632K)] 44251K->15877K(141056K), 0.0113620 secs] [Times: user=0.04 sys=0.00, real=0.01 secs] 
20.802: [GC [PSYoungGen: 53597K->3797K(65728K)] 64261K->17705K(153152K), 0.0182770 secs] [Times: user=0.04 sys=0.00, real=0.02 secs] 
22.633: [GC [PSYoungGen: 61589K->1652K(79360K)] 75497K->19223K(166784K), 0.0090320 secs] [Times: user=0.04 sys=0.00, real=0.01 secs] 
26.096: [GC [PSYoungGen: 73268K->3566K(75200K)] 90839K->22617K(162624K), 0.0160600 secs] [Times: user=0.04 sys=0.00, real=0.01 secs] 

The format of the log is:
<timestamp since server start>: [GC [<collector>: <young gen used before>K-><young gen used after>K(<young gen size>K)] <heap used before>K-><heap used after>K(<heap size>K), <pause time> secs]

NOTE : The above log was generated for JDK1.6 and the format changes with every version.

So, taking the first line from the log, we can say that a young generation collection using the parallel scavenge collector happened 19.542 seconds after the server started; initially 36 MB of the young generation was in use, and after the GC only 5 MB is in use, while the heap as a whole went from 44 MB used down to 15 MB used, and the GC took 0.01 sec to complete.
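If you have to crunch a lot of these lines, the fields can be pulled out with a small regular expression. Here is a sketch that parses one minor-GC line of the JDK 1.6 format shown above (the pattern is an assumption tied to that format and will need adjusting for other JDK versions):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLineParser {
    // matches e.g. "19.542: [GC [PSYoungGen: 36486K->5213K(53632K)] 44251K->15877K(141056K), 0.0113620 secs]"
    static final Pattern GC_LINE = Pattern.compile(
        "([\\d.]+): \\[GC \\[(\\w+): (\\d+)K->(\\d+)K\\((\\d+)K\\)\\] "
        + "(\\d+)K->(\\d+)K\\((\\d+)K\\), ([\\d.]+) secs\\]");

    public static void main(String[] args) {
        String line = "19.542: [GC [PSYoungGen: 36486K->5213K(53632K)] "
                + "44251K->15877K(141056K), 0.0113620 secs]";
        Matcher m = GC_LINE.matcher(line);
        if (m.find()) {
            System.out.println("timestamp    : " + m.group(1) + " s");
            System.out.println("collector    : " + m.group(2));
            System.out.println("young before : " + m.group(3) + " K");
            System.out.println("young after  : " + m.group(4) + " K");
            System.out.println("heap before  : " + m.group(6) + " K");
            System.out.println("heap after   : " + m.group(7) + " K");
            System.out.println("pause        : " + m.group(9) + " s");
        }
    }
}
```

Feeding each log line through such a pattern makes it trivial to, say, plot pause times or total heap freed per collection.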

So now we know how to select collector, what all params are there and how can we see what actually is 
going on behind the scene. Stay tuned for next post where we will look at different profiling tools available and try to learn what should be the values of parameters and how can we find the optimal values for them.

Thursday, June 14, 2012

JAVA Garbage Collection Algorithms and JVM Tuning

This post is about the way the garbage collector works in Java and how we can tune it for any special needs of our application. The first part is all about the algorithms and the way GC works. The second part is about the different tuning parameters the JVM provides us to tune the garbage collector (stay tuned). I have tried to create some scenarios to better understand the tuning of the GC (Garbage Collector; I will use GC from now on).

What is a Garbage Collector and why do we need it??

Those having a C/C++ background must have used malloc/free or the new/delete operators to allocate and deallocate memory. In Java, deallocation is done automatically; the programmer doesn't need to care about it. So the GC is the one who does it for programmers in the background (say thanks to the GC :P).

A GC is responsible for :

  • Allocating memory
  • Making sure that objects which are still referenced remain in memory
  • Removing objects which are no longer in use and freeing their memory

Now that we know what a GC is, the question is how it works?? So let's dig into the GC algorithms, but first
let's see what choices a GC has when choosing an algorithm.

Options With GC Algorithms 

  • Parallel Vs Serial
          A garbage collection algorithm can be parallel or serial. In the former case, if a system has multiple
          cores the collection work is divided and run on different cores. In the serial case, even if the machine
          has multiple cores, only one core will be used for collection.

  • Concurrent Vs Stop the World
         As the name suggests, in the stop-the-world case the application is stopped during collection, while
         in a concurrent algorithm collection happens while the application is running. The stop-the-world case
         is simple, but some applications may have low-pause requirements. In the concurrent case, collection
         happens on objects which may get updated while collecting, hence it adds some extra
         overhead and requires a bigger heap size.
  • Compacting Vs Non-Compacting
          Once garbage collection is done, the GC may or may not compact the memory. Moving the live
          objects to one end of memory creates a free pool of memory at the other end, and it's easy and fast to
          allocate memory from one free end. Non-compacting GC algorithms are faster during collection, but they cause memory
          fragmentation and slow down allocation.

Generational Collection and JAVA HotSpot JVM

As of J2SE 5 update 6, there are in total four garbage collectors in the JVM, and all of them use the generational collection technique. In generational collection, memory is divided into different generations, that is, separate pools holding objects of different ages. The most widely used implementation has two generations, young and old. Young generation collection happens frequently and collects most of the unreferenced objects. Objects which survive a few collection cycles are aged and moved to the old generation. The old generation is typically larger than the young generation and takes significantly longer to fill, so collections there are infrequent but take a lot of time.

Hotspot Generations

Memory in the Java HotSpot JVM is divided into three generations: a young generation, an old generation and a permanent generation. As the name suggests, the young generation contains young objects; the old generation contains objects which survived a few young collection cycles, plus large objects which got allocated directly in the old generation; and the permanent generation holds objects that the JVM finds convenient to have the GC manage, such as objects describing classes and methods.

The young generation consists of three areas: one eden space and two survivor spaces. Most objects are allocated in the eden space, while one of the survivor spaces contains objects which have survived one collection cycle and have been given another chance to die and get collected before they age enough to be moved to the old generation; the other survivor space is empty at any given time.

Hotspot Collectors

Lets talk about four garbage collectors in hotspot jvm.

  1. Serial Collector
          Using the serial collector, both young generation and old generation collection happen in stop-the-world
          fashion, i.e. application execution is halted while performing the collection.
          Young generation collection using the serial collector
          In the case of young generation collection, the live objects in the eden space are copied to the empty survivor
          space ("to" in the image); if an object is too large it is tenured and copied directly to the old
          space. Objects in the occupied survivor space that are still young are also copied to the other survivor
          space, while objects which are relatively old are moved to the old space. If the "to" survivor space
          fills up while eden and the occupied survivor space still contain live objects, those are moved
          straight to the old space. Once the copy is done, the objects left in eden and the "from" space are
          collected (such objects are marked with a red cross in the image). Once collection is over, eden and the "from" space
          are empty and the "to" space contains all live objects; at this point the survivor spaces swap roles.

         Old generation collection using the serial collector
         The serial collector uses a mark-sweep-compact algorithm to collect the old and permanent generations.
         In the mark phase the collector identifies which objects are live. In the next phase the collector performs
         sliding compaction, moving the live objects to one side, thus creating one free memory pool. This
         helps increase the allocation speed of new objects, as we can use the bump-the-pointer
         method for allocation.

        The serial collector is used by default for any application on non-server-class machines. On other
        machines the serial garbage collector can be chosen using the -XX:+UseSerialGC command line option.
    2.  Parallel Collector  

         The parallel collector is a parallel version of the serial collector which takes advantage of the multiple CPUs
         and large memory available on today's server-class machines.

         Young generation collection using Parallel Collector
         The young generation collection algorithm is a parallel version of the serial collection. It still works in
          stop-the-world fashion, but the garbage collection happens in parallel, with reduced overhead,
          to increase the throughput of the application.

         Old Generation collection using Parallel Collector
         The parallel collector uses the same serial mark-sweep-compact algorithm as the serial collector for the
         old and permanent generations.

        Parallel collectors are used on server-class machines and for applications which have no low-pause-time
        constraints. The parallel collector can be explicitly requested using the -XX:+UseParallelGC
        command line option.

    3.  Parallel Compacting Collector
        The difference between the parallel collector and the parallel compacting collector is that the latter uses a new
        algorithm for old generation collection.

        Young generation collection using Parallel Compacting Collector
        It uses the same algorithm as parallel collector for young generation collection.

        Old Generation collection using Parallel Compacting Collector
        It uses a new algorithm which works in three phases, in a stop-the-world, mostly parallel, sliding-compaction
        manner. First, the generation is divided into regions. In the marking phase the
        objects are divided between the garbage collector threads and are marked in parallel; as an object
        is identified as live, the data for the region the object lies in is updated.

        Due to previous collections, generations tend to have a high density of live objects on the left side
        and a low density on the right side. Compacting the left side is not worth it because of the very small amount of memory
        to be recovered. So the first thing the summary phase does is find a dividing point in the regions such that,
        to the right of it, enough memory can be recovered. The summary phase then calculates the
        address and size of the first byte of live data for each compacted region.
        In the compaction phase the garbage collector threads use the summary data to identify the regions to be
        filled, and then the threads copy data independently.

        The parallel compacting collector can be explicitly requested using the -XX:+UseParallelOldGC command line option.

   4.  Concurrent Mark Sweep(CMS) Collector

         For many applications, overall throughput is not as important as fast response time. Young
         generation collection doesn't take much time, but old generation collection does, which causes high
         response times. To overcome this problem the JVM has the CMS collector.

         Young generation collection using CMS Collector
         CMS works the same as the parallel collector for the young generation.

          Old Generation collection using CMS Collector
           The collection cycle for CMS starts with a short pause, called the initial mark phase, which identifies
           the live objects directly reachable from the application code. Next is the concurrent mark phase, which marks
           all the objects reachable from this set. Since marking happens in parallel with the application, it might
           happen that not all objects get marked, so there is one more application pause called the remark phase.
           In the remark phase all objects which were modified during the concurrent mark phase are revisited and the marking is
           finalized. Since this pause is longer, multiple threads are used to increase efficiency. At the
           end of the remark phase, a concurrent sweep reclaims all the garbage.

          The CMS collector is the only one of the four that is non-compacting. Unlike the other collectors, CMS doesn't
           start when the old generation is full; rather it starts much earlier so that it can complete
           before that happens. The CMS collector starts based on statistics regarding previous collection times and the
           time it takes to fill the old generation. The CMS collector will also start collecting if the occupancy of the
           old generation exceeds the initiating occupancy. The value of the initiating occupancy can be set via the
           command line option -XX:CMSInitiatingOccupancyFraction=<n>; the default value is 68.

         The CMS collector can be explicitly requested using -XX:+UseConcMarkSweepGC.