Memory Profiling

  • Memory profiling is typically done using heap dumps, code profiling, thread dumps, and garbage collector logs.
  • Heap dumps
    • Heap dumps provide a complete view of all live objects in the running heap.
      • They capture the size and address of every object.
      • They capture the references between objects.
    • Garbage collection is forced before a heap dump
      • This makes the dump more accurate since only live objects are present.
    • Generated automatically when the Java heap is exhausted (OutOfMemory conditions); heap dumps can also be generated in other situations using the -Xdump:heap option (a minimal programmatic trigger is sketched after this list).
    • Dumps can be either classic (text) or Portable Heap Dump (PHD) format.
      • PHD is the default heap dump format from WAS v6 onwards.
    • Comparing multiple heap dumps and object sizes over time helps establish leak suspects.
    • Tools for analysing heap/memory dumps include the IBM Support Assistant (ISA) Heap Analyzer.
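
As a minimal sketch: on an IBM JDK (the JVM that ships with WAS), a heap dump can also be requested programmatically through the com.ibm.jvm.Dump API. That class is IBM-specific, so the snippet below resolves it reflectively to stay compilable on any JVM; treat it as an illustration, not the only way to trigger dumps.

    // Minimal sketch: request a heap dump programmatically on an IBM JDK.
    // com.ibm.jvm.Dump is IBM-specific, so it is resolved reflectively to
    // keep this snippet compilable on non-IBM JVMs.
    public final class HeapDumpTrigger {
        public static void requestHeapDump() {
            try {
                Class<?> dump = Class.forName("com.ibm.jvm.Dump");
                // Writes a heap dump to the location configured via -Xdump:heap.
                dump.getMethod("HeapDump").invoke(null);
            } catch (ReflectiveOperationException e) {
                System.err.println("Not an IBM JDK, or Dump API unavailable: " + e);
            }
        }

        public static void main(String[] args) {
            requestHeapDump();
        }
    }
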
  • Garbage collection analysis
    • native_stdout.log and native_stderr.log illustrate the trend of JVM heap usage over time (verbose GC output is written there when enabled); a crude in-process sampler is also sketched after this subsection. Key metrics to analyse include:
      • Occupancy (MB)
      • Allocation rate (KB/sec)
      • Total GC pause time (ms)
      • Mark and sweep time (ms)
      • Compact time (ms)
      • GC cycle length and distribution (ms)
      • Free space after GC (MB)
      • Free space before an AF (allocation failure) (MB)
      • Size of the request causing the AF (bytes)
    • GC analysis is extremely useful for analysing:
      • Out of memory trends
      • Fragmentation
      • Minor collections and full GC intervals.
      • GC pause impact, if any
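
The occupancy and free-space metrics above normally come from verbose GC logs. As a rough sketch (the interval and sample count below are arbitrary), a similar trend can be sampled in-process with the standard java.lang.management API; note that the per-interval delta is only a crude allocation-rate proxy, since any collection inside the interval hides reclaimed memory.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    // Rough sketch: sample heap occupancy over time to approximate the
    // occupancy/free-space trend normally read from verbose GC logs.
    public final class HeapTrendSampler {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            long previousUsed = memory.getHeapMemoryUsage().getUsed();

            for (int i = 0; i < 10; i++) {   // arbitrary sample count
                Thread.sleep(1_000);         // arbitrary 1-second interval
                MemoryUsage heap = memory.getHeapMemoryUsage();
                long used = heap.getUsed();
                long free = heap.getCommitted() - used;
                // Crude proxy only: GCs inside the interval mask allocations.
                long deltaKbPerSec = (used - previousUsed) / 1024;
                System.out.printf("occupancy=%dMB free=%dMB delta=%dKB/s%n",
                        used >> 20, free >> 20, deltaKbPerSec);
                previousUsed = used;
            }
        }
    }
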
  • Thread Dumps
    • It is not uncommon to run into a situation where a JVM is extremely slow or in a “not responding” state. Such conditions can occur for multiple reasons:
      • I/O, database, or network interface bottlenecks.
      • Thread starvation due to one or more blocking threads.
      • Deadlocks
      • A few poorly performing functions.
    • Thread dumps/Java cores provide the best diagnostic information for troubleshooting such issues.
    • The thread monitor architecture in WAS can monitor all managed thread pools hosted in a container.
      • This includes the web container, ORB, and async bean thread pools.
      • Unmanaged thread pools are not monitored.
    • Information in a Java core includes:
      • The running call stack for every thread.
      • Thread state
        • Runnable (R)
        • Condition wait (CW)
        • Suspended (S)
        • Monitor wait (MW)
    • Once a hang is suspected, obtain a thread dump. Java cores can be generated by:
      • wsadmin commands (assuming the process is still responding to administrative requests).
      • Lower-level OS functions (e.g. kill -3 on Unix, which triggers a javacore on the IBM JVM).
    • For a typical hang, collect three dumps:
      • Examine the thread dumps with a thread analyser or by hand (a minimal sketch follows this section).
      • Look for threads making a network call and waiting for a response.
    • If a thread is hung, WAS sends notification in three ways:
      • A JMX notification for JMX listeners.
      • A thread pool metric for PMI clients.
      • A message written to the SystemOut log.
    • Hung thread detection is driven by the following configuration properties:
      • com.ibm.websphere.threadmonitor.interval
      • com.ibm.websphere.threadmonitor.threshold
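
When a full javacore cannot be produced, one hedged fallback (a sketch, not a replacement for real javacores) is an in-process “thread dump by hand” using the standard ThreadMXBean, which also exposes the deadlock check mentioned above.

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Sketch: print stack and state for every live thread, then report any
    // threads deadlocked on monitors or ownable synchronizers.
    public final class ThreadDumper {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();

            // Similar in spirit to the javacore thread section: each
            // ThreadInfo's toString() includes the thread state and stack.
            for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
                System.out.print(info);
            }

            long[] deadlocked = threads.findDeadlockedThreads();
            if (deadlocked != null) {
                for (ThreadInfo info :
                        threads.getThreadInfo(deadlocked, Integer.MAX_VALUE)) {
                    System.out.println("DEADLOCKED: " + info.getThreadName());
                }
            }
        }
    }
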
  • Important points
    • Site performance is the single biggest cross-cutting concern for applications.
    • Approach performance tuning in a tiered fashion; every tier can independently bring the site down and needs careful tuning and testing.
    • Caching is often the single biggest lever for achieving high performance.
    • WCS is a feature-rich but resource-intensive out-of-the-box framework; it offers good performance provided design and customisation/configuration decisions are made with performance in mind.
    • Profiling code, heap dumps, thread dumps, and GC logs (in an iterative fashion) is the best way to measure performance and improvements.
