Sunday, July 18, 2010

Heap dump on a Unix machine

After exploring Jay and Faisal's blog on using the 'jmap' Java utility to take heap dumps (really great work by them), I remembered that we did a similar kind of experiment long back on UNIX machines. I am glad to share it with you.

Last year we were struggling with OutOfMemoryError, which was eating up most of our productive hours. In this assignment I needed to figure out which process was causing the low memory in the environment, and that is identified by searching all the log files on the machine. Assume that all the WebLogic instance log files are collected into a common directory structure, with each instance's logs stored in a folder named after that instance.

Script 1:

After identifying the impacted instance, I need to take a heap dump of that particular instance using its corresponding process id.


#!/bin/bash
#==============================================================
# File Name  : CheckLogs.sh
# Author     : Pavan Devarakonda
# Purpose    : Script for searching all WebLogic instance logs in a box
#==============================================================
instances=`ls /path/instances | grep web`
phrase=$1
date
for x in $instances
do
        echo "Checking in the instance: $x"
        cd /path/instances/$x/logs
        egrep -i "$phrase" instance*log
        egrep -i "$phrase" instance*out
done

# Know the CPU load average at this time
date
uptime
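
For example, assuming the script is saved in the current directory with execute permission, a typical run searching for the OutOfMemoryError phrase would look like this:

# Search every instance's logs for the given phrase (example invocation)
./CheckLogs.sh OutOfMemoryError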

Script 2: An automated script for taking the heap dump; here you need to provide the managed server name as the command-line argument.
#!/bin/bash

#=======================================================
# Name    : InstanceHeapdump.sh
# Purpose : Takes the instance name as input, then takes
#           a thread dump followed by a heap dump
#=======================================================

if [ "$1" = "" ]; then
        echo "Usage : $0 <instance-name>"
        exit 1
else
        instance="$1"
        user=$LOGNAME
        ppid=`ps -fu $user | grep $instance | grep startMan | grep -v grep | awk '{print $2}'`
        wpid=`ps -fu $user | grep $ppid | grep startWebLogic.sh | awk '{print $2}'`
        jpid=`ps -fu $user | grep $wpid | grep java | awk '{print $2}'`
        echo "The Java PID: $jpid"
        if [ "$jpid" = "" ]; then
                echo "Invalid instance input..."
                exit 1
        else
                # Thread dump first, then the live-object heap dump
                kill -3 $jpid
                jmap -dump:live,format=b,file=heap.bin $jpid
                mv heap.bin ${instance}_heap.bin
        fi
fi
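
A typical invocation, assuming a managed server named 'managed1' (a hypothetical name) was started through startManagedWebLogic:

# Takes a thread dump and a heap dump of the managed server 'managed1' (hypothetical name)
./InstanceHeapdump.sh managed1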

This gives you one more way of finding a Java process on a UNIX machine. You can use the jps command instead of the three lines of ps/grep/awk filters. In the same script, to hold the Java process so that it does not crash, you can call a WLST script to suspend the instance and then happily take the heap dump.
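
For example, here is a minimal sketch of the jps-based lookup, assuming jps from the same JDK is on the PATH and the managed server name appears in the JVM arguments as -Dweblogic.Name:

#!/bin/bash
# Sketch: find the java PID of a managed server with jps instead of the ps/grep/awk chain
instance="$1"
# -v prints JVM arguments, so -Dweblogic.Name=<instance> becomes visible in the listing
jpid=`jps -lv | grep "weblogic.Name=$instance" | awk '{print $1}'`
echo "The Java PID: $jpid"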

What to do after the heap dump is created?
Use the jhat command to run the analyzer and view the outcome in a browser.
Follow Faisal's tips to find memory leaks, or use Eclipse MAT if that is more comfortable on your system.
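
A minimal sketch of the jhat step, assuming the dump file produced by the script above (the file name depends on the instance name you passed in):

# Start the jhat analyzer on the heap dump; it listens on port 7000 by default
jhat managed1_heap.bin
# Then open http://localhost:7000 in a browser to inspect the heap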

Monday, July 12, 2010

Clearing Cache for WebLogic instance

Hey smart WLA, 

Here I am with one more interesting story of WebLogic administration: "Clearing Cache". This comes up most often in development environments, where you need to clear the cache for new releases involving web-tier changes, XML loading, JDBC connection changes, JMS issues, and so on.

Some of our great developers think like this: "I can change minor things directly in the JSP files to test stuff."

What actually is the WebLogic cache?

Basically, all the web-tier related files (.jsp, .class, JSP-compiled files, etc.) get stored in the ./wlnotdelete/app_yourapplication directory. This is treated as the cache: whenever a WebLogic instance restarts, the server looks up the last serviced object state stored in the cache to service any pending requests. Typically, the cache is used by the WebLogic Server instance when your EJB classes need sessions, your JMS objects require persistence, or your web tier contains static content.

[Image: Cache clearing in a WebLogic domain]

Why do we need to remove the cache?

Whenever your application is accessed for the first time after a fresh deployment of a new version, WebLogic Server looks up this directory; if older objects persist there, they will conflict with the new code objects. This is where the need to remove the cache arises.

When a new version is deployed and the changes are not reflected, we may need to clear the cache. In the old versions, WebLogic 8.1 and below, we used to remove the .wlnotdelete folder. From WebLogic 9.x onwards, removing the cache means deleting the cache and stage folders under each server instance's tmp folder, or you can simply remove the whole tmp folder, provided no configuration changes have happened on the server.

Generally, for WebLogic 9.x and higher versions the cache lives at:
WIN: C:\bea\user_projects\domains\yourdomain\servers\yourserver\tmp
UNIX: /bea/user_projects/domains/yourdomain/servers/yourserver/tmp

You can use the following commands to clear the cache:
WIN: rmdir /s C:\bea\user_projects\domains\yourdomain\servers\yourserver\tmp
UNIX: rm -rf /bea/user_projects/domains/yourdomain/servers/yourserver/tmp

Here I am removing all the subdirectories and files in the given directory.

When to do this cache clearing?

After stopping the WebLogic server instance, you can go ahead with removing the cache.
Most often, Spring framework and Struts framework users hit this 'changes not reflected' issue with their web applications.
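
A rough sketch of the whole procedure on UNIX; the domain path, server name, and stop script are assumptions, so adjust them to your own layout:

# Stop the managed server first (stop script name and arguments depend on your domain setup)
/bea/user_projects/domains/yourdomain/bin/stopManagedWebLogic.sh yourserver
# Then remove the cached and staged artifacts under the server's tmp directory
rm -rf /bea/user_projects/domains/yourdomain/servers/yourserver/tmp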

An alternative solution is to deploy with the staging mode set to 'nostage'; when an application is undeployed, WebLogic Server itself removes the cached objects.
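
For illustration, a hedged sketch of a nostage deployment using the weblogic.Deployer tool; the admin URL, credentials, application name, and archive path are placeholders, and the domain environment script must be sourced first:

# Source the domain environment so weblogic.jar is on the classpath (path is a placeholder)
. /bea/user_projects/domains/yourdomain/bin/setDomainEnv.sh
# Deploy in nostage mode so the server serves the application from its original location
java weblogic.Deployer -adminurl t3://adminhost:7001 -username weblogic -password welcome1 \
     -deploy -nostage -name yourapp /path/to/yourapp.ear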

Wednesday, July 7, 2010

Copying to multiple remote machines

Here is another interesting story of a WLA (of course, mine). When I visited the US in 2007, there were a lot of restrictions in the workplace. "Hey, it's production, you know what happens if you touch it??" "Don't open these files", "Don't enter those folders", it will be dangerous... I knew very well what was missing in the system and where a change was required, but my hands were kept tied!!

Days passed and I got the opportunity to come again on a long-term assignment. Now the whole new system is taking shape with my hands; it had been awaiting me for a long time. :) All those sparkling, colorful ideas running around my mind got the chance to flow onto the system and form various automated scripts, small in size but with great capabilities.

Whenever there is an application version release, the archive files (.jar, .war, .ear) need to be copied to all the remote machines. In the old days we were using the 'sftp' command and its related 'put', 'mput', 'get' and 'mget' commands to complete the task, then manually double-checking whether the copying was done correctly by verifying the content on each machine and comparing the file sizes. Here I found a flaw: there is a chance of human error. While taking a Six Sigma course, I learnt how human errors create major defects for customers' business. To avoid this, the better option is to automate the task as much as possible. I remembered Mr. Neil Gogte's words about the "Cyber Coolie": the software engineer who does only what his contractor asks him to do and never thinks beyond the work assigned to him. My soul shouts out 'NO!!', I cannot be a Cyber Coolie any more!!

My beautiful, sparkling, colorful ideas about password-less SSH connections to multiple remote machines and the powerful 'scp' command, with verification built in, came out as a wonderful shell script with the mighty power of built-in checking and no chance of human error. When I showed the execution of this script to my teammates, they were very happy and appreciated me. Many productive hours were saved, even though this activity was a distraction from the regular job. The script made it an almost hands-free task!! Finally, that's how the team lived happily ever after using this easy script.

Script is :
TADAA!!!!!!!!!!

#!/bin/bash
# Define variable values before running
src=            # local folder containing the release archives
target=         # destination path on the remote machines
hostlist=       # file with one remote host name per line
user=           # remote user name
Logfile=        # log file for transfer results

#=== script logic starts here ====
if [ -d "$src" ]
then
        echo "Code folder found, preparing to transfer"
        while read server
        do
                scp -r "$src" $user@${server}:"$target"
                result=$?
                if [ $result -eq 0 ]
                then
                        echo "$server transfer done" >> $Logfile
                else
                        echo "$server transfer failed."
                        exit 1
                fi
        done < $hostlist
else
        echo "Code folder \"$src\" not found"
fi
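
The script assumes password-less SSH is already set up between the source box and every host in the list. A minimal sketch of that one-time setup is below; on older UNIX boxes without ssh-copy-id, append the public key to ~/.ssh/authorized_keys by hand:

# Generate a key pair once on the source machine (accept the defaults, empty passphrase)
ssh-keygen -t rsa
# Copy the public key to each remote host listed in the host list file
ssh-copy-id user@remotehost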

Blurb about this blog

Essential Middleware Administration takes an in-depth look at the fundamental relationship between middleware and operating environments such as Solaris, Linux, and HP-UX. This blog is aimed at beginner and experienced middleware team members, middleware developers, and middleware architects. You will be able to apply any of these automation scripts as takeaways, because they are generalized and essentially ready to use. Most of these scripts have been implemented in production environments.
Do you have any ideas for contributing to a Middleware Admin? Mail me at wlatechtrainer@gmail.com