
Friday, 22 April 2016


[12.2.1.0.0] Failed to start the BAM Alert Engine


Hello Viewers,

I was trying to configure BAM Alerts with the Email Notification feature. I created the BAM alerts in BAM Composer and configured the BAM properties and the user messaging driver.


While deactivating the alert I was getting a pop-up saying "unable to deactivate the alert", but the alert actually did get deactivated. The same behavior occurred when I tried to activate it.
When I tried to save the alert, it showed "unable to load the alert".

In the logs I was getting the below error:

-------------------------------------------------------------------------------------------------------------------------------------------------------------
<Apr 3, 2016 1:54:44 AM EDT> <Warning> <oracle.beam.server> <BEA-000000> <BAM Alerts Engine Service failed to start. 
Exception: java.lang.StringIndexOutOfBoundsException: String index out of range: 4 
at java.lang.String.substring(String.java:1963) 
-------------------------------------------------------------------------------------------------------------------------------------------------------------

I applied all the mandatory patches for BAM and restarted the server after clearing the tmp, cache, and data folders, but still got the same issue.

Solution:

1) Go to EM -> Navigate to Business Activity Monitoring -> BAMServer -> BAM Properties
2) Click on "More Advanced Configuration"
3) Search for the ScheduledDataPurgeTimeForDataObject property and change its value from 1:0:0 to 01:00:00
4) Save your changes.
5) Restart the environment.
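The fix lines up with the exception: an index-out-of-range at position 4 while reading a time string. A plausible explanation (an assumption on my part, not confirmed from the BAM source) is that the engine reads the purge time at fixed HH:MM:SS character positions, which 1:0:0 is simply too short to satisfy:

```shell
# Length check: "1:0:0" (5 chars) vs the full "01:00:00" (8 chars) HH:MM:SS layout.
# The fixed-offset parsing idea is an assumption used only to illustrate the fix.
short="1:0:0"
full="01:00:00"
echo ${#short}
echo ${#full}
```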

Thanks a lot for your patience!!!

Regards
-Ashish

Sunday, 3 January 2016


Script to target/untarget multiple datasources to/from the cluster

Hello Viewers,
This script will help in performing the target and untarget operation on multiple datasources in one go. You don't need to go to the console and do the same task manually, which is of course time-consuming and error-prone.

FLOW:

shell script ---> python script ---> wlst commands.


Here the shell script calls the python script, and the python script runs the relevant WLST commands that perform the actual task.

Follow the below steps:

1) Create a shell script named DataSourceOperation.sh under the directory below:

I have taken the directory structure as below:

/opt/soauser/automation/SOADataSourceOperation/


DataSourceOperation.sh

----------------------------------------------------------------------------------------------


#!/bin/sh

# Log file for this run; remove any copy left over from a previous run
LOGFILE="/opt/soauser/automation/SOADataSourceOperation/DataSource.log"
export LOGFILE
if [ -f "$LOGFILE" ]; then
  rm -f "$LOGFILE"
fi

WL_HOME="/xxxxxx/xxxx/xxx/wlserver_10.3"
export WL_HOME

echo "Please enter target to Target or untarget to Untarget the datasources:"
read PARAMETER1

cd /opt/soauser/automation/SOADataSourceOperation/
# PARAMETER1 is the operation (target/untarget), $1 is the end system name (e.g. SAP)
sh "${WL_HOME}/common/bin/wlst.sh" /opt/soauser/automation/SOADataSourceOperation/DataSourceOperation.py "$PARAMETER1" "$1" >> "$LOGFILE"
exit

------------------------------------------------------------------------------------------------

2) Create a text file in the same directory that contains the datasource names:

Suppose you need to untarget the datasources related to SAP; then name the file SAPdsList.txt (the pattern is targetdsList.txt).

Here, target could be SAP or any end system to which the datasources are related.

SAPdsList.txt
-----------------------------------------------------------------------------------------------------
DSNAME1
DSNAME2

------------------------------------------------------------------------------------------------------
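One thing to watch: the python script reads this file with readlines(), so a stray blank line would make it try to operate on an empty name. A quick way to strip blank lines before running, sketched here on a sample file in /tmp rather than your real list:

```shell
# Create a sample list with an accidental blank line, then strip empty/whitespace lines
printf 'DSNAME1\n\nDSNAME2\n' > /tmp/SAPdsList.txt
grep -v '^[[:space:]]*$' /tmp/SAPdsList.txt > /tmp/SAPdsList.clean.txt
cat /tmp/SAPdsList.clean.txt
```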



3) Create a python file named DataSourceOperation.py under the same directory:

Here I am assuming that the datasources are targeted only to a WebLogic cluster.


DataSourceOperation.py

------------------------------------------------------------------------------------------------------
from java.io import FileInputStream
from javax.management import ObjectName

import java.lang
import jarray
import os
import string
import sys, traceback

operation = sys.argv[1]
target = sys.argv[2]

def connectToServer():
    USERNAME = 'username'
    PASSWORD = 'password'
    URL = 't3://AdminServerhost:AdminServerport'
    # Connect to the AdminServer
    print 'starting the script ....'
    connect(USERNAME, PASSWORD, URL)

def disconnectFromServer():
    print "Disconnecting from the Admin Server"
    disconnect()
    print "Mission Accomplished"
    exit()

def Target(DSName):
    try:
        edit()
        tgName = 'CLUSTER_NAME'
        startEdit()
        DSName = DSName.strip()
        cd('/JDBCSystemResources/' + DSName)
        # Target the datasource to the cluster
        set('Targets', jarray.array([ObjectName('com.bea:Name=' + tgName + ',Type=Cluster')], ObjectName))
        activate()
        print 'DataSource: "', DSName, '" has been TARGETED TO CLUSTER successfully'
    except:
        print 'Something went wrong while targeting', DSName
        traceback.print_exc()
        exit()

def Untarget(DSName):
    try:
        edit()
        startEdit()
        DSName = DSName.strip()
        cd('/JDBCSystemResources/' + DSName)
        # An empty target array untargets the datasource
        set('Targets', jarray.array([], ObjectName))
        activate()
        print 'DataSource: "', DSName, '" has been UNTARGETED FROM CLUSTER successfully'
    except:
        print 'Something went wrong while untargeting', DSName
        traceback.print_exc()
        exit()

###############     Main Script   #####################################
# Conditionally import wlstModule only when the script is executed with jython
if __name__ == '__main__':
    from wlstModule import *  # @UnusedWildImport

print('This will enable you to perform operations on datasources')
listName = target + 'dsList.txt'
f = open(listName, 'r')
out = f.readlines()
f.close()
if operation == 'target':
    for DSName in out:
        DSName = DSName.strip()
        print 'Trying to target ' + DSName
        connectToServer()
        Target(DSName)
        disconnect()
        print 'Targeted ' + DSName
else:
    for DSName in out:
        DSName = DSName.strip()
        print 'Trying to untarget ' + DSName
        connectToServer()
        Untarget(DSName)
        disconnect()
        print 'Untargeted ' + DSName
disconnectFromServer()

-------------------------------------------------------------------------------------------------------

HOW TO RUN:

Simply run the shell script and provide the target system name as a parameter, for example:

cd /opt/soauser/automation/SOADataSourceOperation/

sh DataSourceOperation.sh SAP

(here SAP is the target, so the python file will look for SAPdsList.txt)

It will then ask for the operation to be performed: "Please enter target to Target or untarget to Untarget the datasources".

Type the operation name, press "enter", and then verify the log file (DataSource.log) and the datasource status from the console.
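Since all WLST output goes into DataSource.log, a quick grep for the failure message is an easy way to verify the run. Sketched below against a sample log written to /tmp, not your real one:

```shell
# Sample log with one success line and one failure line, then count the failures
printf 'DataSource: "DS1" has been TARGETED TO CLUSTER Successfully\nSomething went wrong...\n' > /tmp/DataSource.log
grep -c "Something went wrong" /tmp/DataSource.log
```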

Thanks a lot for your patience !!!! 

Regards
-Ashish 

Friday, 4 December 2015


Script to Pause/Resume the multiple distributed queues at a time



Hello Viewers,
This script will help in performing the pause and resume operation on multiple distributed queues in one go. You don't need to go to the console and do the same task manually, which is of course time-consuming and error-prone.

FLOW:

shell script ---> python script ---> wlst commands.

Here the shell script calls the python script, and the python script runs the relevant WLST commands that perform the actual task.

Follow the below steps:

1) Create a shell script named QueueOperation.sh under the directory below:


I have taken the directory structure as below:

/opt/soauser/automation/SOAQueueOperation/


QueueOperation.sh
---------------------------------------------------------------------------------------------------------------

#!/bin/sh

# Log file for this run; remove any copy left over from a previous run
LOGFILE="/opt/soauser/automation/SOAQueueOperation/Q_Pause_Resume.log"
export LOGFILE
if [ -f "$LOGFILE" ]; then
  rm -f "$LOGFILE"
fi

WL_HOME="/xxxxxx/xxxx/xxx/wlserver_10.3"
export WL_HOME

echo "Please enter pause to PAUSE the queues or resume to RESUME the queues:"
read PARAMETER1

cd /opt/soauser/automation/SOAQueueOperation/
# PARAMETER1 is the operation (pause/resume), $1 is the end system name (e.g. SAP)
sh "${WL_HOME}/common/bin/wlst.sh" /opt/soauser/automation/SOAQueueOperation/QueueOperation.py "$PARAMETER1" "$1" >> "$LOGFILE"
exit
--------------------------------------------------------------------------------------------------------------

2) Create a text file in the same directory that contains the queue names:

Suppose you need to pause the queues related to SAP; then name the file SAPQueueList.txt (the pattern is targetQueueList.txt).

Here, target could be SAP or any end system to which the queues are related.

SAPQueueList.txt
----------------------------------------------------------------------------------------------------------
QueueName1
QueueName2
----------------------------------------------------------------------------------------------------------


3) Create a python file named QueueOperation.py under the same directory:

Here I am assuming that:
I have 2 JMS servers: JMSSERVER1 and JMSSERVER2
My managed servers are named MS1 and MS2
JMS module: JMSMODULE
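The WLST script below cd's into a runtime path built from these names, where the destination is addressed as MODULE!JMSSERVER@QueueName. A small sketch of how that path is assembled, using the sample names above:

```shell
# Build the JMS runtime destination path the WLST script navigates to
module="JMSMODULE"; server="JMSSERVER1"; queue="QueueName1"
echo "/JMSRuntime/MS1.jms/JMSServers/${server}/Destinations/${module}!${server}@${queue}"
```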

QueueOperation.py
----------------------------------------------------------------------------------------------------------
from java.io import FileInputStream

import java.lang
import os
import string
import sys, traceback

operation = sys.argv[1]
target = sys.argv[2]

def connectToServerJMS1():
    USERNAME = 'username'
    PASSWORD = 'password'
    URL = 't3://managedserver1host:managedserver1port'
    # Connect to ManagedServer1
    print 'starting the script ....'
    connect(USERNAME, PASSWORD, URL)

def connectToServerJMS2():
    USERNAME = 'username'
    PASSWORD = 'password'
    URL = 't3://managedserver2host:managedserver2port'
    # Connect to ManagedServer2
    print 'starting the script ....'
    connect(USERNAME, PASSWORD, URL)

def disconnectFromServer():
    print "Disconnecting from the Managed Server"
    disconnect()
    print "Mission Accomplished"
    exit()

def pauseQueueConsumptionJMS1(queueName):
    try:
        serverRuntime()
        queueName = queueName.strip()
        cd('/JMSRuntime/MS1.jms/JMSServers/JMSSERVER1/Destinations/JMSMODULE!JMSSERVER1@' + queueName)
        cmo.pauseConsumption()
        print 'Queue: "', queueName, '" CONSUMPTION paused on JMSSERVER1 successfully'
    except:
        print 'Something went wrong while pausing', queueName
        traceback.print_exc()
        exit()

def pauseQueueConsumptionJMS2(queueName):
    try:
        serverRuntime()
        queueName = queueName.strip()
        cd('/JMSRuntime/MS2.jms/JMSServers/JMSSERVER2/Destinations/JMSMODULE!JMSSERVER2@' + queueName)
        cmo.pauseConsumption()
        print 'Queue: "', queueName, '" CONSUMPTION paused on JMSSERVER2 successfully'
    except:
        print 'Something went wrong while pausing', queueName
        traceback.print_exc()
        exit()

def resumeQueueConsumptionJMS1(queueName):
    try:
        serverRuntime()
        queueName = queueName.strip()
        cd('/JMSRuntime/MS1.jms/JMSServers/JMSSERVER1/Destinations/JMSMODULE!JMSSERVER1@' + queueName)
        cmo.resumeConsumption()
        print 'Queue: "', queueName, '" CONSUMPTION resumed on JMSSERVER1 successfully'
    except:
        print 'Something went wrong while resuming', queueName
        traceback.print_exc()
        exit()

def resumeQueueConsumptionJMS2(queueName):
    try:
        serverRuntime()
        queueName = queueName.strip()
        cd('/JMSRuntime/MS2.jms/JMSServers/JMSSERVER2/Destinations/JMSMODULE!JMSSERVER2@' + queueName)
        cmo.resumeConsumption()
        print 'Queue: "', queueName, '" CONSUMPTION resumed on JMSSERVER2 successfully'
    except:
        print 'Something went wrong while resuming', queueName
        traceback.print_exc()
        exit()

###############     Main Script   #####################################
# Conditionally import wlstModule only when the script is executed with jython
if __name__ == '__main__':
    from wlstModule import *  # @UnusedWildImport

print('This will enable you to perform operations on distributed JMS queues')
listName = target + 'QueueList.txt'
f = open(listName, 'r')
out = f.readlines()
f.close()
if operation == 'pause':
    for queueName in out:
        queueName = queueName.strip()
        print 'Trying Consumption Pause on ' + queueName
        connectToServerJMS1()
        pauseQueueConsumptionJMS1(queueName)
        disconnect()
        connectToServerJMS2()
        pauseQueueConsumptionJMS2(queueName)
        disconnect()
        print 'Consumption Paused on ' + queueName
else:
    for queueName in out:
        queueName = queueName.strip()
        print 'Trying Consumption Resume on ' + queueName
        connectToServerJMS1()
        resumeQueueConsumptionJMS1(queueName)
        disconnect()
        connectToServerJMS2()
        resumeQueueConsumptionJMS2(queueName)
        disconnect()
        print 'Consumption Resumed on ' + queueName
disconnectFromServer()
####################################

---------------------------------------------------------------------------------------------------------

HOW TO RUN:

Simply run the shell script and provide the target system name as a parameter, for example:

cd /opt/soauser/automation/SOAQueueOperation/

sh QueueOperation.sh SAP

(here SAP is the target, so the python file will look for SAPQueueList.txt)

It will then ask for the operation to be performed: "Please enter pause to PAUSE the queues or resume to RESUME the queues".

Type the operation name, press "enter", and then verify the log file (Q_Pause_Resume.log) and the queue status from the console.

Thanks a lot for your patience !!!! 

Regards
-Ashish 

Monday, 17 November 2014

Script to List All the OSB projects, Business Services and Proxy Services deployed in sbconsole


Hello Viewers,

This script will give you the list of all the OSB projects, business services, and proxy services deployed in sbconsole, written to a .txt file.

Benefits: This is helpful for techies supporting non-prod environments where lots of dummy projects are created by developers just for testing. This script can help in performing cleanup activity: it provides the list of projects, business services, and proxy services, and unused services and projects can later be identified and deleted.

Here is the Flow :
Shell calls the python ---> python executes the wlst command.

Follow the below steps:

Step 1): Create the directory structure below and move into it:

mkdir -p /shared/fmw/build/script/List/
cd /shared/fmw/build/script/List/

Step 2): Under your current directory "List", create the osbservices.py file with the content below:

import sys
import os
import socket

connect('username', 'password', 't3://host:AdminPort')

from com.bea.wli.sb.management.configuration import ALSBConfigurationMBean
from com.bea.wli.config import Ref
from java.lang import String
from com.bea.wli.sb.util import Refs
from com.bea.wli.sb.management.configuration import CommonServiceConfigurationMBean
from com.bea.wli.sb.management.configuration import SessionManagementMBean
from com.bea.wli.sb.management.configuration import ProxyServiceConfigurationMBean
from com.bea.wli.monitoring import StatisticType
from com.bea.wli.monitoring import ServiceDomainMBean
from com.bea.wli.monitoring import ServiceResourceStatistic
from com.bea.wli.monitoring import StatisticValue
from com.bea.wli.monitoring import ResourceType

domainRuntime()

# Look up the ALSB configuration MBean and fetch every reference in the domain
alsbCore = findService(ALSBConfigurationMBean.NAME, ALSBConfigurationMBean.TYPE)
refs = alsbCore.getRefs(Ref.DOMAIN)
it = refs.iterator()
print "List of Projects in OSB"
while it.hasNext():
    r = it.next()
    if r.getTypeId() == Ref.PROJECT_REF:
        print "Project Name : " + r.getProjectName()

allRefs = alsbCore.getRefs(Ref.DOMAIN)
print "List of Proxy Services"
for ref in allRefs:
    if ref.getTypeId() == "ProxyService":
        print "Proxy Service: " + ref.getFullName()

print "List of Business Services"
for ref in allRefs:
    if ref.getTypeId() == "BusinessService":
        print "Business Service: " + ref.getFullName()

disconnect()
exit()


Step 3): Under your current directory "List", create "osbservices.sh" with the content below:


#!/bin/sh
# Set WL_HOME, ORACLE_HOME, and OSBHOME to the root directories of your installation
WL_HOME="WL_HOME Directory"
ORACLE_HOME="ORACLE_HOME Directory"
OSBHOME="OSB_HOME Directory"

rm -f output.txt
umask 027

# set up common environment
WLS_NOT_BRIEF_ENV=true
. "${WL_HOME}/server/bin/setWLSEnv.sh"

CLASSPATH="${CLASSPATH}${CLASSPATHSEP}${FMWLAUNCH_CLASSPATH}${CLASSPATHSEP}${DERBY_CLASSPATH}${CLASSPATHSEP}${DERBY_TOOLS}${CLASSPATHSEP}${POINTBASE_CLASSPATH}${CLASSPATHSEP}${POINTBASE_TOOLS}"

# OSB jars needed by the WLST script
CLASSPATH=$CLASSPATH:$OSBHOME/modules/com.bea.common.configfwk_1.6.0.0.jar:$OSBHOME/lib/sb-kernel-api.jar:$OSBHOME/lib/sb-kernel-impl.jar:$WL_HOME/server/lib/weblogic.jar:$OSBHOME/lib/alsb.jar
export CLASSPATH

if [ "${WLST_HOME}" != "" ] ; then
        WLST_PROPERTIES="-Dweblogic.wlstHome='${WLST_HOME}' ${WLST_PROPERTIES}"
        export WLST_PROPERTIES
fi
JVM_ARGS="-Dprod.props.file='${WL_HOME}'/.product.properties ${WLST_PROPERTIES} ${JVM_D64} ${MEM_ARGS} ${CONFIG_JVM_ARGS}"

sh "${ORACLE_HOME}/common/bin/wlst.sh" osbservices.py >> output.txt
date >> output.txt

Note: Change the values of WL_HOME, ORACLE_HOME, and OSBHOME according to your environment.

After this you just need to run the shell script (sh osbservices.sh) and you will get the output in the output.txt file.
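Once output.txt is generated, simple greps give you quick counts per artifact type. Sketched here on a sample file in /tmp so it does not touch your real output:

```shell
# Sample output.txt with one entry of each type, then count each artifact type
printf 'Project Name : Demo\nProxy Service: Demo/PS1\nBusiness Service: Demo/BS1\n' > /tmp/output.txt
grep -c "Proxy Service:" /tmp/output.txt
grep -c "Business Service:" /tmp/output.txt
```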

Thanks a lot for your patience !!!! 

Regards
-Ashish 

Sunday, 30 March 2014


Shell script to get the list of services having more than two versions deployed in a domain, along with their counts.

Hello Viewers,

This shell script will give you the list of services that have more than two versions deployed in a domain, along with their counts and the partitions in which they are deployed.

Benefit: This is helpful for techies supporting non-prod environments where many versions of the same service are deployed in a domain. A large number of versions creates confusion and leads to slowness of the EM console.
Maintenance of any non-prod environment includes undeploying older versions of the services and keeping only the one version that will respond to client requests. Gathering the list of services having multiple versions manually is a time-consuming process; this script will surely save your time.

Here is the Flow :

Shell calls the python ---> python executes the wlst command.

Wlst command: it will list all the composites that are deployed in the domain.

sca_listDeployedComposites('host','manageserver_port','user','password')

Change the values of host, manageserver_port, user, and password according to your environment.

Follow the below steps:

Step 1) : Create the directory structure as below:

cd /shared/fmw/build/script/versioncount

Step 2): Under your current directory "versioncount", create the serviceList.py file with the content below:

def connectToServer():
    USERNAME = 'user'
    PASSWORD = 'password'
    URL = 't3://host:adminport'
    # Connect to the Administration Server
    print 'starting the script ....'
    connect(USERNAME, PASSWORD, URL)

def disconnectFromServer():
    print "Disconnecting from the Admin Server"
    disconnect()
    print "Mission Accomplished"
    exit()

def listDeployedComposites():
    try:
        # Lists every composite deployed in the domain
        sca_listDeployedComposites('host','manageserver_port','user','password')
    except:
        print 'Unable to list the deployed composites...'
        exit()
    print 'Command executed successfully'

###############     Main Script   #####################################
# Conditionally import wlstModule only when the script is executed with jython
if __name__ == '__main__':
    from wlstModule import *  # @UnusedWildImport

print('This will list all the composites deployed in the domain')
connectToServer()
listDeployedComposites()
disconnectFromServer()
####################################


Step 3): Under your current directory "versioncount", create serviceList.sh with the content below:


#!/bin/bash
# Arrays are used below, so this script needs bash rather than plain sh.
# Set WL_HOME and ORACLE_HOME to the root directories of your installation
WL_HOME="wl_home"
ORACLE_HOME="oracle_home"

umask 027

cd /shared/fmw/build/script/versioncount

# set up common environment
WLS_NOT_BRIEF_ENV=true
. "${WL_HOME}/server/bin/setWLSEnv.sh"

if [ "${WLST_HOME}" != "" ] ; then
        WLST_PROPERTIES="-Dweblogic.wlstHome='${WLST_HOME}' ${WLST_PROPERTIES}"
        export WLST_PROPERTIES
fi

JVM_ARGS="-Dprod.props.file='${WL_HOME}'/.product.properties ${WLST_PROPERTIES} ${JVM_D64} ${MEM_ARGS} ${CONFIG_JVM_ARGS}"

# Run the WLST script and capture the composite listing
sh "${ORACLE_HOME}/common/bin/wlst.sh" serviceList.py > output.out

# Keep only the lines that mention a partition (i.e. the composite entries)
grep -E "partition" output.out > final.out

# Strip the deployment mode and everything after it
sed -e 's/mode.*//' final.out > a.out

# Drop the list index, version numbers, and brackets, leaving "name,partition=..."
sed -e 's/^[0-9]*. //g' -e 's/[0-9]*[.]*//g' -e 's/\[//g' -e 's/\]//g' -e 's/, /,/' a.out > b.out

# Read the cleaned lines into an array
N=0
while read LINE ; do
  var[$N]=$LINE
  N=$((N+1))
done < b.out

# Count how many times each service appears; more than one hit means
# multiple versions of the same composite are deployed
for i in "${var[@]}"; do
   COUNT=0
   for j in "${var[@]}"; do
     if [ "$i" = "$j" ]; then
       COUNT=$((COUNT+1))
     fi
   done
   if [ "$COUNT" -gt 1 ]; then
     echo "$i $COUNT" >> result.txt
   fi
done
sort result.txt | uniq > finalresult.txt
rm -f result.txt
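The nested counting loop in the script can also be done with sort and uniq -c, which is easier to audit. A sketch on sample data (the file name and contents here are illustrative, not your real output):

```shell
# Sample cleaned list: svc1 appears twice (two versions), svc2 once
printf 'svc1,partition=default\nsvc1,partition=default\nsvc2,partition=default\n' > /tmp/b.out
# uniq -c prefixes each line with its count; keep only entries with count > 1
sort /tmp/b.out | uniq -c | awk '$1 > 1 {print $2, $1}'
```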


Note: Change the values of WL_HOME and ORACLE_HOME according to your environment.

After this you just need to run the shell script (sh serviceList.sh) and you will get the output in the finalresult.txt file, in the format below:

service1,partition=partition_name count


Thanks a lot for your patience !!!! 

Regards
-Ashish 



Monday, 10 February 2014

Shell script to get Email Notification for every new occurrence of java.lang.OutOfMemoryError in logs

This script will send an Email Notification whenever there is a new occurrence of java.lang.OutOfMemoryError in the logs, with the latest timestamp, which will help you diagnose the issue.

Benefit: This script is helpful in monitoring the environment and will let the administrator know about the issue ASAP, so that they can apply the required fix and prevent unbearable delay. Along with sending the notification, it also takes a backup of the log file.

Below are steps you need to follow:

Step 1) Create the directory

cd /shared/fmw/build/myscript

Step 2) Create AutocheckOOM.sh file in the current directory with below content:

#!/bin/bash
##########
ENVNAME="Env:"
COUNTER="0"
WORKDIR="/shared/fmw/build/myscript"
msName="MngdSvr1"
Log_LOC="logfile_location"

LogFiles=( MngdSvr1.log )

# email notifications will be sent via mail
EMAIL="mail_id"
# CC list in the notification mail
CCList="ccmail_id"
# From email address in the notification Email
FromAdd="frommail_id"

#Functions

# Count the OutOfMemoryError occurrences across all the log files
OOM() {
for logfile in "${LogFiles[@]}" ; do
Count=$(grep -c "java.lang.OutOfMemoryError" "$Log_LOC/$logfile")
COUNTER=$((COUNTER + Count))
export COUNTER
done
}

# Archive each log file for future reference
BackupLogs() {
for logfile in "${LogFiles[@]}" ; do
if [ -f "$Log_LOC/$logfile" ]; then
tar -czf "$Log_LOC/$logfile.tar.gz" "$Log_LOC/$logfile"
fi
done
}

Main() {
OOM
if [[ "$COUNTER" != "0" ]] ; then
echo "`date` :Out Of Memory Condition Detected.."
echo "`date` :Backing up logs for future reference.."
BackupLogs
# Pull the timestamp of the latest OutOfMemoryError entry from the log
grep "java.lang.OutOfMemoryError" "$Log_LOC/${msName}.log" | grep "####<" > temp.txt
output=$(tail -1 temp.txt | awk -F'>' '{print $1}' | awk -F'<' '{print $2}')
# a.txt keeps the timestamps already notified, so each occurrence mails only once
echo '' >> a.txt
test=$(grep -F "$output" a.txt)
if [ "$test" == "" ]; then
echo "$output" >> a.txt
echo "$ENVNAME OutOfMemory Error Detected at $output" | mail -s "$(echo -e "Auto-Msg: $ENVNAME : OutOfMemory Error\nContent-Type: text/html")" -c "$CCList" -r "$FromAdd" "$EMAIL"
fi
else
echo "`date` :Exiting, No Out of Memory found...";
exit 0
fi
}
Main


Note: change the log location according to your environment,
      change the email ids accordingly, and
      change the managed server name accordingly.

After this you just need to run the shell script, or better, set it up as a cron job:

sh AutocheckOOM.sh
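For cron, an entry like the one below runs the check every 15 minutes (the schedule and paths are examples; adjust them to your environment):

```
# Example crontab entry (edit with: crontab -e)
# Runs AutocheckOOM.sh every 15 minutes and appends its output to a log
*/15 * * * * sh /shared/fmw/build/myscript/AutocheckOOM.sh >> /shared/fmw/build/myscript/cron.log 2>&1
```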

Thanks a lot for your patience!!!!

Regards
-Ashish




Sunday, 2 February 2014

SQL Scripts for Monitoring Transactions


There are some useful scripts that are helpful in monitoring the instances or transactions of particular composites deployed in a WebLogic environment.

Benefit: These scripts are helpful for techies working in production support, helping them analyze
the load coming to their environment for particular services or for all the services deployed in the WebLogic environment.

Below are the scripts:

Average, minimum, and maximum duration of components

SELECT DOMAIN_NAME,
COMPONENT_NAME,
DECODE(STATE,'5','COMPLETE','9','STALE','10','FAULTED') STATE,
TO_CHAR(MIN((TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),12,2))*60*60) +
(TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),15,2))*60) +
TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),18,4))),'999990.000') MIN,
TO_CHAR(MAX((TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),12,2))*60*60) +
(TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),15,2))*60) +
TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),18,4))),'999990.000') MAX,
TO_CHAR(AVG((TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),12,2))*60*60) +
(TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),15,2))*60) +
TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),18,4))),'999990.000') AVG,
COUNT(1) COUNT
FROM CUBE_INSTANCE
WHERE CREATION_DATE >= SYSDATE-1
--AND COMPONENT_NAME LIKE '%%'
AND COMPOSITE_NAME LIKE '%%'
GROUP BY DOMAIN_NAME, COMPONENT_NAME, STATE
ORDER BY COMPONENT_NAME, STATE


Note: Enter the name of the component or composite name accordingly.


Number of instances in every hour (load query)


SELECT inner_tab.hour_time, COUNT(*) no_of_incidents
FROM (SELECT TO_NUMBER(TO_CHAR(created_time, 'HH24')) hour_time
      FROM COMPOSITE_INSTANCE
      WHERE created_time BETWEEN TO_DATE('23-09-2013 19:00:00','DD-MM-YYYY HH24:MI:SS')
                             AND TO_DATE('24-09-2013 00:00:00','DD-MM-YYYY HH24:MI:SS')
     ) inner_tab
GROUP BY inner_tab.hour_time
ORDER BY inner_tab.hour_time


Note: change the date accordingly.


Running instances of any particular composite in the last one hour


select compin.id, substr(compin.composite_DN, 0, instr(compin.composite_DN, '!')-1) Composite_name, compin.source_name, compin.conversation_id
, to_char(compin.created_time, 'MM/DD/YY-HH:MI:SS')
from composite_instance compin
where
compin.state = '0'
and compin.id not in (select cmpst_id from cube_instance cubein)
and compin.created_time > sysdate - 1/24
and substr(compin.composite_DN, 0, instr(compin.composite_DN, '!')-1) IN('composite_name1','composite_name2');


Note: change the name of the composites accordingly.


Instance processing times


SELECT create_cluster_node_id, cikey, conversation_id, parent_id, ecid, title, state, status, domain_name, composite_name, cmpst_id,
       TO_CHAR(creation_date,'YYYY-MM-DD HH24:MI:SS') cdate,
       TO_CHAR(modify_date,'YYYY-MM-DD HH24:MI:SS') mdate,
       EXTRACT(DAY FROM (modify_date - creation_date))*24*60*60 +
       EXTRACT(HOUR FROM (modify_date - creation_date))*60*60 +
       EXTRACT(MINUTE FROM (modify_date - creation_date))*60 +
       EXTRACT(SECOND FROM (modify_date - creation_date)) duration_seconds
FROM   cube_instance
WHERE  TO_CHAR(creation_date, 'YYYY-MM-DD HH24:MI') >= '2013-05-06 11:00'
AND    TO_CHAR(creation_date, 'YYYY-MM-DD HH24:MI') <= '2013-05-06 18:00'
ORDER BY cdate;


Note: change the date and time accordingly.
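The interval-to-seconds arithmetic in the query above (days*24*60*60 + hours*60*60 + minutes*60 + seconds) can be sanity-checked outside SQL; for example, a duration of 1 hour 2 minutes 3 seconds:

```shell
# 0 days, 1 hour, 2 minutes, 3 seconds expressed in seconds
echo $(( 0*24*60*60 + 1*60*60 + 2*60 + 3 ))
```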


Number of long-running (more than 7 days) instances for any particular composite


select compin.id, substr(compin.composite_DN, 0, instr(compin.composite_DN, '!')-1) Composite_name, compin.source_name, compin.conversation_id
, to_char(compin.created_time, 'MM/DD/YY-HH:MI:SS')
from composite_instance compin
where
compin.state = '0'
and compin.id not in (select cmpst_id from cube_instance cubein)
and compin.created_time < sysdate - 7
and substr(compin.composite_DN, 0, instr(compin.composite_DN, '!')-1) IN('Partition/composite_name');


Note : change the partition and composite name accordingly.

Thanks a lot for your patience!!!!

Regards
-Ashish