
Wednesday, 15 June 2016

[WLS 12.2.1.0.0] Managed servers going to FORCE_SHUTTING_DOWN state when resumed from ADMIN state.

Hello Viewer,

I have noticed a weird behavior in WLS 12.2.1.0.0: when the server starts and any of the data sources fails to come up because of an associated database issue (database down or similar), the server gets stuck in ADMIN state, which is the normal behavior. However, when we try to resume it from the console, it goes to FORCE_SHUTTING_DOWN state and shuts down again.

Workaround: 

Set the initial connection capacity of the affected data source's connection pool to ZERO, so the server does not try to create database connections at startup.

Thanks a lot for your patience!!!

Regards
-Ashish


Friday, 22 April 2016


 [12.2.1.0.0] Failed to start the BAM Alert Engine


Hello Viewers,

I was trying to configure the BAM Alerts with Email Notification feature. I created the BAM alerts in BAM composer and configured the BAM properties and user messaging driver.


While deactivating an alert I was getting a pop-up saying "unable to deactivate the alert", although the alert actually got deactivated; the same happened when activating it. While saving an alert it showed "unable to load the alert".

The logs showed the below error:

-------------------------------------------------------------------------------------------------------------------------------------------------------------
<Apr 3, 2016 1:54:44 AM EDT> <Warning> <oracle.beam.server> <BEA-000000> <BAM Alerts Engine Service failed to start. 
Exception: java.lang.StringIndexOutOfBoundsException: String index out of range: 4 
at java.lang.String.substring(String.java:1963) 
-------------------------------------------------------------------------------------------------------------------------------------------------------------

I applied all the mandatory patches for BAM and restarted the server after clearing the tmp, cache, and data folders, but still got the same issue.

Solution :

1) Go to EM -> Navigate to Business Activity Monitoring -> BAMServer -> BAM Properties
2) Click on "More Advanced Configuration"
3) Search for the ScheduledDataPurgeTimeForDataObject property and change its value from 1:0:0 to 01:00:00
4) Save your changes.
5) Restart the environment.
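Why the value matters: the stack trace (String.substring failing at index 4) suggests the engine parses this property with fixed-offset substring calls that assume the zero-padded HH:MM:SS form. A plain-Python sketch (not the actual BAM code) of the difference:

```python
def parse_hhmmss(value):
    # Fixed-width parsing in the spirit of the stack trace above:
    # the offsets assume a zero-padded "HH:MM:SS", so "1:0:0" fails.
    if len(value) != 8 or value[2] != ":" or value[5] != ":":
        raise ValueError("expected HH:MM:SS, got %r" % value)
    return int(value[0:2]), int(value[3:5]), int(value[6:8])

print(parse_hhmmss("01:00:00"))   # accepted
try:
    parse_hhmmss("1:0:0")         # rejected, like the alert engine
except ValueError as e:
    print("rejected:", e)
```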

Thanks a lot for your patience!!!

Regards
-Ashish

Sunday, 3 January 2016


 

Script to target/untarget multiple datasources to/from the cluster

Hello Viewers,
This script helps in performing target and untarget operations on multiple datasources in one go. You don't need to go to the console and do the same task manually, which is of course time-consuming and error-prone.

FLOW:

shell script ---> python --> wlst command.


Here the shell script calls the python script, and the python script runs the relevant WLST commands that perform the actual task.

Follow the below steps:

1) Create a shell script DataSourceOperation.sh under the below directory:

I have taken the directory structure as below:

/opt/soauser/automation/SOADataSourceOperation/


DataSourceOperation.sh

----------------------------------------------------------------------------------------------


#!/bin/sh

# remove the log from any previous run
LOGFILE="/opt/soauser/automation/SOADataSourceOperation/DataSource.log"
export LOGFILE
if [ -f "$LOGFILE" ]; then
  rm -f "$LOGFILE"
fi

WL_HOME="/xxxxxx/xxxx/xxx/wlserver_10.3"
export WL_HOME

echo "Please enter target to Target and untarget to Untarget the Datasources:"
read PARAMETER1

cd /opt/soauser/automation/SOADataSourceOperation/
# $1 is the end-system name (e.g. SAP); DataSourceOperation.py looks for <system>dsList.txt
sh ${WL_HOME}/common/bin/wlst.sh /opt/soauser/automation/SOADataSourceOperation/DataSourceOperation.py $PARAMETER1 $1 >> "$LOGFILE"
exit

------------------------------------------------------------------------------------------------

2) Create a text file in the same directory that contains the datasource names, one per line.

Name it <target>dsList.txt, where target is the end system the datasources relate to. For example, if you need to untarget the datasources related to SAP, name it SAPdsList.txt.

SAPdsList.txt
-----------------------------------------------------------------------------------------------------
DSNAME1
DSNAME2

------------------------------------------------------------------------------------------------------



3) Create a python file under the same directory: DataSourceOperation.py

Here i am assuming that 
datasources are targeted to only weblogic cluster.


DataSourceOperation.py

------------------------------------------------------------------------------------------------------
from java.io import FileInputStream
from javax.management import ObjectName

import java.lang
import jarray
import os
import string
import sys, traceback

operation=sys.argv[1]
target=sys.argv[2]
def connectToServer():
        USERNAME = 'username'
        PASSWORD = 'password'
        URL='t3://AdminServerhost:AdminServerport'
        #Connect to the AdminServer
        print 'starting the script ....'
        connect(USERNAME,PASSWORD,URL)

def disconnectFromServer():
    print "Disconnecting from the Admin Server"
    disconnect()
    print "Exiting from the Admin Server"
    exit()
    print "Mission Accomplished"


def Target(DSName):
     try:
        edit()
        startEdit()
        tgName = 'CLUSTER_NAME'   # change this to your cluster name
        DSName = DSName.strip()
        print 'Targeting datasource ' + DSName
        cd('/JDBCSystemResources/' + DSName)
        set('Targets',jarray.array([ObjectName('com.bea:Name='+tgName+',Type=Cluster')], ObjectName))
        activate()
        print 'DataSource: "', DSName ,'" has been TARGETED TO CLUSTER successfully'
     except :
        print 'Failed to target ' + DSName
        dumpStack()
        exit()

def Untarget(DSName):
     try:
        edit()
        startEdit()
        DSName = DSName.strip()
        print 'Untargeting datasource ' + DSName
        cd('/JDBCSystemResources/' + DSName)
        set('Targets',jarray.array([], ObjectName))
        activate()
        print 'DataSource: "', DSName ,'" has been UNTARGETED FROM CLUSTER successfully'
     except :
        print 'Failed to untarget ' + DSName
        dumpStack()
        exit()

###############     Main Script   #####################################
#Conditionally import wlstModule only when script is executed with jython
if __name__ == '__main__':
    from wlstModule import *#@UnusedWildImport
print('This script performs target/untarget operations on datasources')
listName = target + 'dsList.txt'
f = open(listName, 'r')
out = f.readlines()
f.close()
connectToServer()
if operation == 'target':
   for DSName in out:
     DSName = DSName.strip()   # strip() returns a new string, so reassign it
     Target(DSName)
     print 'Targeted ' + DSName
else:
   for DSName in out:
     DSName = DSName.strip()
     Untarget(DSName)
     print 'Untargeted ' + DSName
disconnectFromServer()

-------------------------------------------------------------------------------------------------------
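One detail worth noting in the loop above: str.strip() returns a new string rather than modifying the string in place, so the stripped value must be reassigned (plain Python below; the behavior is the same in Jython/WLST):

```python
name = "DSNAME1\n"   # a line read from SAPdsList.txt
name.strip()         # no effect: the stripped copy is discarded
print(repr(name))    # 'DSNAME1\n'

name = name.strip()  # reassign to actually drop the newline
print(repr(name))    # 'DSNAME1'
```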

HOW TO RUN:

Simply run the shell script and provide the target system name as a parameter, for example:

cd /opt/soauser/automation/SOADataSourceOperation/

sh DataSourceOperation.sh SAP 

(here SAP is the target system, so the python script will look for SAPdsList.txt)

It will then ask for the operation to be performed: "Please enter target to Target and untarget to Untarget the Datasources".

Type the operation name, press Enter, and then verify the log file (DataSource.log) and the datasource status from the console.

Thanks a lot for your patience !!!! 

Regards
-Ashish 

Monday, 17 November 2014

Script to List All the OSB projects, Business Services and Proxy Services deployed in sbconsole


Hello to viewer,

This script will give you the list of all the OSB projects, business services, and proxy services deployed in sbconsole, in a .txt file.

Benefits: This is helpful for techies supporting non-prod environments where lots of dummy projects are created by developers just for testing. The script can help in performing cleanup: it provides the list of projects, business services, and proxy services, and the unused services and projects can then be identified and deleted.

Here is the Flow :
Shell calls the python ---> python executes the wlst command.

Follow the below steps:

Step 1) : Create the directory structure as below:

cd /shared/fmw/build/script/List/


Step 2) : Under your current directory "List" create osbservices.py file with content below:

import sys
import os
import socket

from com.bea.wli.sb.management.configuration import ALSBConfigurationMBean
from com.bea.wli.config import Ref
from java.lang import String
from com.bea.wli.sb.util import Refs
from com.bea.wli.sb.management.configuration import CommonServiceConfigurationMBean
from com.bea.wli.sb.management.configuration import SessionManagementMBean
from com.bea.wli.sb.management.configuration import ProxyServiceConfigurationMBean
from com.bea.wli.monitoring import StatisticType
from com.bea.wli.monitoring import ServiceDomainMBean
from com.bea.wli.monitoring import ServiceResourceStatistic
from com.bea.wli.monitoring import StatisticValue
from com.bea.wli.monitoring import ResourceType

# change username, password, host and AdminPort according to your environment
connect('username', 'password', 't3://host:AdminPort')

domainRuntime()

alsbCore = findService(ALSBConfigurationMBean.NAME, ALSBConfigurationMBean.TYPE)
refs = alsbCore.getRefs(Ref.DOMAIN)
it = refs.iterator()
print "List of Project in OSB"
while it.hasNext():
    r = it.next()
    if r.getTypeId() == Ref.PROJECT_REF:      
        print "Project Name : " + (r.getProjectName())

print "List of Proxy Service"
for ref in refs:
  typeId = ref.getTypeId()
  if typeId == "ProxyService":
     print "Proxy Service: " + ref.getFullName()

print "List of Business Service"
for ref in refs:
  typeId = ref.getTypeId()
  if typeId == "BusinessService":
     print "Business Service: " + ref.getFullName()
disconnect()
exit()


Step 3)  : Under your current directory "List" create "osbservices.sh" with the content below :


#!/bin/sh
# set up WL_HOME, ORACLE_HOME and OSBHOME, the root directories of your installation
WL_HOME="WL_HOME Directory"
ORACLE_HOME="ORACLE_HOME Directory"
OSBHOME="OSB_HOME Directory"
rm -f output.txt
umask 027

# set up common environment
WLS_NOT_BRIEF_ENV=true
. "${WL_HOME}/server/bin/setWLSEnv.sh"

CLASSPATH="${CLASSPATH}${CLASSPATHSEP}${FMWLAUNCH_CLASSPATH}${CLASSPATHSEP}${DERBY_CLASSPATH}${CLASSPATHSEP}${DERBY_TOOLS}${CLASSPATHSEP}${POINTBASE_CLASSPATH}${CLASSPATHSEP}${POINTBASE_TOOLS}"

CLASSPATH=$CLASSPATH:$OSBHOME/modules/com.bea.common.configfwk_1.6.0.0.jar:$OSBHOME/lib/sb-kernel-api.jar:$OSBHOME/lib/sb-kernel-impl.jar:$WL_HOME/server/lib/weblogic.jar:$OSBHOME/lib/alsb.jar
export CLASSPATH

if [ "${WLST_HOME}" != "" ] ; then
        WLST_PROPERTIES="-Dweblogic.wlstHome='${WLST_HOME}' ${WLST_PROPERTIES}"
        export WLST_PROPERTIES
fi
JVM_ARGS="-Dprod.props.file='${WL_HOME}'/.product.properties ${WLST_PROPERTIES} ${JVM_D64} ${MEM_ARGS} ${CONFIG_JVM_ARGS}"

sh "${ORACLE_HOME}/common/bin/wlst.sh" osbservices.py >> output.txt
date >> output.txt







Note: Change the values of WL_HOME, ORACLE_HOME, and OSBHOME according to your environment.

After this you just need to run the shell script (sh osbservices.sh) and you will get the output in the output.txt file.
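If you want to post-process output.txt further, here is a small Python sketch that splits the flat listing into three lists. The sample layout below is assumed from the print statements in osbservices.py:

```python
# Hypothetical sample of output.txt, matching the prints in osbservices.py
sample = """List of Project in OSB
Project Name : ProjectA
List of Proxy Service
Proxy Service: ProjectA/folder/PS1
List of Business Service
Business Service: ProjectA/folder/BS1
"""

lines = sample.splitlines()
projects = [l.split(": ", 1)[1] for l in lines if l.startswith("Project Name")]
proxies = [l.split(": ", 1)[1] for l in lines if l.startswith("Proxy Service:")]
business = [l.split(": ", 1)[1] for l in lines if l.startswith("Business Service:")]
print(projects, proxies, business)
```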

Thanks a lot for your patience !!!! 

Regards
-Ashish 

Sunday, 30 March 2014


Shell script to get the list of services having more than one version deployed in a domain, along with their counts.

Hello to viewer,

This shell script will give you the list of services that have more than one version deployed in a domain, along with their count and the partition in which they are deployed.

Benefit: This is helpful for techies supporting non-prod environments where many versions of the same service are deployed in a domain. A large number of versions creates confusion and slows down the EM console.
Maintenance of a non-prod environment includes undeploying older versions of services and keeping only the one version that responds to client requests. Gathering the list of services with multiple versions manually is a time-consuming process; this script will surely save your time.

Here is the Flow :

Shell calls the python ---> python executes the wlst command.

WLST command: it lists all the composites that are deployed in a domain.

sca_listDeployedComposites('host','manageserver_port','user','password')

Change the values of host, manageserver_port, user, and password according to your environment.

Follow the below steps:

Step 1) : Create the directory structure as below:

cd /shared/fmw/build/script/versioncount

Step 2) : Under your current directory "versioncount " create serviceList.py file with content below:

import ConfigParser
def connectToServer():
        USERNAME = 'user'
        PASSWORD = 'password'
        URL='t3://host:adminport'
        #Connect to the Administration Server
        print 'starting the script ....'
        connect(USERNAME,PASSWORD,URL)

def disconnectFromServer():
    print "Disconnecting from the Admin Server"
    disconnect()
    print "Exiting from the Admin Server"
    exit()
    print "Mission Accomplished"

def listDeployedComposites():
    try:
        print 'Entry point...'
        sca_listDeployedComposites('host','manageserver_port','user','password')
    except :
        print 'Unable to list the deployed composites...'
        exit()

    print 'Command executed successfully'

###############     Main Script   #####################################
#Conditionally import wlstModule only when script is executed with jython
if __name__ == '__main__':
    from wlstModule import *#@UnusedWildImport
print('This will list all the composites deployed in the domain')
connectToServer()
listDeployedComposites()
disconnectFromServer()
####################################     


Step 3) : Under your current directory "versioncount" create serviceList.sh with the content below:


#!/bin/sh
# set up WL_HOME and ORACLE_HOME, the root directories of your WebLogic installation
WL_HOME="wl_home"
ORACLE_HOME="oracle_home"

umask 027

cd /shared/fmw/build/script/versioncount

# set up common environment
WLS_NOT_BRIEF_ENV=true
. "${WL_HOME}/server/bin/setWLSEnv.sh"

#CLASSPATH="${CLASSPATH}${CLASSPATHSEP}${FMWLAUNCH_CLASSPATH}${CLASSPATHSEP}${DERBY_CLASSPATH}${CLASSPATHSEP}${DERBY_TOOLS}${CLASSPATHSEP}${POINTBASE_CLASSPATH}${CLASSPATHSEP}${POINTBASE_TOOLS}"

if [ "${WLST_HOME}" != "" ] ; then
        WLST_PROPERTIES="-Dweblogic.wlstHome='${WLST_HOME}' ${WLST_PROPERTIES}"
        export WLST_PROPERTIES
fi

#echo
#echo CLASSPATH=${CLASSPATH}

JVM_ARGS="-Dprod.props.file='${WL_HOME}'/.product.properties ${WLST_PROPERTIES} ${JVM_D64} ${MEM_ARGS} ${CONFIG_JVM_ARGS}"

sh "${ORACLE_HOME}/common/bin/wlst.sh" serviceList.py > output.out

grep -E "partition" output.out > final.out

cat final.out | sed -e 's/mode.*//' > a.out

cat a.out | sed -e 's/^[0-9]*. //g' | sed -e's/[0-9]*[.]*//g' | sed -e 's/\[//g' | sed -e 's/\]//g' | sed -e 's/, /,/' > b.out

N=0
while read LINE ; do
  var[$N]=$(echo $LINE)
  #echo {$var[$N]}
  N=$((N+1))
done < b.out
for i in ${var[*]};
do
   COUNT=0
   for j in ${var[*]};
   do
     if [ "$i" = "$j" ]; then
     COUNT=$((COUNT+1))
     fi
   done
   if [ "$COUNT" -gt 1 ]; then
   echo $i $COUNT >> result.txt

   fi
done
sort result.txt | uniq  > finalresult.txt
rm result.txt    
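The nested loops above count duplicates in O(n²), which is fine for small lists; the same counting can be sketched in Python with collections.Counter (the input lines below are hypothetical, in the shape the sed pipeline leaves in b.out):

```python
from collections import Counter

# Hypothetical cleaned-up lines, as produced in b.out by the sed pipeline
lines = [
    "service1,partition=default",
    "service2,partition=default",
    "service1,partition=default",
]

counts = Counter(lines)
for name in sorted(counts):
    if counts[name] > 1:          # same threshold as COUNT -gt 1 above
        print(name, counts[name])
```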


Note: Change the values of wl_home and oracle_home according to your environment.

After this you just need to run the shell script (sh serviceList.sh) and you will get the output in the finalresult.txt file, in the format below:

service_name,partition=partition_name count


Thanks a lot for your patience !!!! 

Regards
-Ashish 



 
 

Monday, 10 February 2014

Shell script to get Email Notification for every new occurrence of java.lang.OutOfMemoryError in logs


This script will send an Email Notification whenever there is a new occurrence of java.lang.OutOfMemoryError in the logs, with the latest timestamp, which will help you in diagnosing the issue.

Benefit: This script is helpful in monitoring the environment and will let the administrator know about the issue ASAP, so that the required fix can be applied without unbearable delay. Along with sending the notification, it also takes a backup of the log file.

Below are steps you need to follow:

Step 1) Create the directory

cd /shared/fmw/build/myscript

Step 2) Create AutocheckOOM.sh file in the current directory with below content:

#!/bin/bash
##########
ENVNAME="Env:"
COUNTER="0"
WORKDIR="/shared/fmw/build/myscript"
msName="MngdSvr1"
Log_LOC="logfile_location"

LogFiles=( MngdSvr1.log )

# email notifications will be sent via mail
EMAIL="mail_id"
# CC list in the notification mail
CCList="ccmail_id"
# From email address in the notification Email
FromAdd="frommail_id"



#Functions

OOM() {
for logfile in ${LogFiles[@]} ;do
Count=`grep "java.lang.OutOfMemoryError" $Log_LOC/$logfile | wc -l`
COUNTER=$[$COUNTER + $Count]
export COUNTER
done
}


BackupLogs() {
for logfile in ${LogFiles[@]} ;do
if [ -f $Log_LOC/$logfile ]; then
tar -czf $Log_LOC/$logfile.tar.gz $Log_LOC/$logfile
fi
done
}

Main() {
OOM
#ProcCheck
if [[ "$COUNTER" != "0" ]] ; then
echo "`date` :Out Of Memory Condition Detected.."
echo "`date` :Backing up logs for future reference.."
BackupLogs
cat $Log_LOC/${LogFiles[0]} | grep java.lang.OutOfMemoryError | grep "####<" > temp.txt
output=$(tail -1 temp.txt | awk -F'>' '{print $1}' | awk -F'<' '{print $2}')
echo ''>> a.txt
test=$(cat a.txt | grep "$output")
if [ "$test" == "" ]; then
echo $output >> a.txt
#echo $output
# -r sets the From address, -c the CC list (mailx syntax)
echo "$ENVNAME OutOfMemory Error Detected at $output" | mail -s "$(echo -e "Auto-Msg: $ENVNAME : OutOfMemory Error\nContent-Type: text/html")" -r $FromAdd -c $CCList $EMAIL
fi
#CleanLogs

else
echo "`date` :Exiting, No Out of Memory found...";
exit 0
fi
}
Main


Note:
- change the location of the logs according to your environment
- change the email ids accordingly
- change the name of the managed server accordingly
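The a.txt bookkeeping in the script (notify only for timestamps not seen before) can be sketched in Python; the function name, file name, and timestamp format below are just placeholders:

```python
def should_notify(timestamp, seen_file):
    """Return True only the first time a given OOM timestamp is seen,
    recording it in seen_file so repeated cron runs stay silent."""
    try:
        with open(seen_file) as f:
            seen = set(line.strip() for line in f if line.strip())
    except IOError:
        seen = set()          # first run: no history file yet
    if timestamp in seen:
        return False
    with open(seen_file, "a") as f:
        f.write(timestamp + "\n")
    return True
```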

After this you just need to run the shell script or better way is to set this as a job in cron.

sh AutocheckOOM.sh

Thanks a lot for your patience!!!!

Regards
-Ashish




Sunday, 2 February 2014

SQL Scripts for Monitoring Transactions


Here are some useful scripts for monitoring the instances or transactions of particular composites deployed in a WebLogic environment.

Benefit: These scripts are helpful for techies working in production support; they help in analyzing the load coming to their environment for particular services, or for all the services deployed in the environment.

Below are the scripts:

Average, minimum, and maximum duration of components

SELECT DOMAIN_NAME,
COMPONENT_NAME,
DECODE(STATE,'5','COMPLETE','9','STALE','10','FAULTED') STATE,
TO_CHAR(MIN((TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),12,2))*60*60) +
(TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),15,2))*60) +
TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),18,4))),'999990.000') MIN,
TO_CHAR(MAX((TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),12,2))*60*60) +
(TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),15,2))*60) +
TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),18,4))),'999990.000') MAX,
TO_CHAR(AVG((TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),12,2))*60*60) +
(TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),15,2))*60) +
TO_NUMBER(SUBSTR(TO_CHAR(MODIFY_DATE-CREATION_DATE),18,4))),'999990.000') AVG,
COUNT(1) COUNT
FROM CUBE_INSTANCE
WHERE CREATION_DATE >= SYSDATE-1
--AND COMPONENT_NAME LIKE '%%'
AND COMPOSITE_NAME LIKE '%%'
GROUP BY DOMAIN_NAME, COMPONENT_NAME, STATE
ORDER BY COMPONENT_NAME, STATE


Note: Enter the name of the component or composite name accordingly.


Number of instances in every hour (load query)


SELECT inner_tab.hour_time, COUNT(*) no_of_incidents
FROM (SELECT TO_NUMBER(TO_CHAR(created_time, 'HH24')) hour_time
      FROM COMPOSITE_INSTANCE
      WHERE created_time BETWEEN TO_DATE('23-09-2013 19:00:00','DD-MM-YYYY HH24:MI:SS')
                             AND TO_DATE('24-09-2013 00:00:00','DD-MM-YYYY HH24:MI:SS')
     ) inner_tab
GROUP BY inner_tab.hour_time
ORDER BY inner_tab.hour_time


Note: change the date accordingly.
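The same hourly grouping the query performs can be sketched in Python (the timestamps below are made up):

```python
from collections import Counter
from datetime import datetime

# Hypothetical created_time values within the queried window
times = [
    "23-09-2013 19:05:11",
    "23-09-2013 19:40:02",
    "23-09-2013 21:10:45",
]

# Mirrors GROUP BY to_number(to_char(created_time, 'HH24'))
per_hour = Counter(datetime.strptime(t, "%d-%m-%Y %H:%M:%S").hour for t in times)
for hour in sorted(per_hour):
    print(hour, per_hour[hour])
```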


Running instances of any particular composite in last one hour


select compin.id, substr(compin.composite_DN, 0, instr(compin.composite_DN, '!')-1) Composite_name, compin.source_name, compin.conversation_id
, to_char(compin.created_time, 'MM/DD/YY-HH:MI:SS')
from composite_instance compin
where
compin.state = '0'
and compin.id not in (select cmpst_id from cube_instance cubein)
and compin.created_time > sysdate - 1/24
and substr(compin.composite_DN, 0, instr(compin.composite_DN, '!')-1) IN('composite_name1','composite_name2');


Note: change the name of the composites accordingly.


Instance processing times


SELECT create_cluster_node_id, cikey, conversation_id, parent_id, ecid, title, state, status, domain_name, composite_name, cmpst_id, TO_CHAR
(creation_date,'YYYY-MM-DD HH24:MI:SS') cdate, TO_CHAR(modify_date,'YYYY-MM-DD HH24:MI:SS') mdate,
extract (day from (modify_date - creation_date))*24*60*60 +
extract (hour from (modify_date - creation_date))*60*60 +
extract (minute from (modify_date - creation_date))*60 +
extract (second from (modify_date - creation_date))
FROM   cube_instance
WHERE  TO_CHAR(creation_date, 'YYYY-MM-DD HH24:MI') >= '2013-05-06 11:00'
AND    TO_CHAR(creation_date, 'YYYY-MM-DD HH24:MI') <= '2013-05-06 18:00'
ORDER BY cdate;


Note: change the date and time accordingly.
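The EXTRACT arithmetic above converts (modify_date - creation_date) into seconds; the same computation in Python, with hypothetical timestamps:

```python
from datetime import datetime

# Hypothetical creation_date and modify_date of one instance
created = datetime(2013, 5, 6, 11, 15, 0)
modified = datetime(2013, 5, 6, 11, 16, 30)

# day*86400 + hour*3600 + minute*60 + second, as in the EXTRACT expression
elapsed = (modified - created).total_seconds()
print(elapsed)  # 90.0
```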


Number of long-running (more than 7 days) instances for any particular composite.


select compin.id, substr(compin.composite_DN, 0, instr(compin.composite_DN, '!')-1) Composite_name, compin.source_name, compin.conversation_id
, to_char(compin.created_time, 'MM/DD/YY-HH:MI:SS')
from composite_instance compin
where
compin.state = '0'
and compin.id not in (select cmpst_id from cube_instance cubein)
and compin.created_time < sysdate - 7
and substr(compin.composite_DN, 0, instr(compin.composite_DN, '!')-1) IN('Partition/composite_name');


Note : change the partition and composite name accordingly.

Thanks a lot for your patience!!!!

Regards
-Ashish

Friday, 24 January 2014

Shell script to create Multiple Queues at a time in a Domain.


Hello to viewer,

Here is a script that will help in creating multiple queues at a time, which will surely save extra effort and time.

Benefit: It avoids creating distributed queues one by one from the admin console, and allows creation of multiple queues in one go just by executing a shell script.

You need to follow the below steps:

Note: Change the values of wlhome, username, password, and url according to your environment.

Step1) Create a directory :

cd /shared/fmw/build/myscript/newscripts/createQueue

Step 2) Create the createQueues.sh file under the current directory location with below content:

#!/bin/sh

# set up WL_HOME, the root directory of your WebLogic installation
WL_HOME="wlhome"

umask 027

# set up common environment
WLS_NOT_BRIEF_ENV=true
. "${WL_HOME}/server/bin/setWLSEnv.sh"

CLASSPATH="${CLASSPATH}${CLASSPATHSEP}${FMWLAUNCH_CLASSPATH}${CLASSPATHSEP}${DERBY_CLASSPATH}${CLASSPATHSEP}${DERBY_TOOLS}${CLASSPATHSEP}${POINTBASE_CLASSPATH}${CLASSPATHSEP}${POINTBASE_TOOLS}"

if [ "${WLST_HOME}" != "" ] ; then
        WLST_PROPERTIES="-Dweblogic.wlstHome='${WLST_HOME}' ${WLST_PROPERTIES}"
        export WLST_PROPERTIES
fi

echo
echo CLASSPATH=${CLASSPATH}

JVM_ARGS="-Dprod.props.file='${WL_HOME}'/.product.properties ${WLST_PROPERTIES} ${JVM_D64} ${MEM_ARGS} ${CONFIG_JVM_ARGS}"
eval '"${JAVA_HOME}/bin/java"' ${JVM_ARGS} weblogic.WLST createQueues.py


Step 3) Create the  createQueues.py file under the current directory location with below content:


import ConfigParser
def connectToServer():
    try:
        USERNAME = 'username'
        PASSWORD = 'password'
        URL='t3://adminhost:adminport'
        #Connect to the Administration Server
        print 'starting the script ....'
        connect(USERNAME,PASSWORD,URL)
    except:
        print 'Unable to find admin server...'
        exit()

def startEditSession():
    print "Starting the Edit Session"
    edit()
    startEdit()

def activateTheChanges():
    print "Saving and Activating the changes..."
    try:
        save()
        activate(block="true")
        print "script returns SUCCESS"
    except Exception, e:
        print e
        print "Error while trying to save and/or activate!!!"
        dumpStack()
        raise

def disconnectFromServer():
    print "Disconnecting from the Admin Server"
    disconnect()
    print "Exiting from the Admin Server"
    exit()
    print "Mission Accomplished"

def createQueue(queueName,queueJNDIName,systemModuleName,subDeploymentName,targetServerCluster,targetName):
    print 'Creating Queue ', queueName
    cd('/JMSSystemResources/'+systemModuleName+'/JMSResource/'+systemModuleName)
    cmo.createUniformDistributedQueue(queueName)
    cd('/JMSSystemResources/'+systemModuleName+'/JMSResource/'+systemModuleName+'/UniformDistributedQueues/'+queueName)
    cmo.setJNDIName(queueJNDIName)
    cd('/SystemResources/'+systemModuleName+'/SubDeployments/'+subDeploymentName)
    #set('Targets',jarray.array([ObjectName('com.bea:Name=Test2JMSCluster,Type=Cluster')], ObjectName))
    if targetServerCluster in ('C','c') :
        clstrNam=targetName
        set('Targets',jarray.array([ObjectName('com.bea:Name='+clstrNam+',Type=Cluster')], ObjectName))
    else:
        servr=targetName
        set('Targets',jarray.array([ObjectName('com.bea:Name='+servr+',Type=JMSServer')], ObjectName))
    cd('/JMSSystemResources/'+systemModuleName+'/JMSResource/'+systemModuleName+'/UniformDistributedQueues/'+queueName)
    cmo.setSubDeploymentName(subDeploymentName)
    cmo.unSet('Template')
    cmo.setForwardDelay(5)
    print 'Saving the changes'
    try:
        save()
    except weblogic.management.mbeanservers.edit.ValidationException, err:
        print 'Could not save ', err
        dumpStack()
        raise

def readConfigurationFile():
    try:
        parser = ConfigParser.ConfigParser()
        parser.read('/shared/fmw/build/myscript/newscripts/createQueue/propFile/jmsQueues.ini')
    except ConfigParser.ParsingError, err:
        print 'Could not parse:', err
    for section_name in parser.sections():
        print 'Section:', section_name
        print '  Options:', parser.options(section_name)
        createQueue(parser.get(section_name, 'queueName'),
                    parser.get(section_name, 'queueJNDIName'),
                    parser.get(section_name, 'systemModuleName'),
                    parser.get(section_name, 'subDeploymentName'),
                    parser.get(section_name, 'targetServerCluster'),
                    parser.get(section_name, 'targetName'))

    print 'Queue created successfully'
#Read Configuration File ends here

###############     Main Script   #####################################
#Conditionally import wlstModule only when script is executed with jython
if __name__ == '__main__':
    from wlstModule import *#@UnusedWildImport
print('This will enable you to create distributed JMS Queues')
connectToServer()
startEditSession()
readConfigurationFile()
#createQueue()
activateTheChanges()
disconnectFromServer()

####################################


Step 4) Create a folder propFile (cd propFile) and create a file jmsQueues.ini under it with the below content. It contains the details of the queues that you want to create in one go. The example below is for two queues; for more queues, add sections accordingly.

[TestQ]
queueName = TestQ1
queueJNDIName = jms/Q/TestQ
systemModuleName = SOAJMSModule
subDeploymentName = SOAJMSServer1984410823
targetServerCluster = x
targetName = SOAJMSServer_auto_1
[TestQueue]
queueName = TestQueue2
queueJNDIName = jms/Q/TestQueue
systemModuleName = SOAJMSModule
subDeploymentName = SOAJMSServer1984410823
targetServerCluster = x
targetName = SOAJMSServer_auto_1
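Before running the WLST script you can sanity-check that the .ini parses as expected. A plain Python 3 sketch (the WLST script itself uses the older ConfigParser module) with one section inlined:

```python
import configparser

# One section copied from jmsQueues.ini
INI = """\
[TestQ]
queueName = TestQ1
queueJNDIName = jms/Q/TestQ
systemModuleName = SOAJMSModule
subDeploymentName = SOAJMSServer1984410823
targetServerCluster = x
targetName = SOAJMSServer_auto_1
"""

parser = configparser.ConfigParser()
parser.read_string(INI)
for section in parser.sections():
    print(section, "->", parser.get(section, "queueName"),
          parser.get(section, "queueJNDIName"))
```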


After this you just need to execute the shell script and then verify the queues from the admin console.
sh createQueues.sh

Thanks a lot for your patience!!!

Regards
-Ashish

Sunday, 19 January 2014

Script for Running instances Email Notification for Composite deployed in any domain


Hello to viewer,

This script sends an Email Notification whenever there is a Running instance of any composite.
It monitors the instances of the last one hour and, if a running instance is found for any composite deployed in the domain, it simply sends an alert in the form of an Email Notification.

Benefit: It is helpful in monitoring critical transactions, especially in a production environment, so that any transaction failure can be recovered quickly thanks to the early notification.

You need to follow the below steps:

Step 1) Create the below directory structure.

cd /shared/fmw/myscript/monitor

Step 2) Create a file InstanceMonitor.sh under current directory with below content.

#!/bin/sh
export ORACLE_HOME=oracle_home


COMPMON_HOME='/shared/fmw/myscript/monitor'
cd $COMPMON_HOME

MAILTO=xxxxxx@xxxxxx.com
MAILCC=xxxxxx@xxxxxx.com
DBUSER=database_user
PASSWORD=password
HOST=hostname
PORT=port
SERVICE_NAME=service_name
INFO=OFF
ENV=ENV_NAME
MAILFROM=xxxxxx@xxxxx.com
START=ON
OUTPUT=/shared/fmw/myscript/monitor/output.html

if [ -f $COMPMON_HOME/sqloutput.txt ]; then
  rm -f $COMPMON_HOME/sqloutput.txt
fi

if [ -f $COMPMON_HOME/.tmp ]; then
  rm -f $COMPMON_HOME/.tmp
fi

########### For MY Domain ########

#if [ ! -f /shared/fmw/myscript/monitor/donotmail ]; then

        echo " " >$OUTPUT
        (
        echo "<br>"
        echo "<H2>LIST OF RUNNING INSTANCES for Composite in Mydomain</H2>"
        #echo "<br>"
        echo "<table border = 1 cellSpacing= 1 cellPadding=1 >"
        echo "<th>"
        echo "<tr bgcolor='#CFCFFF' ><FONT face=Tahoma color='blue'> "
        echo "<td colspan='1' align='center' font-color='blue'><b>INSTANCE ID</b></td>"
        echo "<td colspan='1' align='center' font-color='blue'><b>COMPOSITE NAME</b></td>"
        echo "<td colspan='1' align='center' font-color='blue'><b>SOURCE NAME</b></td>"
        echo "<td colspan='1' align='center' font-color='blue'><b>CONVERSATION ID</b></td>"
        echo "<td colspan='1' align='center' font-color='blue'><b>CREATED TIME</b></td>"
        echo "</font>"
        echo "</tr>"
        echo "</th>"
        echo "<tr>"
        )>>$OUTPUT

        RETURN=`sqlplus -S 'database_user/password@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=hostname)(PORT=port)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=servicename)))' < query.sql`

        len=`expr length "$RETURN"`
        if [ `echo $len` -gt 0 ]; then
        sed '/^$/d' < $COMPMON_HOME/sqloutput.txt >$COMPMON_HOME/.tmp
        i=1
        while read record; do
                (echo "<td>"
                echo $record
                echo "</td>" )>>$OUTPUT
                if [ $i -ge 5 ]; then
                                echo "</tr>" >>$OUTPUT
                                echo "<tr>" >>$OUTPUT
                                i=1
                else
                                i=`expr $i + 1`
                fi
        done < $COMPMON_HOME/.tmp
        echo "</tr>" >>$OUTPUT
        echo "</table>" >>$OUTPUT
        echo "<hr>" >>$OUTPUT


(
                        echo "From: $MAILFROM"; \
                        echo "To: $MAILTO"; \
                        echo "Cc: $MAILCC"; \
                        echo "Content-Type: text/html";\
                        echo "Subject:Running Instances Monitoring"; \
                        echo ""; \
                        cat $OUTPUT; \
                ) | /usr/lib/sendmail -t
        fi
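The grouping performed by the while/read loop above can be exercised in isolation. The sketch below (sample values are made up) groups every five spooled lines, one per selected column, into the cells of one table row:

```shell
#!/bin/sh
# Self-contained sketch of the row-grouping step: every 5 spooled values
# (the 5 selected columns) become the <td> cells of one <tr>.
TMP=$(mktemp)
OUT=$(mktemp)
# hypothetical sample spool content: two rows of five values each
printf '%s\n' id1 comp1 src1 conv1 t1 id2 comp2 src2 conv2 t2 > "$TMP"
echo "<tr>" > "$OUT"
i=1
while read record; do
  printf '<td>%s</td>\n' "$record" >> "$OUT"
  if [ $i -ge 5 ]; then
    echo "</tr>" >> "$OUT"
    echo "<tr>" >> "$OUT"
    i=1
  else
    i=`expr $i + 1`
  fi
done < "$TMP"
echo "</tr>" >> "$OUT"
CELLS=$(grep -c '<td>' "$OUT")
ROWS=$(grep -c '^<tr>$' "$OUT")
rm -f "$TMP" "$OUT"
```

Running it yields ten cells grouped into complete rows, which is handy for checking the HTML before pointing the script at a real spool file.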



Note: set the values of the variables below according to your environment:

oracle_home

TNS_ENTRY: 

DBUSER=database_user
PASSWORD=password
HOST=hostname
PORT=port
SERVICE_NAME=service_name


Step 3) Create the query.sql file containing the SQL script that fetches the running-instance details of the composites deployed in the domain.

SET ECHO OFF
WHENEVER SQLERROR EXIT -1 ROLLBACK
SET TRIMOUT ON
SET HEAD OFF
SET feedback OFF
SET serveroutput OFF

spool sqloutput.txt
select mycom.id, substr(mycom.composite_DN, 0, instr(mycom.composite_DN, '!')-1) Composite_name, mycom.source_name, mycom.conversation_id
, to_char(mycom.created_time, 'MM/DD/YY-HH:MI:SS')
from composite_instance mycom
where
mycom.state = '0'
and mycom.id not in (select cmpst_id from cube_instance mycube)
and mycom.created_time > sysdate - 1/24
and substr(mycom.composite_DN, 0, instr(mycom.composite_DN, '!')-1) IN('composite_name separated by comma for which you want to monitor running instances');
spool off

quit
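The substr/instr pair in the query strips everything from the first '!' in composite_DN onwards, leaving the composite name. The same extraction can be sanity-checked in shell; the DN shape below is an illustrative assumption, not taken from a real instance table:

```shell
#!/bin/sh
# composite_DN is assumed to look roughly like "partition/CompositeName!revision";
# the query keeps only the part before the first '!'.
dn="default/MyComposite!1.0.0"   # hypothetical sample DN
name=${dn%%!*}                   # strip from the first '!' onwards
echo "$name"
```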


After this, execute the shell script manually, or schedule it as a cron job to run every 30 minutes.
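For the cron option, an entry along these lines runs the script every 30 minutes (the script path and name are placeholders for your own):

```shell
# add via: crontab -e   (paths and script name are placeholders)
*/30 * * * * /bin/sh /shared/fmw/myscript/monitor/compmon.sh >/dev/null 2>&1
```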

Thanks a lot for your patience!!!

Regards
-Ashish

Thursday, 16 January 2014

Shell script to Automate the process of Retiring and Activation of Multiple composites in different domain at one time.

blogger

Hello Viewers,

Here is a shell script to automate the process of retiring and activating multiple composites in different domains at one time.

Benefit: Sometimes we need to stop the inflow of data during a release in order to avoid data failures or data loss. At such times we can use this script to retire multiple composites, across different domains, that receive data from external sources.
This is one scenario; there can be several other reasons.

You just need to follow the below steps :

Step 1) Create the directory structure as below:

cd /shared/fmw/build/Myscript/retservice

Step 2) Create the Myservicelist.txt file under the current directory "retservice".

This file lists the composites to retire, one per line, in the format below. It can contain entries from different domains.

Format:

composite_name,version,partition_name,domain_name

Example:
               Mycomposite,1.0.0,Mypartition,MyDomain
               Mycomposite1,1.0.0,Mypartition1,MyDomain1
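Each line of Myservicelist.txt is later split on commas with awk; a quick standalone check of that parsing, using the placeholder values above:

```shell
#!/bin/sh
# Split one servicelist line into its four fields, exactly as the
# main script does with awk -F, on each record.
LINE="Mycomposite,1.0.0,Mypartition,MyDomain"
servicename=$(echo "$LINE" | awk -F, '{ print $1 }')
version=$(echo "$LINE" | awk -F, '{ print $2 }')
partition=$(echo "$LINE" | awk -F, '{ print $3 }')
domain=$(echo "$LINE" | awk -F, '{ print $4 }')
echo "$servicename $version $partition $domain"
```

Note that a stray trailing character (such as a period after the domain name) would end up inside the fourth field, so keep the lines clean.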

Step 3) Create "MyEnv.txt" in current directory file that contains details of your Environment

# My_MyDomain deployment server weblogic
 My_MyDomain_serverURL=http://host:managed_server_port
 My_MyDomain_user=user
 My_MyDomain_password=password
 My_MyDomain_host=host
 My_MyDomain_port=managed_server_port
 My_MyDomain_adminhost=admin_host
 My_MyDomain_adminport=admin_port


# My_MyDomain1 deployment server weblogic
 My_MyDomain1_serverURL=http://host:managed_server_port
 My_MyDomain1_user=user
 My_MyDomain1_password=password
 My_MyDomain1_host=host
 My_MyDomain1_port=managed_server_port
 My_MyDomain1_adminhost=admin_host
 My_MyDomain1_adminport=admin_port
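The main shell script looks values up from this file by grepping for the `<prefix>_<domain>_<key>` line and taking the text after `=`. That lookup can be tried in isolation (sample values below are placeholders):

```shell
#!/bin/sh
# Reproduce the grep | awk lookup used by the main script on an
# inlined sample Env file.
ENVFILE=$(mktemp)
cat > "$ENVFILE" <<'EOF'
My_MyDomain_user=weblogic
My_MyDomain_adminhost=admin_host
EOF
domain=MyDomain
# key pattern is <prefix>_<domain>_<key>; value is everything after '='
user=$(grep 'My_'$domain'_user' "$ENVFILE" | awk -F= '{ print $2 }')
echo "$user"
rm -f "$ENVFILE"
```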


Step 3) Create "retireservice.py" in the current directory that executes the wlst command.


import sys
sca_retireComposite(sys.argv[1],sys.argv[2],sys.argv[3],sys.argv[4],sys.argv[5],sys.argv[6],partition=sys.argv[7])



Step 4) Create "retireComposite.sh" in the current directory that calls the python script.


#!/bin/sh
# set up WL_HOME, the root directory of your WebLogic installation
WL_HOME="wl_home"

umask 027

# set up common environment
WLS_NOT_BRIEF_ENV=true
. "${WL_HOME}/server/bin/setWLSEnv.sh"

CLASSPATH="${CLASSPATH}${CLASSPATHSEP}${FMWLAUNCH_CLASSPATH}${CLASSPATHSEP}${DERBY_CLASSPATH}${CLASSPATHSEP}${DERBY_TOOLS}${CLASSPATHSEP}${POINTBASE_CLASSPAT
H}${CLASSPATHSEP}${POINTBASE_TOOLS}"

if [ "${WLST_HOME}" != "" ] ; then
        WLST_PROPERTIES="-Dweblogic.wlstHome='${WLST_HOME}' ${WLST_PROPERTIES}"
        export WLST_PROPERTIES
fi

#echo
#echo CLASSPATH=${CLASSPATH}

JVM_ARGS="-Dprod.props.file='${WL_HOME}'/.product.properties ${WLST_PROPERTIES} ${JVM_D64} ${MEM_ARGS} ${CONFIG_JVM_ARGS}"
    while read LINE; do
      domain=$(echo "$LINE" | awk -F, '{ print $4 }');
      servicename=$(echo "$LINE" | awk -F, '{ print $1 }');
      version=$(echo "$LINE" | awk -F, '{ print $2 }');
      partition=$(echo "$LINE" | awk -F, '{ print $3 }');
          while read LINE1; do
          temp=$(echo "$LINE1" | grep $2'_'$domain'_adminhost'| awk -F= '{ print $2 }');
          if [ "$temp" != "" ]; then
          targethost=$temp;
          fi
          temp=$(echo "$LINE1" | grep $2'_'$domain'_port'| awk -F= '{ print $2 }');
          if [ "$temp" != "" ]; then
          targetport=$temp;
          fi
          temp=$(echo "$LINE1" | grep $2'_'$domain'_user'| awk -F= '{ print $2 }');
          if [ "$temp" != "" ]; then
          user=$temp;
          fi
          temp=$(echo "$LINE1" | grep $2'_'$domain'_password'| awk -F= '{ print $2 }');
          if [ "$temp" != "" ]; then
          password=$temp;
          fi
          done < $2'Env'.txt
    ORACLE_HOME/common/bin/wlst.sh retireservice.py ${targethost} ${targetport} ${user} ${password} $servicename $version $partition > output.txt
    done < $1'servicelist'.txt

                                
Note: Change the values of wl_home and ORACLE_HOME according to your environment.
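A small pre-flight guard (hypothetical; not part of the original script) fails fast when wlst.sh is not where ORACLE_HOME says it should be:

```shell
#!/bin/sh
# Hypothetical guard: verify wlst.sh exists and is executable under the
# given ORACLE_HOME before looping over the service list.
check_wlst() {
  [ -x "$1/common/bin/wlst.sh" ] && echo ok || echo missing
}
# demo against a scratch directory standing in for the real ORACLE_HOME
OH=$(mktemp -d)
mkdir -p "$OH/common/bin"
touch "$OH/common/bin/wlst.sh"
chmod +x "$OH/common/bin/wlst.sh"
result=$(check_wlst "$OH")
echo "$result"
rm -rf "$OH"
```

In the real script you would call check_wlst "$ORACLE_HOME" once before the outer while loop and exit on failure.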

How to execute this:

It requires two parameters: the prefix of the servicelist file and the prefix of the Env file.

sh retireComposite.sh My My

(The first "My" makes $1'servicelist'.txt resolve to Myservicelist.txt; the second makes $2'Env'.txt resolve to MyEnv.txt.)

After this, just refresh the Farm in Enterprise Manager and verify the composites.

For activation: you just need to change the WLST command in retireservice.py, though it is better to give the scripts meaningful names: replace retireservice with activateservice, retireComposite with activateComposite, and retservice with actservice.

import sys
sca_activateComposite(sys.argv[1],sys.argv[2],sys.argv[3],sys.argv[4],sys.argv[5],sys.argv[6],partition=sys.argv[7])
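The rename itself can be scripted; a sketch (using a scratch directory for illustration) that derives activateservice.py from retireservice.py with sed:

```shell
#!/bin/sh
# Sketch: generate the activate variant of the WLST script by renaming
# the command with sed; run from a copy of the script directory.
TMP=$(mktemp -d)
cat > "$TMP/retireservice.py" <<'EOF'
import sys
sca_retireComposite(sys.argv[1],sys.argv[2],sys.argv[3],sys.argv[4],sys.argv[5],sys.argv[6],partition=sys.argv[7])
EOF
sed 's/sca_retireComposite/sca_activateComposite/' "$TMP/retireservice.py" > "$TMP/activateservice.py"
hits=$(grep -c 'sca_activateComposite' "$TMP/activateservice.py")
rm -rf "$TMP"
```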

Thanks a lot for your patience!!!

Regards
-Ashish