Monday, December 24, 2012

Message Reliability in OSB

Message reliability is one of the features customers look for, and OSB provides options to preserve messages. Consider a typical scenario in which a proxy service reads a message from a source queue and sends it to a target queue. Once the proxy service picks up the message, it tries to post it to the target queue. If for any reason the target queue is not accessible, or any failure happens during posting, there is a chance of losing the original message if proper steps are not taken.

To ensure message reliability, OSB provides transactional support when configuring the proxy service.

As the first step, create a source queue (TestQueue) and a corresponding error queue (TestErrorQueue) for it.


Set the delivery failure option for the source queue - TestQueue

Create a proxy service to consume messages from TestQueue and enable the transaction settings as follows.

Create a business service to post the message to the target queue. (To generate an error, I have configured a fictitious server and queue.)

Create a message flow to route the message to business service


Now set the Redelivery Limit to 2 and post a message to TestQueue.



Now on the OSB console we can see the message being retried 2 times.








Each time the business service tries to post the message to the target queue on the remote server, it fails and the transaction rolls the message back to the source queue.


After the retry limit is reached, the message is removed from TestQueue and moved to TestErrorQueue.





We can see the message content in TestErrorQueue. So in any case the original message is not lost; it is preserved for further action, which can be automated or manual.






Friday, December 21, 2012

JMS Message Retry in BPEL

Reliability is one of the key features in an integration scenario. Customers never want to hear that an application failed to process a message, and it is even worse when the application loses the message.

Consider a typical scenario where a client posts a message to a queue and a BPEL process consumes it. If any failure happens in the BPEL process, the message should be rolled back to the original JMS queue and retried after some delay. If the retry limit is reached, the message should be moved to the error queue. Later, the failed messages can be retrieved from the error queue and reprocessed. Ultimately this helps preserve the customer's data and demonstrates the application's reliability.

Let's create a queue, an error queue and a connection factory as shown below.


For MyDistributedQueue, configure the error queue as follows.


Go to Deployments -> JmsAdapter -> Configuration -> Outbound Connection Pools.

Create a new connection pool, specifying the connection factory created in the first step.


Set the acknowledgement mode to CLIENT_ACKNOWLEDGE.
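For reference, the relevant outbound connection pool properties look roughly like the following (the JNDI name is a placeholder; only the acknowledgement mode is the point here):

ConnectionFactoryLocation = jms/MyConnectionFactory
AcknowledgeMode           = CLIENT_ACKNOWLEDGE
IsTransacted              = false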

 

Create a BPEL process with a JMS adapter to consume messages from the queue.




It is also important to set oneWayDeliveryPolicy to sync in the composite.xml.
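The setting typically sits on the BPEL component definition in composite.xml; a sketch (the component name here is a placeholder):

<component name="JMSRetryProcess">
  <implementation.bpel src="JMSRetryProcess.bpel"/>
  <!-- sync delivery: a fault in the BPEL instance rolls the
       JMS message back to the source queue -->
  <property name="bpel.config.oneWayDeliveryPolicy" type="xs:string"
            many="false">sync</property>
</component>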


Now, in the BPEL process, add a throw activity that throws a rollback/system exception.
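In BPEL source, such a throw can look like this (the activity name is arbitrary; bpelx is Oracle's BPEL extension namespace):

<throw name="ThrowRollback" faultName="bpelx:rollback"
       xmlns:bpelx="http://schemas.oracle.com/bpel/extension"/>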





Deploy the process and post a message in the MyDistributedQueue

Set the ReDeliveryLimit to 5 while posting the message






Now watch the EM console; we can see 6 instances of the process being created, since the redelivery limit was set to 5 (the original delivery plus 5 retries).



After 5 retries the message will be redirected to the ErrorQueue.

Now if we watch MyDistributedQueue, the message won't be available there.


Now check the error queue; we can see the message there.


Select the queue and go to Show Messages; we can see the original message.


Thursday, December 20, 2012

Sending message to Distributed Queue from OSB in a clustered environment

Sending a message from an OSB business service to a JMS queue is very straightforward, and it works fine in any single-server environment. But when the code is moved to production, which is clustered, there are a few points to consider.

The normal approach to set the EndPoint URI in OSB business service is using the below format
jms://hostname:port/connectionfactoryJNDI/resourceJNDI.

But in a clustered environment there will be multiple nodes. So to which node will OSB send the message? To which node is the queue associated? What happens if that node is down?

To address such challenges, WebLogic provides distributed destinations.
A distributed destination is a set of destinations (queues or topics) that are accessible as a single, logical destination to a client.

Applications that use distributed destinations are more highly available than applications that use simple destinations, because WebLogic JMS provides load balancing and failover for member destinations of a distributed destination within a cluster. Once properly configured, your producers and consumers are able to send and receive messages through the distributed destination. WebLogic JMS then balances the messaging load across all available members of the distributed destination. When one member becomes unavailable due to a server failure, traffic is redirected toward the other available destination members in the set.

Creating a distributed queue is the same as creating a normal queue, except that it should be targeted at all nodes of the cluster.

If we monitor the distributed queue, it will look as shown below.


So now a distributed destination is set up; how do we send a message to it?

From the OSB business service, for the Endpoint URI provide all the nodes in the cluster as comma-separated values.


This ensures the message is posted to the first node; if that fails, it tries the second node, and so on. Complete failover can be achieved using this approach in a clustered environment.
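For example, for a two-node cluster the endpoint URI might look like this (host names, ports and the queue JNDI name are placeholders):

jms://node1.example.com:7003,node2.example.com:7003/weblogic.jms.XAConnectionFactory/jms.MyDistributedQueue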

TF_GENERAL_TRANSLATION_FAILURE. Please correct the NXSD schema.

I had a business requirement to move an FTP polling job to OSB instead of using the FTP adapter directly from BPEL. In the earlier implementation, the FTP adapter would get the DAT file from the remote location, translate the file content based on the native schema defined, and then initiate the RequesterABCS component.

As part of the new requirement, I used the OSB FTP transport to get the file from the FTP server, and then OSB posts the file content to a JMS queue.

Then a JMS adapter consumes the message from the queue and initiates the same requester component.

The new approach worked fine for some payloads, but for others it threw "TF_GENERAL_TRANSLATION_FAILURE. Please correct the NXSD schema." The same payloads worked fine with the initial approach.

I compared the DAT file content with the message posted to the queue by OSB. The message size in the queue was slightly larger than the original DAT file. This led me to think that the OSB file transfer might be adding characters such as carriage returns or line feeds, which might be the cause of the increased message size.

In the OSB proxy service the transfer mode was ASCII; I changed it to Binary. On retesting, the original file size and the message size matched, and the JMS adapter started processing the messages without any errors.

Thursday, December 13, 2012

XSL transformation remains in the pending state

Sometimes in BPEL an XSL transformation remains in the "pending" state and the logs show errors like "javax.xml.transform.TransformerException: XML-22900: (Fatal Error) An internal error condition occurred".

It is often difficult to debug this problem. In such situations, verify that every namespace associated with an element in the XSL mapping is declared at the beginning of the XSL file.

If a namespace is used without being declared, the transformation will show as pending in BPEL.
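For example, if the mapping refers to elements in a custom namespace, that prefix must be declared on the stylesheet root; a sketch (the ord namespace URI and element names are placeholders):

<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:ord="http://example.com/order">
  <xsl:template match="/">
    <ord:Order>
      <ord:OrderId>
        <xsl:value-of select="/ord:OrderRequest/ord:Id"/>
      </ord:OrderId>
    </ord:Order>
  </xsl:template>
</xsl:stylesheet>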

Wednesday, December 5, 2012

OSB Scripted modification of customization file

While migrating OSB artifacts from one environment to another, we use a customization file to apply environment-specific changes. But changing the customization file for each environment still demands manual work.

This manual change is error-prone and might produce unexpected results.

Here is an alternative. Instead of changing the customization file manually, the following ANT script does the job by reading the target-environment-specific values from a properties file.

Step 1: Create a base folder called automation and a sub-folder called source under it.

Step 2: In the customization file, identify the properties that might change from environment to environment and tokenize them in braces { }.

For example, the excerpts below are from a customization file in which a proxy service picks a file from an FTP server and a business service posts that file's content to a JMS queue.

Proxy Service
=============

   <cus:envValueAssignments>
      <xt:envValueType>Service URI</xt:envValueType>
      <xt:location xsi:nil="true"/>
      <xt:owner>
        <xt:type>ProxyService</xt:type>
        <xt:path>InitiateOrderProcessing/ProxyServices/OrderProcessingInitiateFTP_PS</xt:path>
      </xt:owner>
      <xt:value xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema">ftp://{FTPHOST}:{FTPPORT}/import</xt:value>
    </cus:envValueAssignments>

Business Service
================


 <tran:tableElement xmlns:tran="http://www.bea.com/wli/sb/transports">
          <tran:URI>jms://{JMSHOST}:{JMSPORT}/weblogic.jms.XAConnectionFactory/Jms.nOrderProcessing</tran:URI>
          <tran:weight>1</tran:weight>
        </tran:tableElement>



Step 3: Save the customization file as a .seed file and keep it in the source folder.

Step 4: Create target-specific properties files and save them in the automation folder.

==========build.dev.properties===================

JMSHOST=localhost
JMSPORT=8001
FTPHOST=testftp
FTPPORT=24
ARCHIVE_DIR=/archive/test
ERROR_DIR=/error/test
DOWNLOAD_DIR=/download/test
=============end of build.dev.properties==============

==============build.qa.properties===================

JMSHOST=myhost
JMSPORT=8011
FTPHOST=mytftp
FTPPORT=21
ARCHIVE_DIR=/archive/testmy
ERROR_DIR=/error/testmy
DOWNLOAD_DIR=/download/testmy
=============end of build.qa.properties================

Step 5: Create and store the following build.xml file in the base directory (automation).
================build.xml============================

<project name="SOA_OSB_Deployment" basedir="." default="init" xmlns:ac="antlib:net.sf.antcontrib">
  <input message="Enter the environment to deploy:" addproperty="env"/>
  <property file="${basedir}/build.${env}.properties"/>
  <property name="sa.source.dir" value="${basedir}/source"/>
  <property name="sa.target.dir" value="${basedir}/target"/>

  <taskdef resource="net/sf/antcontrib/antlib.xml">
    <classpath>
      <pathelement location="/automation/ant-contrib-1.0b3.jar"/>
    </classpath>
  </taskdef>

  <target name="init">
    <delete dir="${sa.target.dir}" verbose="${OSBVerbose}" failonerror="false" includeemptydirs="true"/>
    <mkdir dir="${sa.target.dir}"/>
    <antcall target="updateCustomizationFile" inheritAll="Yes"/>
  </target>

  <target name="replaceTokens">
    <!-- replace {TOKEN} markers, writing the result to <param1>.new -->
    <echo message="Replace tokens in ${param1}"/>
    <echo message="Properties file is ${param2}"/>
    <copy file="${param1}" tofile="${param1}.new" overwrite="true">
      <filterchain>
        <replaceregex pattern="\$\{" replace="{"/>
        <filterreader classname="org.apache.tools.ant.filters.ReplaceTokens">
          <param type="propertiesfile" value="${param2}"/>
          <param type="tokenchar" name="begintoken" value="{"/>
          <param type="tokenchar" name="endtoken" value="}"/>
        </filterreader>
      </filterchain>
    </copy>
  </target>

  <target name="updateCustomizationFile" description="Updates the property values for the target environment">
    <tstamp prefix="Start updateCustomizationFile"/>
    <echo message="Parse customization seed files"/>
    <for param="seed.file">
      <path>
        <fileset dir="${sa.source.dir}" includes="**/*.seed"/>
      </path>
      <sequential>
        <echo message="Processing seed file: @{seed.file}"/>
        <antcall target="replaceTokens" inheritAll="No">
          <param name="param1" value="@{seed.file}"/>
          <param name="param2" value="build.${env}.properties"/>
        </antcall>
        <copy file="@{seed.file}.new" tofile="customization.xml"/>
        <copy file="customization.xml" todir="${sa.target.dir}"/>
        <delete file="@{seed.file}.new" quiet="true"/>
      </sequential>
    </for>
    <tstamp prefix="finished updateCustomizationFile"/>
  </target>
</project>
================ end of build.xml=========================




Step 6: Open a command prompt and go to the base folder, automation. Execute the run command:
automation> run

It will prompt you for the target environment name.

Step 7: Based on the environment name given, the script reads the properties from the respective properties file and updates the customization file.

Result: The modified customization file is stored in the target folder. Now just apply this customization file from sbconsole, or using a script as mentioned in the previous listing.




Monday, December 3, 2012

Updating Service Account Values in OSB using script

The OSB customization file does not support updating Service Account values.

If we move from one environment to another, the username and password may change.

For example, during QA testing the FTP server may be a sample server, while in PROD we might target another FTP server with a totally different set of credentials.

It is a hard job to make this change via sb-console after deployment. Instead, we can change these values for the target environment before deployment.

Step 1: In the Service Account configuration, instead of hard-coding the username and password, provide tokens as below.
================ FTPUser.ServiceAccount ===================

<?xml version="1.0" encoding="UTF-8"?>
<ser:service-account xsi:type="ser:StaticServiceAccount" xmlns:ser="http://www.bea.com/wli/sb/services" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:con="http://www.bea.com/wli/sb/resources/config">
    <ser:description/>
    <ser:static-account>
        <con:username>{ServiceAccount.FTPUsername.value}</con:username>
        <con:password>{ServiceAccount.FTPPassword.value}</con:password>
    </ser:static-account>
</ser:service-account>


================ end of FTPUser.ServiceAccount====================

Step 2: Create a build.{env}.properties file as follows.

Note: {env} can be any target environment (dev/qa/prod).

=============== build.dev.properties=================


ServiceAccount.FTPUsername.value=developer
ServiceAccount.FTPPassword.value=exotic


==========end of  build.dev.properties===========


Step 3 : Create the build.xml as follows

================build.xml========================


<project name="SOA_OSB_Deployment" basedir="." default="updateCredentials" xmlns:ac="antlib:net.sf.antcontrib">

  <taskdef resource="net/sf/antcontrib/antlib.xml">
    <classpath>
      <pathelement location="/automation/ant-contrib-1.0b3.jar"/>
    </classpath>
  </taskdef>

  <target name="replaceTokens">
    <!-- replace ${} tokens into <param1>.new file -->
    <echo message="Replace Tokens in ${param1}"/>
    <echo message="properties file in ${param2}"/>
    <copy file="${param1}" tofile="${param1}.new" overwrite="true">
      <filterchain>
        <replaceregex pattern="\$\{" replace="{"/>
        <filterreader classname="org.apache.tools.ant.filters.ReplaceTokens">
          <param type="propertiesfile" value="${param2}"/>
          <param type="tokenchar" name="begintoken" value="{"/>
          <param type="tokenchar" name="endtoken" value="}"/>
        </filterreader>
      </filterchain>
    </copy>
  </target>

  <target name="updateCredentials" description="Replace ServiceAccounts with correct passwords">
    <tstamp prefix="Start updateCredentials"/>
    <input message="Please enter the environment to deploy:" addproperty="env"/>
    <property file="${basedir}/build.${env}.properties"/>
    <property name="sa.source.dir" value="${basedir}/source"/>
    <property name="sa.target.dir" value="${basedir}/target"/>
    <delete dir="${sa.target.dir}" verbose="${OSBVerbose}" failonerror="false" includeemptydirs="true"/>
    <mkdir dir="${sa.target.dir}"/>
    <unzip src="${sa.source.dir}/sbconfig.jar" dest="${sa.target.dir}"/>

    <echo message="Parse service accounts"/>
    <for param="sa.file">
      <path>
        <fileset dir="${sa.target.dir}" includes="**/*.ServiceAccount"/>
      </path>
      <sequential>
        <echo message="Parse service accounts file: @{sa.file}"/>
        <antcall target="replaceTokens" inheritAll="No">
          <param name="param1" value="@{sa.file}"/>
          <param name="param2" value="build.${env}.properties"/>
        </antcall>
        <copy file="@{sa.file}.new" tofile="@{sa.file}"/>
        <delete file="@{sa.file}.new" quiet="true"/>
        <zip destfile="target/sbconfig.jar" basedir="${sa.target.dir}" update="false"/>
      </sequential>
    </for>
    <tstamp prefix="finished updateCredentials"/>
  </target>
</project>

================end of build.xml===================

Step 4: Create a base folder called automation in the file system.
            Create sub-folders source and target under the automation folder.

Step 5: Copy the build.dev.properties and build.xml files into the automation directory.

Note: ensure that ant-contrib-1.0b3.jar is present at the path mentioned in the build.xml:
 <pathelement location="/automation/ant-contrib-1.0b3.jar"/>

Step 6: Copy the sbconfig.jar to be updated into the source folder.

Step 7: Open a command prompt, go to the automation folder and run the build:
 automation > run
On the prompt, give the target environment: dev/qa/prod.

The updated sbconfig.jar will now appear in the target folder.

To verify the results unzip the sbconfig.jar and verify the Service Account values.

Now this new sbconfig.jar can be deployed to the target environment.

The for task in ANT

We might need to use the for task in ANT scripts.

By default, the for task is not available in a basic ANT installation.

Using the ant-contrib jar file, we can define the "for" task in a build script.

Copy the ant-contrib-version.jar to the ANT_HOME/lib folder and add the following section to the build script:

<project name="SOA_OSB_Deployment" basedir="." default="init">
  <taskdef resource="net/sf/antcontrib/antlib.xml"/>
  <target name="init">
    <for list="1,2,3,4,5" param="letter">
      <sequential>
        <echo>Letter @{letter}</echo>
      </sequential>
    </for>
  </target>
</project>

Note: we should use net/sf/antcontrib/antlib.xml, not net/sf/antcontrib/antcontrib.properties.

The ant-contrib-version.jar can also be kept in any other location, but then we need to explicitly tell the ant script where to locate it:


<taskdef resource="net/sf/antcontrib/antlib.xml">
  <classpath>
    <pathelement location="/usr/share/ant/lib/ant-contrib-version.jar"/>
  </classpath>
</taskdef>

Thursday, November 29, 2012

Remote archiving with FTP Adapter (JCA)

The file archiving location can be local or remote. To enable remote archiving in the FTP adapter, add the property UseRemoteArchive with the value true to the .jca file.

Apart from the above change, ensure that the LogicalArchiveDirectory/PhysicalArchiveDirectory exists on the FTP server and that the folders have write permission enabled.



<adapter-config name="ReadOperation" adapter="Ftp Adapter" wsdlLocation="ReadOperation.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
 
  <connection-factory location="eis/ftp/TestFTP" UIincludeWildcard="Process*.txt" adapterRef=""/>
  <endpoint-activation portType="Get_ptt" operation="Get">
    <activation-spec className="oracle.tip.adapter.ftp.inbound.FTPActivationSpec">
      <property name="FileType" value="ascii"/>
      <property name="UseHeaders" value="false"/>
      <property name="AsAttachment" value="true"/>
      <property name="LogicalDirectory" value="inbound"/>
      <property name="UseRemoteArchive" value="true"/>
      <property name="Recursive" value="true"/>
      <property name="LogicalArchiveDirectory" value="archive"/>
      <property name="DeleteFile" value="true"/>
      <property name="IncludeFiles" value="Process.*\.txt"/>
      <property name="PollingFrequency" value="30"/>
      <property name="MinimumAge" value="0"/>
         </activation-spec>
  </endpoint-activation>

</adapter-config>

Cannot call Connection.rollback in distributed transaction. Transaction Manager will commit the resource manager when the distributed transaction is committed

If we take care of the configuration details for the data source and connection pool, transaction rollback exceptions can often be avoided.

For example, the error below happened due to an incorrect configuration of the datasource:

Internal Exception: java.sql.SQLException: Cannot call Connection.rollback in distributed transaction.  Transaction Manager will commit the resource manager when the distributed transaction is committed

In such situations, verify the connection pool and check whether the datasource type is XA or non-XA.

If a non-XA data source is used in the connection pool, ensure you have unchecked the 'Supports Global Transactions' option in the data source configuration.





Wednesday, November 28, 2012

ORA-28001: the password has expired

Passwords set for user accounts in Oracle have an expiry date. Once the password expires, SOA infra will fail to load.

We can check the account status using

select USERNAME,ACCOUNT_STATUS,EXPIRY_DATE from dba_users where USERNAME  like '%DEV%'

If any user's account_status is EXPIRED, the following query shows the values for PASSWORD_LIFE_TIME and PASSWORD_GRACE_TIME:

SELECT * FROM dba_profiles WHERE profile = 'DEFAULT' AND resource_type = 'PASSWORD'




Set these values to unlimited using the queries:


ALTER PROFILE DEFAULT LIMIT PASSWORD_LIFE_TIME UNLIMITED;

ALTER PROFILE DEFAULT LIMIT PASSWORD_GRACE_TIME UNLIMITED;



After this, for every user, set a new password as follows:

 alter user <username> identified by <password>;

For example:

 alter user DEV1_ESS identified by manager;

Now execute the query once again to see the account status
select USERNAME,ACCOUNT_STATUS,EXPIRY_DATE from dba_users where USERNAME  like '%DEV%'




Friday, November 16, 2012

Testing a secured proxy service from OSB test console

In previous posts I mentioned how to secure a proxy service using the username token policy. In this article I explain how to test a secured service from the OSB test console.

Step 1: In the OSB console, using the security configuration, create a new user account as follows.
                                                    username - test_user
                                                    password - welcome1


Step 2: Now a keystore has to be configured in the EM console. Prior to that, create a keystore (testks.jks) as follows.
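A JKS keystore can be generated with the JDK's keytool; a sketch (the alias and distinguished name are placeholders):

keytool -genkeypair -alias testkey -keyalg RSA -keysize 1024 \
        -dname "CN=test_user, OU=Dev, O=Example, C=US" \
        -keystore testks.jks -storepass welcome1 -keypass welcome1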


Step 3: In the EM console, go to the domain and, from the right-click menu, go to Security -> Security Provider Configuration.



 
Step 4: Configure the keystore created above as follows.

Provide the password welcome1 for every highlighted item.

Save and restart the server. The restart is mandatory for the changes to take effect.

Step 5: Create a credential key. In the EM console, go to the domain -> right-click -> Security -> Credentials.
Step 6: Create a map called oracle.wsm.security, if it does not already exist.

Under this map create a new security key as follows


Now we are ready to test the OSB service from the test console. Open the test console.
In the override attribute section, provide the security key test_user and then click the Execute button.

Now we can see that a SOAP header is attached to the request and the service executes successfully.




Monday, November 12, 2012

List all the deployed composites

It is handy to have a script that lists all the deployed composites belonging to a partition. Here is a sample.

Step1:

Create a property file as follows

=============Env.properties====================


url=t3://localhost:7001
server=localhost
port=8001
username=weblogic
password=weblogic1
PARTITIONS=default,masterdata,marketing


============end of Env.properties==================

Step 2 :

Create the following python script

=================ListDeployedComposites.py=========

from java.io import FileInputStream
from java.util import Properties
import sys

try:
    propInputStream = FileInputStream("D:\\tech\\wlst\\Env.properties")
    configProps = Properties()
    configProps.load(propInputStream)
except NameError, e:
    print 'EXCEPTION OCCURED'
    print 'Please check there is: ', sys.exc_info()[0], sys.exc_info()[1]
else:
    print 'properties loaded properly'
    print 'configProps', configProps
    url = configProps.get("url")
    print 'url =', url
    server = configProps.get("server")
    print 'server =', server
    port = configProps.get("port")
    print 'port =', port
    username = configProps.get("username")
    print 'username =', username
    password = configProps.get("password")
    print 'password =', password

    partitions = configProps.get("PARTITIONS")
    print 'partitions =', partitions

    print '========================'
    # List the composites deployed to each configured partition
    for partition in partitions.split(","):
        try:
            sca_listCompositesInPartition(server, port, username, password, partition)
        except NameError, e:
            print 'EXCEPTION OCCURED'
            print 'Please check there is: ', sys.exc_info()[0], sys.exc_info()[1]

============= end of ListDeployedComposites.py==========

Step 3:
Start the wlst command as follows
D:\Oracle\Middleware\Oracle_SOA1\common\bin>wlst.cmd
Run the script from the wlst prompt as follows
wls:/offline> execfile('D:\\tech\\wlst\\ListDeployedComposites.py')

Result:

Following 21 composites are currently deployed to the platform, in partition: default.

1. AIAB2BInterface[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-06-08T09:03:08.211Z
2. ProductionRecipeResponseEBS[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-10-10T09:59:04.233Z
3. SyncProductionRecipeListEbizProvABCSImpl[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-10-10T
4. SyncSpecPLM4PAdapter[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-10-10T09:54:10.927Z
5. SyncRecipeListEbizAdapter[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-10-10T09:59:49.244Z
6. SyncItemListEbizProvABCSImpl[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-10-10T10:03:57.380
7. ItemResponseEBSV2[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-10-10T09:57:59.963Z
8. TransformAppContextEbizService[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-10-10T10:00:21.9
9. SyncItemListEbizAdapter[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-10-10T09:59:28.608Z
10. ProductionRecipeEBS[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-10-10T09:58:45.184Z
11. AgToAGPollingTest[1.0], partition=default, mode=active, state=off, isDefault=true, deployedTime=2012-08-09T11:48:04.849Z
12. SimpleApproval[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-05-21T15:51:27.432+05:30
13. AIAErrorTaskAdministrationProcess[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-06-08T08:53:
14. AIAB2BErrorHandlerInterface[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-06-08T09:04:31.440
15. SampleCursorRef[1.0], partition=default, mode=retired, state=on, isDefault=true, deployedTime=2012-11-02T16:00:17.268+05:30
16. AIAReadJMSNotificationProcess[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-06-08T09:00:29.4
17. FirstBPEL[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-11-09T21:26:47.672+05:30
18. ReloadProcess[1.0], partition=default, mode=retired, state=on, isDefault=true, deployedTime=2012-06-08T09:02:14.366Z
19. AIAAsyncErrorHandlingBPELProcess[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-06-08T09:01:2
20. QueryResponsibilityEbizAdapter[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-10-10T09:52:36.
21. SyncSpecPLM4PReqABCSImpl[1.0], partition=default, mode=active, state=on, isDefault=true, deployedTime=2012-10-10T10:02:53.116Z





Start/stop/activate/retire composites using WLST

Here is a script for changing the state of a deployed composite. This generic Python script caters for starting/stopping/activating/retiring any number of composites at a time.

Step1 :
Create a property file; specify the server connection details, the list of composites and the action required on them.

========= composite.properties ==================

url=t3://localhost:7001
host=localhost
port=8001
username=weblogic
password=weblogic1

COMPOSITES=default/SampleCursorRef/1.0,default/ReloadProcess/1.0
operation=retire

========== end of composite.properties =============

Step 2 :
Create the following python script

=================StateChange.py===============
from java.io import FileInputStream
from java.util import Properties
import sys

propInputStream = FileInputStream("D:\\tech\\wlst\\composite.properties")
configProps = Properties()
configProps.load(propInputStream)
url = configProps.get("url")
print 'url =', url
host = configProps.get("host")
print 'host =', host
port = configProps.get("port")
print 'port =', port
username = configProps.get("username")
print 'username =', username
password = configProps.get("password")
operation = configProps.get("operation")
print 'operation =', operation

print '========================'
COMPOSITES = configProps.get("COMPOSITES")
print 'COMPOSITES =', COMPOSITES

for composite in COMPOSITES.split(','):
    # Each entry has the form partition/compositeName/version
    partition, compositeName, version = composite.split("/")
    print 'partition ..', partition
    print 'compositeName ..', compositeName
    print 'version ..', version
    print '========================'
    print 'trying to', operation, 'the', compositeName
    try:
        if operation == 'start':
            sca_startComposite(host, port, username, password, compositeName, version)
        elif operation == 'stop':
            sca_stopComposite(host, port, username, password, compositeName, version)
        elif operation == 'activate':
            sca_activateComposite(host, port, username, password, compositeName, version)
        elif operation == 'retire':
            sca_retireComposite(host, port, username, password, compositeName, version)
        else:
            print '=======DO NOTHING ========='
    except NameError, e:
        print 'EXCEPTION OCCURED'
        print 'Please check there is: ', sys.exc_info()[0], sys.exc_info()[1]
    else:
        print 'Completed the action on', compositeName
=================end of StateChange.py============

Step 3: Open a command prompt and start wlst.cmd from D:\Oracle\Middleware\Oracle_SOA1\common\bin as follows:

D:\Oracle\Middleware\Oracle_SOA1\common\bin>wlst.cmd

This will start the WLST engine. Now execute the python script from the wlst prompt as follows
wls:/offline> execfile('D:\\tech\\wlst\\StateChange.py')

To start/stop/activate composites, change the value of operation in the composite.properties file.

Sample result
===================== 
trying to   retire the ReloadProcess
host = localhost
port = 8001
user = weblogic
partition = default
compositeName = ReloadProcess
revision = 1.0
label =  None
compositeDN =default/ReloadProcess!1.0
Connecting to: service:jmx:t3://localhost:8001/jndi/weblo
Composite (default/ReloadProcess!1.0) is successfully ret
Completed the action on  ReloadProcess
=======================================


Friday, November 9, 2012

Securing OSB proxy service

Security is an important aspect of the web service domain. Let us see how to secure an OSB proxy service using OWSM.

Create an OWSM-enabled OSB domain.


Create a proxy service. Add the security policy as shown below

Enable the Header Processing as below

Save and Activate the changes in the osb console.

Now test the service from the test console without passing the SOAP header.

It throws an error as shown below.

The service cannot be invoked without passing user credentials as part of the SOAP header.


Now let's see how to invoke this secured service from BPEL.

Create a partner link for the above OSB service in the BPEL process.

In the composite.xml design view, select the partner link.

Right-click the partner link and select the Configure WS Policies option.

Choose wss_username_token_client_policy for the Security field.

In the Property Inspector window, go to the binding properties section and click the Add button.

Add two new properties for the username and password, with appropriate values.

Now the composite.xml will appear as shown below.
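The resulting reference in composite.xml typically looks something like this (the reference name, interface and port values are placeholders, and the wsp/orawsp prefixes are declared on the composite root; only the policy reference and the two binding properties matter here):

<reference name="SecuredOSBService">
  <interface.wsdl interface="http://example.com/osb#wsdl.interface(SecuredPT)"/>
  <binding.ws port="http://example.com/osb#wsdl.endpoint(SecuredOSBService/SecuredPort)">
    <wsp:PolicyReference URI="oracle/wss_username_token_client_policy"
                         orawsp:category="security" orawsp:status="enabled"/>
    <property name="oracle.webservices.auth.username" type="xs:string"
              many="false" override="may">test_user</property>
    <property name="oracle.webservices.auth.password" type="xs:string"
              many="false" override="may">welcome1</property>
  </binding.ws>
</reference>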



Deploy and test the BPEL process. It will now successfully invoke the secured OSB proxy service.