
Saturday, November 30, 2019

Throttling JMS adapters in SOA Suite

When working with JCA adapters that need to process huge numbers of transactions per second, it's important to have some sort of control over the processing. Failing to do so can lead to memory issues and stuck threads on your servers. In this blog I will show you how you can implement a throttling mechanism on inbound JMS adapters. This way you can control the number of transactions processed per time interval. I will also show you how you can adjust the properties at runtime in the Enterprise Manager.

The following two properties will be used for our JMS adapter:

1) adapter.jms.receive.threads: Specifies the number of poller threads that are created when an endpoint is activated. The default is 1.
2) minimumDelayBetweenMessages: Inbound only and configured in milliseconds; the default is false, meaning no delay. Ensures that there is at least the specified delay between two consecutive messages posted to the downstream composite application.


Note that minimumDelayBetweenMessages is effective per adapter polling thread. If you have configured multiple adapter polling threads, this setting only controls the delay between messages processed by each individual thread.
 


Source: https://docs.oracle.com/middleware/1221/adapters/develop-soa-adapters/GUID-2BB20502-6F62-462F-A490-3AF301AB6089.htm#TKADP2128

I'm working with SOA Suite 11g but these properties exist for 12c too as seen in the Oracle documentation.

These properties are set in the composite.xml as part of the service definition that represents your JCA adapter.
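A minimal sketch of what that can look like, using placeholder names for the service and the JCA configuration file and purely illustrative values:

<service name="ReceiveMessageService">
  <!-- interface.wsdl element omitted for brevity -->
  <binding.jca config="ReceiveMessageService_jms.jca">
    <!-- number of poller threads created when the endpoint is activated -->
    <property name="adapter.jms.receive.threads">1</property>
    <!-- minimum delay in milliseconds between two consecutive messages per thread -->
    <property name="minimumDelayBetweenMessages">1000</property>
  </binding.jca>
</service>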


When setting the values you have to think about what number of transactions per second is desirable. You also need to take the number of managed servers in your environment into account. The following formula could be helpful:

Transactions per second = (Number of Managed Servers x thread count) / (minimumDelayBetweenMessages / 1000)

Calculation example:
-There are two managed servers.
-Each adapter will handle two threads at the same time (set via property adapter.jms.receive.threads = 2).
-Each thread waits at least half a second before processing a new transaction (set via property minimumDelayBetweenMessages = 500ms).

When the formula is applied that means (2 x 2) / (500 / 1000) = 8 transactions per second will be processed per JMS queue.
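In composite.xml terms, the calculation example above corresponds to the following property values on the service definition of each adapter:

<property name="adapter.jms.receive.threads">2</property>
<property name="minimumDelayBetweenMessages">500</property>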


It's possible to adjust these values at runtime via the Enterprise Manager. Navigate to your SOA composite and then to the Service/Reference Properties:

Adjust the values and click 'Apply'.


Let's do some testing now. To view the results unambiguously, I will just use one active managed server with one queue and one thread per adapter so we can clearly see the time interval.

I will use the following values for the properties: 

<property name="adapter.jms.receive.threads">1</property>
<property name="minimumDelayBetweenMessages">2000</property>


Now according to the formula (Number of Managed Servers x thread count) / (minimumDelayBetweenMessages / 1000), that means we have (1 x 1) / (2000 / 1000) = 0.5 transactions per second, or 1 transaction every 2 seconds per queue.

I will dump a batch of messages on one JMS queue. Then we will check in the logs how the JMS adapter processed those messages. Of course we would like to see that indeed 1 message was processed every 2 seconds.

First we need to adjust the Diagnostic Logging Level. This is done via the Log Configuration in the Enterprise Manager. Set the level for the oracle.soa.adapter logger to TRACE. This is the logging for JCA adapters.
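The same setting can also be made directly in the managed server's logging.xml, typically located under $DOMAIN_HOME/config/fmwconfig/servers/<server_name>/. A sketch of such a logger entry, assuming the default odl-handler and the finest trace level TRACE:32, would look roughly like this:

<logger name="oracle.soa.adapter" level="TRACE:32" useParentHandlers="false">
  <handler name="odl-handler"/>
</logger>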


 

When the batch of messages has been imported on the JMS queue and processed, we can check our logging:

 
If we look at the Time column we can verify that the JMS adapter indeed processed 1 message every 2 seconds, great!

Friday, October 5, 2018

JMS proxy service exception: Validation of RefValueStore _static/Ref_Store/ProxyService - JMS Transport.InboundJMSProxy

I have only started working with JMS queues recently and find that problem solving around JMS proxies can be very time consuming, my previous blog being a case in point.
Last week I had another JMS proxy problem and couldn't find much info on the error I was getting, so I hope this will help you out :)

After reworking a JMS proxy service, where I tried to move it to another project, I noticed messages weren't read from the queue anymore. Redeploying didn't work, and when I tried to delete the project for a clean deployment I got the following error message in the console: Validation of RefValueStore _static/Ref_Store/ProxyService - JMS Transport.InboundJMSProxy. : RefValueStore _static/Ref_Store/ProxyService - JMS Transport.InboundJMSProxy. does not exist failed.

It seemed that the JMS proxy was unable to find its related MDB. Every time a JMS proxy service is deployed, an MDB is created that handles the connection with the JMS queue. See also my previous blog for more details. In this case OSB was unable to verify the MDB, as it didn't exist anymore. You can check the deployments in the WebLogic Server Console; MDB deployments have the prefix SB_JMS_Proxy_*.

So I was basically stuck with a corrupt OSB service and couldn't remove or redeploy it to fix the problem. I figured I had to redeploy the service so the MDB would be recreated, and to do that I had to delete the service manually on the server. I did the following to resolve the problem:

-Find the project or folder of your service in <OSB Domain>/osb/configfwk/core and rename the folder.
-Stop the AdminServer.
-Backup and clear the cache, log and tmp folders of the AdminServer.
-Start the AdminServer. When you now check the OSB console you will see the service is gone.
-In your IDE, change the name of the project or folder of your service and deploy it to the OSB server.
-A new MDB should be created. Verify this in the deployments list on the WebLogic Server.
-Rename your project or folder to its original name.

These steps helped me to resolve the problem and messages were read from the queue again.

Monday, May 7, 2018

Exception after redeploying JMS proxy service on OSB 12c: Failure while processing an incoming message for endpoint ProxyService EventQueue/ProxyServices/PS_EventQueue due to service configuration in flux


So I had to redeploy my JMS proxy on OSB and I noticed messages weren’t getting through anymore. When I checked the logs I saw the following exception when the proxy tried to process the messages:

[2018-05-04T15:42:19.696+02:00] [osb_server1] [ERROR] [OSB-381502] [oracle.osb.transports.main.jmstransport] [tid: [ACTIVE].ExecuteThread: '22' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: 005Qq5tI3tl4QtD5vBt1iX0003Q30003tS,1:1] [APP: SB_JMS_Proxy_a470f11.N7ff2315c.0.162e3b1fee2.N8000] [partition-name: DOMAIN] [tenant-name: GLOBAL] Exception in JmsInboundMDB.onMessage: com.bea.wli.sb.transports.TransportException: Failure while processing an incoming message for endpoint ProxyService EventQueue/ProxyServices/PS_EventQueue due to service configuration in flux[[
com.bea.wli.sb.transports.TransportException: Failure while processing an incoming message for endpoint ProxyService EventQueue/ProxyServices/PS_EventQueue due to service configuration in flux
        at com.bea.wli.sb.transports.jms.JmsInboundMDB.onMessage(JmsInboundMDB.java:121)
        at weblogic.ejb.container.internal.MDListener.execute(MDListener.java:438)
        at weblogic.ejb.container.internal.MDListener.transactionalOnMessage(MDListener.java:361)
        at weblogic.ejb.container.internal.MDListener.onMessage(MDListener.java:297)
        at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:5107)
        at weblogic.jms.client.JMSSession.execute(JMSSession.java:4775)
        at weblogic.jms.client.JMSSession.executeMessage(JMSSession.java:4170)
        at weblogic.jms.client.JMSSession.access$000(JMSSession.java:127)
        at weblogic.jms.client.JMSSession$UseForRunnable.run(JMSSession.java:5627)
        at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:666)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:348)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:333)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:54)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:640)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:406)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:346)

This problem was caused, as I learned later, by the fact that messages were still being processed while I was redeploying my proxy. For every redeployment the JMS proxy will update the Message-Driven Bean (MDB) that takes care of the message consumption. This MDB is deployed on the WebLogic Server. It can take some time for this MDB to be updated, start consuming messages again and get back in sync with the pipeline runtime objects. This causes the service to be “in flux”.

To solve the problem I tried the obvious things first: redeploying the service, restarting the server and the JMS queue. None of this worked because, as I found out, the problem was in the MDB. Eventually I did the following to resolve my problem:

  • Undeploy your JMS service in OSB.
  • Select your JMS Server via the WebLogic Server Console: Services > Messaging > JMS Servers and go to the “Control” tab. Pause Production and Consumption.
  • Look for the config.xml file located in your <OSB Domain>/config folder. In this file, look for the app-deployment that represents your MDB. You can identify it by checking the APP ID in the exception above (the value in the [APP: ...] field); it should correspond with the ID in the name tag. Remove the app-deployment part.
    <app-deployment>
      <name>SB_JMS_Proxy_a470f11.N7ff2315c.0.162e3b1fee2.N8000</name>
      <target>osb_cluster</target>
      <module-type>ear</module-type>
      <source-path>sbgen/jms/SB_JMS_Proxy_a470f11.N7ff2315c.0.162e3b1fee2.N8000.ear</source-path>
      <plan-path>sbgen/jms/plan/SB_JMS_Proxy_a470f11.N7ff2315c.0.162e3b1fee2.N8000_Plan.xml</plan-path>
      <security-dd-model>DDOnly</security-dd-model>
      <staging-mode>stage</staging-mode>
      <plan-staging-mode xsi:nil="true"></plan-staging-mode>
      <cache-in-app-directory>false</cache-in-app-directory>
    </app-deployment>
  • Go to your Deployments in the OSB WebLogic Server Console.
  • In your list you should see the MDBs for your JMS proxies. The names start with SB_JMS_Proxy_xxxx. If you're working with OSB 11g the prefix is _ALSB_xxxx.
    In my case I could identify the MDB quickly as its health was set to “Warning”. Look up the MDB and remove it. I'm not sure if a server restart is necessary, but I restarted to be sure.



  • Redeploy your JMS service to OSB, resume Production and Consumption for your JMS Server, and your proxy service should be processing the messages again.

    Please note that I advise against executing these steps in a production environment as data might get lost!

    My lessons learned? Next time I need to redeploy my JMS proxy service I will make sure the JMS Server is paused to prevent the in flux error.


    Did this blog help resolve your problem? Let me know in a comment!