Monday, May 7, 2018

Exception after redeploying JMS proxy service on OSB 12c: Failure while processing an incoming message for endpoint ProxyService EventQueue/ProxyServices/PS_EventQueue due to service configuration in flux


So I had to redeploy my JMS proxy service on OSB and I noticed that messages weren't getting through anymore. When I checked the logs, I saw the following exception whenever the proxy tried to process a message:

[2018-05-04T15:42:19.696+02:00] [osb_server1] [ERROR] [OSB-381502] [oracle.osb.transports.main.jmstransport] [tid: [ACTIVE].ExecuteThread: '22' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: 005Qq5tI3tl4QtD5vBt1iX0003Q30003tS,1:1] [APP: SB_JMS_Proxy_a470f11.N7ff2315c.0.162e3b1fee2.N8000] [partition-name: DOMAIN] [tenant-name: GLOBAL] Exception in JmsInboundMDB.onMessage: com.bea.wli.sb.transports.TransportException: Failure while processing an incoming message for endpoint ProxyService EventQueue/ProxyServices/PS_EventQueue due to service configuration in flux[[
com.bea.wli.sb.transports.TransportException: Failure while processing an incoming message for endpoint ProxyService EventQueue/ProxyServices/PS_EventQueue due to service configuration in flux
        at com.bea.wli.sb.transports.jms.JmsInboundMDB.onMessage(JmsInboundMDB.java:121)
        at weblogic.ejb.container.internal.MDListener.execute(MDListener.java:438)
        at weblogic.ejb.container.internal.MDListener.transactionalOnMessage(MDListener.java:361)
        at weblogic.ejb.container.internal.MDListener.onMessage(MDListener.java:297)
        at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:5107)
        at weblogic.jms.client.JMSSession.execute(JMSSession.java:4775)
        at weblogic.jms.client.JMSSession.executeMessage(JMSSession.java:4170)
        at weblogic.jms.client.JMSSession.access$000(JMSSession.java:127)
        at weblogic.jms.client.JMSSession$UseForRunnable.run(JMSSession.java:5627)
        at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:666)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:348)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:333)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:54)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:640)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:406)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:346)

As I learned later, the problem was caused by the fact that messages were still being processed while I was redeploying my proxy. On every redeployment, OSB updates the Message-Driven Bean (MDB) that takes care of consuming messages for the JMS proxy; this MDB is deployed on the WebLogic Server. It can take some time for that MDB to be updated, start consuming messages again and get back in sync with the pipeline runtime objects. During that window the service is "in flux".
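As a side note, this means every JMS proxy service shows up in the WebLogic domain as a separate auto-generated application. If you want to see which of these exist in your domain, something like the WLST sketch below should do it; the URL and credentials are placeholders for your own environment.

    # list_jms_proxy_mdbs.py -- run with wlst.sh; connection details are placeholders
    connect('weblogic', 'welcome1', 't3://localhost:7001')

    # OSB generates one EAR per JMS proxy service; the name starts with SB_JMS_Proxy_
    # (on OSB 11g the prefix is _ALSB_ instead)
    for app in cmo.getAppDeployments():
        if app.getName().startswith('SB_JMS_Proxy_') or app.getName().startswith('_ALSB_'):
            print app.getName(), '->', app.getSourcePath()

    disconnect()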

To solve the problem I tried the obvious things first: redeploying the service, restarting the server and restarting the JMS queue. None of this worked because, as I found out, the problem was in the MDB. Eventually I did the following to resolve it:

  • Undeploy your JMS proxy service in OSB.
  • Select your JMS Server in the WebLogic Server Console (Services > Messaging > JMS Servers) and go to the "Control" tab. Pause both Production and Consumption (a WLST sketch for this appears near the end of this post).
  • Open the config.xml file located in your <OSB Domain>/config folder. In this file, look for the app-deployment element that represents your MDB. You can identify it by the APP id in the exception above (SB_JMS_Proxy_a470f11.N7ff2315c.0.162e3b1fee2.N8000 in my case); it should correspond with the ID in the name tag. Remove the entire app-deployment element:
    <app-deployment>
      <name>SB_JMS_Proxy_a470f11.N7ff2315c.0.162e3b1fee2.N8000</name>
      <target>osb_cluster</target>
      <module-type>ear</module-type>
      <source-path>sbgen/jms/SB_JMS_Proxy_a470f11.N7ff2315c.0.162e3b1fee2.N8000.ear</source-path>
      <plan-path>sbgen/jms/plan/SB_JMS_Proxy_a470f11.N7ff2315c.0.162e3b1fee2.N8000_Plan.xml</plan-path>
      <security-dd-model>DDOnly</security-dd-model>
      <staging-mode>stage</staging-mode>
      <plan-staging-mode xsi:nil="true"></plan-staging-mode>
      <cache-in-app-directory>false</cache-in-app-directory>
    </app-deployment>
  • Go to Deployments in the WebLogic Server Console of your OSB domain.
  • In the list you should see the MDBs for your JMS proxies; their names start with SB_JMS_Proxy_xxxx (on OSB 11g the prefix is _ALSB_xxxx).
    In my case I could identify the MDB quickly because its health was set to "Warning". Look up the MDB and delete it (a WLST alternative is sketched below). I'm not sure whether a server restart is necessary, but I restarted to be sure.
  • Redeploy your JMS proxy service to OSB and resume Production and Consumption on your JMS Server; your proxy service should be processing messages again.

Please note that I advise against executing these steps in a production environment, as data might get lost!
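By the way, if you prefer scripting over clicking through the console, removing the stale MDB can also be done with WLST's undeploy command. The sketch below carries the same caveat as the note above; the URL and credentials are placeholders, and the application name is the one from my exception, so yours will differ.

    # undeploy_stale_mdb.py -- hedged WLST sketch; adjust the application name,
    # URL and credentials to your own environment
    connect('weblogic', 'welcome1', 't3://localhost:7001')

    # the name below is the APP id from my exception -- yours will differ
    progress = undeploy('SB_JMS_Proxy_a470f11.N7ff2315c.0.162e3b1fee2.N8000')
    progress.printStatus()   # show the final state of the undeployment task

    disconnect()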

My lessons learned? Next time I need to redeploy my JMS proxy service, I will make sure the JMS Server is paused first to prevent the "in flux" error.
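For completeness, pausing and resuming production and consumption can also be scripted with WLST instead of using the console. In the sketch below the JMS server name (EventJMSServer) and the credentials are assumptions, and osb_server1 is the managed server from my log; substitute your own names.

    # pause_resume_jms_server.py -- hedged WLST sketch; names and credentials
    # below are placeholders for your own environment
    connect('weblogic', 'welcome1', 't3://localhost:7001')
    domainRuntime()

    # the JMS runtime MBean is named "<managed-server-name>.jms"
    cd('/ServerRuntimes/osb_server1/JMSRuntime/osb_server1.jms/JMSServers/EventJMSServer')

    cmo.pauseProduction()    # no new messages can be produced to this JMS server's destinations
    cmo.pauseConsumption()   # the proxy's MDB stops consuming messages

    # ... redeploy / clean up as described in the steps above ...

    cmo.resumeProduction()
    cmo.resumeConsumption()

    disconnect()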


Did this blog help resolve your problem? Let me know in a comment!