Tech Noob

Reputation: 550

Oracle service bus with BigData

I do not have much experience with Oracle Service Bus, and I am trying to design a logging solution that uses BigData.

From what I have read, the default Log and Report actions in OSB put the data into the domain's server log file or into the database that was configured when we set up the server domain. If I want to put all the logs into a separate BigData database, I will need one of these approaches:

  1. Java callout: use JMS or some other technology to send the data to the BigData server.
  2. Web service callout: create a separate web service to handle the logging.
  3. Custom report provider: create one to replace the default provider in OSB Reporting.
  4. Something else.

Please give me some ideas about which method I should use, and please provide your reasons if you can. Thank you so much.

Upvotes: 0

Views: 665

Answers (2)

Jang-Vijay Singh

Reputation: 732

There are multiple ways to achieve this. You could use the Report action to push the data to JMS, or use the Log action.

You can also write a small routine (either on OSB or outside it) that reads anything you are logging (via the Log action, but also the additional metadata that is logged when you turn on monitoring of OSB components) and does whatever is needed with it, such as pushing it to a database or BigData store.
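For illustration, here is a minimal sketch of such a routine, assuming you attach it as a custom java.util.logging Handler on the server logger (WebLogic exposes that logger via weblogic.logging.LoggingHelper.getServerLogger()). The LogSink interface is a placeholder for whatever client your BigData store provides, not a real API:

    import java.util.logging.Handler;
    import java.util.logging.LogRecord;

    // Sketch of a JUL handler that forwards server log records
    // (including Log action output) to an external store.
    public class BigDataLogHandler extends Handler {

        /** Placeholder for a real BigData client; swap in your store's API. */
        public interface LogSink {
            void write(long timestamp, String level, String logger, String message);
        }

        private final LogSink sink;

        public BigDataLogHandler(LogSink sink) {
            this.sink = sink;
        }

        @Override
        public void publish(LogRecord record) {
            if (!isLoggable(record)) {
                return; // honour the handler's level and filter settings
            }
            try {
                sink.write(record.getMillis(),
                           record.getLevel().getName(),
                           record.getLoggerName(),
                           record.getMessage());
            } catch (RuntimeException e) {
                // Never let a logging failure propagate into the server.
            }
        }

        @Override public void flush() { }
        @Override public void close() { }
    }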

The key is to avoid writing an explicit service call in each pipeline/flow, and the approaches above use the standard OSB/ODL* loggers.

*Oracle Diagnostic Logging

Upvotes: 0

Trent Bartlem

Reputation: 2253

Isn't the logging framework in WebLogic based on Log4j? That means you can use a JMSAppender (probably prudent to wrap it in an async Log4j appender if you can) and handle it however you want.
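As a minimal sketch of that wiring, assuming Log4j 1.x and a WebLogic JMS topic (the URL and the JNDI names jms/LogCF and jms/LogTopic are examples only; substitute the ones configured in your domain):

    import org.apache.log4j.AsyncAppender;
    import org.apache.log4j.Logger;
    import org.apache.log4j.net.JMSAppender;

    public class JmsLoggingSetup {

        public static void configure() {
            // JMSAppender publishes each LoggingEvent to a JMS topic.
            JMSAppender jms = new JMSAppender();
            jms.setInitialContextFactoryName("weblogic.jndi.WLInitialContextFactory");
            jms.setProviderURL("t3://localhost:7001");             // example URL
            jms.setTopicConnectionFactoryBindingName("jms/LogCF"); // example JNDI name
            jms.setTopicBindingName("jms/LogTopic");               // example JNDI name
            jms.activateOptions(); // establishes the JMS connection

            // Wrap it in an AsyncAppender so application threads hand events
            // to a background thread instead of blocking on the publish.
            AsyncAppender async = new AsyncAppender();
            async.setBufferSize(10000);
            async.setBlocking(false); // drop events rather than stall under load
            async.addAppender(jms);

            Logger.getRootLogger().addAppender(async);
        }
    }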

Or, if you're talking about the OSB Reporting framework, there are a few options:

  1. Configure the default JMS reporting provider (which uses the underlying SOAINFRA database, which hopefully is set up on something better than the default Derby instance), then write an MDB that pulls reports off the queue and inserts them into SAS BigData (see the MDB sketch after this list).
  2. Turn the JMS provider off and use a custom provider, which can do anything you want. If you want, you can still do a two-step process, where the reporting provider itself puts reports on a JMS queue so it returns quickly, and a different MDB pulls messages off and persists them at its own pace.
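For option 1 (and the second step of option 2), the consuming MDB could look something like this minimal sketch. The queue name jms/ReportQueue and the BigDataDao interface are placeholders, and the exact activation-config property for binding the queue is container-specific (in WebLogic it can also go in weblogic-ejb-jar.xml):

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.ObjectMessage;

    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "jms/ReportQueue") // example JNDI name
    })
    public class ReportArchiverBean implements MessageListener {

        /** Placeholder for a real BigData client; swap in your store's API. */
        public interface BigDataDao {
            void insertReport(Object reportPayload);
        }

        private final BigDataDao dao = payload -> { /* write to the BigData store */ };

        @Override
        public void onMessage(Message message) {
            try {
                if (message instanceof ObjectMessage) {
                    // Persist at the MDB's own pace; the bus has already
                    // returned, so a slow insert cannot hog its threads.
                    dao.insertReport(((ObjectMessage) message).getObject());
                }
            } catch (Exception e) {
                // Throwing triggers JMS redelivery; past the redelivery limit,
                // WebLogic can move the message to a configured error queue.
                throw new RuntimeException(e);
            }
        }
    }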

I do not recommend a web service or database callout without an async step in the middle, because you need logging and reporting to be very quick and to use as few resources as possible, for as short a period as possible.

You don't want logging to hog threads while you're experiencing load. I have seen entire buses brought down because of one hiccup: the logging database suffered a performance blip, which left a bunch of threads stuck trying to log to it, which caused thread starvation or timeouts, which caused more error logging...

If you have a buffer like a JMS queue, then you can handle peaks by planning ahead. You can say: "Actually, I want a JMS queue of 10,000 messages, and if that overflows for whatever reason, I want to (push the overflow to a separate queue on this other box) or (filter out all the non-essential messages) or (throw new messages away) or (an action of your choice). Oh, and if the logging database fails, then I will try three times to commit, and if that fails, move the message to this other queue." Or whatever you want.

Upvotes: 1
