Moving messages between destinations in a WebSphere Application Server ND 7 system

This relates to the management of messages in a WebSphere Application Server ND 7.0 system. We are looking for a robust tool for viewing, moving, and deleting messages between (JMS) destinations. In particular, we need advice about a tool that can help us efficiently manage large numbers of messages between destinations. Please advise.

Assuming that you are using SIBus (aka WebSphere platform default messaging) as your JMS provider, you may want to have a look at the IBM Service Integration Bus Destination Handler.

A small note from my side. I got the tool (version 1.1.5) working quickly (read, export to file, import from file, move), but I discovered that it is of limited use.
I could not find a setting that exports the full message body for a message queued on a JMS subscription. When I export messages from a JMS subscription, it only fetches 1024 bytes of data.
It did, however, export the full message from a regular queue destination. Even so, when I put the message back and exported it again, there were small differences, as evidenced by a Beyond Compare file comparison.
The tool looks promising, with scripts that can be exported, but you need to test it against your own use case scenario.
The tool could use a revision and better testing before being put on the internet.
Hope that helps.

Related

How exactly does the Nagios server communicate with remote nodes, i.e. which protocol does it use in agent and agentless settings?

I installed Nagios Core and NCPA on a Mac and implemented a few checks via custom plugins to understand how to use it. I am trying to understand the following:
Which protocol does the Nagios server actually use to communicate with the NCPA agent, and how exactly does NCPA return results to Nagios? Does it SSH into the Nagios server and write a file that the server processes?
From an application-monitoring standpoint, how can it be leveraged? Is it just for monitoring that an application is up and running (I read it can do more than that, but couldn't find anywhere that shows how it is actually implemented), or is there also a RESTful API that we can invoke from within our application to send custom notifications to the Nagios server? I understand this might require some configuration on the Nagios server end as well.
I came across PagerDuty and Sematext articles (PagerDuty Integration and SemaText Nagios Alert Integration) where they integrated their solutions with Nagios, and I am trying to do something similar: adding integration support for Nagios so that a user can use our application's UI to configure alerts/notifications. For example, if a condition is met, alert or notify the Nagios server so it shows a notification on its dashboard.
Can we generate an alert from within a Spark Streaming application based on a variable, e.g. if its value is above a threshold or some condition is met, send an alert to the Nagios server to display as a notification on the Nagios dashboard? I came across a link about monitoring the status of a Spark application, but found nothing about reacting to something happening within a Spark application.
I tried looking for answers to the above questions but couldn't find anything useful or complete online. I would really appreciate it if someone could help me understand the above.
Nagios is highly configurable, and can communicate across many protocols. NCPA can return JSON or XML data. The most common agentless protocol is probably SNMP. If you can read Python, look directly at the /usr/local/nagios/libexec/check_ncpa.py file to see what's up.
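To make the JSON part concrete, here is a rough sketch of querying an NCPA agent's HTTPS API directly, which is more or less what check_ncpa.py wraps for you. The host name, metric path and token below are placeholders, and the default port 5693 plus NCPA's self-signed certificate are assumptions about a stock install:
# Rough sketch (not production code): query an NCPA agent's API directly.
# Assumes NCPA's default port 5693, a self-signed certificate, and a token
# matching the one configured in ncpa.cfg; host and metric path are examples.
# Uses the third-party requests library.
import json
import requests

resp = requests.get(
    "https://monitored-host.example.com:5693/api/cpu/percent",
    params={"token": "mytoken"},
    verify=False,   # NCPA's default certificate is self-signed
    timeout=10,
)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))   # the agent answers with JSON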
Nagios can check whether a system is running a service, how many resources it is consuming, and so on. There is also a RESTful API.
Nagios offers an application with a more advanced graphical interface called Nagios XI. Perhaps that is what you are after.
I bet you probably could, yeah. It might take some development work to get the systems to communicate though.
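As a hedged sketch of what that development work could look like: one common way to push an application-generated alert into Nagios Core is to submit a passive check result through the external command file. The path below assumes a default /usr/local/nagios layout, and the host/service names are invented; they would have to be defined as a passive service in the Nagios configuration first. If the alert originates on another machine (for example inside your Spark job), a forwarder such as NSCA or NRDP is normally used to deliver the same command to the Nagios server, but the idea is identical:
# Hedged sketch: submit a passive check result to Nagios Core through its
# external command file. Assumes the default command file location and that
# "app-host" / "Spark Alerts" already exist as a host/service pair that
# accepts passive checks.
import time

CMD_FILE = "/usr/local/nagios/var/rw/nagios.cmd"   # default path (assumption)

def notify_nagios(return_code, message, host="app-host", service="Spark Alerts"):
    # Return codes: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN
    line = (f"[{int(time.time())}] PROCESS_SERVICE_CHECK_RESULT;"
            f"{host};{service};{return_code};{message}\n")
    with open(CMD_FILE, "a") as cmd:
        cmd.write(line)

notify_nagios(2, "value 9876 exceeded the configured threshold of 5000")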

Creating a task (push) queue programmatically in google app engine using python/flask

The docs on this aren't great. Really I want to find a way to create a task queue programmatically, if it does not already exist, without having to install the Google Cloud package locally and deploy a YAML file that specifies the queues.
The short answer: this is not possible, at least not at this time.
The only way to create/update a task queue configuration, at least presently, is to deploy a queue configuration file with the corresponding information. From Creating Push Queues:
To add queues or change the default configuration, edit the queue.yaml file for your application, which you upload to App Engine.
This can be scripted, which qualifies in a sense as being done programmatically; see the related question Create TaskQueue programmatically.
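As a hedged illustration of what that scripting could look like (it still relies on the gcloud CLI being installed and authenticated, which is exactly what the question hoped to avoid; the queue name and rate below are invented):
# Sketch only: generate queue.yaml and deploy it with the gcloud CLI.
# Assumes gcloud is installed and authenticated for the target project.
import subprocess
import textwrap

queue_yaml = textwrap.dedent("""\
    queue:
    - name: my-push-queue
      rate: 5/s
      retry_parameters:
        task_retry_limit: 7
""")

with open("queue.yaml", "w") as f:
    f.write(queue_yaml)

# Deploying the file is what actually creates/updates the queues.
subprocess.run(["gcloud", "app", "deploy", "queue.yaml", "--quiet"], check=True)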
Technically (but most likely not what you're after), deploying the queue configuration file can also be done with a GAE language-specific SDK, not only with the Google Cloud (gcloud) SDK.
Side note: you tagged your post with python-3.x, which is only supported in the flexible environment; you should be aware of the Task Queue limitations in that case.

Sending command-line parameters when using node-windows to create a service

I've built some custom middleware on Node.js for a client which runs great in user space, but I want to make it a service.
I've accomplished this using node-windows, which works great, but the client has occasional large bursts of data so I'd like to allocate a little more memory using the --max-old-space-size command line parameter. Unfortunately, I don't see how to configure that in my service set-up wrapper for node-windows.
Any suggestions?
FWIW, I'm also thinking about changing how I parse the data, e.g. treating it more as a stream, but since this is the first time I've used Node and the project is going live in a couple of days, I'm hoping to find a quick and dirty option that'll get us to an up-and-running status easily, to be adjusted later.
Thanks!
Use node-windows v0.1.14 or higher. The ability to add flags was merged in that version. The relevant issue tracking this is https://github.com/coreybutler/node-windows/issues/159.

How to get running DDE servers on the computer

I want to see a list of all DDE servers (and topics if possible) currently active on my computer. How can I do that? Is there some service started for each DDE server?
I searched the Internet and Stack Overflow for some time and did not find anything.
Among the tools I use frequently, tcl can do it:
dde services {} {}
returns all active service-topic pairs.
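If you would rather drive that from Python than from a Tcl shell, one possible (untested) route is the Tcl interpreter embedded in tkinter; this assumes the Tcl build bundled with your Windows Python ships the optional dde package:
# Hedged sketch: run the same Tcl command from Python via tkinter's
# embedded Tcl interpreter. Assumes a Windows Python whose bundled Tcl
# includes the optional "dde" package.
import tkinter

tcl = tkinter.Tcl()                      # a bare Tcl interpreter, no Tk window
tcl.eval("package require dde")          # load the Tcl DDE extension
print(tcl.eval("dde services {} {}"))    # all active service/topic pairs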
You can see the implementation in the Tcl source tree (win/tclWinDde.c). Basically, it's a lot of boring work with windows and messages. First, a DDE client window is created. Then WM_DDE_INITIATE is sent to each top-level window (enumerated with EnumWindows), passing the client window handle as WPARAM. The client window procedure handles WM_DDE_ACK, collecting services and topics from the atoms in LOWORD(lParam) and HIWORD(lParam).

distributed logging: JMS and log4j?

Been doing some searching for a solution to this problem: I need log entries from apps running on several machines to be sent to & aggregated on a remote server. Requirements:
logging in the app needs to be asynchronous (can't wait for log entry to traverse network)
logging in the app needs to be queued; if the network fails, log entries need to be queued locally and sent to the centralized server when the network becomes available again
I'm looking at using log4j and a JMSAppender. Assuming that's a suitable solution, are there any examples available? What process would be running on the centralized server to receive log entries in this scenario?
Thanks.
One simple setup that comes to mind is to use Apache ActiveMQ.
It is an open-source messaging broker (JMS compatible) that is able to cluster queues among several physical machines, and the ActiveMQ installation is rather lightweight. You simply install one ActiveMQ instance on each of your application machines, and another on the logging server. Your application would use a JMS appender (read more here), and you could actually just use the included Apache Camel to read from the queue and write the log to a file or database without needing to write an application for that task.
It could be as simple as adding something like the following route to the camel.xml file in the ActiveMQ conf/ directory and importing camel.xml from the activemq.xml configuration.
<route>
  <from uri="activemq:queue:LogQueue"/>
  <to uri="file:target/folder/?fileName=logfile.log&amp;fileExist=Append"/>
</route>
You could use a myriad of other frameworks, JMS servers, and technologies, but I think this is a rather easy approach that can be achieved at very low cost and with high stability.
