The IBM documentation for WebSphere MQ's error codes says that the channel name was incorrect. Channel name? Nowhere does the doc for MQCONN say anything about a channel name. It asks for the name of the Queue Manager, which I have supplied and verified is correct.
It is tempting to think of "channel" as a synonym for "queue". But before you can connect to a specific queue, you have to connect to the Queue Manager first, and that is where I am encountering the error.
What does "channel" mean in this context?
Thank you
You are connecting in Client mode (i.e. a network connection) and in order to connect via the network from your application to the queue manager, there will be some configuration to say how to do this. For example, an IP address and a port number. Along with this, there will be a channel name. You might be doing this using a MQSERVER environment variable, or the Client Channel Definition File (CCDT) for example. If the MQCHLLIB and MQCHLTAB environment variables are set, they point to the CCDT that is being used.
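For example, the MQSERVER variable packs the channel name into its first field; a sketch (the channel name, address, and port here are illustrative):

export MQSERVER='MY.SVRCONN/TCP/192.0.2.10(1414)'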
It is in this configuration that you will find the channel name, and then you must see whether there is a channel of TYPE(SVRCONN) defined on your queue manager with the same name as the one in your client application configuration.
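For example, in runmqsc on the queue manager you can check for the channel and define it if it is missing (the channel name here is illustrative):

DISPLAY CHANNEL(MY.SVRCONN) CHLTYPE
DEFINE CHANNEL(MY.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP)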
To see more details about the error, look in the AMQERR01.LOG file both on your client machine and on the queue manager. There you will see more detailed errors about the channel name in question.
I'm evaluating a technology for our cluster. Pulsar looks good, but its usage looks more like a queueing system. A queueing system is good to have, of course, but I have a specific requirement: broadcasting.
We would like to use one machine to generate the data and publish it to a Pulsar topic. Then we use a group of servers, acting as replicas; each server consumes the message flow on that topic and serves clients via WebSocket.
This is different from the Shared subscription, because each server needs to receive all messages, not a fraction of them.
I came across this post: https://kafkaesque.io/subscriptions-multiple-groups-of-consumers-on-pulsar-topic/ , which explains how to do such a job: each server creates a new exclusive subscription, say using a UUID as its subscription name; from its own exclusive subscription, each server gets the full message flow of that topic.
But since our set of server replicas is dynamic, whenever some of the servers restart they will create new UUID subscriptions, leaving many orphaned subscriptions on the topic, which would eventually become a maintenance headache.
Does anyone have experience setting up a broadcast use case using Pulsar?
Actually, I found that the "Reader Interface" is exactly for this kind of use case:
https://pulsar.apache.org/docs/en/concepts-clients/#reader-interface
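For example, a minimal sketch with the pulsar-client Node.js package (the service URL and topic name are assumptions); a reader holds no subscription at all, so restarts leave nothing behind on the topic:

const Pulsar = require('pulsar-client');

async function tailTopic() {
    const client = new Pulsar.Client({ serviceUrl: 'pulsar://localhost:6650' });
    const reader = await client.createReader({
        topic: 'persistent://public/default/my-topic',
        startMessageId: Pulsar.MessageId.latest()  // or earliest(), or an ID you persisted
    });
    // With a reader you track your own position; Pulsar stores no cursor for you.
    while (true) {
        const msg = await reader.readNext();
        console.log(msg.getData().toString());
    }
}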
Using an exclusive subscription for each consumer is the only way to ensure that each of your consumers receives ALL of the messages on the topic, and Pulsar handles multiple subscriptions quite well.
The issue, it seems, is the server restart use case, and I don't think that simply connecting with a new UUID subscription is the right approach (putting aside the orphaned subscriptions). You really want the server to reuse its previous subscription after it restarts. Each subscription keeps track of the last message in the topic that it has processed and acknowledged, so if you reconnect with the same subscription UUID you pick up exactly where you left off before the server crashed. If you connect with a new UUID, then you will start processing messages produced from that point in time forward, and all messages produced during the restart period will be "lost".
Therefore, you will need to find a mechanism to share these UUIDs across server failures and return them to the restarting server. One approach would be to have a mechanism similar to zookeeper leader election, in which each server is granted an exclusive lease that expires periodically. The server must then periodically refresh the lease to retain it. Then if the server were to crash, it would fail to refresh the lease on that UUID and the restarting server would then be granted the lease when it attempts to reconnect.
See https://curator.apache.org/curator-recipes/leader-election.html for a better explanation of the pattern.
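If ZooKeeper is not already in your stack, here is a minimal sketch of the same lease idea using Redis with the node-redis v4 client (key names and timings are illustrative, and this is deliberately simplified, not the Curator recipe):

const { createClient } = require('redis');

const LEASE_TTL_MS = 10000;  // lease expires if not refreshed in time

async function acquireLease(redis, uuid, serverId) {
    // SET ... NX PX succeeds only if nobody currently holds the lease.
    const ok = await redis.set('lease:' + uuid, serverId, { NX: true, PX: LEASE_TTL_MS });
    return ok === 'OK';
}

async function refreshLease(redis, uuid, serverId) {
    // Extend the lease only if we still own it. (A production version would
    // do this check-and-extend atomically, e.g. in a Lua script.)
    const owner = await redis.get('lease:' + uuid);
    if (owner !== serverId) return false;  // lost the lease; stop consuming under this UUID
    await redis.pExpire('lease:' + uuid, LEASE_TTL_MS);
    return true;
}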
OK, I want to check whether I can run some OS or MQSC commands on an MQ server remotely. As far as I know, this can be done with SYSTEM.ADMIN.SVRCONN. In order to do that, I add a remote Queue Manager to my WebSphere MQ client. I put the Queue Manager name on the server with the proper IP, but when I use SYSTEM.ADMIN.SVRCONN as the channel name, I get a Channel name not recognized (AMQ4871) error.
Also, if I have a channel name like MY.CHANNEL.NAME and it is a server-connection channel with mqm as its MCAUSER, can I run some commands (MQSC or OS) through this channel on the server?
Thanks.
Edit1
I am using WebSphere MQ v.7.0
By "I add a remote Queue Manager to my WebSphere MQ client" I meant I added a remote queue manager to MQ Explorer.
Edit2
I want to explain my question more precisely in this edit. I want to connect to a remote queue manager via MQ Explorer. I know the queue manager's name and its IP, of course. Also, the remote queue manager has both SYSTEM.ADMIN.SVRCONN and SYSTEM.AUTO.SVRCONN channels available. When I check CHLAUTH for these channels, I get:
AMQ8878: Display channel authentication record details.
CHLAUTH(SYSTEM.ADMIN.SVRCONN) TYPE(ADDRESSMAP)
ADDRESS(*) USERSRC(CHANNEL)
AMQ8878: Display channel authentication record details.
CHLAUTH(SYSTEM.*) TYPE(ADDRESSMAP)
ADDRESS(*) USERSRC(NOACCESS)
dis chl(SYSTEM.ADMIN.SVRCONN) MCAUSER
5 : dis chl(SYSTEM.ADMIN.SVRCONN) MCAUSER
AMQ8147: WebSphere MQ object SYSTEM.ADMIN.SVRCONN not found.
dis chl(SYSTEM.AUTO.SVRCONN) MCAUSER
6 : dis chl(SYSTEM.AUTO.SVRCONN) MCAUSER
AMQ8414: Display Channel details.
CHANNEL(SYSTEM.AUTO.SVRCONN) CHLTYPE(SVRCONN)
MCAUSER( )
As you can see here, I should be able to connect via these two channels and run some commands. But when I choose SYSTEM.ADMIN.SVRCONN as the channel name in the remote configuration I get: Channel name not recognized (AMQ4871), and when I choose SYSTEM.AUTO.SVRCONN as the channel name, I get: You are not authorized to perform this operation (AMQ4036).
Any idea?
when I use SYSTEM.ADMIN.SVRCONN as channel name, I have got: Channel name not recognized (AMQ4871) error.
Did you define the channel on the remote queue manager?
if I have a channel name like MY.CHANNEL.NAME and it is a server-connection channel with mqm as its MCAUSER, can I run some commands (MQSC or OS) through this channel on the server?
Sure. And so can anyone else. You should read up on MQ security and not expose your queue manager to hackers.
I add a remote Queue Manager to my WebSphere MQ client.
I'm not at all sure what this means exactly. MQ Explorer keeps a list of queue manager definitions. MQ Client is just a library for making connections.
If you meant you added a remote queue manager to MQ Explorer, then it makes sense. In addition to defining the connection in Explorer, you will also have to provision the connection at the queue manager. This means defining the SYSTEM.ADMIN.SVRCONN channel (or one with a name of your choosing) and defining and starting a listener. If you are on a 7.1 or higher queue manager (it's always good to list versions when asking about MQ), then you will also need to create a CHLAUTH rule to allow the connection, and another CHLAUTH rule to allow the connection with administrative privileges. Either that, or disable CHLAUTH rules altogether, but this is not recommended.
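As a sketch, the server-side provisioning in runmqsc looks something like this (the listener name and port are illustrative):

DEFINE CHANNEL(SYSTEM.ADMIN.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP)
DEFINE LISTENER(TCP.LISTENER) TRPTYPE(TCP) PORT(1414) CONTROL(QMGR)
START LISTENER(TCP.LISTENER)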
If I have a channel name like MY.CHANNEL.NAME and it is a server-connection channel with mqm as its MCAUSER, can I run some commands (MQSC or OS) through this channel on the server?
Maybe.
Out of the box, MQ denies all client connections. There are CHLAUTH rules to deny administrative connections, and other CHLAUTH rules to deny connections for any SYSTEM.* channel other than SYSTEM.ADMIN.SVRCONN. Since admin connections are denied, non-admin users must have access provisioned using SET AUTHREC or setmqaut commands before they can use SYSTEM.ADMIN.SVRCONN, hence MQ is said to be "secure by default."
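For example, provisioning a hypothetical non-admin user might look like this (the queue manager, user, and queue names are illustrative):

setmqaut -m MYQMGR -t qmgr -p appuser +connect +inq
setmqaut -m MYQMGR -t queue -n APP.QUEUE -p appuser +put +get +browse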
When you create MY.CHANNEL.NAME and connect as an admin, and if CHLAUTH is enabled, the connection will be denied. You would have to add a new CHLAUTH rule such as...
SET CHLAUTH('MY.CHANNEL.NAME') TYPE(BLOCKUSER) USERLIST('*NOBODY') WARN(NO) ACTION(ADD)
...in order to allow the admin connection.
(Note: MQ CHLAUTH blocking rules use a blacklist methodology. The default rule blocks *MQADMIN from all channels. The rule I listed above overrides the default rule because the channel name is more specific, and it blocks *NOBODY - which is a list of user IDs that does not include mqm or any other administrative user ID. It's weird, but that's how it works.)
More on this topic at http://t-rob.net/links, and in particular Morag's conference presentation on CHLAUTH rules is a must-read.
20150506 Update
Response to edits #1 & #2 in the original question is as follows:
The first edit says that the QMgr is at v7.0 and the second shows that the QMgr had CHLAUTH records defined. Since CHLAUTH wasn't available until v7.1, these two statements are mutually exclusive - they cannot both be true. When providing the version of the MQ server or client, best to paste in the output of dspmqver. If the question pertains to GSKit, Java code or other components beyond the base code, then dspmqver -a would be even better.
The MQSC output provided in the question's updates completely explains the errors.
dis chl(SYSTEM.ADMIN.SVRCONN) MCAUSER
5 : dis chl(SYSTEM.ADMIN.SVRCONN) MCAUSER
AMQ8147: WebSphere MQ object SYSTEM.ADMIN.SVRCONN not found.
As Morag notes, the SYSTEM.ADMIN.SVRCONN cannot be used because it has not been defined.
AMQ8878: Display channel authentication record details.
CHLAUTH(SYSTEM.*) TYPE(ADDRESSMAP)
ADDRESS(*) USERSRC(NOACCESS)
The auths error happens because any connection to any SYSTEM.* SVRCONN channel that is not expressly overridden is blocked by this rule. The CHLAUTH rule for SYSTEM.ADMIN.SVRCONN takes precedence because it is more explicit, and allows non-admin connections to that channel. The lack of a similar overriding rule for SYSTEM.AUTO.SVRCONN means it is denied by the existing rule for SYSTEM.* channels as listed above.
As noted previously, it is STRONGLY recommended to go to the linked web site and read Morag's conference presentation on MQ v7.1 security and CHLAUTH rules. It explains how the CHLAUTH rules are applied, how the precedence works, and perhaps most importantly, how to verify them with the MATCH(RUNCHECK) parameter.
To do MQ security right, you need at least the following (a sketch in MQSC follows the list):
Define a channel with a name that does not begin with SYSTEM.* and set MCAUSER('*NOBODY').
Define a CHLAUTH rule to allow connections to that channel, mapping the MCAUSER as required.
If you wish to connect to the channel with mqm or admin access, define CHLAUTH rules to authenticate the channel, preferably using TLS and certificates.
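A sketch of those three steps in MQSC (the channel name, user, and certificate DN are illustrative, and your own mapping rules will differ):

DEFINE CHANNEL(APP.SVRCONN) CHLTYPE(SVRCONN) MCAUSER('*NOBODY')
SET CHLAUTH('APP.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(MAP) MCAUSER('appuser') ACTION(ADD)
SET CHLAUTH('APP.SVRCONN') TYPE(SSLPEERMAP) SSLPEER('CN=mqadmin') USERSRC(MAP) MCAUSER('mqm') ACTION(ADD)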
There are several things that you do not want to do, for reasons that are explained in Morag's and my presentations. These include...
Using SYSTEM.AUTO.SVRCONN for any legitimate connections.
For that matter, using SYSTEM.* anything (except for SYSTEM.ADMIN.SVRCONN or SYSTEM.BROKER.*) for legitimate connections.
Allowing unauthenticated admin connections.
Disabling CHLAUTH rules.
Disabling OAM.
I want people to learn MQ security, and to learn it really well. However, as a consultant who specializes in it I must tell you that even customers who have hired me to give on-site classes and help with their implementation have trouble getting it locked down. There are two insights to be gleaned from this fact.
First, if you do not have enough time to get up to speed on MQ security, the implementation will not be secure. Studying the topic to the point of understanding how all the pieces fit together well enough to devise a decent security model requires hands-on training with a QMgr that you can build, hammer at, tear down, and build again; that takes weeks of dedicated hands-on study, or months to years of casual study. My advice here is to go get MQ Advanced for Developers. It is fully functional, free, and has a superset of the controls you have on the v7.1 or v7.5 QMgr you are working on now.
The second insight is that there is no shortcut to learning MQ (or any other IT) security. If it is approached as though it were simply a matter of configuration, then it is almost guaranteed not to be secure when implemented. If it is approached as learning all the different controls available for authentication, authorization, and policy enforcement, then learning how they all interact, and if security is treated as a discipline of practice, then it is possible to achieve some meaningful security.
Addressing that second issue will require an investment in education. Read through the presentations and try out the various controls live on a test QMgr that you administer. Understand which errors go to which error logs, which generate event messages and which type of events are generated. Obtain some of the diagnostic tools in the SupportPacs, such as MS0P which is one of my favorites, and get familiar with them. Consider attending the MQ Tech Conference (where you can meet many of the folks responding here on SO) for more in-depth training.
If you (or your employer) are not ready to commit to in-depth skill building, or are trying to learn this for an imminent project deadline, then consider hiring deep MQ security skills on an as-needed basis, because reliance on just-in-time, on-the-job training for this topic is a recipe for an insecure network.
My NodeJS client is able to connect to the MongoDB primary server and interact with it, as per requirements.
I use the following code to build a Server Object
var dbServer = new Server(
    host,   // primary server IP address
    port,
    {
        auto_reconnect: true,
        poolSize: poolSize
    });
and the following code to create a DB Object:
var db = new Db(
    'MyDB',
    dbServer,
    { w: 1 }
);
I was under the impression that when the primary goes down, the client will automatically figure out that it now needs to talk to one of the secondaries, which will be elected to be the new primary.
But when I manually kill the primary server, one of the secondary servers does become the primary (as can be observed from its mongo shell and the fact that it now responds to mongo shell commands), but the client doesn't automatically talk to it. How do I configure NodeJS server to automatically switch to the secondary?
Do I need to specify all 3 server addresses somewhere? But that doesn't seem like a good solution, because once the primary is back online, its IP address will be different from what it originally was.
I feel that I am missing something very basic, please enlighten me :)
Thank You,
Gary
Well, your understanding is partly there, but there are some problems. The general premise of supplying more than a single server in the connection is that should that server address be unavailable at the time of connection, something else from the "seed list" will be chosen in order to establish the connection. This removes a single point of failure, such as the "primary" being unavailable at connection time.
Where this is a "replica set", the driver will discover the members once connected and then "automatically" switch to the new "primary" as that member is elected. So this does require that your "replica set" is actually capable of electing a new "primary" in order to switch the connection. Additionally, this is not "instantaneous", so there can be a delay before the new "primary" is promoted and able to accept operations.
Your "auto_reconnect" setting is also not doing what you think it is doing. All this manages is that if a connection "error" occurs, the driver will "automatically" retry the connection without throwing an exception. What you likely really want to do is handle this yourself, as you could end up infinitely retrying a connection that just cannot be made. Good code takes this into account and manages the "re-connect" attempts itself, with some reasonable handling and logging.
Your final point on IP addresses is generally addressed by using hostnames that resolve to an IP address where those "hostnames" never change, regardless of what they resolve to. This is as important for the driver as it is for the "replica set" itself. Indeed, if the server members are looking for another member by an IP address that changes, they do not know what to look for.
So the driver will "fail over" or otherwise select a new available "primary", but only within the same tolerances within which the servers can communicate with each other. You should seed your connections, as you cannot guarantee which node is the "primary" when you connect. Finally, you should use hostnames instead of IP addresses if the latter are subject to change.
The driver will "self discover", but again it is only using the configuration available to the replica set in order to do so. If that configuration is invalid for the replica set, then it is invalid for the driver as well.
Example:
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect("mongodb://member1,member2,member3/database", function(err, db) {
    // db now talks to whichever member is currently primary
});
Or otherwise with an array of Server objects instead.
I have no clue if it's better to ask this here, or over on Programmers.SE, so if I have this wrong, please migrate.
First, a bit about what I'm trying to implement. I have a node.js application that takes messages from one source (a socket.io client), and then does processing on the message, which might result in zero or more messages back out, either to the sender, or other clients within that group.
For the processing, I would like to essentially just shove the message into a queue, then have it work its way through various message processors that might kick off their own items, until eventually the bit running socket.io is informed: "Hey, send this message back."
As a concrete example, say a user signs into the service. That sign-in message is placed in the queue, where the authorization processor gets it, does its thing, then places a message back in the queue saying the client has been authorized. This goes back to the socket.io socket that is connected to the client, along with other clients that might be interested. It can also go to other subsystems that might want to do more processing on authorization (looking up user info, sending more info to the client based on their data, etc.).
If I wanted strong coupling, this would be easy, but I tried that before, and it just turns into a mess of spaghetti code that's very fragile, and I would like to avoid that. Another wrench in the setup is that this should be cluster-able, which is where the real problem comes in. There might be more than one, say, authorization processor running. But the authorization message should be processed only once.
So, in short, I'm looking for a pattern/technique that will allow me to, essentially, have multiple "groups" of subscribers for a message, and the message will be processed only once per group.
I thought about maybe having each instance of a processor generate a unique name that would be used as a list in Redis. This name would then be registered with some sort of dispatch handler and placed into a set for that group of subscribers. Then when a message arrives, the dispatcher pulls a random member out of that set and pushes the message onto that list. While it seems like this would work, it seems somewhat over-complicated and fragile.
The core problem is I've never designed a system like this, so I'm not even sure the proper terms to use or look up. If anyone can point me in the right direction for this, I would be most appreciative.
I think what you're describing is similar to the https://www.getbridge.com/ service. I tried it but ended up writing my own based on ZeroMQ; it allows you to register services (request/reply, req -> <- rep) and channels, which are pub/sub workers.
As for the design, I used client -> broker -> services & channels, all plug-and-play using auto discovery. The services register their schema with the brokers, which open a TCP connection so that brokers on other servers can communicate with that broker group's services. Internal services and clients then connect via Unix sockets or IPC channels, whichever is preferred.
I ended up wrapping the redis publish/subscribe functions a bit to do this. Each type of message processor gets a "group name", and there can be multiple instances of a processor within that group (so multiple instances of the program can run for clustering).
When publishing a message, I generate an incremental ID, store the message in a string key under that ID, then publish the message ID.
On the receiving end, the first thing the subscriber does is attempt to add the message ID it just received from the publisher into a set of received messages for that group, using sadd. If sadd returns 0, the message has already been grabbed by another instance, and the subscriber just returns. If it returns 1, the full message is pulled out of the string key and sent to the listener.
Of course, this relies on redis being single threaded, which I imagine will continue to be the case.
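A minimal sketch of that flow with the node-redis v4 client (the key and channel names are illustrative):

const { createClient } = require('redis');

async function publishToGroup(redis, channel, payload) {
    const id = await redis.incr(channel + ':next-id');      // incremental message ID
    await redis.set('msg:' + id, JSON.stringify(payload));  // store the full message body
    await redis.publish(channel, String(id));               // publish only the ID
}

// Every instance in a group runs this on each published ID; the first
// instance whose SADD succeeds wins, so the group processes the message once.
async function onMessage(redis, group, id, handler) {
    const added = await redis.sAdd('seen:' + group, String(id));
    if (added === 0) return;  // another instance already took it
    const body = await redis.get('msg:' + id);
    if (body !== null) handler(JSON.parse(body));
}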
What you might be looking for is an AMQP protocol implementation, where you can have queues bound to custom exchanges and implement a pub-sub model.
RabbitMQ - a popular AMQP protocol implementation with lots of client libraries.
It also has a node.js library.
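For what it's worth, here's a sketch of that pattern with the amqplib package: a fanout exchange broadcasts to one queue per processor group, and instances within a group share their queue, so each group handles each message exactly once (the broker URL and names are assumptions):

const amqp = require('amqplib');

async function subscribeGroup(groupName, handler) {
    const conn = await amqp.connect('amqp://localhost');
    const ch = await conn.createChannel();
    await ch.assertExchange('events', 'fanout', { durable: false });
    // One named queue per group; all instances of the group consume from it.
    const { queue } = await ch.assertQueue('events.' + groupName, { durable: false });
    await ch.bindQueue(queue, 'events', '');
    ch.consume(queue, function (msg) {
        if (msg !== null) {
            handler(JSON.parse(msg.content.toString()));
            ch.ack(msg);
        }
    });
}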
We have our application running on a Sun Solaris system with a local WebSphere MQ installation. The application uses bindings mode to connect to the queue manager. When trying to send a message to the local queue, the JNDI binding is successful but we encounter a javax.jms.JMSSecurityException: MQJMS2013: invalid security authentication supplied for MQQueueManager error. When we investigated, we found that the userid used for authentication does not match the case of the user the application runs as. The userid matches, but it is not a case-sensitive match. By default the userid of the user the application runs as is passed for authentication, but here the case-sensitive match is failing. The application server is WebLogic. Appreciate any inputs.
In order to open the local queue, the application must have first connected to the queue manager successfully. The error on the remote queue is a connection error so it is not even getting to the queue manager. This suggests that you are using different connection factories and that the second one has some differences in the connectivity parameters. First step is to reconcile those differences.
Also, an MQJMS2013 security error can be many things, most of which are not actually MQ issues. For example, some people store their managed objects in LDAP, and an authentication problem there will throw this error. For people who use a filesystem-based JNDI, OS file permissions can cause the same thing. However, if it is an actual WMQ issue (which this appears to be), then the linked exception will contain the MQ reason code (for example, MQRC=2035). If you want to be able to better diagnose MQ (or, for that matter, any JMS transport) issues, it pays to get in the habit of printing linked exceptions.
If you are not able to resolve this issue based on this input, I would advise updating the question with details of the managed object definitions and the reason code obtained from printing the linked exceptions.
We were using createQueueConnection() on the QueueConnectionFactory to create the connection, and the issue was resolved by using createQueueConnection("","") instead. The Unix userid (webA) is case sensitive, and the application was trying to authenticate to MQ with the lowercased userid (weba), so the MQ queue manager was rejecting the connection attempt. Can you tell us why the application was sending the lowercased userid (weba) earlier?
Thanks,
Arun