I am attempting to send a compressed JMS message to MQ. The JMS body is compressed along with business-specific headers (key/value pairs). The compression is done on a 64-bit Windows machine; IBM MQ and the consumer are running on a mainframe. I am seeing exceptions when the JMS message is decompressed on the mainframe (by the consumer).
Exception - java.util.zip.ZipException: unknown compression method
We use java.util.zip.DeflaterOutputStream for the compression/decompression, and we set the encoding to UTF-8 during compression.
I am trying to understand whether this is a platform-related issue. When testing the compression/decompression on Windows alone, there is no exception. There is also no exception when the compression is done on 64-bit Solaris and the message is decompressed on the mainframe. The problem seems to occur only when the compression is done on Windows.
Compare the 3 different JREs (Windows, Solaris & mainframe) that you are using - maybe the Windows JRE is using a newer compression method by default. Hence, you may need to force it to use a compression method that is supported by the mainframe JRE. Also, are you using an Oracle or an IBM JRE on Windows? I would strongly suggest that you use an IBM JRE on Windows. Time to RTM.
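One way to take the defaults out of the picture is to pin the deflate settings explicitly on both sides. Below is a minimal sketch of such a round trip; the byte-array payload (i.e. shipping the result as a JMS BytesMessage rather than a TextMessage) is my assumption. Note also that deflated bytes are not valid UTF-8: if the compressed output is ever converted to a String, bytes can be silently corrupted, which shows up downstream as exactly this kind of ZipException.

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.Deflater;
    import java.util.zip.DeflaterOutputStream;
    import java.util.zip.InflaterInputStream;

    public class DeflateRoundTrip {

        // Compress with an explicit Deflater so every JRE produces the same format.
        // nowrap=false means a standard zlib header that any default Inflater understands.
        static byte[] compress(byte[] plain) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, false);
            try (DeflaterOutputStream dos = new DeflaterOutputStream(bos, deflater)) {
                dos.write(plain);
            } finally {
                deflater.end();
            }
            return bos.toByteArray();
        }

        static byte[] decompress(byte[] compressed) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (InflaterInputStream iis =
                    new InflaterInputStream(new ByteArrayInputStream(compressed))) {
                byte[] buf = new byte[4096];
                for (int n; (n = iis.read(buf)) != -1; ) {
                    bos.write(buf, 0, n);
                }
            }
            return bos.toByteArray();
        }

        public static void main(String[] args) throws IOException {
            // UTF-8 belongs here, when turning the text payload into bytes...
            byte[] payload = "headerKey=headerValue\nbusiness payload".getBytes("UTF-8");
            byte[] compressed = compress(payload);
            // ...but never on the compressed bytes: ship them as a BytesMessage,
            // not as new String(compressed, "UTF-8"), which is lossy.
            byte[] restored = decompress(compressed);
            System.out.println(new String(restored, "UTF-8"));
        }
    }

If both ends run with the settings pinned like this and the mainframe still fails, the JRE-difference theory gets much stronger.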
I have two Django sites (archive and test-archive) on one machine. Each has its own virtual environment and its own Celery queues and daemons, using Python 3.6.9 on Ubuntu 18.04, Django 3.0.2, Redis v4.0.9, Celery v4.3, and Kombu v4.6.3. The server has 16 GB of RAM; under load there is at least 10 GB free, and swap use is minimal.
I keep getting this error in my logs:
kombu.exceptions.InconsistencyError:
Cannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists.
Probably the key ('_kombu.binding.reply.celery.pidbox') has been removed from the Redis database.
I tried:
- downgrading Kombu to 4.5 for both sites, per some Stack Overflow posts
- setting maxmemory=2GB and maxmemory-policy=allkeys-lru in redis.conf, per the Celery docs (https://docs.celeryproject.org/en/stable/getting-started/backends-and-brokers/redis.html#broker-redis); the exact lines are shown below. Originally the settings were the defaults (unlimited memory and noeviction), and these errors were present with both versions of Kombu.
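For reference, these are the lines as they now stand in redis.conf (note that redis.conf uses whitespace rather than =):

    maxmemory 2gb
    maxmemory-policy allkeys-lru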
I still get those errors when one site is under load (e.g. uploading a set of images and processing them) and the other site is idle.
What is a little strange is that on some test runs using test-archive, test-archive shows no errors while archive shows them, even though the archive site is not doing anything. On other, identical test runs using test-archive, test-archive generates the errors and archive does not.
I know this is a reported bug in Kombu/Celery, so I am wondering if anyone has a workaround that works more often than not for this configuration. What versions of Celery, Kombu, Redis, etc. seem to work more often than not? I am happy to share my config files or log files, but there are so many that I thought it best to start with the problem statement and my setup, and see what else is needed.
Thanks!
I've installed a Cloudera Flume node (0.9.4) on my Windows 2003 server and it appears to be running. However, I'm stuck as to the next steps for sending Windows Server event log data to the master node, which is located on a Linux machine. What steps are needed to connect my Windows Flume node to the master node?
thanks,
Ralph.
I'm baffled as to why this seems to be the only decent "open-source" (if not community-developed) solution, but after a few research efforts over the last several years, I've repeatedly come up with NXLog as the best option for handling Windows event logs in a primarily *nix-based environment.
NXLog has a special input module for this purpose called im_msvistalog. I've been using this with NXLog Community Edition and it works well so far. (FYI, I'm shipping Windows logs directly to Solr.)
I presume that there just aren't that many people using tools of this flavor (i.e., Apache Flume, Solr, Java, typically Linux-based tools) for handling Windows event logs. :-) I'd like to know why if anyone cares to chime in. I guess people with Windows infrastructure they care about will just have something like a centralized Windows Event Viewer that operates as a syslog daemon would in a *nix environment?
If this solution doesn't work for you, you can also try querying the Windows event logs with the Windows Events Command Line Utility (wevtutil); an example follows this paragraph. I haven't yet had to resort to that, since everything I've needed has been available through the NXLog input module mentioned above.
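For example, a sketch of pulling the ten most recent System-log events as text (the log name and count are just illustrations):

    wevtutil qe System /c:10 /f:text /rd:true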
You need to connect the Windows Event Log to Flume. I haven't tried this, but I suggest you try a tool such as KiwiSyslog to turn Windows events into syslog messages. You then configure Flume with a syslog source and tell KiwiSyslog to send the events there.
BTW, Flume 0.9.4 is very old. I suggest you move to a recent Apache Flume release, as that is where the active development (largely by Cloudera staff) happens; a sketch of a syslog source for a recent Flume is below.
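On the Flume side, a syslog source in a recent (1.x) Flume is configured in the agent's properties file, roughly like this (agent and component names are placeholders, and the port must match what KiwiSyslog sends to):

    # flume.conf - sketch for a Flume 1.x agent named a1
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1

    a1.sources.r1.type = syslogtcp
    a1.sources.r1.host = 0.0.0.0
    a1.sources.r1.port = 5140
    a1.sources.r1.channels = c1

    a1.channels.c1.type = memory

    a1.sinks.k1.type = logger
    a1.sinks.k1.channel = c1

Replace the logger sink with whatever your topology actually needs once events are flowing.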
I am using syslog on an embedded Linux device (Debian-arm) that has relatively small storage (~100 MB). If we assume the system will be up for 30 years and it logs all possible activity, could syslog fill up the storage? If so, is syslog intelligent enough to remove old logs as space on the storage medium runs low?
It completely depends on how much gets logged, but if you only have ~100 MB, I would imagine it's quite likely that your storage will fill up well before 30 years!
You didn't say which syslog server you're using. If you're on an embedded device you might be using the BusyBox syslogd, or you may be using the regular syslogd, or you may be using rsyslog. But in general, no syslog server rotates log files all by itself. They all depend on external scripts run from cron to do it. So you should make sure you have such scripts installed.
In non-embedded systems the log rotation functionality is often provided by a software package called logrotate, which is quite elaborate and has configuration files saying how and when each log file should be rotated. In embedded systems there is no standard at all. The most common configuration (especially when using BusyBox) is that logs are not written to disk at all, only to a memory ring buffer. The next most common is idiosyncratic ad-hoc scripts built and installed by the embedded system integrator. So you just have to scan the crontabs and see whether anything configured to be invoked looks like a log rotator.
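If logrotate is available on your device, a sketch of a typical config looks like this (the paths, sizes, and pid file are illustrative assumptions for a sysklogd-style setup):

    # /etc/logrotate.d/syslog - rotate at 5 MB, keep 4 compressed copies
    /var/log/messages /var/log/syslog {
        size 5M
        rotate 4
        compress
        missingok
        notifempty
        postrotate
            # make syslogd reopen its files; the pid file path varies by system
            kill -HUP $(cat /var/run/syslogd.pid 2>/dev/null) 2>/dev/null || true
        endscript
    }

logrotate itself is then invoked from cron (typically daily), which is why the crontab scan mentioned above matters.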
This relates to the management of messages in a WebSphere Application Server ND 7.0 system. We are looking for a robust tool for viewing, moving, and deleting messages between (JMS) destinations. In particular, we need advice about a tool that can help us efficiently manage large numbers of messages between destinations. Please advise.
Assuming that you are using SIBus (aka WebSphere platform default messaging) as JMS provider, you may want to have a look at IBM Service Integration Bus Destination Handler.
A small note from my side. I got the tool (version 1.1.5) to work quickly (read, export to file, import from file, move), but I discovered that it is of limited use.
I could not find a setting that exports the full message body for messages queued on a JMS subscription; when I export messages from a JMS subscription, it only fetches 1024 bytes of data.
It did, however, export the full message on a regular queue destination. But when I put a message back and exported it again, there were small differences, as evidenced by a Beyond Compare file comparison.
The tool looks promising, with scripts that can be exported, but it is necessary to test your own use case scenario.
The tool could use a revision and better testing before being put on the internet.
Hope that helps.
I have asked this question on serverfault.com, which was suggested as a more appropriate place for it: https://serverfault.com/questions/169829/what-does-glibc-detected-httpd-double-free-or-corruption-mean
I have an EC2 server running that I use to process image uploads. I have a Flash SWF that handles uploading to the server from my local disk. While uploading about 130 images (a total of about 650 MB), I got the following error in my server log file after about the 45th image.
*** glibc detected *** /usr/sbin/httpd: double free or corruption (!prev): 0x85a6b990 ***
What does this error mean?
The server has stopped responding so I will restart it. Where should I begin to find the cause of this problem?
thanks
Some info:
Apache/2.2.9 (Unix) DAV/2 PHP/5.2.6 mod_ssl/2.2.9 OpenSSL/0.9.8b configured
Fedora 8
This means your web server has crashed. Barring bad hardware, this is a bug inside either the Apache server or a module you have loaded (such as mod_php).
Try upgrading to a newer version, if one is available. If that fails, open a bug report with the Apache maintainers.
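If you want to narrow the crash down before filing that report, one common approach is to capture a core dump from the crashing child and pull a backtrace out of it. A sketch (the dump directory and core file name are assumptions; the exact name depends on your kernel's core_pattern):

    # in httpd.conf: tell Apache where crashing children may dump core
    CoreDumpDirectory /tmp/apache-cores

    # allow core files and restart Apache from the same shell
    mkdir -p /tmp/apache-cores
    ulimit -c unlimited
    apachectl restart

    # after the next crash, inspect the core with gdb
    gdb /usr/sbin/httpd /tmp/apache-cores/core
    (gdb) bt

The top of the backtrace usually shows which module the double free happened in, which tells you where to file the bug.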