Error opening zip file or JAR manifest missing : jrebel.jar - linux

When configuring JRebel on my remote server (JBoss on Linux), I have configured the JVM arg as
-javaagent:/home/user/jrebel.jar -Drebel.remoting_plugin=true
The jrebel.jar is definitely in that location, yet the server fails to start with the error:
Error opening zip file or JAR manifest missing : /home/user/jrebel.jar
Error occurred during initialization of VM
agent library failed to init: instrument
So the arg is obviously being passed to the JVM correctly, but for the life of me I can't work out why it can't find the jar. I've been through every ZeroTurnaround article I can find and looked at the solutions that have resolved it for other people, but no luck. Any ideas?

Turned out to be a permissions problem - the JBoss user didn't have permission to access the directory I had placed jrebel.jar in.
It would have been nice to have a more meaningful error - e.g. 'permission denied'. Shows my lack of Linux knowledge, though, I guess.
After the jar was moved to a directory within the JBoss installation, its owner was changed to the JBoss user, and read/write/execute permissions were added, all is well.
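For anyone hitting the same thing, a minimal sketch of the fix, assuming the JBoss installation lives under /opt/jboss and the server runs as a user named jboss (both are example values, adjust to your setup):
# copy the agent jar somewhere the JBoss user can reach
mkdir -p /opt/jboss/jrebel
cp /home/user/jrebel.jar /opt/jboss/jrebel/jrebel.jar
# make the JBoss user the owner and ensure the directory and jar are readable
chown -R jboss:jboss /opt/jboss/jrebel
chmod 755 /opt/jboss/jrebel
chmod 644 /opt/jboss/jrebel/jrebel.jar
# sanity check: the JBoss user must be able to read the jar
sudo -u jboss head -c 4 /opt/jboss/jrebel/jrebel.jar > /dev/null && echo readable
Note that traversal permissions matter as much as the jar itself: every directory on the path needs execute permission for the JBoss user.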

Yes, permissions were also the reason this error happened to me when I tried to open PhpStorm. The error was:
Error opening zip file or JAR manifest missing : ${JetbrainsIdesCrackPath}
Error occurred during initialization of VM
agent library failed to init: instrument
So before running PhpStorm I had to run sudo -i to get root permissions and then launch the program.

Related

Upload from GitLab to Artifactory during pipeline fails occasionally

Occasionally the first upload of artifacts during a GitLab pipeline fails.
I'm getting the following error message in the logs:
2019-08-01 13:43:14,149 [http-nio-8082-exec-187] [ERROR] (o.j.s.b.p.t.FilePersistenceHelper:87) - Failed moving 'path_to_artifactory\filestore_pre\dbRecord123.bin' to 'path_to_artifactory\filestore\5e\5ecc5f719b4442b9b04f9010646d34917aca8ca2'. Access to file denied null
2019-08-01 13:43:14,149 [http-nio-8082-exec-187] [ERROR] (o.a.w.s.RepoFilter :251) - Upload request of products-stage-qa:file_to_upload failed due to {}
java.nio.file.AccessDeniedException: Failed to persist file with sha1: 5ecc5f719b4442b9b04f9010646d34917aca8ca2
This seems to happen only during builds, but not during other uploads directly by a user.
It doesn't happen all the time, and only on first tries, but I haven't found any pattern in when the first try fails or succeeds. It doesn't seem to have anything to do with file types or the like. I can't really tell whether network speed is a factor, since I only have access to part of the infrastructure.
I found an open ticket with the same error message, but only for Conan; for us it only happens with Ivy repositories.
We are using Artifactory 6.9.1 and GitLab 12.0.3 Starter.
This looks to be a permission issue. You are getting an error message that states that the move failed due to "Access to file denied".
You can try to log in to the server as the "artifactory" user and manually move the file "path_to_artifactory\filestore_pre\dbRecord123.bin" to "path_to_artifactory\filestore\5e\5ecc5f719b4442b9b04f9010646d34917aca8ca2" to see whether you run into any issues. To log in to the server as the "artifactory" user you can use the command "sudo -s -u artifactory".
You will also need to make sure that the filestore directory and all of its subdirectories are owned by the "artifactory" user and have the correct permissions.
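Roughly, the check could look like this, assuming a Linux install with the Artifactory home under /var/opt/jfrog/artifactory (an example location, adjust to your own layout):
# become the artifactory user
sudo -s -u artifactory
# try the same move Artifactory attempted (example paths mirroring the log)
mv /var/opt/jfrog/artifactory/data/filestore_pre/dbRecord123.bin /var/opt/jfrog/artifactory/data/filestore/5e/5ecc5f719b4442b9b04f9010646d34917aca8ca2
# back as your own user, fix ownership of the whole filestore tree if needed
sudo chown -R artifactory:artifactory /var/opt/jfrog/artifactory/data/filestore /var/opt/jfrog/artifactory/data/filestore_pre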
Hope this helps.

OpenMPI: ORTE was unable to reliably start one or more daemons

I've been at it for days but could not solve my problem.
I am running:
mpiexec -hostfile ~/machines -nolocal -pernode mkdir -p $dstpath
where $dstpath points to the current directory and "machines" is a file containing:
node01
node02
node03
node04
This is the error output:
Failed to parse XML input with the minimalistic parser. If it was not
generated by hwloc, try enabling full XML support with libxml2.
[node01:06177] [[6421,0],0] ORTE_ERROR_LOG: Error in file base/plm_base_launch_support.c at line 891
--------------------------------------------------------------------------
ORTE was unable to reliably start one or more daemons.
This usually is caused by:
* not finding the required libraries and/or binaries on
one or more nodes. Please check your PATH and LD_LIBRARY_PATH
settings, or configure OMPI with --enable-orterun-prefix-by-default
* lack of authority to execute on one or more specified nodes.
Please verify your allocation and authorities.
* the inability to write startup files into /tmp (--tmpdir/orte_tmpdir_base).
Please check with your sys admin to determine the correct location to use.
* compilation of the orted with dynamic libraries when static are required
(e.g., on Cray). Please check your configure cmd line and consider using
one of the contrib/platform definitions for your system type.
* an inability to create a connection back to mpirun due to a
lack of common network interfaces and/or no route found between
them. Please check network connectivity (including firewalls
and network routing requirements).
--------------------------------------------------------------------------
[node01:06177] 1 more process has sent help message help-errmgr-base.txt / failed-daemon-launch
[node01:06177] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Failed to parse XML input with the minimalistic parser. If it was not
generated by hwloc, try enabling full XML support with libxml2.
[node01:06181] [[6417,0],0] ORTE_ERROR_LOG: Error in file base/plm_base_launch_support.c at line 891
I have 4 machines, node01 to node04. In order to log into these 4 nodes, I have to first log in to node00. I am trying to run some distributed graph functions. The graph software is installed in node01 and is supposed to be synchronised to the other nodes using mpiexec.
What I've done:
Made sure passwordless login is set up; every machine can ssh to any other machine with no issues.
Have a hostfile in the home directory.
echo $PATH gives /home/myhome/bin:/home/myhome/.local/bin:/usr/include/openmpi:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
echo $LD_LIBRARY_PATH gives
/usr/lib/openmpi/lib
This previously worked, but it suddenly started giving these errors. I got my administrator to set up fresh machines but it still gave the same errors. I've tried doing it one node at a time, with the same result. I'm not very familiar with the command line, so please give me some suggestions. I've tried reinstalling OpenMPI both from source and with sudo apt-get install openmpi-bin. I'm on Ubuntu 16.04 LTS.
You should focus on fixing:
Failed to parse XML input with the minimalistic parser. If it was not
generated by hwloc, try enabling full XML support with libxml2.
[node01:06177] [[6421,0],0] ORTE_ERROR_LOG: Error in file base/plm_base_launch_support.c at line 891
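That hwloc message usually means the topology parser is being handed XML but was built without libxml2 support, or is being pointed at a bad XML file. A rough way to investigate on each node, assuming a Debian/Ubuntu setup like yours (package names are the common ones, verify for your release):
# confirm hwloc itself can read the machine topology
lstopo --version
lstopo
# check for an XML topology override being fed to hwloc
env | grep -i hwloc
# if OpenMPI/hwloc were built from source, install the XML headers and rebuild
sudo apt-get install libxml2-dev libhwloc-dev
If the error only appears on some nodes, also compare the hwloc and OpenMPI versions across node01 to node04, since a mismatch between nodes can produce the same symptom.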

Puppet error when using classes

I am starting to use Puppet to manage many servers. The problem is that whenever I try to use a class, New Relic for example:
node 'mynode' {
  class { 'newrelic::server::linux':
    newrelic_license_key => '***',
  }
}
It fails, and returns the following error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Puppet::Parser::AST::Resource failed with error ArgumentError: Could not find declared class newrelic::server::linux at /etc/puppet/manifests/site.pp:3 on node mynode
I have installed fsalum-newrelic on the master, and everything works fine when using files, packages, services etc. What am I doing wrong?
The catalog compiler will look for class newrelic::server::linux at newrelic/manifests/server/linux.pp relative to each directory in your module path. (Note: newrelic, NOT fsalum-newrelic.) Make certain that you installed the module so that this file actually exists in your module path, and make sure that it is readable by the puppetmaster process.
Note, too, that "readable by the puppetmaster process" means more than just the ownership and permissions of the file itself. It also involves ownership and permissions of all the directories in the path to that file, and possibly other forms of access control, such as ACLs and SELinux context and policy.
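A quick way to check both points, assuming the default module directory under /etc/puppet/modules and a master running as the "puppet" user (both typical defaults, adjust if yours differ):
# confirm the module is installed and where
puppet module list
# confirm the expected manifest file exists
ls -l /etc/puppet/modules/newrelic/manifests/server/linux.pp
# confirm the master's user can actually read it (directory permissions included)
sudo -u puppet cat /etc/puppet/modules/newrelic/manifests/server/linux.pp > /dev/null && echo readable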
Find out where the new Puppet Forge modules are actually being installed, perhaps using a Unix utility like "locate".
Then look at "basemodulepath" in /etc/puppet/puppet.conf and check that the install location is on that path.
Here is my basemodulepath
basemodulepath = $confdir/environments/production/modules:$confdir/environments/production/local_modules:/etc/puppet/modules
The external modules I am using are either in /etc/puppet/modules or in /etc/puppet/environments/production/modules.
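For example, something along these lines should show where the module landed and which directories the master is configured to search (the manifest file name comes from the error message; "locate" needs an up-to-date file database):
# find the manifest the compiler is looking for
locate newrelic/manifests/server/linux.pp
# see which module directories the master searches
grep basemodulepath /etc/puppet/puppet.conf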

Cobertura - java.lang.IllegalArgumentException: Class does not have a default interface

I am using Cobertura for code coverage of integration tests. I am facing the issue below while deploying the instrumented jar on a JBoss server.
DEPLOYMENTS IN ERROR: Deployment "vfszip:/D:/jboss-5.1.0.GA/server/test/some_jar.jar/" is in error due to the following reason(s):
java.lang.IllegalArgumentException:
Class class com.someclass does not have a default interface
Here are the steps I followed so far:
Downloaded cobertura-1.9.4.1.
Using the command cobertura-instrument.bat C:\some_jar.jar, I generated the .ser file and the instrumented jar for some_jar.jar.
Placed the jar in the JBoss server's test/ folder.
Copied the .ser file to the JBoss/bin folder.
Copied cobertura.jar to the JBoss/lib folder.
Ran the JBoss server.
Please let me know if I am missing anything here.
This is probably a configuration error - most likely a setting you have not adjusted before starting the server for the first time.
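For reference, a sketch of what the instrumentation step typically looks like with explicit output locations (flag names as I recall them from the Cobertura 1.9.x docs, so double-check; the paths here are examples), followed by pointing the JBoss JVM at the same data file via the net.sourceforge.cobertura.datafile system property (for instance by adding it to JAVA_OPTS in run.conf.bat):
cobertura-instrument.bat --destination C:\instrumented --datafile C:\cobertura.ser C:\some_jar.jar
set "JAVA_OPTS=%JAVA_OPTS% -Dnet.sourceforge.cobertura.datafile=C:\cobertura.ser"
If the instrumented classes cannot find the .ser file at runtime, the recorded coverage may end up in a different file than the one you report on.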

Is the 'path to war' that I am giving wrong? If yes, how do I do a rollback?

I have been trying to use the command to roll back the last deployment of the website, which was interrupted due to a network failure.
The generic command that I am using, while inside the bin directory of the server's SDK (on Linux), is:
./appcfg.sh rollback /path_to_the_war_directory_that_has_appengine-web.xml
Is this the way we do a rollback? If not, please tell me the method.
(I was asked to make a directory named war in the project directory and place the WEB-INF folder in it, with appengine-web.xml inside. This may be wrong.)
I am fully convinced that I am making a mistake while giving the path to my app.
(Screenshot showing where my .war file is located.)
Now, the command that I am using (while inside the bin directory of the server's SDK) is:
./appcfg.sh rollback /home/non-admin/NetbeansProjects/'Personal Site'/web/war
(Screenshot showing the path to the war directory.)
Where am I wrong? How should I run this command so that I am able to deploy my project once again?
On running the above command I get this message:
Unable to find the webapp directory /home/non-admin/NetbeansProjects/Personal Site/web/war
usage: AppCfg [options] <action> [<app-dir>] [<argument>]
NOTE: I have duplicated the WEB-INF folder. There is still a folder named WEB-INF inside the web directory that contains all the other xml files.
The error tells you that the folder /home/non-admin/NetbeansProjects/Personal Site/web/war does not exist. If you look carefully, the folder name is NetBeansProjects (the filesystem in Linux is case-sensitive).
So, you should run instead the command:
./appcfg.sh rollback /home/non-admin/NetBeansProjects/'Personal Site'/web/war
and, just to make sure that the directory exists, first run
ls /home/non-admin/NetBeansProjects/'Personal Site'/web/war
