Run 2 copies of Jboss 6.1.0 in same machine, same time and same application - jboss6.x

Hi, can someone provide me a step-by-step guide to running two JBoss 6.1.0 Final instances on the same machine with different ports? My requirement is that I want to run the same web application at the same time in two different JBoss 6.1.0 Final instances: one should be running on port 8080 and the other on port 8180.

You can find all the information here.
Assuming you have two profiles (node1 and node2), you can run these two commands to start the two instances:
./run.sh -c node1
./run.sh -c node2 -Djboss.service.binding.set=ports-01 -Djboss.messaging.ServerPeerID=1
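If you don't have the two profiles yet, you can create them by copying the default profile; a minimal sketch, assuming a standard JBoss 6.1.0 layout under $JBOSS_HOME:
cd $JBOSS_HOME/server
cp -r default node1   # first instance, default ports (HTTP on 8080)
cp -r default node2   # second instance, will run with ports-01
The ports-01 binding set offsets every port by 100, which is what puts the second instance's HTTP connector on 8180.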

Related

How do I automatically start a node/express app (with pm2), having node installed using software collections (scl), on CentOS7

1. Summarize the problem
I would like a node/express app.js to listen on port 3000 on container startup.
I created a CentOS 7 Docker container, installed the software collections (SCL) repo, and then installed node.
I can now enable node with:
scl enable rh-nodejs10 bash, which I did; I then installed express (globally) and pm2 (globally), and can successfully run a minimal express app listening on port 3000 with commands run at the command line.
I put scl enable rh-nodejs10 bash in the .bash_profile of a user I created named www, because I do not want root running the web server.
In fact, I will be building a rootless container (buildah) next, so there will be no 'root' user at all, for security reasons.
Now on container startup I want to have the web server start automatically, and be able to get a response from: http://localhost:3000 (hello world).
The problem is that on container startup, node is not enabled for any user until a shell is invoked to enable it.
2. Provide background including what you've already tried
I have searched the web for a solution of using node, express, pm2 in conjunction with CentOS 7 software collections and have found no solution.
Please only reply if you have actually tried the solution you recommend and have it working; otherwise it most likely will not work.
systemd needs to:
1. enable node
2. run pm2 start app
I tried putting both in a shell script, but when you enable node you are put into a sub-shell and cannot script any additional commands.
3. show some code
scl enable rh-nodejs10 bash
4. Describe expected and actual results including any error messages
I expect the node/express server to listen on port 3000 on container startup.
I have node running on reboot on RHEL 7 by using the scl-utils/scl_source technique found here:
$ cat /etc/profile.d/enablenodejs.sh
#!/bin/bash
source scl_source enable rh-nodejs10
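Building on that scl_source technique, a small start script can enable the collection non-interactively and hand off to pm2, which avoids the sub-shell problem. A minimal sketch, assuming the app lives at /home/www/app/app.js and pm2 is installed inside the collection:
#!/bin/bash
# start.sh - container entrypoint (sketch; paths are assumptions)
source scl_source enable rh-nodejs10     # puts the SCL node/npm/pm2 on PATH without a sub-shell
exec pm2-runtime /home/www/app/app.js    # pm2-runtime keeps pm2 in the foreground, as a container expects
Point the container's CMD (or a systemd unit's ExecStart) at this script and run it as the www user.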

Configure the npm to start in the background on Mac OS X

Description
I am on a Mac OS X.
Right now, I have almost 10 Laravel/LAMP projects locally that I run using vhosts configured in Apache. The awesome part is that even when I restart my Mac, move between networks, or close the terminal app/tab for my projects, Apache is still running and all my local sites are still accessible.
Goal
Now, I am looking to do the same things with my MEAN apps.
How would one configure something like that?
Let's say I have 3 MEAN apps.
Example
App1
FE running at http://localhost:4201
BE running at http://localhost:3001
App2
FE running at http://localhost:4202
BE running at http://localhost:3002
App3
FE running at http://localhost:4203
BE running at http://localhost:3003
I'm open to any suggestions at this moment.
Can we configure npm to start in the background, for both the BE/API and the FE?
You can use macOS's launchd to run services in the background. There are a couple good GUI apps that make it easier to create launch services:
LaunchControl ($10)
Lingon ($10) - If you go with Lingon, get Lingon X 5 from the official website instead of Lingon 3 from the Mac App Store; Lingon X 5 is more powerful because it is not limited by Apple's sandboxing.
There's also launched.zerowidth.com, an interactive online tool for creating the .plist files that launchd uses.
launchd.info is also a good resource if you want to set them up manually. Apple's documentation is available too.
If you are having problems with commands not working, I recommend trying these troubleshooting steps:
Convert all your commands to use absolute paths (e.g. npm -> /usr/local/bin/npm). You can find the absolute path of a command by running which with the name of the command (e.g. which npm)
Run your commands from within bash using /bin/bash -c (e.g. /bin/bash -c "/usr/local/bin/npm start")
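For reference, a manually created launchd job is just a .plist in ~/Library/LaunchAgents; a minimal sketch for one of the apps (the label, paths, and working directory below are placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- label and paths are placeholders for your own app -->
    <key>Label</key>
    <string>com.example.app1-be</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>-c</string>
        <string>/usr/local/bin/npm start</string>
    </array>
    <key>WorkingDirectory</key>
    <string>/Users/you/projects/app1/backend</string>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
Load it with launchctl load ~/Library/LaunchAgents/com.example.app1-be.plist; RunAtLoad starts it at login and KeepAlive restarts it if it dies.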
One thing you can do is dockerize your applications.
With Docker you can run your applications on your computer in lightweight, virtual-machine-like environments known as containers.
This has some advantages: for example, you can run your app on port 80 inside the container and expose a different port to your machine, and you can start or stop the container and so forth.
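For example, each app can keep its natural port inside its container while Docker maps your chosen host port onto it (image names here are placeholders):
docker run -d --restart=always -p 4201:4200 app1-fe   # host 4201 -> container 4200
docker run -d --restart=always -p 3001:3000 app1-be   # host 3001 -> container 3000
-d runs them in the background, and --restart=always brings them back up after a reboot.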
Go to https://www.docker.com/what-docker for more information.

How to start Jetty with multiple standalone instances?

I am trying to start two instances of Jetty on different ports (one on 8080 and the other on 443).
I created two jetty.base directories using start.jar with the parameter --add-to-startd.
When I run "java -jar /opt/jetty/start.jar" in the first app directory it starts normally, port 8080.
When I run "java -jar /opt/jetty/start.jar" in the second app directory, it kills the first process. And after that starts normally, port 443.
If I change the order the same thing happens.
How can I run more than one instance of Jetty without one killing the other?
Jetty: jetty-distribution-9.3.0.M2
Java: jdk1.8.0_25
Operating system: Linux CentOS release 6.6
I found the problem: the process died because the server was out of memory.
I didn't see any exception in the logs, but while monitoring the machine I saw that when memory was close to 100% the process died (which is consistent with the Linux OOM killer terminating it).
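Capping each instance's heap so both fit in RAM avoids this; a sketch, where the base directories and the -Xmx value are placeholders to size for your machine:
(cd /opt/jetty-base-8080 && java -Xmx256m -jar /opt/jetty/start.jar) &
(cd /opt/jetty-base-443 && java -Xmx256m -jar /opt/jetty/start.jar) &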

Docker orchestration

I know this is a bit long question but any help would be appreciated.
The short version is simply that I want to have a set of containers communicating with each other on multiple hosts and to be accessible with SSH.
I know there are tools for this but I wasn't able to do it.
The long version is:
There is a software that has multiple components and these components can be installed in any number of machines. There is a client- and a server-side for this software.
Both the client-side and the server-side components communicate via UDP ports.
The server uses CentOS, the client uses Microsoft Windows.
I want to create a testing environment that consists of 4 containers and these components would be spread across these containers and a client side machine.
The docker host machine is Ubuntu, the containers are CentOS.
If I install all the components in one container, it works; if they are spread across more than one, it does not. According to the logs it is working, but it is not.
I read that you need to link the containers or use an orchestrator like Maestro to do this, but I wasn't able to do it so far.
What I want is to be able to start a set of containers that communicate with each other, on one or multiple hosts. I want to be able to access these containers with SSH, so the sshd service should start automatically.
Also, it would be great to use DDNS for the containers, because the names would be reused again and again while the IP addresses can change; but this is just the cherry on top.
Some specifications:
The host is a fresh install of Ubuntu 12.04.4 LTS x86_64
Docker is the latest version (lxc-docker 0.10.0); I used the native driver.
The containers are plain CentOS images pulled from the Docker index. I installed some basic stuff on the containers: openssh-server, mc, java-jre.
I changed the docker network to a network that can be reached from the internal network.
iptables rules were cleared because I didn't need them, but I also tried with them in place, with no luck.
The /etc/default/docker file changes:
DOCKER_OPTS="--iptables=false"
or with the exposed API:
DOCKER_OPTS="-H tcp://0.0.0.0:4243 --iptables=false"
The ports that the software uses are between 6000 and 9000, but I tried opening all ports.
An example of run command:
docker run -h <hostname> -i -t --privileged --expose 1-65535/udp <image> /bin/bash
I also tried with exposed API:
docker -H :4243 run -h <hostname> -i -t --privileged --expose 1-65535/udp <image> /bin/bash
I'm not giving up but I would appreciate some help.
You might want to take a look at the in-development docker swarm project. It will allow you to treat your set of test machines as a cluster to which you can deploy containers.
You could simply use fig for orchestration and link the containers together instead of doing all that DDNS and port-forwarding stuff. The fig.yml syntax is pretty straightforward.
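A minimal fig.yml for this setup might look like the following sketch (image names and ports are placeholders):
server:
  image: my-centos-server
  expose:
    - "6000"            # one of the ports the components talk on
client:
  image: my-centos-client
  links:
    - server            # reachable by the hostname "server" from this container
  ports:
    - "2222:22"         # reach the container's sshd from the host
fig up starts the whole set, and linked containers get each other's addresses injected automatically, so no DDNS is needed for the names.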
You can use weave for the networking part. These tutorials may help:
https://github.com/weaveworks/weave
http://xmodulo.com/networking-between-docker-containers.html

How to simultaneously deploy Node.js web app on multiple servers with Jenkins?

I'm going to deploy a Node.js mobile web application on two remote servers (Linux OS).
I'm using SVN server to manage my project source code.
To manage the app simply and clearly, I decided to use Jenkins.
I'm new to Jenkins, so installing and configuring it was quite a difficult task.
But I couldn't find out how to set up Jenkins to deploy to both remote servers simultaneously.
Could you help me?
You should look into supervisor. It's language- and application-type-agnostic; it just takes care of (re)starting applications.
So in your jenkins build:
You update your code from SVN
You run your unit tests (definitely a good idea)
You either launch an svn update on each host or copy the current content to them (I'd recommend the latter, because there are many ways for SVN to fail, and it also lets you include the SVN_REVISION in some .js file, for instance)
You execute on each host: fuser -k -n tcp $DAEMON_PORT; this kills the currently running application on port $DAEMON_PORT (the one your node.js app uses). These steps are combined into a single shell sketch after the config below.
And the best part is that supervisor will automatically start your node.js app at system startup (provided supervisor is correctly installed; apt-get install supervisor on Debian) and restart it in case of failure.
A supervisord subconfig for a node.js app looks like this:
# /etc/supervisor/conf.d/my-node-app.conf
[program:my-node-app]
user = running-user
environment = NODE_ENV=production
directory = /usr/local/share/dir_app
command = node app.js
stderr_logfile = /var/log/supervisor/my-node-app-stderr.log
stdout_logfile = /var/log/supervisor/my-node-app-stdout.log
There are many configuration parameters.
Note: there is also a node.js package called supervisor; it's not the one I'm talking about, and I haven't tested it.
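Tying those steps together, the per-host part of the Jenkins job can be a single shell build step; a sketch, where deploy@app-host, the directory, and $DAEMON_PORT are placeholders:
# copy the new build to the host (or run svn update there instead)
scp -r . deploy@app-host:/usr/local/share/dir_app
# kill the running app; supervisord notices the exit and restarts it with the new code
ssh deploy@app-host "fuser -k -n tcp $DAEMON_PORT"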
On Linux, you need to SSH to your hosts and run commands to get the application updated:
Work out the application-update workflow in a shell script. In particular, you need to daemonize your node app so that a completed Jenkins job execution will not kill your app when it exits. Here's a nice article on how to do this: Running node.js Apps With Upstart; or you can use pure Node.js tooling like forever. Assume you worked out a script under /etc/init.d/myNodeApp
SSH to your Linux hosts from Jenkins, making sure the SSH private key file has been copied to /var/lib/jenkins/.ssh/id_rsa and is owned by the jenkins user
Here's an example shell step in jenkins job configuration:
ssh <your application ip> "service myNodeApp stop; cd /ur/app/dir; svn update; service myNodeApp restart"
