multiple processes triggered on linux - linux

I have multiple entries for a process "solr" on Linux. It is installed as a service on the system and I can see the following entries:
Inside the file rc.local:
# Start solr on VM start-up
/sbin/service solr start
Also under the following directories:
directory: rc1.d
entry: K29solr -> ../init.d/solr
directory: rc2.d
entry: K29solr -> ../init.d/solr
directory: rc3.d
entry: S79solr -> ../init.d/solr
directory: rc4.d
entry: S79solr -> ../init.d/solr
directory: rc5.d
entry: S79solr -> ../init.d/solr
My question is: will these multiple entries lead to this solr process being started multiple times? Currently only one process is running, but the logs suggest another process might have been triggered, and I just want to be sure whether these entries could be the reason. I am no Linux expert, so please bear with me.

It seems like you want this process to run regardless of the runlevel (the rcN.d directories). You should only need the one entry in rc.local.
Here is more information on runlevels and startup scripts:
https://www.linux.com/news/enterprise/systems-management/8116-an-introduction-to-services-runlevels-and-rcd-scripts
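If you do want to remove the duplicate runlevel entries and keep only the rc.local line, here is a sketch of the usual cleanup commands (which one applies depends on your distribution; this assumes the standard SysV tools are present):
# RHEL/CentOS style: disable the solr init script at boot
sudo chkconfig solr off
# Debian/Ubuntu style: remove the rcN.d symlinks entirely
sudo update-rc.d -f solr remove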

When the system starts, it finds the default run level from the /etc/inittab file.
It will then run any scripts that have symbolic links in the relevant rcN.d directory.
If the symbolic link starts with S it will pass "start" to the linked script; if it starts with K it will pass "stop" to the linked script.
That is why you will find mostly K-prefixed symbolic links in the rc0.d and rc6.d directories: runlevel 0 is shutdown and 6 is reboot.
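For example, using the entries from the question (illustrative; this is standard SysV init behavior):
# Booting into runlevel 3 runs the S-prefixed links with "start":
/etc/rc3.d/S79solr start     # equivalent to /etc/init.d/solr start
# Entering runlevel 1 runs the K-prefixed links with "stop":
/etc/rc1.d/K29solr stop      # equivalent to /etc/init.d/solr stop
Note that with both the rc.local line and the S79solr link in place, "start" is invoked twice at boot; whether that yields two processes depends on whether the init script checks for an already-running instance.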

Related

Tomcat setenv.sh not being picked up

I have Tomcat 9 on Linux as an on-demand service: I go into /bin and start it. I have a simple setenv.sh, shown below, that now fails with the error shown after it. Everything is standard, with no other changes. Tomcat does start up, but I need those options working. How could this have gone from working to non-working? How can I get it working and loading setenv.sh again?
setenv.sh: CATALINA_OPTS=-Xmx512m -Djasypt.encryptor.password=123
On startup or shutdown I get this first message; of course catalina.sh is in the executing directory.
./catalina.sh: 1: /usr/tomcat/tc9/bin/setenv.sh:
-Djasypt.encryptor.password=123: not found
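The error suggests the shell is parsing -Djasypt.encryptor.password=123 as a command name because the assignment is unquoted (a space ends an unquoted shell assignment). A minimal sketch of a quoted setenv.sh, using the values from the question:
# setenv.sh - quote the whole value so the space does not end the assignment
CATALINA_OPTS="-Xmx512m -Djasypt.encryptor.password=123"
catalina.sh sources setenv.sh, so a plain assignment like this is picked up without an export.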

How to run two shell scripts at startup?

I am working with Ubuntu 16.04 and I have two shell scripts:
run_roscore.sh : This one fires up a roscore in one terminal.
run_detection_node.sh : This one starts an object detection node in another terminal and should start up once run_roscore.sh has initialized the roscore.
I need both the scripts to execute as soon as the system boots up.
I made both scripts executable and then added the following command to cron:
@reboot /path/to/run_roscore.sh; /path/to/run_detection_node.sh, but it is not running.
I have also tried adding both scripts to the Startup Applications, using this command for roscore: sh /path/to/run_roscore.sh, and the following command for the detection node: sh /path/to/run_detection_node.sh. It still does not work.
How do I get these scripts to run?
EDIT: I used the following command to see the system log for the CRON process: grep CRON /var/log/syslog and got the following output:
CRON[570]: (CRON) info (No MTA installed, discarding output).
So I installed an MTA, and then the syslog shows:
CRON[597]: (nvidia) CMD (/path/to/run_roscore.sh; /path/to/run_detection_node.sh)
I am still not able to see the output (which is supposed to be a camera stream with detections, as I see it when I run the scripts directly in a terminal). How should I proceed?
Since I got this working eventually, I am gonna answer my own question here.
I did the following steps to get the script running from startup:
Changed the type of the script from shell to bash (extension .bash).
Changed the shebang statement to be #!/bin/bash.
In Startup Applications, give the command bash path/to/script to run the script.
Basically, when I changed the shell type from sh to bash, the script started running as soon as the system boots up.
Note, in case this helps someone: my intention in having run_roscore.bash as a separate script was to run roscore as a background process. You can run it directly from a single script (which also runs the detection node) by putting roscore & as a command before the ROS node starts. This fires up the master as a background process and leaves the same terminal open for the following commands to execute.
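A minimal sketch of that single-script approach (the node script path and the wait time are assumptions; a fixed sleep is a crude stand-in for properly polling the master):
#!/bin/bash
# Start the ROS master in the background
roscore &
# Crude wait for the master to initialize; adjust or poll as needed
sleep 5
# Start the detection node from the same script
/path/to/run_detection_node.bash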
If you can install immortal, you can use the require option to start your services in sequence. For example, this could be the run config for /etc/immortal/script1.yml:
cmd: /path/to/script1
log:
    file: /var/log/script1.log
wait: 1
require:
    - script2
And for /etc/immortal/script2.yml
cmd: /path/to/script2
log:
    file: /var/log/script2.log
What this will do is try to start both scripts at boot time; the first one, script1, will wait 1 second before starting and will also wait for script2 to be up and running. See more about the wait and require options here: https://immortal.run/post/immortal/
Depending on your operating system you will need to configure/set up immortaldir; here is how to do it for Linux: https://immortal.run/post/how-to-install/
Going deeper into the topic of supervisors, there are more alternatives; you can find some here: https://en.wikipedia.org/wiki/Process_supervision
If you want to make sure that "Roscore" (whatever it is) gets started when your Ubuntu starts up then you should start it as a service (not via cron).
See this question/answer.
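For example, a minimal sketch of such a unit (the unit name, description, and script path are assumptions; Ubuntu 16.04 uses systemd):
sudo tee /etc/systemd/system/roscore.service > /dev/null <<'EOF'
[Unit]
Description=Start roscore at boot
After=network.target

[Service]
Type=simple
ExecStart=/path/to/run_roscore.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
# Reload unit files and start the service at every boot
sudo systemctl daemon-reload
sudo systemctl enable roscore.service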

Pm2 changing log file location

I have a couple of questions regarding pm2:
How can I change the location of the server-error-0.log and server-out-0.log files from c:\users\user\.pm2\logs to another drive, due to restricted access to the server's C drive?
Can I log errors and info to a database instead of a log file? Do I need to write a separate module for that, or is there any way to achieve this?
How can I change the location of the log files?
To change pm2's log file location, there are two solutions: define the log paths as parameters when the pm2 command is executed (-l, -o, -e), or start pm2 from a configuration file.
For the parameter solution, here is an example:
pm2 start app.js -o ./out.log -e ./err.log
If you don't want to define the log paths every time pm2 is executed, you can generate a configuration file, define error_file and out_file in it, and start pm2 from that:
Generate a configuration file with pm2 ecosystem simple. This generates a file ecosystem.config.js with the following content:
module.exports = {
  apps : [{
    name   : "app1",
    script : "./app.js"
  }]
}
Define error_file (for error log) and out_file (for info log) in the file, such as:
module.exports = {
  apps : [{
    name       : "app1",
    script     : "./app.js",
    error_file : "./err.log",
    out_file   : "./out.log"
  }]
}
Delete existing processes in pm2:
pm2 delete <pid>
You can get pid by doing:
pm2 status
Start the process from the configuration file:
pm2 start ecosystem.config.js
In this way, the logs are saved to ./err.log and ./out.log.
Please refer to the documentation for detailed information.
Can I log the error and info in database instead of a log file?
I didn't find any resources on this in the official documentation. It seems you need to write code to save logs to the database yourself.
Just wanted to add to @shaochuancs' answer: before doing step 3, make sure you delete the old process. If you don't delete the old process, the changes you made to your process file will not take effect after you start your app.
You will need to issue this command before doing step 3 above:
pm2 delete <pid>
In case you want pm2 on startup with the changed log paths:
pm2 delete all
pm2 start ecosystem.js
pm2 save
pm2 startup
If you want to write both the error log and the console log to the same file (this can be a use case; for example, I wanted the logs in one file to push to ELK), you can use -l:
-l --log [path] specify filepath to output both out and error logs
Here is an example:
pm2 start server.js -l /app/logs/server.log
After making changes, do not forget to run this command, as mentioned in the answer above:
pm2 delete <pid>

How to reload/restart pm2 with Flightplan and make it aware of a symlinked directory?

I am using Flightplan to deploy my web service built with Node.js.
My deployment script uploads the new release to a new directory which has a timestamp or some random characters in its name. I keep all my releases on my server so I can roll back easily by just changing the link to any specific release, and I get zero-downtime deployment.
The main directory is named after the service and is just a symbolic link that gets changed to the new release's directory after uploading it.
ln -snf ~/tmpDir ~/appName
My problem is that when pm2 restarts my server, it uses the original path of the previous release; it doesn't follow the symbolic link to the new directory that the link is pointing to.
Is there any way to restart or reload pm2 and make it aware of that symbolic link?
Short answer - you should not run pm2 from inside a symlinked path that changes.
pm2 will always pick up the old path the symlink pointed to unless you use the pm2 kill command.
Solution - create a new directory and make it the parent directory of the symlink and your code directories (mohmal-144 etc.). For the sake of understanding, let's call this directory deploy.
Now you should have the structure below:
/home/deploy
/home/deploy/mohmal -> /home/deploy/mohmal-144
If you are using pm2, you should use an ecosystem.json file (pm2's config for starting apps). You can name this file anything you want; for the sake of understanding, let's call it ecosystem.json.
Inside this ecosystem.json file, in the apps section, add the cwd key; cwd should point to the path of the symlink (not the path the symlink points to). See the example below.
{
  "apps": [
    {
      "name": "mohmal",
      "script": "src/bin/server.js",
      "exec_mode": "cluster",
      "instances": "0",
      "cwd": "/home/deploy/mohmal",
      "error_file": "/var/log/mohmal/error.log",
      "out_file": "/var/log/mohmal/out.log",
      "merge_logs": true
    }
  ]
}
Place this file in the parent directory, which is named deploy in this example.
Now use the pm2 start and pm2 restart commands from this directory only.
If you are already running pm2 on the system, run pm2 kill once to clear the old processes.
Then, with the suggested changes in place, you will never have to pm2 kill the processes again.
Changes to the symlink will also be reflected.
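A sketch of what a deploy then looks like with this layout (mohmal-145 stands in for the next release directory; the commands themselves are standard coreutils/pm2):
# Point the symlink at the new release, then restart from the config
ln -snf /home/deploy/mohmal-145 /home/deploy/mohmal
cd /home/deploy
pm2 restart ecosystem.json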

Fuse Fabric: How to delete a configuration PID from a profile?

I began modifying a profile and made some mistakes along the way.
Because of this I have PIDs in the profile which I'd like to delete entirely.
These can be seen in the fabric:profile-display default output shown at the bottom of this post.
They are:
http:
patch.repositories=http:
org.ops4j.pax.url.mvn.repositories=http:
I can't find the correct command to delete this. I've tried:
config:delete org.ops4j.pax.url.mvn.repositories=http:
which successfully completes. But the default profile still lists this pid.
I've also tried:
fabric:profile-edit --delete -p org.ops4j.pax.url.mvn.repositories=http: default
which fails with:
Error executing command: String index out of range: -1
This indicates a property path /property must be specified.
Simply appending / doesn't work either.
One more problem is that I have a pid with a seemingly empty name, as indicated by this line:
PID: (nothing follows this output prefix).
Output of fabric:profile-display default:
Profile id: default
Version : 1.0
Parents :
Associated Containers :
Container settings
----------------------------
Repositories :
mvn:org.fusesource.fabric/fuse-fabric/7.0.1.fuse-084/xml/features
Features :
fabric-agent
karaf
fabric-jaas
fabric-core
Agent Properties :
patch.repositories = http://repo.fusesource.com/nexus/content/repositories/releases,
http://repo.fusesource.com/nexus/content/groups/ea
org.ops4j.pax.url.mvn.repositories = http://repo1.maven.org/maven2,
http://repo.fusesource.com/nexus/content/repositories/releases,
http://repo.fusesource.com/nexus/content/groups/ea,
http://repository.springsource.com/maven/bundles/release,
http://repository.springsource.com/maven/bundles/external,
http://scala-tools.org/repo-releases
org.ops4j.pax.url.mvn.defaultRepositories = file:${karaf.home}/${karaf.default.repository}#snapshots,
file:${karaf.home}/local-repo#snapshots
Configuration details
----------------------------
PID:
PID: org.ops4j.pax.url.mvn
org.ops4j.pax.url.mvn.useFallbackRepositories false
org.ops4j.pax.url.mvn.disableAether true
org.ops4j.pax.url.mvn.repositories ${profile:org.fusesource.fabric.agent/org.ops4j.pax.url.mvn.repositories}
org.ops4j.pax.url.mvn.defaultRepositories ${profile:org.fusesource.fabric.agent/org.ops4j.pax.url.mvn.defaultRepositories}
PID: patch.repositories=http:
PID: org.ops4j.pax.url.mvn.repositories=http:
PID: http:
PID: org.fusesource.fabric.zookeeper
zookeeper.url ${zk:root/ip}:2181
I'd be extremely grateful if someone could point me to the correct command(s).
I had a look at the command-line code for fabric:profile-edit with --delete, and unfortunately this function seems to be designed for deleting key/value pairs from the PID, rather than deleting the PID itself.
(Here's the code for ProfileEdit.java on github)
So basically you can use that command to "empty out" the PIDs, but not to remove them.
fabric:profile-edit --delete --pid mypid/mykey=myvalue myprofile
Knowing that this doesn't help you much, I asked my colleague who sits next to me (and is much smarter than me) and he recommended the following:
Enable the Fuse Management Console with container-add-profile root fmc
Open fmc in a browser (mine is on localhost at port 8181), go to the Profiles page, and choose your profile from the list
Go to the Config Files tab, find the PID you want to nuke, and click the cross (X).
Et voilà, the PID should be gone. Interested to know if this works for you, including on the "blank" PID...
The following works in Fuse 6.2:
1) for property files (which become PID objects)
# create
profile-edit --resource foobar.properties default
# delete
profile-edit --delete --pid foobar default
2) for arbitrary files
# create
profile-edit --resource foobar.xml default
# delete
only via the hawtio web console
