ActivePerl on Windows running multiple instances of a script simultaneously? - multithreading

I've been unable to find anything which really answers the question that I have, so if anyone can shed some light on the matter, I'd be grateful.
I'm not a Unix/Linux guy, so I'm running ActivePerl on Windows NT.
The scenario is this:
webscript.cgi calls background.pl via system($cmd) to do some dirty work while the user continues browsing the site. This works fine and all, but what I am wondering is THIS:
What happens if MULTIPLE calls are made within seconds of each other, as a result of users' actions, to run background.pl? Will multiple instances of background.pl run simultaneously? Will one instance have to complete before the next can begin? Will any subsequent instances called simply fail? Or will my machine begin to smoke and then perhaps explode? (chuckle)
Again, this is running in a Windows environment so I'm not sure if the rules with ActivePerl are a bit different than running in a Unix environment. Thanks to anyone who might have some information about this!

The web server doesn't know anything about the process running background.pl, so it does what it always does. It runs webscript.cgi which launches background.pl.
Now, if webscript.cgi waits for background.pl to complete, you could run into a situation where the web server stops accepting requests because all of its workers are running webscript.cgi. It will resume once a script ends.
All of this is very easy to test.
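For example, here is a minimal test sketch of the non-blocking launch, assuming ActivePerl on Win32, where system(1, LIST) is the documented way to spawn a child without waiting for it; the script names come from the question, everything else is illustrative:

    #!/usr/bin/perl
    # webscript.cgi (sketch): fire off background.pl without waiting for it.
    use strict;
    use warnings;

    print "Content-type: text/plain\n\n";

    # Blocking form: ties up this web server worker until background.pl exits.
    # system('perl background.pl');

    # Non-blocking form on Win32 Perl: system(1, LIST) spawns the child and
    # returns its process identifier immediately, so the worker is freed and
    # every request that reaches this line starts its own copy of background.pl.
    my $pid = system(1, 'perl', 'background.pl');

    print "Started background job (pid $pid)\n";

Fire off a few requests in quick succession and check Task Manager: you should see several perl.exe processes for background.pl running side by side.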
Will one instance have to complete before the next can begin?
No.
Will any subsequent instances called simply fail?
No.
Or, will my machine begin to smoke and then perhaps explode? (chuckle)
A poorly configured server could indeed be brought down by trying to run too many programs at once.

Related

Explain the advantage of using a cron job inside your code versus outside your code?

I have to do a repetitive task in Node.js and I've seen there are existing packages for this, like this one:
https://www.npmjs.com/package/node-cron
The platform where I'm currently hosted also proposes built-in cron jobs:
https://www.netlify.com/docs/webhooks/
So my question is: when is it better to use the platform, and when a package?
Thanks.
From the URL posted I didn't see any method of setting up a cron job using webhooks, unless you were thinking of setting up a webhook that listens for a POST which is sent using a Linux cron job or the like?
Regardless, on the actual question of using a platform or a package: they each have pros and cons, but based purely on your question I would go with the platform.
If you choose to use a package, you will have to write the code that calls the package (which you need to test, maintain and run). You need to ensure that the Node process is always up and running, that it is re-spawned if it dies or exits, and that it gets kicked off again if the operating system reboots. All of these problems can be easily solved (with PM2, for instance), but the fact is you need to think of the problems and solve them yourself, or the cron job might not run when you want it to.
When using the platform you know that it is well tested, that it will work as documented, and that it will be resilient to failure modes that you might not be aware of.

Why use nohup when the app can be run as a system service?

I put this question on Stack Overflow because I found lots of questions on the topic here already.
Short introduction
Simply put, nohup can be used to run apps in the background and keep them running after, e.g., the user logs off or the terminal or SSH session is closed.
There are many example questions here on Stack Overflow, like this or that.
My question is simple.
Why opt for nohup, when there are options like upstart, systemd, ... which manage the app as a service in a much more convenient way (runlevels, ...)?
Reading the many questions on similar topics, the only option seems to be nohup. The answer is almost never something like: "... use an upstart script, so it is all handled for you ..."
I would mainly go with e.g. upstart, except maybe for a quick and dirty test scenario.
Am I missing something important?
nohup is quick, easy, doesn't require root access and doesn't make permanent changes in the system. That's why many people use it (or try to use it) instead of configuring services.
Running things in the background without any supervision is usually a bad idea, although there are many legitimate use cases which don't fit the traditional service model. For example:
Background process might be needed only sometimes, after certain user actions.
More than one instance might be needed. For example: one per user or one per session.
Process might not need to be running all the time. It just quits after doing its job.
Some real-world examples (which use something like nohup and would be hard to implement as system services; a minimal sketch of the underlying pattern follows the list):
git will sometimes run git gc in the background to optimize the repository without blocking the user's work
adb will start its service in the background and keep it running until the user asks to terminate it
Some compilers have an option to keep running in the background to reduce the startup times of subsequent invocations
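Each of those relies, roughly, on the same fork-and-detach pattern. Here is a minimal Perl sketch of that pattern; the worker body and redirect targets are illustrative, not taken from any of the tools above:

    #!/usr/bin/perl
    # Sketch: detach a background worker from the terminal so it survives
    # logout without nohup, because it creates its own session and drops
    # the controlling terminal.
    use strict;
    use warnings;
    use POSIX qw(setsid);

    sub spawn_detached {
        my ($job) = @_;                         # code ref with the actual work
        defined(my $pid = fork()) or die "fork failed: $!";
        return if $pid;                         # parent returns to the user right away
        setsid() or die "setsid failed: $!";    # new session, no controlling terminal
        open STDIN,  '<',  '/dev/null' or die $!;
        open STDOUT, '>',  '/dev/null' or die $!;
        open STDERR, '>&', \*STDOUT    or die $!;
        $job->();                               # do the work, then quit on its own
        exit 0;
    }

    spawn_detached(sub { sleep 60 });           # e.g. housekeeping that outlives the shell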
Your understanding is correct: given the way those questions were asked, the natural answer is nohup. upstart would be the answer to a different question, such as "How to make sure an application keeps running on Linux".

How do I create an application that runs in the background and is interactive in Linux?

I want to create an application that runs in the background in Linux (a daemon) and basically plays a music file, or any given sound, at set times (5 times) every single day. I want this daemon to start when the computer is started in terminal mode (non-GUI). I want to know if this is possible and, if so, what considerations, tools, and programming language would be the most efficient for doing it. This will be a dedicated computer that will only be executing this task, so any recommendations on how I can maximize efficiency while disabling other features that are not required for this task will be appreciated.
Also, could you please explain how processes and tasks work in the terminal (non-GUI)? I always thought the terminal was something like CMD in Windows and could only run tasks one at a time.
EDIT: I need the sound to run at variable times, I'll be fetching these times from a website. Any suggestions regarding how to achieve this?
Thanks for the help and sorry for any shortcoming in the questions or my research.
Look at using cron to run your tasks. cron is a very flexible scheduling utility built in to most Linux distributions.
Basically, with cron you specify a task to run (your main program, or maybe just a sound-playing program), all of its arguments, and when it runs. cron takes care of running it, and will even send you "mail" if the job produces any output (such as errors).
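To make that concrete, the scheduled task can be as small as a one-line wrapper around a sound player. A sketch, assuming ALSA's aplay is available; the file path and the five times in the crontab comment are placeholders:

    #!/usr/bin/perl
    # play_alarm.pl - tiny task for cron to run at fixed times.
    # Example crontab entries (edit with `crontab -e`), playing the sound at
    # 08:00, 11:00, 14:00, 17:00 and 20:00 every day:
    #   0 8,11,14,17,20 * * * /usr/bin/perl /home/user/play_alarm.pl
    use strict;
    use warnings;

    my $sound = '/home/user/sounds/alarm.wav';   # placeholder path
    system('aplay', '-q', $sound) == 0
        or die "could not play $sound: $?\n";    # any output here gets mailed to you by cron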
You can make a process fork into a subprocess of your terminal, i.e. you are able to run more than one task at a time, by putting an & after your terminal command:
> cmd&
> [you can type other commands here but the "cmd" program is still running]
However, for services you generally don't have to worry about starting them as subprocesses, because the system already knows to do this. Here's a good question from Super User that has an example of a working service. Simply place your service as a shell script in /etc/init.d and it will be automatically started as a service.

Oracle temporary ORA-12505 error after Linux startup

I am experiencing very strange behavior with Oracle; maybe somebody can help me. Let me summarize it real quick:
My OS of choice is Debian Linux, and I am using Oracle XE 11.0.2.0. On Linux startup, I run a script file which is located under /etc/init.d/. I added the following line to make Oracle start on system start:
/etc/init.d/oracle-xe start
Right after this line, I run my application from the script. My application heavily relies on the Oracle DB, so once Oracle starts, I am positive that my application will run OK. Unfortunately, that assumption seems wrong. Here's why: I set up a similar configuration on 3 machines, and on 2 of them I see weird behavior: after system start, the Oracle DB does not respond to connection requests, even though the oracle-xe start command has completed.
My observation is the following: if I run my application right after oracle-xe start is executed, I receive ORA-12505 errors ("TNS listener does not currently know of SID") for at least a minute. After a minute everything stabilizes and my application starts working OK. One minute without a DB on system startup is not acceptable for me performance-wise, therefore I am trying to solve this problem.
Surprisingly, it does not happen on one of the other Linux boxes I have here, and I am not quite sure what is different on that box. I compared the ora files but couldn't find any difference; it seems like a wild goose chase...
I would be so grateful if anybody has experienced and solved this problem before and can share that valuable solution with me.
I think I found the problem: it looks like I am starting the oracle-xe instance before the network interfaces are assigned an IP address, and in that case it takes some time for Oracle to accept connections. Fixing it that way would require me to set a static IP on the Linux boxes, which is something I don't want. Is there a solution so that I can still assign IP addresses later on?
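Independent of the IP-assignment question, one way to keep the application from failing during that startup window is a bounded connect-retry loop before the real work starts. A rough sketch with DBI/DBD::Oracle; the connect string, credentials and timings are placeholders:

    #!/usr/bin/perl
    # Sketch: wait (up to a limit) for the XE listener to accept connections
    # instead of dying on the first ORA-12505 right after boot.
    use strict;
    use warnings;
    use DBI;

    my $dsn = 'dbi:Oracle:host=localhost;port=1521;sid=XE';   # placeholder
    my $dbh;
    for my $attempt (1 .. 30) {                               # up to ~150 seconds
        $dbh = DBI->connect($dsn, 'app_user', 'app_password', { PrintError => 0 });
        last if $dbh;
        warn "attempt $attempt failed: " . ($DBI::errstr || 'unknown error') . "\n";
        sleep 5;
    }
    die "database never became reachable\n" unless $dbh;
    # ... start the actual application work here ...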

Linux: What should I use to run terminal programs based on a calendar system?

Sorry about the really ambiguous question; I really have no idea how to word it, though hopefully I can give you more detail here.
I am developing a project where a user can log into a website and book a server to run a game for a specific amount of time. When the time is up the server stops running and the players on the server are kicked off. The website part is not a problem, I am doing this in PHP and everything works. It has a calendar system to book a server and can generate config files based on what the user wants.
My question is: what should I use to run the specific game server on the Linux box with those config files at the correct time? I have got this working with bash scripts and cron, but it seems very inelegant. It literally uses FTP to connect to the website so it can download all the necessary config files and put them in a folder for that game and time. I was wondering if there was a better way of doing this. Perhaps writing a program in C, but I am not sure how to go about doing this.
(I am not asking for someone to hold my hand and tell me "write this code here", just some ideas of a better way of approaching this problem)
Thanks so much guys!
Edit: The web server is a totally different machine. I would theoretically like to have more than one game server, where each of them "connects" (at the moment via FTP) to the web server, gets a file saying what it has to do at a specific time, downloads any associated files, and then disconnects.
I think at is better suited than cron for running one-time jobs.
For a better approach to the file downloading etc., you should give more details on your setup (e.g., are the website and the game server on the same machine, or on the same network? etc.).
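To make the at suggestion concrete, here is a rough sketch of queueing the start and stop of a booked slot as one-off jobs; the script paths, config name and times are placeholders:

    #!/usr/bin/perl
    # Sketch: queue one-off jobs with at(1) for a booked game-server slot.
    use strict;
    use warnings;

    # Pipe a shell command to `at`, which runs it once at the given time.
    sub schedule_at {
        my ($time, $cmd) = @_;
        open(my $at, '|-', 'at', $time) or die "cannot run at: $!";
        print {$at} "$cmd\n";
        close $at or die "at reported a problem: $?";
    }

    # Start the server at 18:00 with the generated config, stop it again at 20:00.
    schedule_at('18:00', '/opt/games/start_server.sh /opt/games/configs/booking42.cfg');
    schedule_at('20:00', '/opt/games/stop_server.sh booking42');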
You need a distributed task scheduler. With that, you can:
Schedule command "X" to be run at a certain time.
Specify the machine (or ask it to pick a machine from a pool of available machines)
The web server would send a request to this scheduler, via the command line or via a web service, when a user selects a game server and a time.
You can have a look at: http://www.acelet.com/super/SuperWatchdog/index.html
EDIT:
One more option: http://jobscheduler.sourceforge.net/
