I am trying to run the DocumentDB Emulator as a Windows service, using the sc utility, on a port different from the default 8081 that it tries to use.
sc create DocumentDBEmulatorService binPath= "path\to\exe\DocumentDB.Emulator.exe /port=8082" start= auto
The service gets created but fails to start, with the following error message:
The DocumentDBEmulatorService service failed to start due to the following error. The DocumentDBEmulatorService did not respond to the start or control request in a timely fashion.
A timeout was reached (30000 milliseconds) while waiting for DocumentDBEmulatorService service to connect.
Is it possible to run the DocumentDB emulator executable as a service, or am I trying to do something that is clearly not possible?
sc will only run an executable that is a proper Windows service (i.e. one that implements ServiceMain).
You can try something like NSSM instead.
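For example, a minimal sketch with NSSM (the service name and path are taken from the question; the NSSM install location is an assumption):

nssm install DocumentDBEmulatorService "path\to\exe\DocumentDB.Emulator.exe" /port=8082
nssm start DocumentDBEmulatorService

NSSM provides the service plumbing (including ServiceMain) itself and runs your executable as a child process, which is why a plain console program works under it.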
See the answers to this question (except the accepted one) for more options.
Background:
I have a Kubernetes cluster on my cloud server, and I deployed a Spring Boot application as a pod. Everything was fine until two weeks ago, but on Feb 22 my application unexpectedly became unreachable.
I ran kubectl exec -it <pod> sh and then curl 127.0.0.1:<port> inside the pod, but got no response. I checked my application logs but couldn't find any error related to this issue. I tried restarting my application, but the same issue occurred again after two days.
I have no idea what causes this issue. Can anyone help me?
When everything is OK, I can call 127.0.0.1:18890 and get a response immediately; once the issue happens, the request times out. Only this Kubernetes service has this problem, the others seem normal.
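For reference, the checks described above, plus the usual follow-ups, look like this (the pod name is a placeholder; the describe/logs steps are standard diagnostics, not from the question):

kubectl exec -it <pod> -- sh       # shell into the pod
curl 127.0.0.1:18890               # inside the pod; hangs when the issue occurs
kubectl describe pod <pod>         # restarts, probe failures, recent events
kubectl logs <pod> --previous      # logs from the previous container, if it crashed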
I have written a simple Python 3.7 Windows service and installed it successfully. Now I am facing this error:
"Error starting service: The service did not respond to the start or control request in a timely fashion."
Please help me to fix this error.
Thanks
One of the most common errors from Windows when starting your service is Error 1053: The service did not respond to the start or control request in a timely fashion. This can occur for multiple reasons, but there are a couple of things to check when you get it:
Make sure your service actually stops: note that the main method has an infinite loop. A typical service template breaks out of that loop when the stop event occurs, but that will only happen if you call win32event.WaitForSingleObject somewhere within the loop, assigning its return value to rc (see the sketch after this list).
Make sure your service actually starts: same as the first point, if your service starts and does not stay in the infinite loop, it will exit, terminating the service.
Check that your system and user PATH contain the necessary routes: the DLL path is extremely important for your Python service, as it is how the script interfaces with Windows to operate as a service. Additionally, if the service is unable to load Python, you are also hooped. Check by typing echo %PATH% in both a regular console and a console running with Administrator privileges to ensure all of your paths have been loaded.
Give the service another restart: changes to your PATH variable may not kick in immediately; it's a Windows thing.
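A minimal sketch of such a service template, assuming pywin32 is installed (the class and service names are illustrative):

import win32serviceutil
import win32service
import win32event
import servicemanager

class MyService(win32serviceutil.ServiceFramework):
    _svc_name_ = 'MyService'
    _svc_display_name_ = 'My Python Service'

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        # Event that SvcStop signals so the main loop can exit cleanly.
        self.stop_event = win32event.CreateEvent(None, 0, 0, None)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.stop_event)

    def SvcDoRun(self):
        servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
                              servicemanager.PYS_SERVICE_STARTED,
                              (self._svc_name_, ''))
        rc = None
        while rc != win32event.WAIT_OBJECT_0:
            # Do the service's actual work here, then wait up to 5 s
            # for the stop event; rc picks up the updated value.
            rc = win32event.WaitForSingleObject(self.stop_event, 5000)

if __name__ == '__main__':
    win32serviceutil.HandleCommandLine(MyService)

Install and start it with python your_service.py install followed by python your_service.py start.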
I have a Node app on IBM Cloud and it keeps crashing; most of the time it's not running, even though I've increased the memory per instance to 1 GB. How do I diagnose where the issue is? I'm in a situation where I have to continually check the app and do a manual restart. Here is my manifest.yml:
applications:
- instances: 1
timeout: 600
name: TicketSokoChatbot
buildpack: sdk-for-nodejs
command: npm start
memory: 1024M
random-route: true
Here is the error:
an instance of the app crashed: Instance never healthy after 1m0s: Failed to make TCP connection to port 8080: connection refused; process did not exit
When running on Cloud Foundry, the port is set for you. You must use that port, which you can find in the environment variable PORT, e.g.
app.listen(process.env.PORT || 3000);
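In context, a minimal sketch assuming an Express app (the route and response are illustrative):

const express = require('express');
const app = express();

app.get('/', (req, res) => res.send('ok'));

// Bind to the port Cloud Foundry assigns; fall back to 3000 for local runs.
app.listen(process.env.PORT || 3000);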
If the port isn’t the cause of the issue, the next thing you could try is changing the health check timeout.
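For example (assuming the cf CLI is available), cf push TicketSokoChatbot -t 180 sets the app start timeout at push time; in the manifest it is the timeout: attribute, which the manifest above already raises to 600.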
If this doesn't work for you, the Cloud Foundry docs provide information on Troubleshooting; in particular, take a look at the section App Fails to Start. Here is one of the debug steps listed in the Cloud Foundry documentation:
Find the reason app is failing and modify your code. Run cf events APP-NAME and cf logs APP-NAME --recent and look for messages similar to this:
2014-04-29T17:52:34.00-0700 app.crash index: 0, reason: CRASHED, exit_description: app instance exited, exit_status: 1
These messages may identify a memory or port issue. If they do, take that as a starting point when you re-examine and fix your application code.
After trying all of the debug steps, if you are still unable to fix your problem, add more information to your question describing what you have tried.
I recommend that anyone building Cloud Foundry apps get acquainted with the developer-focused Cloud Foundry documentation, Deploying and Managing Applications.
I have a service on Red Hat 7.1 that I control with systemctl start, stop, restart, and status. One time systemctl status returned active, but the application "behind" the service responded with an HTTP code different from 200.
I know that I can use Monit or Nagios to check this and do the systemctl restart, but I would like to know if there exists something by default in systemd, so that I do not need to have other tools installed.
My preferred solution would be to have my service restarted if the HTTP return code is different from 200, fully automatically, without tools other than systemd itself (and maybe with a possibility to notify a HipChat room or send an email...).
I've tried googling the topic - without luck. Please help :-)
The Short Answer
systemd has a native (socket-based) healthcheck method, but it's not HTTP-based. You can write a shim that polls status over HTTP and forwards it to the native mechanism, however.
The Long Answer
The Right Thing in the systemd world is to use the sd_notify socket mechanism to inform the init system when your application is fully available. Use Type=notify for your service to enable this functionality.
You can write to this socket directly using the sd_notify() call, or you can inspect the NOTIFY_SOCKET environment variable to get the name and have your own code write READY=1 to that socket when the application is returning 200s.
If you want to put this off to a separate process that polls your process over HTTP and then writes to the socket, you can do that -- ensure that NotifyAccess is set appropriately (by default, only the main process of the service is allowed to write to the socket).
Inasmuch as you're interested in detecting cases where the application fails after it was fully initialized, and triggering a restart, the sd_notify socket is appropriate in this scenario as well:
Send WATCHDOG_USEC=... to set the amount of time which is permissible between successful tests, then WATCHDOG=1 whenever you have a successful self-test; whenever no successful test is seen for the configured period, your service will be restarted.
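A hedged sketch of the watchdog variant (the paths, timings, and health URL are assumptions, not from the question). The unit might look like:

[Service]
Type=notify
NotifyAccess=all
WatchdogSec=30
Restart=on-failure
ExecStart=/usr/local/bin/myapp

and a polling shim that forwards HTTP health to the notify socket could look like this in Python:

import os
import socket
import time
import urllib.request

# Connect to the datagram socket systemd exposes via NOTIFY_SOCKET.
addr = os.environ['NOTIFY_SOCKET']
if addr.startswith('@'):            # abstract-namespace socket
    addr = '\0' + addr[1:]
sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
sock.connect(addr)
sock.sendall(b'READY=1')            # mark the service as started

while True:
    try:
        r = urllib.request.urlopen('http://127.0.0.1:8080/health', timeout=5)
        if r.status == 200:
            sock.sendall(b'WATCHDOG=1')   # pet the watchdog on a good check
    except OSError:
        pass    # no ping: systemd restarts the unit after WatchdogSec elapses
    time.sleep(10)

NotifyAccess=all matters here because, as noted above, only the service's main process may write to the socket by default.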
I have set up 2 instances under an AWS load balancer, and I have deployed node.js web services + MongoDB on both instances. The load balancer works fine with the web services.
But the problem is that I have one timer service (a node.js service only); its behavior is to update my MongoDB based on some calculation.
This timer service (timer.js) must run on only one AWS instance (out of the 2) at a time, and if that instance goes down, the timer service on the other instance should take over.
I know ELB does not provide this kind of facility. Can anyone please help me get this done?
Condition: at any given time, only one timer service must be running behind the Amazon load balancer.
Thanks.
You would have to implement this yourself, using a locking algorithm on top of a shared data store that supports atomic operations.
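For example, since both instances already share MongoDB, a lease-style lock is one option. A minimal sketch (the collection name, lease length, and runTimerJob are illustrative, and the exact findOneAndUpdate return shape varies by driver version):

const LEASE_MS = 60 * 1000;  // the lock expires if its holder dies

async function tryAcquireLock(db, owner) {
  const now = new Date();
  try {
    // Atomically take the lock if it is free or its lease has expired.
    const res = await db.collection('locks').findOneAndUpdate(
      { _id: 'timer', expiresAt: { $lt: now } },
      { $set: { owner, expiresAt: new Date(now.getTime() + LEASE_MS) } },
      { upsert: true, returnDocument: 'after' }
    );
    return !!res.value && res.value.owner === owner;
  } catch (err) {
    if (err.code === 11000) return false;  // duplicate key: another instance holds it
    throw err;
  }
}

// Both instances poll; only the current lease holder runs the job.
setInterval(async () => {
  if (await tryAcquireLock(db, process.env.HOSTNAME || 'instance-1')) {
    runTimerJob();  // your existing timer.js logic
  }
}, LEASE_MS / 2);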
Alternatively, consider starting a "timer" server in an Auto Scaling group with Min: 1, Max: 1, so Amazon keeps it running. This instance can be a t2.micro, which is very cheap. It can either run the job itself, or just make an HTTP request to your load balancer to run the job at the desired interval. If you do that, only one of your servers will run each job.
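A hedged sketch of that setup with the AWS CLI (the group and launch template names and the zone are placeholders):

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name timer-asg \
  --launch-template LaunchTemplateName=timer-launch-template \
  --min-size 1 --max-size 1 --desired-capacity 1 \
  --availability-zones us-east-1a

With min and max both at 1, the group's only job is to replace the timer instance if it ever terminates.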
Wouldn't it make more sense to handle this like any other "service" that needs to keep running?
upstart service
running node.js server using upstart causes 'terminated with status 127' on 'ubuntu 10.04'
This guy had a bad path in his file, but his upstart script looks okay.
monit
Node.js (sudo) and monit