We have cron jobs running on our app, all of which were working fine until last night. All the tasks (we have a few minutely and hourly ones) stopped abruptly; the last log entry was at around 1:52 AM IST.
OpenShift seems to have been running an upgrade since Nov 3, 12:46 EST, so I assume that's the cause, but issues during such events usually subside within an hour.
We have tried the usual steps: stop/start the cron cartridge, stop/start the app, and remove the cron cartridge and add it again, but it's still not working.
Anyone else facing this right now??
It seems I had to restart the app once more after re-adding the cartridge. Now it's working, although why it happened in the first place is still a mystery.
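For reference, a rough sketch of that sequence with the rhc client tools (the cartridge name cron-1.4 and the app name myapp are assumptions; adjust them to your setup):

    # Stop and start the cron cartridge (cartridge/app names assumed)
    rhc cartridge stop cron-1.4 -a myapp
    rhc cartridge start cron-1.4 -a myapp

    # If that doesn't help, remove and re-add the cartridge
    rhc cartridge remove cron-1.4 -a myapp --confirm
    rhc cartridge add cron-1.4 -a myapp

    # In my case the jobs only resumed after one more full app restart
    rhc app restart -a myapp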
I have a worker dyno that I use to run my Discord bot. It works normally every day and I've been using it for a long time, but today it suddenly stopped working: it shows the dyno as turned on, yet my bot is still offline. I also have enough hours left, so it isn't an hours problem either (my code is fine too, and it's deployed through a GitHub pipeline). I can't figure out what's wrong with the dyno. Does anybody know why this might be happening?
Solved. I was missing the ms module, so I installed it with npm i ms -s, pushed the updated package.json to GitHub, and once the build finished it worked.
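Roughly the steps, in case anyone else hits this (the lockfile is an assumption; older npm versions may not generate one):

    # Install the missing module and record it in package.json's dependencies
    npm install ms --save

    # Push the updated package.json so the GitHub-connected pipeline rebuilds with it
    git add package.json package-lock.json
    git commit -m "Add missing ms dependency"
    git push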
I have been using OpenShift 3 seamlessly to build and deploy a Node.js application for several months.
Since yesterday, new builds of the same app systematically fail: they get stuck in the Pending state with "Started status: not started" until, after about an hour, the build fails with "No logs available".
Has anyone else had this issue?
I am using starter-us-east-1 (there is no mention of any incident on status.starter.openshift.com).
My app runs fine locally, and I don't understand what the issue with OpenShift is.
Deployment of existing builds seems to work, but there is no way to trigger new builds, either manually or via webhooks.
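For what it's worth, a sketch of how the manual trigger and the basic checks look from the oc CLI, in case it helps narrow things down (nodejs-app is a placeholder BuildConfig name; this assumes the oc client is logged in to the cluster):

    # List builds and their phases (Pending / Running / Failed)
    oc get builds

    # Trigger a build manually and stream its logs, if any appear
    oc start-build nodejs-app --follow

    # Inspect the latest build and recent events for scheduling errors
    oc describe build nodejs-app-1
    oc get events --sort-by='.lastTimestamp'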
I didn't change anything in the configuration, just my code.
Thanks in advance for any hints.
Additional info: The issue doesn't occur in starter-ca-central-1 where I can see a whole new interface with notifications, and the build of the app I have there works fine.
More info: The new interface has now been rolled out to starter-us-east-1 as well, but the problem is as present as ever. Here is a screenshot of my Pods/Events tab:
One day, out of nowhere, my app decided not to die.
After pressing Ctrl+C, when I inspect the port the process is still there, not only on my machine but also on a server managed by PM2, so I have to go there each time and kill the process by hand. I looked for issues in my code, and when I couldn't find any I figured it was simply one of the dependencies' fault and a fix would come soon. It was slightly annoying, but I could kill the process with kill -9 PID, and most of my tasks were on the front-end side, so it wasn't that big an issue. Today, more than a week later, the problem is still here.
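As a side note, a quick way to find and kill whatever is still holding the port without hunting for the PID by hand (port 4000 is assumed here, matching the lsof check further down):

    # Show whatever is still listening on the port
    lsof -i tcp:4000

    # Kill it in one go: -t makes lsof print only the PIDs, which are fed to kill
    kill -9 $(lsof -t -i tcp:4000)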
I went back in history, picked a commit from weeks ago where everything worked perfectly fine, switched Node.js from 5.1.0 to 4.2.1, cleared the npm cache, reinstalled all dependencies, and I still see the problem.
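For completeness, the clean-up between attempts looked roughly like this (nvm is an assumption; use whatever you switch Node versions with):

    # Switch to the older Node version (4.2.1 in my case)
    nvm install 4.2.1
    nvm use 4.2.1

    # Clear the npm cache and reinstall all dependencies from scratch
    npm cache clean
    rm -rf node_modules
    npm install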
I'm using LoopbackJS, but normally I start the app simply with "node server/server.js", and then the problem described above happens. If I use "slc run" instead and then try to kill the app with Ctrl+C, it just hangs forever: I can press Ctrl+C as many times as I want and it keeps running in the console foreground.
If, instead of pressing Ctrl+C, I kill the tab in the console, the app dies without problems.
This is what I see after running "lsof -i tcp:4000" when the app is supposed to be dead but is not
Edit:
Running and killing the app with the StrongLoop process manager (slc start / slc stop) works fine, but it would be more convenient to run the Node.js app the normal way (node server.js) during development. It also doesn't change the fact that there is some underlying issue, and it would be better not to sweep it under the carpet.
Out of anger I stripped my app down to its most basic elements, file by file, piece by piece, and I found the answer.
The culprit was phantomjs-node - https://github.com/sgentle/phantomjs-node. Patch version 0.8.2 introduced this bug. A fix was created almost a month ago and merged a week ago, but it has not been published to npm yet.
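Until the fix is published, one option is to install straight from the GitHub repository instead of the registry (the branch containing the fix is an assumption; check the repo first):

    # Install phantom directly from GitHub and save it to package.json
    npm install sgentle/phantomjs-node --save

    # or pin it there by hand, e.g.:
    #   "phantom": "github:sgentle/phantomjs-node#master"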
I've installed Odoo/OpenERP and put the openerp-server daemon in /etc/init.d so it starts automatically when Ubuntu boots. It starts normally on every reboot and works fine, but sometimes it goes down by itself. I can't see any pattern in when it goes down, and there is no clue in the log file (/var/log/openerp/openerp-server.log); it just dies without logging anything. When I find it down, I have to reboot manually to get it started again.
Any help with spotting what causes openerp-server to stop on its own without any logs?
Thanks,
Abdul
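One thing worth checking when a daemon dies without writing anything to its own log is the kernel's OOM killer, especially on a small droplet. On Ubuntu, assuming standard syslog paths, something like:

    # Look for out-of-memory kills in the kernel log
    dmesg | grep -iE 'killed process|out of memory'

    # Syslog usually records the same event with a timestamp
    grep -iE 'oom|killed process' /var/log/syslog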
I have Odoo on a DO droplet, and all I did was upgrade the droplet; that's it!
Just found an answer to the above issue. Here is the solution
I'm using SPJobDefinition.Execute to explicitly force a timer job to run for a bit of testing. The job runs, but the time it last ran hasn't changed in either 'Timer job status' or 'Timer job definitions'. As this job hasn't run before, the forced run doesn't even appear in 'Timer job status'. I recall that it did update the last-run time in Central Admin when I last tried this, so either something is broken and it isn't updating the status, or it doesn't update the status by design and I'm mistaken about it doing so last time.
I've discovered that OWSTIMER.exe doesn't end up doing the execution. I wrote a console app that called SPJobDefinition.Execute, and it turns out that Execute doesn't schedule the job to run there and then; it actually loads the job's DLL and runs it in-process. I would imagine a side effect of this is that it doesn't run as a scheduled job, so Central Admin doesn't show that it ran.
I had a similar problem; the solution was to restart the 'SharePoint 2010 Timer' service (to reload the job DLL).
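If it helps, the restart can be done from an elevated prompt (SPTimerV4 is the usual service name behind 'SharePoint 2010 Timer'; verify it on your farm):

    # Restart the SharePoint 2010 Timer service so it reloads the job assemblies
    net stop SPTimerV4
    net start SPTimerV4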