One of the applications in our environment is going to be using PostgreSQL on a Windows Server. My question is regarding the "Master Password" feature. For the application to work properly, will we have to enter the "Master Password" every time we reboot the server? With SQL Server we just reboot the server and, as long as the service is running, there's no additional input required.
After restarting the server, the "postgresql-x64-15 PostgreSQL Server 15" service is running. The application has not gone live yet, so we don't know whether there will be any issues.
So I am doing this project: I'm basically creating a server that uses Taskwarrior and the Google Calendar API to upload tasks created from the terminal to Google Calendar.
Originally I did this on my personal computer (Arch Linux) and it worked, but I can't keep my computer running 24/7, which is why I opted for a VPS. The VPS is running Ubuntu 20.04 without a GUI. I followed the same process on the server that I did on my computer; everything went well until the part where Google asks you to allow the program access, at which point I got a "localhost refused to connect" message.
I'm going to assume the connection is refused because the browser I'm using isn't on the VPS's local network.
My question is: how do I get Google to accept that connection? Is it something that I need to add or change in the API settings on the Google side?
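For what it's worth, the usual workaround on a headless box is to keep the installed-app flow's local redirect server but reach it through an SSH tunnel instead of a browser on the VPS. A minimal sketch in Python, assuming the project uses google-auth-oauthlib; the file names credentials.json/token.json and port 8765 are placeholders, not anything from the original setup:

    # get_token.py -- run this once on the VPS to obtain Calendar credentials.
    from google_auth_oauthlib.flow import InstalledAppFlow

    SCOPES = ["https://www.googleapis.com/auth/calendar"]

    flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
    # Don't try to open a browser on the GUI-less VPS; just listen on a fixed
    # port and print the consent URL so it can be opened elsewhere.
    creds = flow.run_local_server(host="localhost", port=8765, open_browser=False)

    # Save the token so later runs can reuse it without the consent step.
    with open("token.json", "w") as f:
        f.write(creds.to_json())

From your own machine you would then run ssh -L 8765:localhost:8765 user@your-vps, open the printed URL in your local browser, and the redirect to http://localhost:8765/... gets forwarded to the flow waiting on the VPS; with a "Desktop app" OAuth client, nothing extra should need to change in the Google API settings.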
I have a Google VM, and I can start a web server. The command I issue is: python server.py.
My plan is to keep the application running.
Since I will eventually close my PC (and thus the terminal), will the application continue running normally?
Or do I have to start the server and then use disown to make the app run in the background?
NOTE: If the second option is the case, does this mean that when I log back in and want to shut down the server, the only way to do it is with pkill?
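One way to sidestep both disown and pkill is to launch the server detached from the terminal's session and write its PID to a file. A rough sketch in Python under that assumption (server.py, server.log and server.pid are just placeholder names):

    # launch.py -- start server.py detached from this terminal session and
    # record its PID so it can be stopped later without pkill.
    import subprocess

    log = open("server.log", "ab")
    proc = subprocess.Popen(
        ["python", "server.py"],
        stdout=log,
        stderr=subprocess.STDOUT,
        stdin=subprocess.DEVNULL,
        start_new_session=True,  # new session => no SIGHUP when the terminal closes
    )
    with open("server.pid", "w") as f:
        f.write(str(proc.pid))
    print("server started, pid", proc.pid)

After logging back in, kill $(cat server.pid) stops exactly that process; nohup python server.py & (or a systemd service) achieves the same effect without a helper script.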
I have a Node.js server running a website remotely on a Windows 10 machine, but the machine is sometimes turned off and I don't find out that the website is down.
I was thinking of creating a Node.js app, running it on Heroku, and having it send a request to the website on the Windows machine every 5 minutes and notify me via email if it gets no response. However, I wanted to know whether there are better options available for situations like these.
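To illustrate how small that checker really is, here is a rough sketch of the same ping-and-alert idea written as a Python script instead of a Node.js app (the URL, addresses and SMTP settings are placeholders); a scheduler such as cron could run it every 5 minutes:

    # check_site.py -- request the site and email an alert if it doesn't respond.
    import smtplib
    from email.message import EmailMessage

    import requests

    URL = "http://example.com"      # the site on the Windows machine (placeholder)
    ALERT_TO = "you@example.com"    # placeholder address

    def site_is_up(url: str) -> bool:
        try:
            return requests.get(url, timeout=10).status_code < 500
        except requests.RequestException:
            return False

    def send_alert() -> None:
        msg = EmailMessage()
        msg["Subject"] = "Website down"
        msg["From"] = "monitor@example.com"
        msg["To"] = ALERT_TO
        msg.set_content(f"No response from {URL}")
        with smtplib.SMTP("smtp.example.com", 587) as s:  # placeholder SMTP host
            s.starttls()
            s.login("monitor@example.com", "app-password")  # placeholder credentials
            s.send_message(msg)

    if __name__ == "__main__":
        if not site_is_up(URL):
            send_alert()

Hosted uptime monitors do the same request-and-notify loop, so this is mainly useful if you want to keep everything under your own control.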
We currently have TDWC (8.5.1) stood up on a Linux server (a very old Linux server that doesn't have much horsepower). It's working fine, but slow. We need to upgrade it, and 9.2 is as high as we can go due to our Service Provider limitations.
Instead of upgrading it in place, I would like to install it on a brand new Windows 2012 R2 server that was provisioned just for Workload Automation tools. I've scoured the manuals and the forums and I don't see anything that addresses this specifically. I assume this install would be handled as a brand new install and not an upgrade as far as the server goes.
My question/concern is about the Started Task and Parmlib on the Mainframe. As long as I am using the same host & port on the mainframe, and the z/OS Connector, wouldn't it be as simple as shutting down the old TDWC and starting up the new 9.2 DWC release? Wouldn't it connect to the same Started Task as the current release does?
The SERPTDWC member on the mainframe contains the following...
/* TCPIP ZCONNECTOR SERVER
SERVOPTS  SUBSYS(TWSC)
          PROTOCOL(TCP)
          USERMAP(USERS)
TCPOPTS   TCPIPJOBNAME(NETITCP)
          HOSTNAME(DPSMVS1.EDS.EXPRESS-SCRIPTS.COM)
          SRVPORTNUMBER(31121)
INIT      CALENDAR(DEFAULT)
There is no problem in running multiple connectors that connect to the same server started task; this is the standard configuration when running a DWC cluster.
This does not require any change to SERVOPTS or TCPOPTS. The only check to do is to verify that the users authenticating on the new connector are correctly mapped in the USERMAP: the new connector will present the users with the new connector name, and you may need to add them to the USERS parameter member.
I have a Windows Azure VM (Linux Server 14.04) running and am able to access the VM from the command line on my Mac/Windows machines. I'm running a Node.js server and a MongoDB instance on this Azure VM.
The problem is that the Node.js server on the VM gets disconnected after some time (a timeout sort of thing). Is it possible for the server on the VM to run indefinitely and keep serving requests?
PS: The VM itself is running indefinitely and properly, but the Node.js server on it times out after some time. Please help!
Thanks.
It is probably just crashing!
A bare-bones Node application does not get monitored by anything on its own.
This might sound a little crazy if you come from other web frameworks/platforms like ASP.NET or PHP, where you had IIS or Apache monitoring your application for you, which was kind of nice tbh. In Node.js you choose your own process manager / monitoring system. From my experience, the most popular and well-supported PMs are the ones listed in the Express.js documentation: http://expressjs.com/advanced/pm.html
Azure VMs will not sleep or shut themselves down, and they will not stop any servers running on them.
And per your description:
the nodejs server on the VM itself times out after sometime
the issue seems to be the same as what #svenskunganka said.
You can check what caused the error at that "sometime" by leveraging PM2, as #Daniel and #svenskunganka suggested.
When you deploy your Node.js project with PM2, it will monitor the application and log errors automatically. You can also monitor your VM metrics (such as CPU usage and network in/out) from the Azure Portal's Monitor panel.