Moving TDWC from Linux to Windows server - mainframe

We currently have TDWC (8.5.1) stood up on a Linux server (a very old Linux server that doesn't have much horsepower). It's working fine, but slow. We need to upgrade it, and 9.2 is as high as we can go due to our Service Provider limitations.
Instead of upgrading it in place, I would like to install it on a brand-new Windows Server 2012 R2 machine that was provisioned just for Workload Automation tools. I've scoured the manuals and the forums and I don't see anything that addresses this specifically. I assume this would be handled as a brand-new install, not an upgrade, as far as the server goes.
My question/concern is about the Started Task and Parmlib on the Mainframe. As long as I am using the same host & port on the mainframe, and the z/OS Connector, wouldn't it be as simple as shutting down the old TDWC and starting up the new 9.2 DWC release? Wouldn't it connect to the same Started Task as the current release does?
The SERPTDWC member on the mainframe contains the following:

/* TCPIP ZCONNECTOR SERVER                             */
SERVOPTS SUBSYS(TWSC)
         PROTOCOL(TCP)
         USERMAP(USERS)
TCPOPTS  TCPIPJOBNAME(NETITCP)
         HOSTNAME(DPSMVS1.EDS.EXPRESS-SCRIPTS.COM)
         SRVPORTNUMBER(31121)
INIT     CALENDAR(DEFAULT)

There is no problem running multiple connectors against the same server started task; this is the standard configuration when running a DWC cluster.
This does not require any change to SERVOPTS or TCPOPTS. The only check to do is to verify that the users authenticating on the new connector are correctly mapped in the USERMAP: the new connector will present the users with the new connector name, so you may need to add them to the USERS parameter member.
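For illustration, an entry in the USERS member maps a connector-side user to a RACF user; the names below are made up, not taken from this installation:

USER 'TWSADMIN@NEWDWC92'
     RACFUSER(TWSADM)
     RACFGROUP(TIVOLI)

If the same people connect through the new 9.2 connector, you would add a matching USER entry for each of them carrying the new connector name.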

Related

Using ODBC Driver in Azure to connect to external database

I am working in a business in New Zealand. We currently use a remote server (Plexus) to store a large amount of data (some tables > 2 billion rows). We have started down the SharePoint route, and I have created a number of databases and apps in SharePoint that use this data. Currently, I have to run a program in New Zealand that downloads the data to our local server and then pushes that data up into an Azure database, which the web apps connect to. I would like to remove this middle step for many reasons, but the biggest reason is that the web connection between NZ and the US tends to result in a lot of timeouts and long pulls due to having to pull large data sets across the Pacific.
Ideally, I would like to have my C# code sitting in Azure and have it connect to the remote server directly. This way I could simply send the SQL request to Plexus and have the data go directly into the Azure databases. The major advantage is that everything would be based in the US, which would make things a lot faster.
The major hurdle is that we need to install an ODBC driver, given to us by the remote server vendor, into Azure so it recognises the calls as genuine. Our systems administrator has said he has looked into it, and it seems this can't be done.
I was hoping someone in the Stack Overflow community has encountered a similar issue and resolved it.
Note: please don't think I am asking whether Azure has an ODBC connection, because I know it does. I am not asking if I can connect TO Azure; I am asking if I can connect Azure to another external data source.
In a Worker Role / Cloud Service in Azure you can install the ODBC driver in a startup task using PowerShell's ODBC cmdlets.
More info here: PowerShell Add-OdbcDsn, and here: PowerShell startup tasks in Cloud Services.
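As a rough sketch, such a startup task might run a script like the one below; the driver MSI, DSN name, driver name, and connection properties are placeholders for whatever Plexus actually ships:

# startup.ps1 -- runs elevated as a Cloud Service startup task.
# Silently install the vendor-supplied ODBC driver packaged with the role.
Start-Process msiexec.exe -ArgumentList '/i PlexusOdbc.msi /quiet /norestart' -Wait

# Register a System DSN that points at the remote Plexus server.
Add-OdbcDsn -Name "PlexusDsn" -DriverName "Plexus ODBC Driver" -DsnType "System" `
    -SetPropertyValue @("Server=plexus.example.com", "Port=1234")

The task itself is declared in ServiceDefinition.csdef with executionContext="elevated" so the driver install has the rights it needs.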
One option is to create a virtual machine in the same Azure data center as your database and install your ODBC driver and your C# app.

keep nodejs server on azure VM running

I have a Windows Azure VM (Linux server, 14.04) running and am able to access the VM from the command line on my Mac/Windows machines. I'm running a Node.js server and a MongoDB instance on this Azure VM.
The problem is that this Node.js server on the VM gets disconnected after some time (a timeout sort of thing). Is it possible for the server on the VM to run indefinitely and keep serving requests?
PS: My VM is running indefinitely and properly, but the Node.js server on the VM itself times out after some time. Please help!
Thanks.
It is probably just crashing!
A bare-bones Node application does not get monitored or restarted by anything on its own.
This might sound a little crazy if you come from other web frameworks/platforms like ASP.NET or PHP, where IIS or Apache monitored your application for you, which was kind of nice tbh. In Node.js you choose your own process manager / monitoring system. From my experience, the most popular and well-supported ones are those listed in the Express documentation: http://expressjs.com/advanced/pm.html
Azure VMs will not sleep or shut themselves down, and they will not stop any servers running on them.
Per your description:
"the nodejs server on the VM itself times out after sometime"
the issue seems to be the same as what @svenskunganka said.
You can check what error occurred at that point by leveraging PM2, as @Daniel and @svenskunganka suggested.
When you deploy your Node.js project with PM2, it will monitor the application and log errors automatically. You can also monitor your VM metrics (such as CPU usage and network in/out) from the Azure Portal's Monitor panel.
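For instance, a minimal PM2 setup looks something like this (app.js and the process name are placeholders for your own entry script):

# Install PM2 globally, then start the app under its supervision.
npm install -g pm2
pm2 start app.js --name my-node-server   # auto-restarts the process if it crashes
pm2 logs my-node-server                  # inspect the error that killed the last run
pm2 startup                              # generate an init script so PM2 survives VM reboots
pm2 save                                 # persist the current process list across restarts

With that in place, a crash shows up in the logs instead of silently taking the server down.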

Meanjs hotswap deployment

I'm trying to deploy my MEANjs application into production.
So far I've used Jenkins, Git, rsync, etc. to copy the project to the remote server, and in the final step I just have to:
1. stop myMeanjsApp
2. replace the folder with the new version of the application
3. start myMeanjsApp
But that would mean downtime, which I'm trying to avoid. So:
1. How can I avoid this?
2. Are there any good-practice workflows for this?
I've seen this, but I'm not sure if it's the way to go, or whether there is some other simple way of doing this.
Typically a large-scale web application is upgraded by creating new virtual machines running the upgraded version of the software. The new virtual machines are then added to the load balancer (manually or automatically). Then the virtual machines running the old version are removed from the load-balancer pool, and when all in-progress requests to the old VMs are done, the VMs can be destroyed. AWS features like ELB and Auto Scaling groups, for example, make this an attractive way to upgrade software.
You could do the same even if you have a single server, by starting the new version on a different port and switching traffic over; a sketch follows below.
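One way to do that switch, assuming an nginx reverse proxy in front of the app (the ports, file path, and upstream name are illustrative):

# /etc/nginx/conf.d/meanjs.conf
upstream meanjs_app {
    server 127.0.0.1:3000;   # old version; point this at 3001 once the new version is up
}
server {
    listen 80;
    location / {
        proxy_pass http://meanjs_app;
    }
}

Start the new version on port 3001, change the upstream to 127.0.0.1:3001, and run "nginx -s reload": nginx's old workers finish their in-flight requests before exiting, so no connections are dropped, and the old app can then be stopped.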
The naught npm module is a fair approach if you must replace the code in place.
For some applications it might be an option to simply stop accepting new connections and restart with the new version when the last connection is done.
And for some applications you can just kill the old version and start the new version at any time. It all depends on your requirements and environment.

Install Neo4j on Azure, cannot browse WebAdmin

I've just installed Neo4j 1.8.2 onto Azure by following this step-by-step process...
http://de.slideshare.net/neo4j/neo4j-on-azure-step-by-step-22598695
Unfortunately, when I browse to http://<host>:7474/webadmin, Fiddler says Error 10061 - No connection could be made because the target machine actively refused it.
I've followed the instructions exactly and haven't received any errors.
Any help much appreciated.
So, I think I got to the bottom of this. It was due to the size of the compute/VM I was creating: the problem appears when running on Extra Small instances. I created a new installation using a Small instance and everything now works :).
Try setting the server to accept connections from all hosts, and maybe use a newer Neo4j, say 1.9.4:
http://docs.neo4j.org/chunked/stable/security-server.html#_secure_the_port_and_remote_client_connection_accepts
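In the 1.x series that setting lives in conf/neo4j-server.properties; as a sketch, this is the stock property, just uncommented:

# conf/neo4j-server.properties
# Let the web server listen on all interfaces instead of localhost only.
org.neo4j.server.webserver.address=0.0.0.0

After changing it, restart the Neo4j service so the new address takes effect.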
The way the VM Depot image is set up, it's pre-configured to allow all hosts to connect, and the Neo4j server will auto-start. The only thing you need to take care of when constructing your VM is to open an Input Endpoint, with any public port you want (preferably 7474, to stay true to Neo4j) and internal port 7474.
Note that the UI has changed a bit since the how-to was published: you can specify the endpoint as the last step before creating your virtual machine. Other than that, the instructions should be the same. And once the VM is up and running (it'll take about 5-10 minutes), you just visit http://yourservicename.cloudapp.net:7474 and you should see the web admin. Note: this is not the same as your VM name. If you named your VM something like 'neo', then you do not want http://neo:7474 or http://neo.cloudapp.net:7474; you need to use your cloud service name (you had to create a name for the service when you deployed the VM).
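If you prefer the command line, the classic (ASM-era) Azure cross-platform CLI could add such an endpoint as well; a sketch, with the VM name as a placeholder:

# Open public port 7474 and map it to internal port 7474 on the VM.
azure vm endpoint create my-neo4j-vm 7474 7474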
I've deployed that image several times in demos, and just tried again right now to make sure nothing wonky happened. Worked perfectly.

Cloud environment on Windows Azure platform

I've got 6 websites, 2 databases and 1 cloud environment set up on my account.
I used the cloud environment to run some tasks via the Windows Task Manager; everything was installed on my D drive, but between last week and today, the 8th of March, my folder containing the "exe" to run has been removed.
Also, I had installed TortoiseSVN to get the files deployed, and it is not installed anymore.
I wonder if somebody has a clue about my problem.
Best regards,
Franck Merlin
If you're using Cloud Services (web/worker roles), these are stateless virtual machines. That is, Windows Azure provides the operating system, then brings your deployment package into the environment after boot-up. Every virtual machine instance booted this way starts from a clean OS image, along with the exact same set of code bits from you.
Should you RDP into the box and manually install anything, it is going to be temporary at best. Your changes will likely survive reboots; however, if the OS needs updating (especially the underlying host OS), they will be lost as a fresh OS is brought up.
This is why, with Cloud Services, all customizations should be done via startup tasks or the OnStart() event (a sketch follows at the end of this answer). You should never manually install anything via RDP, since:
- Your changes will be temporary
- Your changes won't propagate to additional instances; you'd be required to RDP into every single box to perform the same changes
You may want to download the Azure Training Kit and look through some of the Cloud Service labs to get a better feel for startup tasks.
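As a minimal sketch of such a startup task (the file names are placeholders), the role's ServiceDefinition.csdef declares a command to run, elevated, before the role starts:

<!-- ServiceDefinition.csdef (fragment) -->
<WebRole name="MyWebRole">
  <Startup>
    <!-- install.cmd performs the installs you would otherwise do by hand over RDP -->
    <Task commandLine="install.cmd" executionContext="elevated" taskType="simple" />
  </Startup>
</WebRole>

Because the task runs on every instance at every boot, the installation survives reimaging and scales out with the deployment.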
In addition to what David said, check out http://blogs.msdn.com/b/kwill/archive/2012/10/05/windows-azure-disk-partition-preservation.aspx for the scenarios where the different drives will be destroyed.
Also take a look at http://blogs.msdn.com/b/kwill/archive/2012/09/19/role-instance-restarts-due-to-os-upgrades.aspx which points you to the RSS feed and MSDN article where you can see that a new OS is currently being deployed.
