Not able to fetch or queue data from an FTP server on an Azure cloud NiFi server?

I am using an FTP server that works locally, and I am able to fetch data from it, but on the Azure cloud NiFi server the same FTP server does not return a single record. I am using a ListFTP processor paired with a FetchFTP processor, with the same configuration I used locally for both processors.
Can someone please suggest what is happening here? I checked the firewall and even disabled it. The FTP server is running in active connection mode. I have tried, but I am not able to figure out the exact reason.
I am attaching screenshots of my FTP processor configurations. One very important thing: when using the GetFTP processor it does not fetch a single record even after running for hours, and there is not a single exception or error. But with ListFTP and FetchFTP it shows an exception at roughly 15-minute intervals: "Failed to perform listing on remote host due to java.net.SocketException".

I think you should go through your conf/nifi.properties file and check whether the keystore certificate is enabled or disabled; if it is disabled, enable it.
Here you can check the NiFi configuration documentation.
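As a hedged illustration, these are the keystore- and truststore-related entries you would typically find in conf/nifi.properties; the paths and passwords below are placeholders, not values from this post:
# security section of conf/nifi.properties (placeholder values)
nifi.security.keystore=./conf/keystore.jks
nifi.security.keystoreType=JKS
nifi.security.keystorePasswd=changeit
nifi.security.keyPasswd=changeit
nifi.security.truststore=./conf/truststore.jks
nifi.security.truststoreType=JKS
nifi.security.truststorePasswd=changeit
If these entries are blank, the keystore is effectively disabled in the sense described above.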

Related

Error transferring files from mainframe to RedHat Linux using FTPS

I want to transfer a few files weekly from a mainframe to a Linux server running RedHat, using a batch (JCL) job over FTPS.
The Linux server is configured with vsftpd. Is it possible to send files from the mainframe to Linux using FTPS?
I am getting this error while transferring the file from the mainframe to Linux:
EZA1736I FTP
EZY2640I Using 'SYS1.TCPPARMS(FTPDATA)' for local site configuration parameters.
EZA1450I xxx FTP CS xxx
EZA1456I Connect to ?
EZA1736I host_name
EZA1554I Connecting to: host_name xxx.xxx.xxx.xxx port: 21.
220 (vsFTPd 2.0.5)
EZA1701I >>> AUTH TLS
234 Proceed with negotiation.
EZA2897I Authentication negotiation failed
EZA1534I *** Control connection with host_name dies.
EZA1457I You must first issue the 'OPEN' command
EZA1460I Command:
EZA1618I Unknown command: 'Atul'
EZA1619I For a list of the available commands, say HELP
EZA1460I Command:
EZA1736I Summer#123
EZA1618I Unknown command: 'Monsoon#123'
EZA1460I Command:
EZA1736I cd /home/Atul/
EZA1457I You must first issue the 'OPEN' command
From your log you seem to be able to set up an unsecured connection to the FTP server. That's good.
EZA2897I Authentication negotiation failed indicates that the TLS handshake did not complete successfully. Either the partners could not agree on a common TLS version and/or cipher suite, or (the point I'd examine first) the certificate presented by the FTPS server isn't trusted by the client user. To be sure, you would have to capture and examine a TCP or TLS trace.
As a first step I would check the certificate presented by the FTP server and compare it to the trusted certificates in your security manager. In the case of RACF, you would have to examine SITE certificates and/or certificates in the user's keyring.
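For illustration only, the RACDCERT commands below sketch how a security administrator might import the vsftpd server's CA certificate and connect it to a keyring the batch job's user ID can use. The dataset name, label, ring name, and user ID are all hypothetical:
RACDCERT CERTAUTH ADD('MYHLQ.VSFTPD.CACERT') WITHLABEL('VSFTPD CA') TRUST
RACDCERT ID(BATCHUSR) ADDRING(FTPSRING)
RACDCERT ID(BATCHUSR) CONNECT(CERTAUTH LABEL('VSFTPD CA') RING(FTPSRING))
SETROPTS RACLIST(DIGTCERT DIGTRING) REFRESH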
Yes, sending from the mainframe to vsftpd using FTPS is certainly possible. Both the client (z/OS in this case) and the server (Linux in this case) need to agree on the encryption method to be used, and I believe that by default z/OS has to trust the server's certificate, which may involve importing the certificate bundle into a keyring that the batch job has access to. The job not having access to a keyring that trusts the chain for the server certificate would be my first guess.
I don't have experience with setting up RACF keyrings, but I can say that people successfully send us data every day from z/OS to our Linux server via FTPS.
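As a rough sketch of the client side, the FTP.DATA statements that turn on FTPS in the z/OS FTP client look roughly like the following; the keyring name is a placeholder and must match whatever ring your security administrator set up:
; client FTP.DATA statements for FTPS (keyring name is a placeholder)
TLSMECHANISM FTP
SECURE_FTP REQUIRED
SECURE_CTRLCONN PRIVATE
SECURE_DATACONN PRIVATE
KEYRING FTPSRING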

SharePoint 2010 - "Cannot connect to the configuration database"

I have TFS 2012 integrated with SharePoint. Recently I duplicated the same machine for testing and modified some parameters on the testing machine to avoid conflicts, since both are on the same network. They worked well.
Today the production server's SharePoint site shows "Cannot connect to the configuration database", although Team Foundation Server itself works well. I suspected that something on the testing server caused this, but even after I shut the testing server down, the issue still exists.
I have seen this issue before; most cases are related to SQL Server instance configuration, as described at https://mikessharepoint.wordpress.com/2011/11/22/cannot-connect-to-the-configuration-database-error-of-central-administration/, or to authentication issues in the IIS application pool.
Here is the error event from the event viewer:
"
Unknown SQL Exception 53 occurred. Additional error information from SQL Server is included below.
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
"
I suspect that something (probably the SQL Server instance name) on the production server was changed to point at the testing server by some action, but I don't know where or how.
I also suspect that the new server caused the error, because I found something incorrect in the WSS_Config database on the production server: in the "ServerName" field of the TimerJobHistory table, some values are the new server and some are the old server. But even after I updated all the values to the old server and restarted the production server, it did not work.
I have checked everything I could google, but nothing works. Could anybody provide any help? I appreciate any kind help.
Finally, I found the root cause. There is a table named "Objects" in the WSS_Config database, and I found the new testing server's name in the "Name" field. So I updated the value to the old server name, restarted IIS and the SharePoint services (not sure if that is required), and bingo!
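For illustration, the check and fix described above could be done with T-SQL along these lines; OLDSERVER and NEWSERVER are placeholders for your actual machine names, and the database name is the one mentioned in this thread:
-- find configuration objects that still reference the testing server (placeholder names)
USE WSS_Config;
SELECT Id, Name FROM Objects WHERE Name LIKE '%NEWSERVER%';
-- point them back at the production server
UPDATE Objects SET Name = REPLACE(Name, 'NEWSERVER', 'OLDSERVER') WHERE Name LIKE '%NEWSERVER%';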
But I still don't know why the new server caused this error, so I have not powered the new server back on for now. I will update if I find anything new.
Thanks all for your kind help.

How to get and use the cf ssh-code password

We are using the CF Diego API, version 2.89. Currently I am able to use it and see the vcap user and the app resources when running cf ssh myApp.
Now it becomes harder :-)
I want to deploy App1 that will "talk" to App2 and have access to its file system (as is available on the command line when you run ls...) via code (node.js). Is that possible?
I've found this lib, which provides the ability to connect over SSH via code, but I'm not sure what I should put inside host, port, etc.
In the connect call I provided the password, which should be retrieved via code.
EDIT
const { Client } = require('ssh2'); // assuming "this lib" is the ssh2 package
const conn = new Client();
conn.on('ready', () => {
  console.log('SSH connection ready');
}).connect({
  host: 'ssh.cf.mydomain.com',
  port: 2222,
  username: 'cf:181c32e2-7096-45b6-9ae6-1df4dbd74782/0',
  password: 'qG0Ztpu1Dh'
});
Now when I use cf ssh-code (to get the password) I see a lot of requests, which I tried to simulate via Postman without success.
Could someone assist? I need to get the password value somehow...
If I don't provide it I get the following error:
SSH Error: All configured authentication methods failed
Btw, let's say that I cannot use the CF networking functionality or volume services, and I know that the container is ephemeral...
The process of what happens behind the scenes when you run cf ssh is documented here.
It obtains an SSH token; this is the same as running cf ssh-code, which just gets an authorization code from UAA. If you run CF_TRACE=true cf ssh-code you can see exactly what it does behind the scenes to get that code.
You would then need an SSH client (probably a programmatic one) to connect using the following details (see the sketch after this list):
port -> 2222
user -> cf:<app-guid>/<app-instance-number> (ex: cf:54cccad6-9bba-45c6-bb52-83f56d765ff4/0)
host -> ssh.system_domain (look at cf curl /v2/info if you're not sure)
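Putting those details together, here is a rough Node.js sketch of a programmatic connection. It assumes the ssh2 package, a locally installed and logged-in cf CLI, and placeholder values for the host and app GUID taken from this thread:
// Sketch: get a one-time authorization code via the cf CLI, then connect with ssh2.
const { execSync } = require('child_process');
const { Client } = require('ssh2');

// one-time code from UAA, equivalent to running "cf ssh-code" by hand
const password = execSync('cf ssh-code').toString().trim();

const conn = new Client();
conn.on('ready', () => {
  // list the app's files over the SSH channel, then disconnect
  conn.exec('ls /home/vcap/app', (err, stream) => {
    if (err) throw err;
    stream.on('data', (data) => process.stdout.write(data))
          .on('close', () => conn.end());
  });
}).connect({
  host: 'ssh.cf.mydomain.com',                            // ssh.<system_domain>, see cf curl /v2/info
  port: 2222,
  username: 'cf:181c32e2-7096-45b6-9ae6-1df4dbd74782/0',  // cf:<app-guid>/<app-instance-number>
  password
});
Note that the code from cf ssh-code is single-use, so a fresh one is needed for each connection.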
Having said this, don't go this route. It's a bad idea. The file system for each app instance is ephemeral. Even if you're connecting from other app instances to share the local file system, you can still lose the contents of that file system pretty easily (cf restart) and for reasons possibly outside of your control (unexpected app crash, platform admin does a rolling upgrade, etc).
Instead store your files externally, perhaps on S3 or a similar service, or look at using Volume services.
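For instance, a minimal Node.js sketch of writing output to S3 instead of the local disk, assuming the aws-sdk package, credentials available from the environment, and a placeholder bucket name (none of which come from the original answer):
// Sketch: persist app output externally rather than on the ephemeral file system.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.upload(
  { Bucket: 'my-shared-bucket', Key: 'output/result.json', Body: JSON.stringify({ ok: true }) },
  (err, data) => {
    if (err) throw err;
    console.log('Uploaded to', data.Location); // both apps can now read this object
  }
);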
I have worked exclusively with PCF, so please take my advice with a grain of salt given your Bluemix platform.
If you need App1 to look at files created by App2, what you need is a common resource.
You can expose an S3 resource as a CUPS (user-provided) service, create a service instance, and bind it to both apps; that way both will read and write to the same S3 endpoint (see the command sketch below).
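For illustration, the cf CLI steps would look roughly like this; the service name, app names, and credential keys are placeholders, and your apps would read them from VCAP_SERVICES:
# create a user-provided service carrying the S3 credentials, then bind it to both apps
cf create-user-provided-service shared-s3 -p '{"endpoint":"s3.example.com","accessKeyId":"AKIA...","secretAccessKey":"..."}'
cf bind-service App1 shared-s3
cf bind-service App2 shared-s3
cf restage App1
cf restage App2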
A quick Google search for a Bluemix S3 resource shows https://console.bluemix.net/catalog/infrastructure/cloud_object_storage
Version 1.11 of Pivotal Cloud Foundry comes with volume services.
It seems like Bluemix has a similar resource: https://console.bluemix.net/docs/containers/container_volumes_ov.html#container_volumes_ov
You may want to give that a try.

SQL Azure database server - Named Pipes Provider, error: 40 - the network path was not found

We access our database in SQL Azure, and every so often we hit this error while trying to connect. We connect from a corporate network, using SSMS or an API.
The weird part is that it always connects successfully and instantly on retry. We retry just one second later and it works.
We saw that the DTU usage % was high and scaled our server up, but that did not help. We have employed a SqlAzureRetry policy when accessing the database from our API, which seems to help mitigate the issue, but the root cause is still not identified.
Has anyone employed a configuration or strategy for this, or faced a similar issue? (the underlying provider failed to open / network path not found)
Thanks!
The solution was to change the format of the server name to use TCP:
tcp:servername.database.windows.net,1433;
Also, if you're connecting from code, you should change your connection string to use the above format.
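For illustration, an ADO.NET-style connection string using that server format would look roughly like this; the database name and credentials are placeholders:
Server=tcp:servername.database.windows.net,1433;Initial Catalog=MyDatabase;User ID=myuser;Password=mypassword;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;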

CouchDB development server access

I am totally new to CouchDB.
How can I expose the service on a remote development server (and, in a future step, expose it publicly)?
I am trying to install it on a remote development server. I am not using a DigitalOcean server, but I am following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-install-couchdb-and-futon-on-ubuntu-14-04
I could not access it with a web browser after installing and starting the CouchDB service with
couchdb -b
which returns the default message: Apache CouchDB has started, time to relax.
Also, from the command line I can run:
curl http://127.0.0.1:5984/
and receive the correct message.
How can I access this development server via a web browser?
I can't know for sure, since I don't know your setup, but I'm guessing that you're trying to access the database from a different machine than the one it's running on. I assume you know which IP to use to reach your remote server, which leads me to believe that your problem is that the port is not open (or not forwarded correctly) to your CouchDB server.
A standard CouchDB installation should be accessible from a web browser.
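As a hedged illustration of the usual culprit: out of the box, CouchDB 1.x only listens on 127.0.0.1. A quick check from another machine (the IP is a placeholder):
curl http://YOUR_SERVER_IP:5984/
If that times out while the local curl works, edit /etc/couchdb/local.ini (the path used by the Ubuntu 14.04 tutorial above) so that the [httpd] section binds to all interfaces, then restart CouchDB:
[httpd]
bind_address = 0.0.0.0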
