Excel 2016: Cannot query PostgreSQL database: Server certificate not accepted

I want to import some data into Excel 2016 from a PostgreSQL table. I have tried it by clicking "New Query" and selecting From Database -> From PostgreSQL Database.
But then I receive the following error:
Details: "TlsClientStream.ClientAlertException: CertificateUnknown: Server certificate was not accepted. Chain status: A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider.
. The specified hostname was not present in the certificate.
at TlsClientStream.TlsClientStream.ParseCertificateMessage(Byte[] buf, Int32& pos)
at TlsClientStream.TlsClientStream.TraverseHandshakeMessages()
at TlsClientStream.TlsClientStream.GetInitialHandshakeMessages(Boolean allowApplicationData)
at TlsClientStream.TlsClientStream.PerformInitialHandshake(String hostName, X509CertificateCollection clientCertificates, RemoteCertificateValidationCallback remoteCertificateValidationCallback, Boolean checkCertificateRevocation)"
Any suggestions on how to solve this? Thank you so much in advance!

This error indicates that the client connecting to the PostgreSQL database cannot validate the server's certificate. It only happens when "Trust Server Certificate" is set to false in the library Excel uses to connect to PostgreSQL (Npgsql).
There are several ways that may work to address this, in the order I'd suggest trying them:
If there's an option hidden in Excel (perhaps under advanced options or similar) to set the 'Trust Server Certificate' parameter to true, then your connection will start working. If Excel lets you specify an entire connection string, this can be done in the connection string as well (see the example string after this list).
The database server has an SSL certificate, configured in the postgresql.conf file for that database. If you (or your DB administrator) can get that certificate, you can add it to your machine's trusted certificates; the exact steps vary by operating system (a Windows example follows below).
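For illustration, an Npgsql-style connection string with that parameter set might look like this (server, database, and credentials are placeholders; "Trust Server Certificate" and "SSL Mode" are standard Npgsql connection-string keywords):
Server=mydbserver.example.com;Port=5432;Database=mydb;Username=myuser;Password=mypassword;SSL Mode=Require;Trust Server Certificate=true
If you go the trusted-certificate route instead, on Windows the server's certificate file can usually be imported into the trusted root store with certutil (the file name is a placeholder):
certutil -addstore -f Root server-ca.crt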

I have finally found a workaround for my problem.
What you can do is to:
Install the current PostgreSQL ODBC driver from here
Follow the instructions from this video
With this, you can connect to your PostgreSQL database via ODBC.
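For reference, once the driver is installed, a DSN-less ODBC connection string for psqlODBC typically looks something like this (all values are placeholders; the exact driver name depends on what the installer registered):
Driver={PostgreSQL Unicode};Server=mydbserver.example.com;Port=5432;Database=mydb;Uid=myuser;Pwd=mypassword;SSLmode=require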

Related

Azure SQL Database: is TLS always enabled?

I wrote some Java code that uses com.microsoft.sqlserver.jdbc.SQLServerDataSource to establish a JDBC connection to my Azure SQL database. I found that no matter whether I call ds.setEncrypt(true) or not, the JDBC connection is encrypted with TLS (I used Wireshark to capture the TCP packets, and all of the packets are TLS either way).
Why? I checked many official documents, but I couldn't find the answer.
Is TLS always enabled for Azure SQL Database? Are there official documents that confirm it?
The question is: whether I call ds.setEncrypt(true) or not, and even when I set it to "false", the TCP packets are still encrypted with TLS. Why?
Below is my code to establish the JDBC connection:
import java.sql.Connection;
import com.microsoft.sqlserver.jdbc.SQLServerDataSource;

public static Connection getConnectionObject() {
    SQLServerDataSource ds = new SQLServerDataSource();
    ds.setServerName("azuresqldbserver0821.database.windows.net");
    ds.setDatabaseName("azuresqldb0821");
    ds.setPortNumber(1433);
    ds.setUser("root0817");
    ds.setPassword("<YourStrong#Passw0rd>");
    // Whether I call this or not, and even when I set it to false,
    // the TCP packets are still encrypted with TLS.
    ds.setEncrypt(false);
    ds.setTrustServerCertificate(true);
    Connection conn;
    try {
        conn = ds.getConnection();
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
    return conn;
}
When a client first attempts a connection to SQL Azure, it sends an initial connection request. Consider this a "pre-pre-connection" request. At this point the client does not know whether TLS/SSL/encryption is required, and it waits for an answer from SQL Azure to determine if TLS/SSL is indeed required throughout the session (not just the login sequence, but the entire connection session). A bit is set on the response indicating so. The client library then disconnects and reconnects armed with this information.
When you set the "Encrypt connection" setting on the connection string, you skip that "pre-pre-connection" and prevent any proxy from turning off the encryption bit on the client side of the proxy; this way man-in-the-middle style attacks are avoided.
When secure connections are needed, please enable the "Encrypt connection" setting.
In-transit encryption to Azure SQL is always enabled.
Transport Layer Security (TLS) was previously known as Secure Sockets Layer (SSL).
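For illustration, the same settings expressed explicitly on a plain JDBC URL (using java.sql.DriverManager) might look like the sketch below; the server, database, and credentials are copied from the question, and encrypt and trustServerCertificate are standard mssql-jdbc connection properties:
// Request encryption explicitly and keep full certificate validation.
String url = "jdbc:sqlserver://azuresqldbserver0821.database.windows.net:1433;"
        + "databaseName=azuresqldb0821;"
        + "encrypt=true;"
        + "trustServerCertificate=false;";
Connection conn = DriverManager.getConnection(url, "root0817", "<YourStrong#Passw0rd>");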

Could not find Host of Azure In App SQL Database

I've created a MySQL In App database for my Azure App, and got the connection string for it. This string is injected into the application.json, and then used to create the actual connection:
WebApplicationBuilder builder = WebApplication.CreateBuilder(args); // or however the builder is obtained
var connectionString = builder.Configuration.GetConnectionString("DefaultConnection");
builder.Services.AddDbContext<DatabaseContext>(options => options.UseMySQL(connectionString));
Only... no connection string works. The one with the port (Database=localdb;Data Source=127.0.0.1:53844;User Id=azure;Password=password) throws:
System.Net.Sockets.SocketException (11001): No such host is known.
And the one without the port (Database=localdb;Data Source=127.0.0.1;User Id=azure;Password=password) throws:
System.Net.Sockets.SocketException (10013): An attempt was made to access a socket in a way forbidden by its access permissions.
This question suggested another connection string (Server=127.0.0.1; Port=53844; Database=localdb; Uid=azure; Pwd=password), which weirdly enough also throws this exception, even though the port is defined:
System.Net.Sockets.SocketException (10013): An attempt was made to access a socket in a way forbidden by its access permissions.
And the manual suggests yet another string (server=localhost;database=localdb;user=azure;password=password) which again throws one of the two exceptions depending on if the port is present.
Connecting via the browser works fine, so I can confirm port, username and password work normally.
Just to be sure, I tried "localhost" as the host, too. Same results.
What am I doing wrong?
It's a mix of all these connection strings:
server=localhost;port=53844;database=localdb;user=azure;password=password
(Port and server separated, but both present.)
Works for me right now.
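For completeness, wired into the configuration file that GetConnectionString("DefaultConnection") reads from, that would look roughly like this (the standard ASP.NET Core appsettings.json layout is assumed here):
{
  "ConnectionStrings": {
    "DefaultConnection": "server=localhost;port=53844;database=localdb;user=azure;password=password"
  }
}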

Connecting to Aurora Postgres (Babelfish, 1433)

I'm attempting to connect to a new Aurora PostgreSQL instance with Babelfish enabled.
NOTE: I am able to connect to the instance using the pg library through the normal port 5432 (the regular PostgreSQL endpoint).
However, for this test, I am attempting to connect through the Babelfish TDS endpoint (1433) using the standard mssql package.
If I specify a database name (it is correct), I receive the error 'database "postgres" does not exist':
var config = {
    server: 'xxx.us-east-1.rds.amazonaws.com',
    database: 'postgres',
    user: 'xxx',
    password: 'xxx'
};
and the connection closes since the connection fails.
If I omit the database property in the config, like:
var config = {
    server: 'xxx.us-east-1.rds.amazonaws.com',
    user: 'xxx',
    password: 'xxx'
};
It will connect. Also, I can use that connection to query basic things like SELECT CURRENT_TIMESTAMP and it works!
However, I can't access any tables.
If I run:
SELECT COUNT(1) FROM PERSON
I receive an error 'relation "person" does not exist'.
If I dot-notate it:
SELECT COUNT(1) FROM postgres.dbo."PERSON"
I receive an error "Cross DB query is not supported".
So, I can't connect to the specific database directly and if I connect without specifying a database, I can't cross-query to the table.
Anyone done this yet?
Or, if not, any ideas on helping me figure out what to try next? I'm out of ideas.
Babelfish databases (that you connect to on port 1433) have nothing to do with PostgreSQL databases (port 5432). Essentially, all of Babelfish lives within a single PostgreSQL database (parameter babelfishpg_tsql.database_name).
You seem to have a single-db setup, because Cross DB query is not supported. With such a setup, you can only have a single database via port 1433 (apart from master and tempdb). You have to use CREATE DATABASE to create that single database (if it isn't already created; ask sys.databases).
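For example, while connected on port 1433 (the database name below is only an illustration):
SELECT name FROM sys.databases;  -- see which databases already exist (master, tempdb, ...)
CREATE DATABASE mydb;            -- create the single user database in single-db mode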
I can't tell if it is supported to create a table in PostgreSQL (port 5432) and use it on port 1433 (the other way around is fine), but if so, you have to create it in a schema that you created with CREATE SCHEMA while connected on port 1433.
The answer was that I should be connecting to database "master".
Even though there is no database titled master in the instance, you still do connect to it.
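In terms of the config object from the question, that just means (all other values as before):
var config = {
    server: 'xxx.us-east-1.rds.amazonaws.com',
    database: 'master',
    user: 'xxx',
    password: 'xxx'
};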
Once connected, run the following:
select current_database();
This will indicate you are connected to database "babelfish_db".
I don't know how that works or why a database would have an undocumented alias.
The bigger answer here is that cross-DB object references are not currently supported in Babelfish, outside your current SQL Server database.
This is currently being worked on. Stay tuned.

Connection SASL conversation error (authentication error) while using mongoexport

I am trying to export data from a MongoDB cluster to my computer using my URI connection string, but I am getting the error: could not connect to server: connection() : auth error: sasl conversation error: unable to authenticate using mechanism "SCRAM-SHA-1": (AtlasError) bad auth Authentication failed
This is the command I am using:
mongoexport --uri="mongodb+srv://yash_verma:<******>@jspsych-eymdu.mongodb.net/test?retryWrites=true&w=majority" --collection=entries --out=entries.csv
Could anyone tell me what it is that I am doing wrong? I am sure I am using the correct password.
I am also fairly new to programming and have tried to look online for a solution, but haven't found one yet.
Any help would be greatly appreciated.
Thanks,
Yash.
Your connection string looks fine, but make sure to remove the angle brackets (<>) around <password>, like so:
mongoexport --uri="mongodb+srv://yash_verma:******@jspsych-eymdu.mongodb.net/test?retryWrites=true&w=majority" --collection=entries --out=entries.csv
…where ****** is the database password (not the account password!) of the database user yash_verma.

Error reaching the Node.js server in Drupal 7

I have installed a Node.js server on shared hosting. I have a Drupal site in which I am using the Node.js integration module to connect to the Node.js server.
But whenever I try to broadcast a message from the admin panel, I get this error message in the DB log: "Error reaching the Node.js server at "nodejs/publish" with {"data":{"somecustomdata":"http://www.google.ca"},"channel":"nodejs_user_1","callback":"myowncallback","clientSocketId":""}: [404] Not Found."
Any help would be appreciated.
It is very likely one of two things:
The Drupal server is accessing the wrong URI.
The Node.js server is not listening on the URI you expect it to.
Of course something less obvious might cause errors, but please verify those two before proceeding.
Best would be to get your Drupal server to print the URI it is trying to access in its error logs, and manually verify you can reach it from your browser or another tool (see the example below).
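For example, with a tool like curl (the scheme, host, and port below are placeholders for whatever your settings actually use); even an error response means the Node.js server is listening on that URI, whereas a connection failure or a 404 points at a wrong host, port, or path:
curl -i https://yourhostname:8080/nodejs/publish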
Thanks "alandrev" for your help.I have resolved that issue on the same day but I forgot to add my mistake.Actually I was not configuring the nodejs correctly.I was using the incorrect port number on backend settings in nodejs.config.js file.The correct settings mentioned below:
backendSettings = {
    "scheme": "https",
    "host": "yourhostname",
    "port": "port number which is not already in use",
    "sslKeyPath": "key file path for ssl enabled site otherwise leave empty",
    "sslCertPath": "certificate path for ssl enabled site otherwise leave empty",
    "sslCAPath": "",
    "resource": "/socket.io",
    "baseAuthPath": "/nodejs/",
    "publishUrl": "publish",
    "serviceKey": "",
    "backend": {
        "port": 443,
        "scheme": "https or http",
        "host": "yourhostname",
        "messagePath": "/nodejs/message/"
    },
    "clientsCanWriteToChannels": false,
    "clientsCanWriteToClients": false,
    "extensions": "",
    "debug": false,
    "addUserToChannelUrl": "user/channel/add/:channel/:uid",
    "publishMessageToContentChannelUrl": "content/token/message",
    "jsMinification": true,
    "jsEtag": true,
    "logLevel": 1
};
Solved this same issue by adding "polling" to the transports list:
backendSettings = {
    "scheme": "http",
    "host": "localhost",
    "port": 8081,
    "key": "/path/to/key/file",
    "cert": "/path/to/cert/file",
    "resource": "/socket.io",
    "publishUrl": "publish",
    "serviceKey": "SERVICE KEY",
    "backend": {
        "port": 80,
        "host": "localhost",
        "messagePath": "/mysite/nodejs/message/"
    },
    "clientsCanWriteToChannels": true,
    "clientsCanWriteToClients": true,
    "extensions": "",
    "debug": true,
    "transports": ["websocket", "polling", "flashsocket",
        "htmlfile", "xhr-polling", "jsonp-polling"],
    "jsMinification": true,
    "jsEtag": true,
    "logLevel": 1
};
