SymmetricDS and Azure SQL Server

I need help connecting to an Azure SQL database using SymmetricDS 3.5.1. I can't seem to get the configuration correct. I get an error saying "Cannot create PoolableConnectionFactory" with the message either "socket closed" (when I don't specify the ssl parameter) or "login timeout" (when I do). I have specified a timeout in the connection string, but it does not seem to take effect and defaults to 30 seconds. Is there any documentation on how to connect to an Azure database using SymmetricDS? In any case, can you take a look and tell me what I need to change in my engine.properties file? I have the following:
db.url=jdbc:jtds:sqlserver://MyServer.database.windows.net:1433;database=MyDatabase;user=MyUser#MyServer;password=MyPassword;encrypt=true;hostNameInCertificate=*.database.windows.net;loginTimeout=300;useCursors=true;bufferMaxMemory=10240;lobBuffer=5242880;ssl=require
db.user=MyUser#MyServer
db.database=MyDatabase
db.password=MyPassword
db.driver=net.sourceforge.jtds.jdbc.Driver

It turns out you have to use the Microsoft JDBC driver. I didn't see any documentation on how to set it up, so for the sake of others, this is what I did after reading http://www.symmetricds.org/docs/how-to/connect-to-database:
Download the Microsoft JDBC driver.
Put the sqljdbc4.jar file in the lib folder of your SymmetricDS installation.
Change the *.properties file to use the following connection information:
db.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
db.url=jdbc:sqlserver://{your_server_name}.database.windows.net:1433;database={database_name};user={user}#{your_server_name};password={password};encrypt=true;hostNameInCertificate=*.database.windows.net;loginTimeout=300;useCursors=true;bufferMaxMemory=10240;lobBuffer=5242880;
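
To sanity-check the driver swap outside of SymmetricDS, a minimal connectivity test can help. This is a hedged sketch that assumes sqljdbc4.jar is on the classpath and reuses the placeholder names from the URL above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Minimal sketch: verify the Microsoft JDBC driver can reach Azure SQL using
// the same URL as the *.properties file. Placeholders are illustrative.
public class AzureSqlConnectionTest {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:sqlserver://{your_server_name}.database.windows.net:1433;"
                + "database={database_name};"
                + "user={user}#{your_server_name};"
                + "password={password};"
                + "encrypt=true;"
                + "hostNameInCertificate=*.database.windows.net;"
                + "loginTimeout=300;";
        try (Connection conn = DriverManager.getConnection(url)) {
            // If this prints, the driver, URL, and Azure firewall rules are all in order.
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}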

Related

Why might a DSBulk load operation stop without any errors?

I have created a Cassandra database in DataStax Astra and am trying to load a CSV file using DSBulk in Windows. However, when I run the dsbulk load command, the operation never completes and never fails. I receive no error message at all, and I have to terminate the operation manually after several minutes. I have tried to wait it out, letting the operation run for 30 minutes or more with no success.
I know that a free tier of Astra might run slower, but wouldn't I see at least some indication that it is attempting to load data, even if slowly?
When I run the command, this is the output that is displayed and nothing further:
C:\Users\JT\Desktop\dsbulk-1.8.0\bin>dsbulk load -url test1.csv -k my_keyspace -t test_table -b "secure-connect-path.zip" -u my_user -p my_password -header true
Username and password provided but auth provider not specified, inferring PlainTextAuthProvider
A cloud secure connect bundle was provided: ignoring all explicit contact points.
A cloud secure connect bundle was provided and selected operation performs writes: changing default consistency level to LOCAL_QUORUM.
Operation directory: C:\Users\JT\Desktop\dsbulk-1.8.0\bin\logs\LOAD_20210407-143635-875000
I know that DataStax recently changed Astra so that you need credentials from a generated token to connect with DSBulk, but I have a classic DB instance that won't accept those token credentials when entered in the dsbulk load command. So, I use my regular username/password.
When I check the DSBulk logs, the only text is the same output displayed in the console, which I have shown in the code block above.
If it means anything, I have the exact same issue when trying to run the dsbulk count operation.
I have the most recent JDK and have set both the JAVA_HOME and PATH variables.
I have also tried adding dsbulk/bin directory to my PATH variable and had no success with that either.
Do I need to adjust any settings in my Astra instance?
Lastly, is it possible that my basic laptop is simply not powerful enough for this operation, or is it just running the operation extremely slowly?
Any ideas or help is much appreciated!

Gremlin console - cannot connect using Cassandra config

I installed JanusGraph Server (0.4) and Cassandra (3.11) on my machine. They both start correctly.
When I start the JanusGraph client to work from the console and run
:remote connect tinkerpop.server conf/remote.yaml
the connection is successful.
Then, if I use this command
graph = JanusGraphFactory.open('conf/janusgraph-cassandra.properties')
I get the following error message:
WARN org.janusgraph.diskstorage.cassandra.thrift.CassandraThriftStoreManager - Cassandra Thrift protocol is deprecated and will be removed with JanusGraph 0.5.0. Please switch to the CQL backend.
Could not open global configuration
The warning is clear; the error saying it could not open the global configuration is not.
Analyzing the configuration file in question, I noticed the following property:
storage.backend
This property sets the driver. By changing its value from
cassandrathrift
to
cql
everything works fine.
The warning should arguably be an error if cql is the required driver. Instead, the accompanying message suggests that JanusGraph is looking for a default configuration file.
It could be that, when using cassandrathrift as the driver, some properties are not set and JanusGraph therefore looks for their default values. At the moment I don't know in which path this default file should exist or what it should contain. Considering that the cassandrathrift driver is deprecated, I think switching to cql is a good solution.
In the same conf/ dir as the Thrift-based config file, you should also see a janusgraph-cql.properties file.
This file should already have storage.backend=cql set, as well as a few other parameters allowing you to connect to a local Cassandra instance running on 127.0.0.1 (without security enabled).
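For completeness, here is a minimal sketch of opening the graph with the CQL backend programmatically, assuming a local Cassandra instance on 127.0.0.1 as in that properties file:

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

// Minimal sketch: open JanusGraph against the CQL backend instead of the
// deprecated Thrift one. Mirrors the settings in janusgraph-cql.properties.
public class OpenGraphWithCql {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.build()
                .set("storage.backend", "cql")        // was: cassandrathrift
                .set("storage.hostname", "127.0.0.1") // local, unsecured Cassandra
                .open();
        System.out.println("Graph open: " + !graph.isClosed());
        graph.close();
    }
}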

How to have fast redirects with async db writes?

I am currently working on a Node.js API deployed on AWS with Elastic Beanstalk.
The API accepts a URL with query parameters, saves the parameters to a database (in my case AWS RDS), and redirects to a new URL without waiting for the database response.
The top priority by far for the API is redirection speed and the ability to handle a large number of requests. The aim of this question is to get your suggestions on how to achieve that.
I ran the API through a service called blitz.io to see what load it could handle, and this is the report I got: https://www.dropbox.com/s/15wsa8ksj3lz99e/Blitz.pdf?dl=0
The instance and the database are running on t2.micro and db.t2.micro respectively.
The API can handle the load if no write is performed on the database, but it crashes under a certain load when it does write (I shared the report for the latter case), even though it never waits for the database responses.
I checked the logs and found the following error in /var/log/nginx/error.log:
*1254 socket() failed (24: Too many open files) while connecting to upstream
I am not familiar with how nginx works, but I imagine that every database connection is seen as an open file. Hence, the error implies that we reach the open-file limit before the connections can be closed. Is that a correct interpretation? Why am I getting the error?
I increased the limit in the way suggested here: https://forums.aws.amazon.com/thread.jspa?messageID=613983#613983 but it did not solve the problem.
At this point I am not sure what to do. Can I close the connections before getting a response from the db? Is it a hardware limitation? The writes to the db are tiny.
Thank you in advance for your help! :)
If you just modified ulimit, it might not be enough. You should also look at fs.file-max for the system-wide number of file descriptors:
sysctl -w fs.file-max=100000
as explained here:
http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
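Note that nginx also enforces its own per-worker cap on open files. A hedged sketch of the relevant nginx.conf directives (the values are illustrative, not tuned for a t2.micro):

# nginx.conf (illustrative values)
worker_rlimit_nofile 65535;    # raise the per-worker open-file limit
events {
    worker_connections 16384;  # keep this below worker_rlimit_nofile
}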

Pre-login handshake woes with connecting directly to SQL Azure

We are currently experiencing a rather troublesome problem in our development environment with the following message...
A connection was successfully established with the server,
but then an error occurred during the pre-login handshake.
(provider: SSL Provider, error: 0 - The certificate's CN
name does not match the passed value.)
...the commonly accepted wisdom for resolving this problem is to set the TrustServerCertificate portion of the connection string to True. However, this does not work reliably or consistently.
This particular error occurs in a number of scenarios, for example when testing our WCF service in the Azure Emulator against a live/hosted SQL Azure instance, or even when using SQL Management Studio. The only common denominator we've found is that it occurs only when we connect directly to SQL Azure, as opposed to when the application is hosted in Azure and talks to SQL Azure (which does work).
I've tried a number of tactics to resolve the problem (such as the one detailed here), believing it was connection related: removing pooling and making other modifications to the connection string. But alas, none were conclusive, and more irritating is that the error is intermittent and will prevent access for a short period of time before magically resolving itself.
Other factors we've eliminated:
We're using the Transient Fault Handling Application Block to attempt to recover from these errors, without success.
Our office has no proxy server between us and the Azure-hosted services.
Has anyone else experienced this problem or has any suggestions?
You need to scan for non-IFS Winsock BSPs or LSPs, which are not compatible with the FILE_SKIP_COMPLETION_PORT_ON_SUCCESS flag; the problem results primarily from non-IFS LSPs being installed.
Just run "netsh WinSock Show Catalog" from a command prompt and check for any "service flags" value that doesn't match the 0x20xxx format.
In my case I found that "Speed Accelerator" had a service flag of 0x66; removing this software solved my problem.
More information can be found here: http://support.microsoft.com/kb/2568167
What does your connection string look like? Not sure if you've tried this yet, but I remember having a similar problem when using a remote SQL connection to SQL Azure and found that I had to set:
Trusted_Connection=False;Encrypt=True
and remove any Connect Timeout from the string entirely.
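
For reference, a hedged sketch of a full ADO.NET-style connection string with those settings applied (the server, database, and credential names are placeholders):

Server=tcp:{your_server}.database.windows.net,1433;Database={your_database};User ID={user}@{your_server};Password={password};Trusted_Connection=False;Encrypt=True;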

WebSphere MQ Security Authentication Exception on Unix

We have our application running on a Sun Solaris system with a local WebSphere MQ installation. The application uses bindings mode to connect to the queue manager. When trying to send a message to the local queue, the JNDI binding is successful but we encounter a javax.jms.JMSSecurityException: MQJMS2013: invalid security authentication supplied for MQQueueManager error. On investigation, we found that the user ID used for authentication matches the user the application runs as, but only case-insensitively. By default, the user the application runs as is passed for authentication, and here the case-sensitive match is failing. The application server is WebLogic. Appreciate any inputs.
In order to open the local queue, the application must first have connected to the queue manager successfully. The error on the remote queue is a connection error, so it is not even reaching the queue manager. This suggests that you are using different connection factories and that the second one differs in its connectivity parameters. The first step is to reconcile those differences.
Also, an MQJMS2013 security error can be many things, most of which are not actually MQ issues. For example, some people store their managed objects in LDAP, and an authentication problem there will throw this error. For people who use filesystem-based JNDI, OS file permissions can cause the same thing. However, if it is an actual WMQ issue (which this appears to be), then the linked exception will contain the MQ reason code (for example, MQRC=2035). If you want to be able to better diagnose MQ (or, for that matter, any JMS transport) issues, it pays to get in the habit of printing linked exceptions.
If you are not able to resolve this issue based on this input, I would advise updating the question with details of the managed object definitions and the reason code obtained from printing the linked exceptions.
We were using createQueueConnection() on the QueueConnectionFactory to create the connection, and the issue got resolved by using createQueueConnection("", "") instead. The Unix user ID (webA) is case sensitive, and the application was trying to authenticate to MQ with the lowercased user ID (weba), which the queue manager rejected. Can you tell us why the application was sending the lowercased user ID (weba) in the first place?
Thanks,
Arun
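
Following the advice above about printing linked exceptions, here is a minimal sketch of a connection attempt that surfaces the MQ reason code; the JNDI name "jms/MyQCF" is hypothetical:

import javax.jms.JMSException;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Minimal sketch: connect with empty credentials (the workaround described
// above) and print the linked exception, which for WMQ typically carries the
// MQ reason code (e.g. MQRC=2035). The JNDI name "jms/MyQCF" is hypothetical.
public class MqConnectionDiagnostics {
    public static void main(String[] args) throws NamingException {
        InitialContext ctx = new InitialContext();
        QueueConnectionFactory factory = (QueueConnectionFactory) ctx.lookup("jms/MyQCF");
        try {
            QueueConnection connection = factory.createQueueConnection("", "");
            connection.close();
            System.out.println("Connected successfully.");
        } catch (JMSException e) {
            System.err.println("JMS error: " + e.getMessage());
            Exception linked = e.getLinkedException();
            if (linked != null) {
                System.err.println("Linked exception: " + linked);
            }
        }
    }
}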
