What is the correct way to configure Apache Superset to connect to Presto via LDAP?

I started a new Apache Superset deployment from their docker-compose.
My connection string looks like this:
presto://user:password#presto:8446/
And in Advanced/Security I put the extra connection settings (screenshot omitted), but this doesn't seem to work.
Any ideas what I am doing wrong? I tried to follow the PyHive connection string format as closely as possible.
And yes... PyHive is installed in my Docker image. I checked.
When I click on "Connect" I am getting this error:
An error occurred while creating databases: (database_name) A database with the same name already exists
When I click on "Test Connection" I get the following error:
ERROR: (builtins.NoneType) None
(Background on this error at: http://sqlalche.me/e/13/dbapi)
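For what it's worth, here is a minimal sketch of a PyHive/SQLAlchemy URI for password (LDAP) authentication against Presto. The host, port, and hive catalog are placeholders; the two details that commonly trip this up are the @ separator between the credentials and the host (the string above uses #, which SQLAlchemy will not parse as a host) and protocol=https, since PyHive refuses to send a password over plain HTTP:

from sqlalchemy import create_engine

# Placeholders: adjust host, port, and catalog to your deployment.
# PyHive passes URL query parameters (here protocol=https) straight
# through to pyhive.presto.connect().
engine = create_engine("presto://user:password@presto:8446/hive?protocol=https")
engine.connect()  # opens a connection; queries authenticate with the LDAP credentials

The same URI can be pasted into Superset's SQLALCHEMY URI field.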

Related

How to resolve creating a MySQL 5.5.53 read replica with error InvalidParameterCombination, Status 400?

Today I tried to create a read replica for a MySQL 5.5.53 RDS instance, and it gave me the error below:
Cannot find version 5.5.53 for mysql (Service: AmazonRDS; Status Code:
400; Error Code: InvalidParameterCombination;
Creating the read replica in the UI did not work, so I tried the AWS CLI instead:
aws rds create-db-instance-read-replica \
    --db-instance-identifier <read_replica_name> \
    --source-db-instance-identifier <master-server-name> \
    --db-instance-class <class-name> \
    --availability-zone <zone> \
    --no-multi-az \
    --auto-minor-version-upgrade \
    --no-publicly-accessible \
    --vpc-security-group-ids <vpc-id>
And it worked.
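If the CLI rejects the version too, it may simply be that 5.5.53 is no longer offered by RDS. A quick way to list the MySQL versions RDS currently supports (standard aws rds flags, nothing specific to this setup):

aws rds describe-db-engine-versions --engine mysql --query "DBEngineVersions[].EngineVersion"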
I was getting this error today when trying to load the "Modify" page for one of my RDS instances. I discovered that this happens when I navigate to the instance from the "Resources" tab in a CloudFormation stack, but not when I navigate to the instance from the "Instances" list in the RDS console. (The two paths do result in different URLs but what looks like the same page.)
Thought I'd add this in case it's what was behind your error message, or for someone else who searches and finds this question as I did.

"Error: Key not loaded" in h2o deployed through a K3s cluster, using python3 client

I can confirm the 3-replica h2o cluster inside K3s is correctly deployed, as executing h2o.init(ip="x.x.x.x") in the Python3 interpreter works as expected. I followed the instructions noted here: https://www.h2o.ai/blog/running-h2o-cluster-on-a-kubernetes-cluster/
Nevertheless, I had to modify the service.yaml and comment out the line that says clusterIP: None, as K3s was complaining about its inability to set the clusterIP to None. Even so, I can confirm it is working correctly, and I am able to use an external IP to connect to the cluster.
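(As a quick sanity check that the modified Service actually exposes an address, something like the following helps; the namespace and service name are placeholders for whatever the h2o manifests use:

kubectl -n <namespace> get svc <h2o-service> -o wide

The IP shown there is the one h2o.init(ip=...) should point at.)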
If I try to load the dataset on the h2o cluster inside the K3s cluster, following the exact same steps described here: http://docs.h2o.ai/h2o/latest-stable/h2o-docs/automl.html, this is the output I get:
>>> train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
...
h2o.exceptions.H2OResponseError: Server error java.lang.IllegalArgumentException:
Error: Key not loaded: Key<Frame> https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv
Request: POST /3/ParseSetup
data: {'check_header': '0', 'source_frames': '["https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv"]'}
The same error occurs if I use the h2o.upload_file("x.csv") method.
There is a clue about what may be happening here: Key not loaded: Key<Frame> while POSTing source frame through ParseSetup in H2O API call. But I am not using curl, and I cannot find any parameter that could help me overcome this issue: http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/h2o.html?highlight=import_file#h2o.import_file
I need to use the Python client inside the same K3s cluster for various technical reasons, so I am not able to launch either Flow or Firebug to see what may be happening.
I can confirm it works correctly when I simply issue an h2o.init() using the local Java instance.
UPDATE 1:
I have tried this in different K3s clusters without success. I changed the service.yaml to a NodePort, and now this is the error traceback:
>>> train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
...
h2o.exceptions.H2OResponseError: Server error java.lang.IllegalArgumentException:
Error: Job is missing
Request: GET /3/Jobs/$03010a2a016132d4ffffffff$_a2366be93ec99a78d7bc161de8c54d67
UPDATE 2:
I have tried using different services (NodePort, LoadBalancer, ClusterIP) and none of them work. I have also tried using Minikube with the official image, and with a custom image made by me, without success. I suspect this is something related either to h2o itself or to the clustering between pods. I will keep digging; hopefully there will be some gold in it.
UPDATE 3:
I also found out that the post about running H2O in Docker (https://www.h2o.ai/blog/h2o-docker/) is really outdated, and the Dockerfile on GitHub does not work either (I changed it to uncomment the ENTRYPOINT section, without success): https://github.com/h2oai/h2o-3/blob/master/Dockerfile
Even so, I tried the custom image I built for h2o-k8s and it works seamlessly in pure Docker. I am wondering why it is still not working in K8s...
UPDATE 4:
I have tried modifying the environment variable called H2O_KUBERNETES_SERVICE_DNS without success.
In the meantime, the cluster became unavailable; that is, the readiness probes would not complete successfully. No matter what I change now, it does not work.
I spun up a local K3d cluster to see what happened, and surprisingly, the readiness probes were not failing with v3.30.0.6. I then started testing with R instead of Python, and I am glad I tried, because I may have pinpointed what was wrong: there is a version mismatch between the client and the server. So I updated the image accordingly, to v3.30.0.1.
But now, again, the readiness probe is not working in my K3d cluster, so I am unable to test it.
It seems it is working now: R client version 3.30.0.1 with server version 3.30.0.1. I also tried Python client version 3.30.0.7 with server version 3.30.0.7, and it started working. Marvelous. The problem was caused by a version mismatch between the client and the server, as the Python client had been updated to 3.30.0.7 while the latest server image for Docker was 3.30.0.6.
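For anyone hitting the same wall, a quick sketch of how to compare the two versions from the Python client (the IP is a placeholder):

import h2o

h2o.init(ip="x.x.x.x")  # placeholder: the address exposed by the K3s Service

# If these two differ, align them, e.g. pip install h2o==<server version>,
# or deploy a server image that matches the installed client.
print("server:", h2o.cluster().version)
print("client:", h2o.__version__)

If they do not match, requests such as ParseSetup can fail with errors like the ones above.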

SSAS Data Pump - IsapiModule could not be found

The data pump was previously running on a server with IIS and SQL Server on the same machine, and it was working fine. We've been given a new SQL named instance and we're trying to set up the data pump again to point to the new server. The new SQL Server is also a new version, so we are using the new msmdpump from the new SQL installation.

We've previously set this up on several client sites, so we followed the requirements for the data pump setup, but in this case I'm stuck with a "specified module could not be found" error for the IsapiModule. I suspected it might be an access issue to the msmdpump dll, but I've gone as far as moving the folder to a location with full "Everyone" access. I've set up tracing, and the relevant details for the problem seem to be:
MODULE_SET_RESPONSE_ERROR_STATUS
ModuleName: IsapiModule
Notification: EXECUTE_REQUEST_HANDLER
HttpStatus: 500
HttpReason: Internal Server Error
HttpSubStatus: 0
ErrorCode: The specified module could not be found. (0x8007007e)
I've tried everything I could find online, so any assistance or advice will be great.
Same error as the following: IIS 8.0 Detailed 500.0 Internal Server Error - IsapiModule Not Found
OK, this was quite tough. I had exactly the same issue but was able to resolve it by installing the Visual C++ Redistributable Packages for Visual Studio 2013, which I found here:
https://www.microsoft.com/en-us/download/details.aspx?id=40784
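(For what it's worth, error 0x8007007e usually means a dependency of the DLL is missing rather than the DLL itself, which is why the redistributable fixes it. If the Visual Studio tools are available, a quick diagnostic is to list what msmdpump.dll links against; the path is a placeholder:

dumpbin /dependents <path-to>\msmdpump.dll)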

Resource temporarily unavailable. Authentication by key failed (Error -18). (Error #35)

I'm using an Amazon Web Services EC2 instance to run my server, with NodeJS and MongoDB.
I was able to save and load data from my Android application through the NodeJS server and MongoDB, but when I tried to check the data using Robomongo (Robo 3T), this error occurred:
Resource temporarily unavailable. Authentication by key (path of the .pem key) failed (Error -18). (Error #35)
(Screenshots: the Robomongo connection settings and the error dialog.)
This is what I did in Robomongo.
These settings are the result of searching Google... I think I did it right...
What is wrong?
I solved the problem myself.
When you have this problem,
1. Check /etc/mongod.conf: in the network interfaces section, bindIp must be 0.0.0.0, not 127.0.0.1 (see the sketch after this list).
2. Check the SSH user name. For an Amazon Linux AMI, the user name is ec2-user. For the others, check out this link: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html
3. If that didn't help, try downloading the build the developer uploaded (1.2 - Beta): https://github.com/Studio3T/robomongo/issues/1189#issuecomment-353279070
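For reference, the relevant fragment of /etc/mongod.conf mentioned in step 1 (a sketch of the stock layout; only the bindIp value changes):

# /etc/mongod.conf
net:
  port: 27017
  bindIp: 0.0.0.0  # listen on all interfaces instead of only 127.0.0.1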

Using Google CloudSQL, getting "connect ECONNREFUSED 127.0.0.1:3306"

I'm trying my best to learn Google's Cloud Platform. They have a CloudSQL offering, which I'm learning via this NodeJS tutorial. Everything worked great until I deployed to their appspot server, at which point I got the following error:
connect ECONNREFUSED 127.0.0.1:3306
I've looked all through the NodeJS project and don't see anything in it or the Cloud Console that is referencing localhost or 127.0.0.1. Googling the error hasn't helped thus far. Any ideas?
I couldn't get this fixed when running on the server, but using these files I was able to read/write from both local and production; now I'm using these connection settings in my own app:
https://cloud.google.com/appengine/docs/flexible/nodejs/using-cloud-sql
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/master/appengine/cloudsql
I had a similar issue when deploying the NodeJS sample app 2-structured-data.
The error occurred because the NODE_ENV environment variable was not passed through to the config file, which checks it to decide whether Node should connect to MySQL through the Cloud SQL socket (when running in production) instead of 127.0.0.1:3306, the local connection that is being refused here.
You can fix it by adding 'NODE_ENV' to the whitelist of environment variables in config.js:
.env([
  ...
  'NODE_ENV'
])
