How do I implement ArangoDB HTTP API authentication on a DC/OS cluster?

When using a single ArangoDB instance, I've used the root user and a password to authenticate my pyArango connection in my application.
I tried to do the same thing with my cluster, but I couldn't.
Is there a better way?

Related

Call SQL Server from a Linux Container without Passwords in the Connection String

I am converting a service to run on a Linux Container. Currently, the service runs in IIS in a Windows VM.
It runs as a Lan User that has permissions to the database. Thus the connection string uses Integrated Security.
But Containers cannot join a domain. So, as I understand it, that option is out.
I researched this for Windows Containers and found that it supports running as a Group Managed Service Account (gMSA) on the container host, and that calls made as "Network Service" are swapped to the gMSA. (Allowing use of a domain user via the container host.)
But I cannot seem to find a similar feature for Linux containers.
Do all processes running in Linux containers just put usernames and passwords into their database connection strings?
Or is there a better way to convey identity in a Linux Container?
To give a few more details on my particular setup:
Running a Linux container
Running .NET Core 2.2
Running in Kubernetes (eventually)
Database is Microsoft SQL Server Running on Windows
It would help to know a bit more about your setup, but with the information at hand there are three options as I see it.
Option 1:
Manage the credentials with Docker secrets, as per
https://docs.docker.com/engine/swarm/secrets/
docker container exec <CONTAINER_ID> \
bash -c 'mysqladmin --user=wordpress --password="$(< /run/secrets/old_mysql_password)" password "$(< /run/secrets/mysql_password)"'
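The same idea applies inside the application: read the credential from the file that Docker (or Kubernetes) mounts into the container rather than hard-coding it in the connection string. A minimal sketch in TypeScript, assuming the secret is mounted at /run/secrets/db_password (the path and variable names are illustrative, not tied to any particular framework or driver):
import { readFileSync } from "fs";

// Read the database password from the secret file mounted into the container.
// The path is an assumption; use whatever path your orchestrator mounts the secret at.
const dbPassword = readFileSync("/run/secrets/db_password", "utf8").trim();

// Build the connection settings at runtime so the password never appears
// in source code, in the image, or in the environment listing.
const connectionConfig = {
  server: process.env.DB_HOST ?? "db",
  database: process.env.DB_NAME ?? "app",
  user: process.env.DB_USER ?? "app_user",
  password: dbPassword,
};

console.log(`Connecting to ${connectionConfig.server}/${connectionConfig.database} as ${connectionConfig.user}`);
The same pattern works regardless of the client library; only the file read and the way the credential is handed to the driver change.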
Option 2:
Depending on what kind of DB you're using, you could add the password to the configuration file, for example in my.cnf for MySQL.
[client]
password = 123
Option 3:
Depending on your network stack, you could set the permissions in the database instead, allowing access by IP. However, I would recommend one of the other options.
Since it is a Linux environment and I believe you want to use Windows authentication, you can use AD authentication in a similar way.
Check here
This tutorial explains how to configure SQL Server on Linux to support Active Directory (AD) authentication, also known as integrated authentication. For an overview, see Active Directory authentication for SQL Server on Linux.
This tutorial consists of the following tasks:
Join SQL Server host to AD domain
Create AD user for SQL Server and set SPN
Configure SQL Server service keytab
Secure the keytab file
Configure SQL Server to use the keytab file for Kerberos authentication
Create AD-based logins in Transact-SQL
Connect to SQL Server using AD Authentication
Update: I don't think Windows domain integrated authentication can be used here, so there can't be any integrated authentication. Try a DSN so that your code does not contain a username and password: https://www.easysoft.com/products/data_access/odbc-sql-server-driver/getting-started.html

How to rate limit a node.js API that uses JWT?

I have a node.js application that runs on different servers/containers. Clients are authenticated with JWT tokens. I want to protect this API from DDoS attacks. User usage limit settings can be stored in the token. I've been thinking about some approaches:
Increment counters in a memory database like Redis or Memcached and check them on every request. Could this cause bottlenecks?
Do something in an nginx server (is it possible?)
Use some cloud based solution (AWS WAF? Cloudfront? API Gateway? Do they do what I want?)
How to solve this?
There are dedicated npm packages like
express-rate-limit https://www.npmjs.com/package/express-rate-limit
ratelimiter https://www.npmjs.com/package/ratelimiter
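For the first approach from the question, express-rate-limit can key its counter on the user identity taken from the JWT instead of the client IP, so per-user limits carried in the token can be enforced. A rough sketch, assuming Express and a hypothetical decodeUserId helper; the window and limit values are only examples:
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();

// Hypothetical helper: extract a stable user id from the JWT payload. The token is
// assumed to be verified by your existing auth middleware before it reaches here.
function decodeUserId(authHeader?: string): string | undefined {
  if (!authHeader?.startsWith("Bearer ")) return undefined;
  const payload = authHeader.split(".")[1];
  if (!payload) return undefined;
  try {
    return JSON.parse(Buffer.from(payload, "base64url").toString()).sub;
  } catch {
    return undefined;
  }
}

app.use(
  rateLimit({
    windowMs: 60 * 1000, // one-minute window (example value)
    max: 100,            // requests allowed per key per window (example value)
    keyGenerator: (req) => decodeUserId(req.headers.authorization) ?? "anonymous",
  })
);

app.get("/api/resource", (_req, res) => {
  res.json({ ok: true });
});

app.listen(3000);
By default the package keeps counters in memory per process; when the API runs on several servers or containers, a shared store (for example the Redis store the package supports) is what makes the limit global rather than per instance.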
I don't know about AWS, but in Azure you can use Azure API Management to secure your API: https://learn.microsoft.com/en-us/azure/api-management/api-management-key-concepts

Kubernetes cluster secure entry point for API

I built a Kubernetes cluster which contains a UI app, a worker, Mongo, MySQL, and Elasticsearch, and exposes two routes with an ingress; there is also an SSL certificate on top of the cluster's static IP. It utilizes Pub/Sub and storage.
All looks fine.
Now I'm looking for a secure way to expose an endpoint to an external service.
Use case:
A remote app wishes to access my cloud app with a video GUID in the payload, in a secure manner, and get back a URL to a video in the bucket.
I looked at the Google Cloud Endpoints service but couldn't get it to work with Kubernetes.
There are more services that will need an access point to the app.
What is the best way for me to solve this problem?
Solve it by simply adding an endpoint to the ingress controlling the app, and protect it with SSL and JWT. Use this guide and this guide to add the ingress controller.
This tutorial shows how to integrate Kubernetes with Google Cloud Endpoint
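If the ingress only terminates SSL, the JWT check can live in the service behind it. A minimal sketch of such an endpoint with the jsonwebtoken package; the /video route, the secret source, and the getSignedVideoUrl helper are assumptions made for illustration, not part of any specific setup:
import express from "express";
import jwt from "jsonwebtoken";

const app = express();
app.use(express.json());

// Secret (or public key) used to verify tokens issued to the external service.
// Where it comes from is an assumption, e.g. a Kubernetes Secret exposed as an env var.
const JWT_SECRET = process.env.JWT_SECRET ?? "change-me";

// Hypothetical helper that turns a video GUID into a short-lived signed URL in the bucket.
async function getSignedVideoUrl(videoGuid: string): Promise<string> {
  return `https://storage.example.com/videos/${videoGuid}?signature=...`;
}

app.post("/video", async (req, res) => {
  const auth = req.headers.authorization;
  if (!auth?.startsWith("Bearer ")) {
    return res.status(401).json({ error: "missing token" });
  }
  try {
    jwt.verify(auth.slice("Bearer ".length), JWT_SECRET); // throws if invalid or expired
  } catch {
    return res.status(401).json({ error: "invalid token" });
  }
  const url = await getSignedVideoUrl(req.body.videoGuid);
  return res.json({ url });
});

app.listen(8080);
The remote app then calls the ingress route over HTTPS with its token, and only requests carrying a valid JWT ever reach the bucket-signing logic.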

Connecting to MongoDB Atlas from Node.js Docker services

So I am learning Node.js, Docker, and MongoDB, and I have a few questions.
I have three tasks (replicas) of a service (Node.js in Docker services). The service is supposed to access a MongoDB database. I have two options:
Use Atlas, which sounds simple to me as I am a beginner.
Use MongoDB containers, which I believe could be a little more work.
So the question is: if I use MongoDB Atlas and connect to the database hosted on Atlas, is the transfer of data between Node.js and Atlas secure by default? What should be done to "secure" the transfer of data between the Node.js container service and MongoDB Atlas?
If I choose the second option above, should all three replicas/tasks communicate with only ONE MongoDB container?
Is the transfer of data between Node.js and Atlas secure by default?
Without knowing your application environment, I can't comment about security on your side of the network.
However for MongoDB Atlas, it's using TLS/SSL and authentication (SCRAM) enabled by default (and cannot be disabled).
Traffic from clients to Atlas is authenticated and encrypted in-transit, and traffic between the customer's internally managed MongoDB nodes is also authenticated and encrypted in-transit using TLS/SSL.
Also, depending on which cloud provider you choose in Atlas (AWS, GCP, or Azure), each provides different encryption-at-rest features (transparent disk encryption).
Please note that there are other security features provided by MongoDB Atlas, i.e. IP Whitelisting. See also MongoDB Atlas: Security Features and Setup and MongoDB Atlas Security Controls.
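On the client side there is usually nothing extra to do: the mongodb+srv connection string that Atlas generates enables TLS by default. A minimal sketch with the official MongoDB Node.js driver; the cluster host, user, password, and database names below are placeholders:
import { MongoClient } from "mongodb";

// Placeholder Atlas connection string; mongodb+srv URIs have TLS enabled by default.
const uri = "mongodb+srv://appUser:<password>@cluster0.example.mongodb.net/mydb?retryWrites=true&w=majority";

async function main(): Promise<void> {
  const client = new MongoClient(uri);
  await client.connect(); // connection is authenticated (SCRAM) and encrypted (TLS)
  const docs = await client.db("mydb").collection("videos").find().limit(5).toArray();
  console.log(docs);
  await client.close();
}

main().catch(console.error);
Restricting the Atlas IP whitelist to the addresses your containers connect from adds a second layer on top of TLS and authentication.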
If I choose the second option above, should all three replicas/tasks communicate with only ONE MongoDB container?
I'm not sure I understand this question. The purpose of having a replica set is to provide high availability (in the case of a primary failover, another node will automatically take over). Having all three nodes of a replica set deployed into a single Docker container would defeat this purpose.

cannot connect with Cloud SQL Proxy without Elevated privileges

We are attempting to configure dev users at a project level with only 'viewer' access and also allow them to log in to Cloud SQL. Strangely, there are no granular permissions as there are for Datastore or BigQuery.
When attempting to connect after configuring the Cloud SQL Proxy, following Google best practice for connecting to v2 Cloud SQL instances, the connection is refused in MySQL Workbench and the following message appears in the proxy window.
As soon as the project privileges are changed to 'editor' in IAM, the same connection works fine. With the lack of roles for Cloud SQL, this means all users either can't access Cloud SQL v2 with the proxy or can reset the root password.
Hopefully we are mistaken, as this seems like a serious security issue.
You are correct, at this time the actor must have at least 'project editor' role to connect using the Cloud SQL Proxy.