Azure VM not connecting to Azure Redis Cache but local is connecting to Azure Redis Cache - node.js

The same Azure Redis cache connects fine from my local machine.
Port 6380, on which the cache runs, is open in the VM's firewall for both inbound and outbound traffic.
I tried both Node.js and Java. Both connect to the remote Azure Redis cache from my local machine, but the exact same Node.js and Java code does not connect to the Azure Redis cache from the VM.
Java config:
spring.redis.host=my-cache.redis.cache.windows.net
spring.redis.password=<password>
spring.redis.port=6380
spring.redis.ssl=true
NodeJS config:
const client = redis.createClient(6380,
  'my-cache.redis.cache.windows.net',
  {
    auth_pass: '<password>',
    tls: { servername: 'my-cache.redis.cache.windows.net' }
  });

Well, the other end must accept the connection as well, so you must allow connections from the VM if you have any firewall rules at all:
https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-configure#firewall
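If the cache does have firewall rules, the VM's outbound (public) IP needs its own rule. A hedged sketch with the Azure CLI, using placeholder names and assuming the VM reaches the cache via its public IP:
# allow the VM's public IP on the cache firewall (placeholders: <resource-group>, <vm-public-ip>)
az redis firewall-rules create \
  --resource-group <resource-group> \
  --name my-cache \
  --rule-name AllowVM \
  --start-ip <vm-public-ip> \
  --end-ip <vm-public-ip>
You can also check raw TLS reachability from the VM with, for example, openssl s_client -connect my-cache.redis.cache.windows.net:6380 to rule out the client libraries.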

I solved this with the following steps on my Windows Desktop VM:
Click Start.
Enter cmd in the Start menu search text box.
Right-click Command Prompt and select Run as Administrator.
Run the following command: ipconfig /flushdns
Run the following command: ipconfig /registerdns

Related

Azure VM Connection Refused

I created a VM in Microsoft Azure with Ubuntu 20, in which I run a Tomcat server exposed on ports 443 and 80 (redirecting to 443), Neo4j on port 7474, and Jenkins on port 8081.
I can't access any of those ports, although I set all the Inbound Port Rules like this:
When I try to reach IP:PORT, I always get this:
I am kinda new to Azure. It is possible to log in to the server via SSH in the terminal. Can anyone help me? How can I access my server?
Have you tried accessing the VM over SSH and looking at what's going on in the logs?
Yes, you can connect to a terminal by SSH:
ssh -i <private key path> username@ipaddress
If you didn't configure an SSH key, you can create a password in the Azure portal.
In your VM blade, on the left, you have many options, one of which is named Reset password.
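Once you are in over SSH, a few quick checks can tell you whether the services are actually listening and whether a local firewall is blocking them in addition to the Azure NSG rules (a hedged sketch; adjust the ports to your setup):
# are Tomcat (80/443), Neo4j (7474) and Jenkins (8081) listening?
sudo ss -tlnp | grep -E ':(80|443|7474|8081)'
# is a local firewall (ufw) active and blocking anything?
sudo ufw status verbose
# can the app be reached from inside the VM, ruling out the services themselves?
curl -vk https://localhost/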

connection refused connecting to remote mongodb server

So we've accumulated enough applications in our network that use MongoDB to justify building a dedicated server specifically for MongoDB. Unfortunately, I'm pretty new to MongoDB (coming from SQL/MySQL derivatives). I have followed several guides on installing and configuring MongoDB for my environment. None are perfect, but I think I'm close... I've managed to get to the point where I can connect to the db server from the local server using the following command:
mongo -u user 127.0.0.1/admin
However, I'm NOT able to connect to the server from either the local OR a remote computer using its network address, i.e.:
mongo -u user 192.168.24.102/admin
I've tried both with authentication enabled and disabled, and I've tried setting the bindIp to 192.168.24.102 and 0.0.0.0 with no love. Thinking it was a firewall issue, I disabled the firewall entirely... same. No love...
So what's the secret sauce? How do I connect to a MongoDB server remotely?
Some notes to know: this server is on a local network only. There will be some NAT shenanigans at some point directing public traffic to it from remote application servers, but only on specific ports (we will NOT be using 27017 when that happens), and it will sit behind a pretty robust firewall appliance, so I'm not as worried about securing the server as I am about securing MongoDB itself.
This answer assumes a setup where the Linux server is completely remote and already has MongoDB installed.
Steps:
1. Connect to your remote server over SSH.
ssh <userName>@<server-IP-address>
2. Start Mongo shell and add users to MongoDB.
Add the admin:
use admin
db.createUser(
  {
    user: "AdminSammy",
    pwd: "AdminSammy'sSecurePassword",
    roles: [
      "userAdminAnyDatabase",
      "dbAdminAnyDatabase",
      "readWriteAnyDatabase"
    ]
  }
)
Then add general user/users. Users are added to specific databases.
use some_db
db.createUser({
  user: 'userName',
  pwd: 'secretPassword',
  roles: [{ role: 'readWrite', db: 'some_db' }]
})
3. Edit your MongoDB config file, mongod.conf, which is found in the /etc directory.
sudo vim /etc/mongod.conf
Scroll down to the #security: section and add the following line. Make sure to un-comment the security: line.
security:
  authorization: 'enabled'
After authorization has been enabled, only clients that authenticate with a password can access the database; in this case, the users added in step 2 above.
Note: Visual Studio Code can also be used over SSH to edit the mongod.conf file.
4. Add remote server's IP address to mongod.conf file.
Look for the net section and add the IP address of the server that is hosting this MongoDB installation, for example 178.45.55.88:
# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1,178.45.55.88
5. Open port 27017 on your server instance.
This allows access to your MongoDB server from anywhere in the world to anyone who knows your remote server's IP address. This is one reason to have authenticated users. More robust ways of handling security are really important! Consult the MongoDB manual for that.
Check firewall status using ufw.
sudo ufw status
If it's not active, activate it.
sudo ufw enable
Then,
sudo ufw allow 27017
Important: You also need to allow port 22 for your SSH communication with your remote server. Otherwise you will be locked out of your remote server. The assumption here is that SSH uses port 22 for communication, the default.
sudo ufw allow 22
6. Restart Mongo daemon (mongod)
sudo systemctl restart mongod
7. Connect to remote Mongo server using Mongo shell
You can now connect to the remote MongoDB server using the following command.
mongo -u <user-name> -p <user-password> <remote-server-IP-address>:<mongo-server-port>
You can also connect to the remote MongoDB server with authentication:
mongo -u <user-name> -p <user-password> <remote-server-IP-address>:<mongo-server-port> --authenticationDatabase <auth-db-name>
You can also connect to a specific remote MongoDB database with authentication:
mongo -u <user-name> -p <user-password> <remote-server-IP-address>:<mongo-server-port>/<db-name> --authenticationDatabase <auth-db-name>
At this point you can read and write within the some_db database from your local computer without SSH.
Important: Take into consideration the standard security measures for any database. Local security practices should guide what you do at each of the above steps.
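Equivalently (not part of the original steps, just another way to express the same connection), you can use a single MongoDB URI, which is also the form most drivers expect:
mongo "mongodb://<user-name>:<user-password>@<remote-server-IP-address>:27017/some_db?authSource=admin"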

Does Azure Web App for containers support ssh access when running a multi-container app?

I'm running an Azure Web App (containers) with custom container images. I've followed the steps to enable SSH into a container image and it works great when I'm only running a single container. But when I run the app as a multi-container app (with a docker-compose file) with more than one container image, I get the error below. For additional context, this is a small Python web app that uses nginx and redis, hence the need for more than one container. Only one of my custom images has SSH enabled and running, exposing port 2222.
Is this even possible? If not, then I'm not sure how feasible it is to run a multi-container web app if I have no way to access a container for support purposes.
az webapp remote-connection create -g GROUPNAME -n APPNAME -p 2222 --verbose
Configured default 'GROUPNAME' for arg resource_group_name
remote-connection is deprecated and moving to cli-core, use `webapp create-remote-connection`
Port 2222 is open
Creating a socket on port: 2222
Setting socket options
Binding to socket on local address and port
Finished initialization
Status response message: FAILURE:2222:Unable to connect to WebApp
WARNING - Remote debugging may not be setup properly. Reponse content: FAILURE:2222:Unable to connect to WebApp
SSH is available { username: root, password: Docker! }
Start your favorite client and connect to port 2222
I also tried the create-remote-connection command but got similar results.
az webapp create-remote-connection -n APPNAME -g GROUPNAME --verbose &
Error I receive is:
Auto-selecting port: 52661
Finished initialization
Status response message: FAILURE:2222:Unable to connect to WebApp
WARNING - Remote debugging may not be setup properly. Reponse content: FAILURE:2222:Unable to connect to WebApp
Connection is not ready yet, please wait
.
Status response message: FAILURE:2222:Unable to connect to WebApp
WARNING - Remote debugging may not be setup properly. Reponse content: FAILURE:2222:Unable to connect to WebApp
Looks like it's not supported :(
see...
How to SSH in to different containers in Multi Container Azure App Service
and...
Support SSH to specific container in multi-container setup

How do I connect a somata client to a remote registry?

I'm using somata as my microservices platform for the web apps I'm building. I have successfully set up multiple clients on one machine with the somata registry running on the same machine. Now I want to have a client on one machine connect to a registry on another machine. How do I connect a client to a remote registry?
The simplest way is to use the environment variables SOMATA_REGISTRY_HOST (default "127.0.0.1") and SOMATA_REGISTRY_PORT (default 8420) when running your script:
SOMATA_REGISTRY_HOST=55.44.33.21 node test.js
The somata Client constructor also lets you connect to specific registries with the options registry_host and registry_port:
var client = new somata.Client({
  registry_host: '55.44.33.21',
  registry_port: 5858
})
Note: To allow connections from remote hosts, somata-registry will have to be run with its bind host set to "0.0.0.0" instead of the default "127.0.0.1", which can be accomplished with the -h flag or the SOMATA_REGISTRY_BIND_HOST environment variable when starting the registry. The -p flag and SOMATA_REGISTRY_BIND_PORT are also available for listening on a custom port.
somata-registry -h 0.0.0.0
or
SOMATA_REGISTRY_BIND_HOST=0.0.0.0 somata-registry
And of course you'll need access to the host and port from the remote machine.
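A quick way to confirm the registry is actually reachable from the client machine (a hedged check using the example host and the default port from above):
nc -zv 55.44.33.21 8420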

Strange behaviour of Mean.io on Azure VM

I created an Azure virtual machine with Ubuntu 14.04 LTS OS.
I installed a mean.io application, version 0.3.3, on this virtual machine, with nginx proxying the app's HTTP port 3000 over port 80.
I opened an endpoint in the Azure portal, for the TCP protocol, with private port 3000 and public port 80.
I installed the latest version of Node on the Azure VM.
The database (MongoDB) is hosted on compose.io.
With pm2 (https://www.npmjs.org/package/pm2) I created a daemon that runs the application.
Everything apparently works fine: the CPU had no load and memory usage was low (only 100 MB).
But after a while, Node.js can no longer process requests.
I tried a curl against localhost:3000 but got no response.
The problem occurs only on the Azure VM: I tried the same application, with the same configuration, on my dev machine (Ubuntu 14.04 desktop) and on Digital Ocean (another Ubuntu 14.04 server distro), and everything works fine without problems.
Can you help me to find the problem?
I tried to dockerize the whole infrastructure on the same machine (a CoreOS VM on Azure):
one container with the mean app,
one container with MongoDB,
and the problem still persisted!!!
Finally, I found the solution: keep the connection to MongoDB alive.
I modified the server.js file in the mean app as follows:
var options = {
  server: {
    socketOptions: { keepAlive: 1 }
  }
};
var db = mongoose.connect(config.db, options);
This keeps the connection alive, and the problem was solved. (Azure's load balancer is known to drop idle TCP connections after a few minutes, which would explain why the keep-alive helps.)
