I'm just getting started with gcloud VMs and trying to secure them a bit. If I change the SSH port, what switch/flag do I add when using the gcloud command like this?
gcloud beta compute ssh --zone "us-east4-c" "base" --project "testproject"
Thanks!
As this GCP doc shows, you can set a custom port by adding the --ssh-flag flag.
For example:
gcloud compute ssh example-instance --zone=us-central1-a --project=project-id --ssh-flag="-p 8000"
It also works with gcloud beta:
gcloud beta compute ssh example-instance --zone=us-central1-a --project=project-id --ssh-flag="-p 8000"
The sample commands will SSH to your Compute Engine instance on port 8000.
Note: Before connecting, make sure you have an ingress Firewall Rule that accepts TCP on the port you've chosen.
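If no such rule exists yet, you can create one with gcloud. This is a sketch assuming the rule name allow-ssh-8000 (a placeholder) and the default network:

```shell
# Allow inbound TCP on the custom SSH port (8000 here, to match the example).
# The rule name "allow-ssh-8000" is a placeholder; restrict --source-ranges
# to your own IP range rather than 0.0.0.0/0 if possible.
gcloud compute firewall-rules create allow-ssh-8000 \
    --direction=INGRESS \
    --allow=tcp:8000 \
    --source-ranges=0.0.0.0/0
```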
UPDATE: If the above is not working and you are getting "connection refused", you need to configure your VM to listen on the port you chose. Here are the steps:
Open the sshd configuration file: sudo vi /etc/ssh/sshd_config
Add your chosen port, for example: Port 8000
Save the file.
Reload the sshd service: sudo systemctl reload sshd.service
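To confirm the change took effect, you can check what sshd is listening on from inside the VM and then reconnect on the new port (a sketch assuming port 8000 and the instance details from the question):

```shell
# On the VM: verify sshd now listens on the new port.
sudo ss -tlnp | grep sshd

# From your workstation: reconnect using the custom port.
gcloud compute ssh base --zone "us-east4-c" --project "testproject" --ssh-flag="-p 8000"
```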
I created a VM in Microsoft Azure with Ubuntu 20, on which I run a Tomcat server exposed on ports 443 and 80 (redirecting to 443), Neo4j on port 7474, and Jenkins on port 8081.
I can't access any of those ports, although I set all the inbound port rules like this:
When I try to reach IP:PORT, I always get this:
I am kind of new to Azure. It is possible to log in to the server via SSH in the terminal. Can anyone help me? How can I access my server?
Have you tried accessing the VM over SSH and checking the logs to see what's going on?
Yes, you can connect to a terminal over SSH:
ssh -i <private key path> username@ipaddress
If you haven't configured an SSH key, you can create a password in the Azure portal.
In your VM's menu on the left there are many options, one of which is named "Reset password".
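If you prefer the command line, the same reset can be done with the Azure CLI. This is a sketch; the resource group, VM name, username, and password are all placeholders:

```shell
# Set (or reset) the admin user's password on the VM.
# All of these values are placeholders -- substitute your own.
az vm user update \
    --resource-group myResourceGroup \
    --name myVM \
    --username azureuser \
    --password 'NewP@ssw0rd123!'
```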
I have an Ubuntu server running on an AWS EC2 instance. I am testing a hello-world Node.js app that I set to listen on port 9000. If I use the GUI in the AWS console to open incoming TCP port 9000, the app runs fine. But if I try to use the command-line sequence shown below, it won't allow connections.
sudo su
ufw allow 9000/tcp
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 8080/tcp
ufw allow 443/tcp
ufw enable
ufw status
I ran these before doing anything with security groups inside my AWS EC2 instance, with no luck. I am doing a school project where we only have CLI access to an EC2 instance through SSH, so I wanted to open ports through the CLI if possible. Thanks in advance for any help.
If by "the GUI in the AWS console" you mean security groups, then you should know that they do not configure ufw. Instead, security groups modify firewall rules in AWS networking equipment (the router, switch, or firewall appliance assigned to your network). In some cases you will need to configure both ufw and AWS security groups to allow access, especially on distros with a restrictive default firewall (Debian, and by extension Ubuntu, does not enable a firewall by default).
If you want to configure security groups from the command line or in a script, you will need the AWS CLI. With the AWS CLI installed you can run:
aws ec2 authorize-security-group-ingress --group-id $group_id --protocol tcp --port 9000 --cidr 0.0.0.0/0
Note that the AWS CLI does the same thing as configuring the security group in the console from your browser, just via an API. As such, it does not matter where you install the AWS CLI; you don't need to install it on the EC2 instance. You can install it on your personal Windows, Linux, or Mac machine.
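For example, assuming the AWS CLI is configured with credentials and you know the instance ID (the ID below is a placeholder), you could look up the instance's security group and open port 9000 like this:

```shell
# Find the security group attached to the instance (instance ID is a placeholder).
group_id=$(aws ec2 describe-instances \
    --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[0].Instances[0].SecurityGroups[0].GroupId' \
    --output text)

# Open TCP port 9000 to the world in that security group.
aws ec2 authorize-security-group-ingress \
    --group-id "$group_id" \
    --protocol tcp --port 9000 --cidr 0.0.0.0/0
```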
There is even an app that you can install on Android and iOS to configure the security group if you are interested.
I am doing a port forwarding to connect my local machine to Spinnaker.
Step 1: localhost to AWS instance
ssh -A -L 9000:localhost:9000 -L 8084:localhost:8084 -L 8087:localhost:8087 ec2-user@<aws-instance-ip>
Step 2: AWS instance to Spinnaker cluster
ssh -L 9000:localhost:9000 -L 8084:localhost:8084 -L 8087:localhost:8087 ubuntu@10.100.10.5
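As an aside, the two hops can be combined into a single command with OpenSSH's -J (ProxyJump) option; a sketch using the same hosts as above:

```shell
# One command through the bastion: -J tunnels the whole connection via the
# AWS instance, then the -L forwards go straight to the Spinnaker host.
ssh -J ec2-user@<aws-instance-ip> \
    -L 9000:localhost:9000 -L 8084:localhost:8084 -L 8087:localhost:8087 \
    ubuntu@10.100.10.5
```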
This works fine when I open http://localhost:9000.
However, instead of port forwarding from my local machine, I want to set up a tunnel from another AWS instance (e.g. 55.55.55.55) and access Spinnaker via http://55.55.55.55:9000, so that other team members can access the Spinnaker UI directly.
I tried following the above steps from the 55.55.55.55 host and then opening http://55.55.55.55:9000, but it didn't work.
What should I change to make it resolve on the 55.55.55.55 host?
Port forwarding is bound to the IP you give to ssh. If you give localhost (the default), it will be accessible only on localhost (127.0.0.1). If you want to access it from outside, you need to give the 55.55.55.55 address instead.
You will also need the -g switch to ssh, which allows remote hosts to connect to your locally forwarded ports.
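Putting both suggestions together, a sketch of the tunnel command to run on the 55.55.55.55 host (same hosts as in the question):

```shell
# Bind the forwarded ports on all interfaces so other machines can reach them.
# -g permits remote hosts to connect to locally forwarded ports; giving an
# explicit 0.0.0.0 bind address in -L has the same effect.
ssh -g -L 0.0.0.0:9000:localhost:9000 \
       -L 0.0.0.0:8084:localhost:8084 \
       -L 0.0.0.0:8087:localhost:8087 \
       ubuntu@10.100.10.5
```

Remember that the security group of the 55.55.55.55 instance must also allow inbound TCP on port 9000 (and 8084/8087 if needed) for your teammates to reach it.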
1. I created a new container service in Azure.
2. The creation was done following the portal step by step.
3. I have not changed any configuration of any service, VM, balancing, master, or agent.
4. I can connect with PuTTY normally.
5. I can open a tunnel by redirecting port 80 to port 80.
Following this tutorial, I can get the container running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ffe6a1c890e4 yeasy/simple-web "/bin/sh -c 'pytho..." 31 minutes ago Up 31 minutes 0.0.0.0:80->80/tcp vibrant_morse
If I access localhost from my browser, I can reach port 80 of the container and see the same "Real Visit Results" page as in the tutorial.
But the tutorial says that if I use the load balancer's DNS name I should see the result. That's my problem: I cannot access the container through DNS, I only get a timeout.
To reinforce: I created a container service and did not change any configuration; I just logged in with PuTTY and started the container.
According to your description, it seems that you didn't set your DOCKER_HOST environment variable to the local port configured for the tunnel. When you SSH to your master VM, you need to execute the command below:
export DOCKER_HOST=:2375
Then run Docker commands through the tunnel to the Docker Swarm cluster. For example:
docker info
If you don't set the environment variable for the tunnel, the Docker container is created on the master VM, so you cannot reach the web app via the agents' public IP.
Alternatively, you can leave the environment variable unset and instead point at the host explicitly each time you execute a docker command. For more information, please refer to this link.
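To illustrate the difference, assuming the SSH tunnel forwards local port 2375 to the Swarm endpoint:

```shell
# Without DOCKER_HOST set, this talks to the local daemon on the master VM:
docker info

# Point a single command at the tunnel with -H instead of exporting:
docker -H :2375 info

# Or export once so every docker command in this shell uses the tunnel:
export DOCKER_HOST=:2375
docker info
```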
I have a VM on Google Cloud Platform which was created by someone else, who configured Cassandra on it. I'd like to access this Cassandra node.
I used the IP to access it, but I failed. I don't even know whether Cassandra is running on the VM.
How can I verify that?
If you can access the VM over SSH, run ps aux | grep cassandra, or try telnet to port 9042.
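For example, once you are on the VM over SSH (assuming the default native protocol port 9042; nc can stand in for telnet if the latter is not installed):

```shell
# Look for a running Cassandra process (the [c] trick keeps grep from matching itself).
ps aux | grep -i '[c]assandra'

# Check whether anything is listening on the native protocol port.
nc -zv localhost 9042
```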
First, SSH to the server using your username, password, and IP.
For example:
$ ssh jack@11.1.41.1
Use jps to find the Cassandra process:
[jack@11.1.41.1 ~]$ jps
1136 CassandraDaemon
15314 Jps
If you see CassandraDaemon running with a process ID, you can be assured that Cassandra is running on this server.
You can also try nodetool to check the status. If the nodetool command works, it's confirmed that Cassandra is running:
[jack@11.1.41.1 ~]$ nodetool status
By default, Google Compute Engine VMs only expose port 22 (SSH), not any application ports, because that would be insecure.
If you want to connect to a Cassandra server running on a GCE VM, you should create an SSH tunnel with port forwarding, and then access the Cassandra server via that SSH tunnel.
Specifically, follow the tutorial by running this command (adjusted from the original to use the Cassandra port 9042 for native clients):
gcloud compute ssh example-instance \
--project my-project \
--zone us-central1-a \
--ssh-flag="-L" \
--ssh-flag="2222:localhost:9042"
In the above command, the parameters are defined as follows:
example-instance is the name of the instance to which you'd like to
connect.
my-project is your Google Cloud Platform project ID.
us-central1-a is the zone in which your instance is running.
2222 is the local port you're listening on.
9042 is the remote port you're connecting to.
[...]
The gcloud command creates and maintains an SSH connection, and this approach only works while the SSH session is active. As soon as you exit the SSH session that gcloud creates, port forwarding via localhost:2222 will stop working.
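While the tunnel is up, a Cassandra client on your workstation connects to the local end of it. For example, with cqlsh installed locally:

```shell
# Connect cqlsh through the tunnel; 2222 is the local forwarded port from above.
cqlsh localhost 2222
```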
If you want to create more than one port forwarding rule, you can specify multiple rules on a single command line by repeating the flags:
gcloud compute ssh example-instance \
--project my-project \
--zone us-central1-a \
--ssh-flag="-L" \
--ssh-flag="2222:localhost:9042" \
--ssh-flag="-L" \
--ssh-flag="2299:localhost:8000"
Alternatively, you can run a new gcloud command each time to create a separate tunnel. Note that you cannot add or remove port forwarding from an existing connection without exiting and re-establishing the connection from scratch.
Make appropriate substitutions as necessary; for example, according to the Cassandra docs on port usage:
By default, Cassandra uses 7000 for cluster communication (7001 if SSL is enabled), 9042 for native protocol clients, and 7199 for JMX (and 9160 for the deprecated Thrift interface). The internode communication and native protocol ports are configurable in the Cassandra Configuration File. The JMX port is configurable in cassandra-env.sh (through JVM options). All ports are TCP.
If you have a custom configuration for Cassandra ports, you will need to take that into account, of course.