So I'm planning to map my local drive to the AWS instance.
The reason is that some files live on the local server, while the front-end application that downloads them runs on the AWS cloud server. By mapping the drive I can create a virtual directory in IIS and point its physical path at the mapped drive, so the application can serve the files for download.
But whenever I try to map the drive, it fails.
I have already allowed the SMB ports in both the local server's firewall and the AWS security groups, so the two machines should be able to communicate.
The security group rules currently allowed are:
TCP 445, 135, 139
UDP 137
TCP 3389 for RDP
destination: my server's public IP
The path I use to map the drive is:
\\serverip\drive\directory\
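For completeness, the full sequence I'm attempting from the AWS instance looks roughly like this (a sketch with placeholder IP, share, and credentials, not my exact commands):

    # Check that SMB (TCP 445) is reachable from the AWS instance to the local server (placeholder IP)
    Test-NetConnection -ComputerName 203.0.113.10 -Port 445

    # Map the share to a drive letter with explicit credentials (placeholder share, user, password)
    net use Z: \\203.0.113.10\drive MyPassword /user:LOCALSERVER\fileuser /persistent:yes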
So what configuration/setup am I missing?
Hope someone can help me.
Thank you
I've been reading about mounting Azure storage account file shares on a Linux web app: https://learn.microsoft.com/en-us/azure/azure-functions/scripts/functions-cli-mount-files-storage-linux
This works fine, and I've confirmed I can write to the fileshare from my function without using any REST endpoints. However, everything I've read (https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox) implies that port 445 is blocked by default within function apps.
So, how is the connection from my function app to the file share enabled?
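For context, the mount in the linked tutorial is done with the Azure CLI, roughly along these lines (all names here are placeholders):

    # Mount an Azure Files share into a Linux web/function app (placeholder resource names)
    az webapp config storage-account add `
      --resource-group MyResourceGroup --name my-function-app `
      --custom-id MyShareMount --storage-type AzureFiles `
      --account-name somestoragexxx --share-name myshare `
      --access-key "<storage-account-key>" --mount-path /mounted-files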
Yes! It is not recommended to use the PING command to verify network connectivity to a public DNS name or IP address, because ICMP is usually blocked. You could use the PowerShell command Test-NetConnection -Port 445 -ComputerName somestoragexxx.file.core.windows.net to verify port 445 from the dev machine.
If TCP 445 connectivity fails, make sure your ISP or on-premises network security is not blocking outbound port 445. Be aware that it is outbound port 445 you need open, not inbound port 445.
As a test, on my local machine the TCP check of port 445 returns false.
On the Azure VM, the TCP check of port 445 returns true, and I could access the storage file share successfully.
Additionally, port 445 is generally not allowed over the Internet, so you may need a different way to access files in Azure Files.
The link above lists several ways to access the Azure Files service. If your outbound port 445 is blocked by your firewall or ISP, please check this solution to resolve it, and also refer to this SO thread.
Note:
You can mount the file share on your local machine by using the SMB 3.0 protocol, or you can use tools like Storage Explorer to access files in your file share. From your application, you can use storage client libraries, REST APIs, PowerShell, or Azure CLI to access your files in the Azure file share.
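For example, mounting the share from a Windows machine over SMB 3.0 is a one-liner (placeholder storage account, share name, and key):

    # Requires outbound TCP 445 from the client to Azure Files (placeholder account/share/key)
    net use Z: \\somestoragexxx.file.core.windows.net\myshare /user:AZURE\somestoragexxx <storage-account-key> /persistent:yes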
I've set up an Ubuntu Server on Azure. On this server, an application is running on port 3000. I want to access this application externally. Azure tells me my server has public IP 40.68.XXX.XXX.
When I ping this IP, there is no response, although SSH to this IP address works.
I want to reach 40.68.XXX.XXX:3000 from outside; does somebody know how to get this working?
Yes, you need to open up a port on the Network Security Group (NSG) and open up the port on your firewall (on the VM itself).
The easiest way to open the port is through the portal:
https://learn.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-nsg-quickstart-portal
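As a sketch of both steps, assuming the Azure CLI for the NSG and ufw as the firewall on the Ubuntu VM (placeholder resource names):

    # 1. Allow inbound TCP 3000 on the VM's Network Security Group (placeholder names)
    az network nsg rule create `
      --resource-group MyResourceGroup --nsg-name my-vm-nsg `
      --name allow-port-3000 --priority 1000 `
      --direction Inbound --access Allow --protocol Tcp `
      --destination-port-ranges 3000

    # 2. On the VM itself (over SSH), allow the port through the local firewall
    sudo ufw allow 3000/tcp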
I am trying to open port 8080 on a Windows Azure virtual machine. I have a test website listening on that port, and I am able to access it via localhost, so the website is running.
I have also opened the port in the firewall and created an inbound security rule in the Azure portal for the virtual machine, but the port doesn't seem to be open to the outside world. I have tried accessing it both via the IP address and the DNS name, with the same results.
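For reference, the firewall rule and the external check I'm running are roughly these (placeholder rule and DNS names):

    # Windows Firewall rule on the VM
    netsh advfirewall firewall add rule name="Website 8080" dir=in action=allow protocol=TCP localport=8080

    # Check from an outside machine
    Test-NetConnection -ComputerName mysite.cloudapp.net -Port 8080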
Is there anything else I should be doing?
I have a VM-1 on Azure with Windows Server 2012, and I have installed an FTP server (FileZilla) on it.
Another VM-2 is in a different cloud, where a Windows service accesses the FTP server on VM-1.
The two VMs are in different clouds. In FTP, while listing directories I am getting '425 Cant open data connection for dir listing'.
I am using active mode in FTP.
But if I install the Windows service on my local machine, it runs correctly without any FTP issues.
I'll answer though this isn't a programming question, because I can help. :)
When a virtual machine is created, a default ACL is put in place to block all incoming traffic other than for RDP and Remote PowerShell connections.
http://azure.microsoft.com/blog/2014/03/28/network-isolation-options-for-machines-in-windows-azure-virtual-networks/
In active mode the server opens the data connection back to the client on a high port (above 1024), and that back-connection is usually blocked by a firewall or NAT in front of the client, hence the 425 error. If port 21 is already open inbound on the server, switch to passive (PASV) mode so the client opens the data connection to the server instead; configure a passive port range on the FTP server and open those ports inbound as well (a fixed, known range is much easier to allow than the arbitrary high ports active mode would need).
More info on Active vs Passive and ports: http://slacksite.com/other/ftp.html
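As a rough client-side sketch of a passive-mode listing in PowerShell (placeholder host and credentials; the FileZilla server also needs its passive port range configured and those ports opened as endpoints):

    # List a directory over FTP using passive mode, so the client opens the data connection
    $request = [System.Net.WebRequest]::Create("ftp://ftp.example.com/")
    $request.Method = [System.Net.WebRequestMethods+Ftp]::ListDirectoryDetails
    $request.Credentials = New-Object System.Net.NetworkCredential("ftpuser", "ftppassword")
    $request.UsePassive = $true    # $false would reproduce the active-mode behaviour

    $response = $request.GetResponse()
    $reader = New-Object System.IO.StreamReader($response.GetResponseStream())
    $reader.ReadToEnd()
    $reader.Close()
    $response.Close()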
I am new to Azure and trying to set up our company's testing environment in Azure.
As I understand it, for two machines to talk to each other in Azure they need to be in the same cloud service, i.e. our web server and DB server.
So I have created a service, then created each of the VMs in that service. They are both running. In the endpoints I can see:
web server:
NAME PROTOCOL PUBLIC PORT PRIVATE PORT LOAD-BALANCED SET NAME
HTTP TCP 80 80 -
HTTPS TCP 443 443 -
PowerShell TCP 5986 5986 -
Remote Desktop TCP 50232 3389 -
db server:
NAME PROTOCOL PUBLIC PORT PRIVATE PORT LOAD-BALANCED SET NAME
MSSQL TCP 1433 1433 -
PowerShell TCP 54327 5986 -
Remote Desktop TCP 52459 3389 -
In the cloud service, the input endpoints area shows:
INPUT ENDPOINTS
protoApp : 123.456.789.227:80
protoApp : 123.456.789.227:443
protoApp : 123.456.789.227:5986
protoApp : 123.456.789.227:50232
protodb : 123.456.789.227:1433
protodb : 123.456.789.227:54327
protodb : 123.456.789.227:52459
I can connect to the protodb server but not the protoapp server (on the given ports).
There are really two or three questions here.
Should they both be in the same cloud service?
Should the live DB and web server be in a separate cloud service (I haven't created them yet)?
Can anyone think of a reason why I can no longer MSTSC/RDP to one of the machines, even though the endpoints say it's all fine, the machine is running, and the cloud service lists it as an endpoint?
No reason why not, though you should look at creating a Virtual Network to connect them.
You should consider separating them if:
Performance dictates it
You want extra security. Consider that if somebody hacks the web server, they immediately have access to the same server that hosts the data. Really, you should restrict the incoming IPs for MSSQL to something trusted anyway, or to the same subnet if you use a Virtual Network (see the sketch after this list).
Cost is not an issue
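As an illustration of that MSSQL restriction, here is a sketch using the classic Azure Service Management PowerShell cmdlets of that era (the service, VM, and subnet names are placeholders):

    # Build an ACL that only permits a trusted address range (placeholder subnet)
    $acl = New-AzureAclConfig
    Set-AzureAclConfig -AddRule -ACL $acl -Action Permit -RemoteSubnet "203.0.113.0/24" -Order 100 -Description "Trusted office range"

    # Attach the ACL to the MSSQL endpoint on the DB VM (placeholder service/VM names)
    Get-AzureVM -ServiceName "protoservice" -Name "protodb" |
        Set-AzureEndpoint -Name "MSSQL" -Protocol tcp -PublicPort 1433 -LocalPort 1433 -ACL $acl |
        Update-AzureVM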
I've sometimes had trouble using mstsc to directly connect via RDP to Azure VMs. If you go to http://manage.windowsazure.com and navigate to your VM, there will be a "Connect" option at the bottom. This will download a .rdp file which might help.
Something else worth noting: if you're using Azure VMs, you won't qualify for Microsoft's uptime SLA unless you have two or more VMs per cloud service configured as part of an Availability Set. So straight away you should consider that the number of VMs you're planning will double if you want a production/highly available environment, and you should consider the impact this will have on your application architecture too.