Suppose I have a containers-as-a-service platform serving containers to different tenants' users, and a single host can be hosting containers of different tenants. How can I allow access over HTTP while making sure that a user can access only his own tenant's containers?
Does Kubernetes solve that? Is there a more lightweight solution?
DCHQ absolutely solves this problem.
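For the Kubernetes part of the question, one common pattern is one namespace per tenant plus a default-deny NetworkPolicy, with HTTP exposed through an ingress that authenticates users and routes each of them only to their own tenant's namespace. A minimal sketch with the official kubernetes Python client; the tenant name is just a placeholder:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig for the cluster
core = client.CoreV1Api()
networking = client.NetworkingV1Api()

tenant = "tenant-a"  # placeholder tenant name

# One namespace per tenant keeps that tenant's containers, secrets and RBAC scoped.
core.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=tenant)))

# Deny ingress from other namespaces: only pods in the same (tenant) namespace
# may reach these containers over HTTP. External users then come in through an
# ingress that authenticates them and routes only to their tenant's namespace.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="tenant-isolation", namespace=tenant),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),          # all pods in the namespace
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(pod_selector=client.V1LabelSelector())]
        )],
    ),
)
networking.create_namespaced_network_policy(namespace=tenant, body=policy)
```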
We need to expose one of our Azure VMs located in a VNet to the Internet. We are using an F5 ADC for all inbound traffic, both to on-premise and to Azure.
What is the best practice for exposing an Azure VM to the Internet if you have a zero-trust approach in mind?
I'd appreciate all kinds of advice.
If you are planning to expose an Azure VM to the Internet with a Zero Trust strategy in mind, you should first check that:
Workloads are monitored and alerted on abnormal behavior.
Every workload is assigned an app identity, and is configured and deployed consistently.
Human access to resources requires Just-In-Time access.
After the above items are completed, check that:
Unauthorized deployments are blocked and an alert is triggered.
Granular visibility and access control are available across workloads.
User and resource access is segmented for each workload.
https://learn.microsoft.com/en-us/security/zero-trust/infrastructure
https://learn.microsoft.com/en-us/security/zero-trust/
It depends on what on the VM you want to expose to the Internet.
If it is a web site running on the VM, you could use a web application firewall: https://learn.microsoft.com/en-us/azure/web-application-firewall/ag/ag-overview
If it is RDP access to the VM, you could use Azure Bastion:
https://azure.microsoft.com/en-us/services/azure-bastion/
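If you go the Bastion route, here is a rough sketch of provisioning it with the azure-mgmt-network Python SDK. All resource names are placeholders, and it assumes a Standard-SKU public IP and a subnet literally named AzureBastionSubnet already exist in the VNet:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import BastionHost, BastionHostIPConfiguration, SubResource

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "my-rg"                # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Bastion needs a dedicated subnet named 'AzureBastionSubnet' and a
# Standard-SKU public IP; both are assumed to exist already.
subnet_id = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    "/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/AzureBastionSubnet"
)
public_ip_id = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    "/providers/Microsoft.Network/publicIPAddresses/my-bastion-ip"
)

bastion = BastionHost(
    location="westeurope",
    ip_configurations=[BastionHostIPConfiguration(
        name="bastion-ipconfig",
        subnet=SubResource(id=subnet_id),
        public_ip_address=SubResource(id=public_ip_id),
    )],
)

# RDP/SSH then happens over TLS through the portal, so the VM itself
# never needs a public IP or an open port 3389/22.
client.bastion_hosts.begin_create_or_update(RESOURCE_GROUP, "my-bastion", bastion).result()
```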
I've got a rather unwieldy legacy intranet app that does a lot of file manipulations across multiple network shares (file reads, moves, deletes, directory creation, etc.) and I want to set up a preproduction instance. Currently the app pool is running under a domain account that has been granted access to all these scattered directories. I'm wondering if running a second instance of the site (on a different server) using the same domain account would be an issue.
This doesn't seem to be an easy question to formulate in a way that gets a useful answer out of Google. Anyone have any experience doing this? I would rather not have to create more accounts and track down all the locations that would require added permissions if I don't have to.
The aim of setting a different application pool identity for each application pool is to limit what each application pool can do. An independent application pool identity, combined with NTFS permissions, keeps a web app from accessing files it shouldn't reach, in case the server comes under attack.
Of course, if you are hosting your web apps in an isolated network environment, you could share your domain account across multiple application pools.
As Lex said, consulting your network administrator would get you a more practical answer.
Right now we have AD set up so access to our App Service is authenticated. But we need the website to have local access to some special applications. Since we can't install applications on an App Service, I THINK that means we need to run the website on a VM.
If that's the case, I'd like to not lose the ability for Azure AD to authenticate access to our VM. I'm sure we can use Azure AD to authenticate us while we RDP to the server, but can it also be used when we expose our Web Application over HTTPS from the server?
Since we can't install applications on an App Service, I THINK that means we need to run the website on a VM
Even though a VM is the simplest option, you do have other options to at least consider. Here is Microsoft's documentation comparing the various options along with scenario-based recommendations.
Azure App Service, Virtual Machines, Service Fabric, and Cloud Services comparison
Quick Note:
Amongst the options discussed, avoid Cloud Services (classic) as far as possible, as they are legacy and on their way out. Also, if you still choose a virtual machine, do consider Virtual Machine Scale Sets for better scaling and management options.
I'm sure we can use Azure AD to authenticate us while we RDP to the server, but can it also be used when we expose our Web Application over HTTPS from the server?
Yes, it can be used even when you expose your web application over HTTPS from the server. Exposing over HTTPS only involves opening up the port through NSG rules and configuring SSL settings for your application in IIS. This will not impact your ability to RDP into the VM.
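For the NSG part, a minimal sketch of opening 443 with the azure-mgmt-network Python SDK; the subscription, resource group, NSG name and priority are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Allow inbound HTTPS (443) to the VM; everything else stays governed by
# the NSG's existing rules and its default deny.
rule = SecurityRule(
    protocol="Tcp",
    source_address_prefix="Internet",
    source_port_range="*",
    destination_address_prefix="*",
    destination_port_range="443",
    access="Allow",
    priority=300,
    direction="Inbound",
)
client.security_rules.begin_create_or_update(
    "my-resource-group", "my-vm-nsg", "Allow-HTTPS-Inbound", rule
).result()
```

SSL itself is then configured on the IIS site binding on the VM; the NSG rule only lets the traffic reach it.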
I am building a small system which deploys Azure container groups via REST. In the container groups I have multiple instances that are load balanced via Traefik. For example, I have a container group with two containers plus a Traefik container that redirects requests to the other two containers.
The problem with this solution is being able to access docker.sock in the Traefik container. Without docker.sock, Traefik is blind and cannot detect the existing containers.
I have tried a couple of approaches, but with no success.
Is it possible to access docker.sock on an Azure container instance?
Thanks for your support.
We don't enable access to docker.sock from ACI because the service is built to provide a managed container solution, so access to the underlying host and the Docker engine running on it isn't available.
Depending on your scenario, we'll be bringing in LB support for Azure VNETs later this year that should be able to help. Hopefully you can find alternative routes, but feel free to share details and I'm happy to help if possible.
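For anyone hitting the same wall: since containers in an ACI group share one network namespace, one alternative route is to give Traefik a static file configuration that routes to fixed localhost ports instead of using the Docker provider. A rough sketch with the azure-mgmt-containerinstance Python SDK; images, names and ports are placeholders, and the Traefik dynamic config file would still need to be mounted (for example from an Azure file share):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, ContainerPort, IpAddress, Port,
    ResourceRequests, ResourceRequirements,
)

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "my-rg"                # placeholder
client = ContainerInstanceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def app_container(name: str, port: int) -> Container:
    """One backend container listening on its own port inside the group."""
    return Container(
        name=name,
        image="nginxdemos/hello",  # placeholder image
        resources=ResourceRequirements(requests=ResourceRequests(cpu=0.5, memory_in_gb=0.5)),
        ports=[ContainerPort(port=port)],
    )

# Traefik runs in the same group; because the group shares a network
# namespace, it can reach the backends on 127.0.0.1:<port> via a static
# file configuration instead of the Docker provider (no docker.sock needed).
traefik = Container(
    name="traefik",
    image="traefik:v2.10",
    resources=ResourceRequirements(requests=ResourceRequests(cpu=0.5, memory_in_gb=0.5)),
    ports=[ContainerPort(port=80)],
    command=[
        "traefik",
        "--entrypoints.web.address=:80",
        # dynamic.yml defines routers/services pointing at 127.0.0.1:8081/8082;
        # it must be mounted into the container (e.g. via an Azure file share).
        "--providers.file.filename=/etc/traefik/dynamic.yml",
    ],
)

group = ContainerGroup(
    location="westeurope",
    os_type="Linux",
    containers=[traefik, app_container("app-a", 8081), app_container("app-b", 8082)],
    ip_address=IpAddress(type="Public", ports=[Port(port=80, protocol="TCP")]),
)

client.container_groups.begin_create_or_update(RESOURCE_GROUP, "demo-group", group).result()
```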
I have an application that includes multiple hosted services in Azure: two are web roles, one is a worker role. The problem is, two of the roles now need to communicate. One is a web role that serves as the admin interface; the other is the worker role. The admin interface needs to issue commands, like pause any running jobs, report status, etc. The second web role is just a site, unrelated to the first two.
(Just to preface, I want to make sure my use of Azure terms is correct):
Hosted Service: An Azure 'application'. Multiple roles with two deployments, production and staging
Deployment: A specific instance of all the roles, either in production or staging, with a single external endpoint (*.cloudapp.net)
Role: A single 'job', either a web role or a worker role.
Instance: The VMs that service a role
Also to verify: Is it possible to add roles to an existing hosted service? That is, if I deploy 2 roles from one solution, can I add a third role in another deployment from a different solution?
Because each role is in its own hosted service, it presents some challenges. Here's my understanding of the choices for how they can communicate:
Service Bus: This seems to be the best from an architecture standpoint. Each hosted service can connect a WCF service to the service bus, and admin can issue commands to the worker role. The downside is that this is pretty cost-prohibitive.
Internal endpoints: This seems best if cost is factored in. The downside is you have to deploy all the roles at once, and the web roles cannot have unique addresses. The only way to access both web roles externally is with port forwarding. As far as I'm aware, it's not possible to deploy two roles from one solution and one role from another?
External WCF service: Each component can be in individual projects and individual hosted services. The downside is there's now an externally visible service for administration.
Queue/Table storage: Admin can write commands to an Azure queue, and the worker roles can write their responses to table storage. This seems fine for generating reports, but not great for issuing synchronous commands (a rough sketch of what I mean is below).
Should multiple roles that all service "the application" all go into the same Azure hosted service? If from a logical standpoint it makes the most sense, then I'd be happy to go with #2 and just deal with port forwarding.
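For reference, option 4 would look roughly like this. The sketch uses today's Python storage SDKs purely to illustrate the command-out/status-back flow; the connection string and names are placeholders, and the queue and table are assumed to already exist:

```python
from datetime import datetime, timezone
from azure.storage.queue import QueueClient
from azure.data.tables import TableClient

CONN = "<storage-connection-string>"  # placeholder

# Admin web role: drop a command onto the queue.
commands = QueueClient.from_connection_string(CONN, "worker-commands")
commands.send_message("pause-jobs")

# Worker role: poll the queue, act on the command, record status in a table.
status = TableClient.from_connection_string(CONN, "workerstatus")
for msg in commands.receive_messages():
    # ... pause running jobs here ...
    status.upsert_entity({
        "PartitionKey": "worker-1",      # one row per worker instance
        "RowKey": "pause-jobs",
        "State": "paused",
        "UpdatedAt": datetime.now(timezone.utc).isoformat(),
    })
    commands.delete_message(msg)         # ack so it isn't re-delivered

# Admin web role: read statuses back for reporting.
for row in status.list_entities():
    print(row["PartitionKey"], row["State"])
```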
First off, your definitions look pretty good and I think you understand the problem pretty well.
Also, with each deployment, each external endpoint can only be assigned to one role. So if you want to run two sites on port 80, they need to be in the same role. This is just like setting up two sites on an IIS server with the same port (which is exactly what you're working with): the sites are distinguished using host headers. If you don't want to go to that effort, or if you want to deploy the sites separately, then you'll want to put your stand-alone site in its own service/cloud project.
For the communication part, the one option that you've missed off is service bus queues. Microsoft have released a library using service bus queues that is specifically designed for inter-role communication.
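I can't reproduce that library's exact API here, but the pattern itself is small. Purely as an illustration, here is roughly the same command flow with the current azure-servicebus Python package; the connection string and queue name are placeholders, and the queue is assumed to exist:

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN = "<service-bus-connection-string>"   # placeholder
QUEUE = "admin-commands"                   # placeholder queue name

# Admin web role: enqueue a command for the worker role.
with ServiceBusClient.from_connection_string(CONN) as sb:
    with sb.get_queue_sender(QUEUE) as sender:
        sender.send_messages(ServiceBusMessage("pause-jobs"))

# Worker role: poll for commands and acknowledge them once handled.
with ServiceBusClient.from_connection_string(CONN) as sb:
    with sb.get_queue_receiver(QUEUE) as receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            print("received:", str(msg))
            # ... pause running jobs here ...
            receiver.complete_message(msg)   # remove the message from the queue
```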
Other than that, here are some extra comments on your points:
You're right that internal endpoints are the cheapest way to go, but you will be rolling it all yourself. Of course, you could set up WCF services to listen on these internal endpoints.
An external WCF service might work OK, but if you have more than one instance of your role, all WCF calls will go through the load balancer and the message will only be sent to one of the instances. You would need to make multiple calls to make sure the message was received by all instances and even then you couldn't be sure it had worked without some other feedback method.
Storage queues suffer from a similar issue. If you have two instances and want them both to receive the same message, there's no way to guarantee that this will happen.
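If you do end up on Service Bus, one way around that fan-out limitation, though not something the options above cover, is a topic with one subscription per instance, so each instance receives its own copy of every command. A sketch in the same vein; the topic and subscription names are placeholders, and the entities are assumed to already exist:

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN = "<service-bus-connection-string>"   # placeholder
TOPIC = "admin-commands"                   # placeholder topic name

# Admin role: publish one command; the topic copies it to every subscription.
with ServiceBusClient.from_connection_string(CONN) as sb:
    with sb.get_topic_sender(TOPIC) as sender:
        sender.send_messages(ServiceBusMessage("pause-jobs"))

# Each worker instance listens on its own subscription (e.g. named after the
# role instance id at startup), so every instance sees every command.
instance_subscription = "worker-instance-0"    # placeholder, one per instance
with ServiceBusClient.from_connection_string(CONN) as sb:
    with sb.get_subscription_receiver(TOPIC, instance_subscription) as receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            # ... handle the command on this instance ...
            receiver.complete_message(msg)
```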