Snowflake cluster security in multi-tenancy

In a multi-tenant environment, we run a separate Snowflake cluster for each client. How can we secure a Snowflake cluster against a DDoS attack?
Since we have common microservices across tenants, one tenant could maliciously launch a DDoS attack against another tenant's Snowflake cluster.
How can we configure network isolation between the tenants' Snowflake clusters?
Thanks

Snowflake maintains a comprehensive documented security program based on NIST 800-53 (or industry recognized successor framework), under which Snowflake implements and maintains physical, administrative, and technical safeguards designed to protect the confidentiality, integrity, availability, and security of the Service and Customer Data (the “Security Program”), including, but not limited to, as set forth below. Snowflake regularly tests and evaluates its Security Program, and may review and update its Security Program as well as this Security Addendum, provided, however, that such updates shall be designed to enhance and not materially diminish the Security Program.
More details: SECURITY ADDENDUM
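Since the practical risk described in the question is one tenant flooding the shared microservice layer rather than Snowflake itself, a common application-level mitigation is per-tenant rate limiting inside those shared services. As a minimal sketch (a plain token bucket keyed by tenant ID; this is an illustration, not a Snowflake feature):

```python
import time

class TenantRateLimiter:
    """Token-bucket limiter keyed by tenant ID.

    Each tenant gets an independent bucket, so a flood of requests
    from one tenant exhausts only that tenant's budget and cannot
    starve the others.
    """

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec
        self.burst = burst
        self._buckets = {}  # tenant_id -> (tokens, last_refill_ts)

    def allow(self, tenant_id, now=None):
        """Return True if this tenant may make one more request now."""
        now = time.monotonic() if now is None else now
        tokens, last = self._buckets.get(tenant_id, (self.burst, now))
        # Refill proportionally to elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self._buckets[tenant_id] = (tokens - 1, now)
            return True
        self._buckets[tenant_id] = (tokens, now)
        return False
```

In practice you would place something like this in front of the code path that issues queries to each tenant's Snowflake account, alongside network-level controls such as Snowflake network policies that restrict each account to its tenant's own egress IPs.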

Related

Can I share a k8s cluster securely between many DevOps product teams?

Would there be a secure way to work with different product DevOps teams on the same k8s cluster? How can I isolate workloads between the teams? I know k8s RBAC and namespaces are available, but is that secure enough to run different prod workloads? I know about Istio, but as I understand it there is no direct answer to my question. How can we handle differing ingress configurations from different teams in the same cluster? If it is not possible to isolate workloads securely, how do you orchestrate k8s clusters to reduce maintenance?
Thanks a lot!
The answer is: it depends. First, Kubernetes is not insecure by default, and containers give you a base layer of abstraction. The better questions are:
How much isolation do you need?
What about user management?
Do you need to encrypt traffic between your workloads?
Isolation Levels
If you need strong isolation between your workloads (and I mean really strong), do yourself a favor and use different clusters. There may be business cases where you need a guarantee that some kind of workload is not allowed to run on the same (virtual) machine. You could also try to achieve this by adding nodes that are dedicated to one of your sub-projects and using affinities and anti-affinities to handle the scheduling. But if you need this level of isolation, you'll probably run into problems when thinking about log aggregation, metrics, or in general any component that's used across all of your services.
For any other use case: build one cluster and divide it by namespaces. You could even create a couple of ingress controllers, each belonging to just one of your teams.
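As a sketch of what that namespace division looks like, assuming hypothetical team and identity-provider group names, here is a helper that emits a per-team Namespace plus a RoleBinding scoping the built-in `edit` ClusterRole to that namespace only:

```python
def team_manifests(team):
    """Build the Kubernetes manifests for one team's slice of the
    cluster: a dedicated Namespace plus a RoleBinding that grants the
    team's group the built-in 'edit' ClusterRole, scoped to that
    namespace only."""
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": f"team-{team}"},
    }
    role_binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": f"team-{team}-edit",
                     "namespace": f"team-{team}"},
        # 'edit' is a default ClusterRole; binding it via a RoleBinding
        # limits its effect to this single namespace.
        "roleRef": {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "ClusterRole",
            "name": "edit",
        },
        "subjects": [{
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "Group",
            "name": f"{team}-devs",  # hypothetical group name from your IdP
        }],
    }
    return [namespace, role_binding]
```

You would serialize these dicts to YAML (or JSON) and apply them as part of team onboarding; NetworkPolicies per namespace can be added the same way if you also need network separation between teams.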
User Management
Managing RBAC and users by hand can be a little tricky. Kubernetes itself supports OIDC tokens. If you already use OIDC for SSO or similar, you could re-use your tokens to authenticate users in Kubernetes. I've never used this myself, so I can't speak to role mapping with OIDC.
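For reference, the (legacy) kubectl OIDC auth-provider takes the issuer and client details in the kubeconfig user entry; here is a sketch building such an entry (the field names follow that auth-provider, everything else is hypothetical):

```python
def oidc_user_entry(user, issuer_url, client_id, client_secret):
    """Build a kubeconfig 'users' entry using the legacy OIDC
    auth-provider; kubectl refreshes the id-token against the issuer.
    Newer setups typically use an exec credential plugin instead."""
    return {
        "name": user,
        "user": {
            "auth-provider": {
                "name": "oidc",
                "config": {
                    "idp-issuer-url": issuer_url,
                    "client-id": client_id,
                    "client-secret": client_secret,
                },
            }
        },
    }
```

RBAC then maps the groups claim in the OIDC token to Roles or ClusterRoles, which is where the per-team bindings come in.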
Another solution would be Rancher or another cluster-orchestration tool. I can't speak for the others, but Rancher comes with built-in user management. You could also create projects to group several namespaces for one of your teams.
Traffic Encryption
By using a service mesh like Istio or Linkerd you can encrypt traffic between your pods. Even if it sounds tempting to encrypt your workload traffic, be clear about whether you really need it. Service meshes come with some downsides, e.g. resource usage. You also gain one more component that needs to be managed and updated.

How safe is your data in an Azure VM?

We are going to get a new business system, and I'm trying to convince my boss to host it in the cloud in China, because the business is there (i.e. Azure, AWS, etc.). He has concerns about data confidentiality and doesn't want the company's financial info to leak out. The software vendor even suggested we build our own data center if we are so concerned about data confidentiality, which makes it even harder for me to convince him. He has the impression that anything can be done in China.
I understand that Azure SQL is not an option for me because the host admin still has control even if I implement TDE (and I cannot use Always Encrypted). Now I'm looking at a VM, where I have full control from the VM level up. I can also use disk encryption. Coupled with other security measures like SSL, I'm hoping this will improve the security of the data, whether it is in transit or at rest. Is my understanding correct?
With that said, can the Azure admin still overwrite anything set on VM and take over the VM fully?
Even if it's technically possible, as long as it takes a lot of effort (benefit < effort), it's still worth trying.
Any advice will be much appreciated.
An Azure-level admin can just log in to your VM; it doesn't matter whether it's encrypted or not (or he can decrypt it, for that matter). You cannot really protect yourself from somebody inside your organization doing what he is not supposed to do (you can, to some extent, with things like Privileged Identity Management, proper RBAC, etc.).
If you are talking about an Azure Fabric admin (i.e. a person working for Microsoft, or for the Chinese operating company in this particular case): he can obviously pull the hard drive and get access to your data, but it's encrypted at rest, so chances are he cannot decrypt it. If you encrypt the VM on top of that with Azure Disk Encryption (or Transparent Data Encryption) using your own set of keys, he wouldn't be able to decrypt the data even if he could somehow get past the Azure-side encryption.
If you want more control, IaaS services are better than PaaS services; you have more control with IaaS. You can use BitLocker to encrypt your disks if you are using a Windows OS. The China data centers also operate under industry-specific standards. Access to your customer data is controlled by an independent company in China, 21Vianet; not even Microsoft can access your data without approval and oversight by 21Vianet. I think there is no big risk, but you should implement more security mechanisms than Azure provides out of the box for better security.

Security of in-transit data for Geo-Replication in Azure SQL Database

We want to enable Geo-Replication in Azure SQL Database. However for compliance reasons, we want to be sure that replication to secondary region happens over a secure encrypted channel.
Is there any documentation available to confirm that data in-transit during geo-replication goes over a secure encrypted channel?
I have looked into Microsoft Azure Trust center and there is a brief mention about using standard protocols for in-transit data. However I could not find information related to which protocols are used and how security of in-transit data is ensured.
Thank you for this question. Yes, geo-replication uses a secure channel. If you are using V11 servers, the SSL certificates are global and regularly rotated. If you are using V12 servers, the certificates are scoped to the individual logical servers. This provides secure-channel isolation not only between different customers but also between different applications. Based on this post I have filed a work item to reflect this in the documentation as well.

Windows Azure and multiple storage accounts

I have an ASP.NET MVC 2 Azure application that I am trying to switch from being single tenant to multi-tenant. I have been reviewing many blogs and posts and questions here on Stack Overflow, but am still trying to wrap my head around the specifics of what's right for this particular app.
Currently the application stores some information in a SQL Azure database, as well as some other info in an Azure storage account. I'm considering writing the tenant provisioning code to simply create a new database for each new tenant, along with a new Azure storage account. This brings me to the following question:
How will I go about testing this approach locally? As far as I can tell, the local Azure Storage Emulator only has 1 storage account. I'm not sure if I'm able to create others locally. How will I be able to test this locally? Or will it be possible?
There are many aspects to consider with multitenancy, one of which is data architecture. You also have billing, performance, security and so forth.
Regarding data architecture, let's first explore SQL storage. You have the following options available to you: add a CustomerID (or other identifier) that your code will use to filter records; use different schema containers for different customers (each customer has its own copy of all the database objects, owned by a dedicated schema in a database); linear sharding (in which each customer has its own database); and Federation (a feature of SQL Azure that offers progressive sharding based on performance and scalability needs). All these options are valid, but they have different implications for performance, scalability, security, maintenance (such as backups), cost, and of course database design. I couldn't tell you which one to choose based on the information you provided; some models are easier to implement than others if you already have a code base. Generally speaking, a linear shard is the simplest model and provides strong customer isolation, but it is perhaps the most expensive of all. A schema-based separation is not too hard, but it requires a good handle on security requirements and can introduce cross-customer performance issues, because this approach is not shared-nothing (for customers on the same database). Finally, Federation requires the use of a customer identifier and has a few limitations; however, this technology gives you more control over performance distribution and long-term scalability (because, like a linear shard, Federation uses a shared-nothing architecture).
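To make the linear-shard option concrete, here is a minimal routing sketch (the server name and the `tenant_<id>` database naming convention are hypothetical) that maps a tenant ID to its dedicated database:

```python
class LinearShardRouter:
    """Linear sharding: each customer gets a dedicated database.
    This router only records provisioned tenants and maps a tenant ID
    to a connection string; a real implementation would also issue
    CREATE DATABASE during provisioning."""

    def __init__(self, server):
        self.server = server
        self._tenants = set()

    def provision(self, tenant_id):
        """Register a tenant and return its connection string."""
        self._tenants.add(tenant_id)
        return self.connection_string(tenant_id)

    def connection_string(self, tenant_id):
        """Look up the per-tenant database connection string."""
        if tenant_id not in self._tenants:
            raise KeyError(f"tenant {tenant_id!r} not provisioned")
        return f"Server={self.server};Database=tenant_{tenant_id};"
```

The appeal of this model is that tenant isolation falls out of the routing layer itself: a query can only ever reach the database named by the caller's tenant ID.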
Regarding storage accounts, using different storage accounts per customer is definitely the way to go. The primary issue you will face if you don't use separate storage accounts is performance limitations, such as the maximum number of transactions per second that can be executed against a single storage account. As you point out, however, testing locally may be a problem. Consider this: the local emulator does not offer 100% parity with an Azure storage account (some functions are not supported in the emulator), so I would only use the local emulator for initial development and troubleshooting. Any serious testing, including multi-tenant testing, should be done using real storage accounts. This is the only way you can fully test an application.
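One way to keep the emulator useful for initial development while reserving real per-tenant accounts for serious testing is to make the connection string configurable. A sketch, with a hypothetical per-tenant account naming convention ("UseDevelopmentStorage=true" is the emulator's well-known shortcut connection string):

```python
def storage_connection_string(tenant, account_keys, use_emulator=False):
    """Pick the storage connection string for a tenant.

    The local emulator exposes only its single well-known development
    account, so in emulator mode all tenants collapse onto it; real
    per-tenant accounts are only exercised against Azure itself.
    'account_keys' maps an account name to its key (standing in for
    whatever provisioning store you use)."""
    if use_emulator:
        return "UseDevelopmentStorage=true"
    account = f"tenant{tenant}"  # hypothetical naming convention
    return ("DefaultEndpointsProtocol=https;"
            f"AccountName={account};"
            f"AccountKey={account_keys[account]};")
```

This keeps the tenant-provisioning logic identical in both modes; only the resolved connection string differs.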
You should consider not creating separate databases, but instead creating different object namespaces within a single SQL database. Each tenant can have their own set of tables.
Depending on how you are using storage, you can create separate storage containers or message queues per client.
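If you go the separate-containers route (one blob container per client), the container names must satisfy Azure's rules: 3-63 characters, lowercase letters, digits, and single hyphens, starting and ending with a letter or digit. A sketch deriving a compliant name from an arbitrary client ID (the "client-" prefix is a hypothetical convention):

```python
import re

def client_container_name(client_id):
    """Derive a per-client blob container name that satisfies Azure's
    container naming rules (3-63 chars, lowercase alphanumerics and
    single hyphens, no leading/trailing hyphen)."""
    name = re.sub(r"[^a-z0-9-]", "-", client_id.lower())
    name = re.sub(r"-{2,}", "-", name).strip("-")
    name = f"client-{name}"[:63]
    if len(name) < 3 or not re.fullmatch(r"[a-z0-9](-?[a-z0-9])*", name):
        raise ValueError(f"cannot derive a valid container name from {client_id!r}")
    return name
```

The same idea applies to per-client queue names, which have a similar (but not identical) set of naming constraints.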
Given these constraints you should be able to test locally with the storage emulator and local SQL instance.
Please let me know if you need further explanation.

Advantages and disadvantages of Azure security

Has anyone seen details or a white paper on Azure security and its positives and negatives compared to hosting your own?
Securing Microsoft's Cloud Infrastructure
Security Mental Model for Azure
Cloud Security Frame
Outlook for Azure – scattered clouds but generally sunny
Security Considerations for Client and Cloud Applications
abmv has a full set of links.
Just wanted to add one point: the Azure platform is highly automated, so there are very few manual operations, at least compared with the hosting companies I have seen. This reduces the chance of security problems due to human error (forgetting a configuration setting, for example).
Azure security whitepapers are available at the Azure Trust Center: http://azure.microsoft.com/en-us/support/trust-center/security/
This is also a helpful document for Security Best Practices for Azure Solutions: http://download.microsoft.com/download/7/8/A/78AB795A-8A5B-48B0-9422-FDDEEE8F70C1/SecurityBestPracticesForWindowsAzureSolutionsFeb2014.docx
In practice, many customers choose to mix several compute types in their cloud environment, as certain models may apply better to different tasks; multiple cloud services, virtual machines, and Web Sites can all work in conjunction. The pros and cons of each should be weighed when making architectural decisions.
There is great potential and promise for the cloud, but those looking to adopt cloud computing are understandably nervous and excited about the business prospects. Customers are excited about reducing capital costs, divesting themselves of infrastructure management, and taking advantage of the agility delivered by on-demand provisioning of cloud-based assets. However, IT architects are also concerned about the risks of cloud computing if the environment and applications are not properly secured, and also the loss of direct control over the environment for which they will still be held responsible. Thus, any cloud platform must mitigate risk to customers as much as possible, but it is also incumbent on the subscriber to work within the cloud platform to implement best practices as they would for on-premises solutions.
Moving to a cloud platform is ultimately a question of trust vs. control. With the Infrastructure-as-a-Service (IaaS) model, the customer places trust in the cloud provider for managing and maintaining hardware. The cloud provider secures the network, but the customer must secure the host and the applications. However, for Platform-as-a-Service (PaaS), the customer gives further control of the host, the network, and runtime components. Thus, the cloud vendor would be responsible for ensuring that the host and runtime are properly secured from threats. In both cases, the customer would be responsible for securing applications and data (e.g., authentication, authorization, configuration management, cryptography, exception management, input validation, session management, communication, audit and logging).
Software as a Service (SaaS) presents one further level of abstraction. In this case, the cloud provider manages all levels of the stack all the way up to the application. Customers provide configuration information and sometimes high level code, but that is the end of their responsibility.
Generally, traditional threats will continue to exist in the cloud, such as cross-site scripting (XSS) or code injection attacks, Denial-of-Service (DOS) attacks, or credential guessing attacks. Some old threats are mitigated, since patching may be automated (for Platform-as-a-Service, or PaaS, only), and cloud resiliency improves failover across a service. Some threats are expanded, such as those concerning data privacy (location and segregation) and privileged access. New threats are introduced, such as new privilege escalation attacks (VM to host, or VM to VM), jail-breaking the VM boundary or hyper-jacking (a rootkit attack on the host or VM). Microsoft has taken extraordinary measures to protect Azure against those classes of threats.
Worth also checking into the Azure Security Information Site - we'll be adding a lot more dev-centric security content there in this calendar year https://aka.ms/AzureSecInfo
