I'm new to Terraform, but how would I, say, run it against a regular server? Is that possible? I am talking about a regular on-premises machine.
EDIT: Years later I've come back to this question, so let me rephrase it.
Can Terraform be used to provision a datacenter server that is not running on a hypervisor?
Terraform operates by calling into the APIs of various service providers and systems. Thus in principle Terraform can manage anything that has an API, and in practice it has existing support for a few different on-premises-capable systems, including:
OpenStack
VMware vSphere
CloudStack
If the compute resources in your existing datacenter infrastructure are already managed with one of these systems, or if you are willing to install them, then Terraform can be used to manage at least parts of these systems. (For full details, see the documentation for each provider linked above.)
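For example, if your existing compute is managed by vSphere, a Terraform configuration to clone a VM from a template might look roughly like the following. This is only a minimal sketch: the datacenter, datastore, network, and template names are placeholders, and the exact arguments vary between versions of the vSphere provider, so check its documentation rather than taking this verbatim.

```hcl
# Minimal sketch: clone a VM from an existing template on vSphere.
# "dc1", "datastore1", "VM Network", "cluster1/Resources" and
# "ubuntu-template" are placeholders for objects in your environment.
variable "vsphere_user" {}
variable "vsphere_server" {}
variable "vsphere_password" {
  sensitive = true
}

provider "vsphere" {
  user                 = var.vsphere_user
  password             = var.vsphere_password
  vsphere_server       = var.vsphere_server
  allow_unverified_ssl = true
}

data "vsphere_datacenter" "dc" {
  name = "dc1"
}

data "vsphere_datastore" "ds" {
  name          = "datastore1"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_resource_pool" "pool" {
  name          = "cluster1/Resources"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "net" {
  name          = "VM Network"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_virtual_machine" "template" {
  name          = "ubuntu-template"
  datacenter_id = data.vsphere_datacenter.dc.id
}

resource "vsphere_virtual_machine" "app" {
  name             = "app-server-01"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.ds.id
  num_cpus         = 2
  memory           = 4096
  guest_id         = data.vsphere_virtual_machine.template.guest_id

  network_interface {
    network_id = data.vsphere_network.net.id
  }

  disk {
    label = "disk0"
    size  = data.vsphere_virtual_machine.template.disks[0].size
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id
  }
}
```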
Terraform's plugin architecture allows support for other systems to be developed, so other API-driven datacenter management systems such as The Foreman could be supported by Terraform, and indeed third parties have developed integrations with others that are distributed outside of the "official set" that HashiCorp hosts.
By default, Terraform does not support bare-metal provisioning of on-prem equipment. However, the open-source project Digital Rebar Provision (DRP) has a Terraform provider that allows the Terraform DSL to operate in conjunction with DRP: the provider drives the DRP API, enabling full bare-metal provisioning from Terraform.
The Digital Rebar Provision Terraform provider is written and supported by RackN. You will need to install the DRP service on-prem and configure it with provisioning workflows appropriate for your needs. Once this is done, the Terraform provider gives you "ready state" infrastructure access to request machines from the "terraform ready" pool of servers. The servers are then driven through the requested workflow to configure them according to the operator's needs.
On "destroy", the machine is cleaned and returned to the "terraform ready" pool of servers. You can find quickstart information on getting DRP up and running by visiting the RackN-hosted portal.
As @Martin Atkins has pointed out, Terraform drives other infrastructure or cloud resources via APIs, and this is true for Digital Rebar Provision as well. Terraform itself does not know how to interact with bare-metal infrastructure; a control or orchestration engine that understands how to address physical systems is required. In this solution, Terraform drives the Digital Rebar Provision service via its DSL, which in turn provisions physical server systems on-prem.
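To give a feel for the shape of this, a configuration might look something like the sketch below. Note that the provider, resource, and attribute names here (drp, drp_machine, pool, workflow, endpoint, and the credentials) are illustrative assumptions only, not verified against the RackN provider's actual schema; consult the RackN documentation for the real resource names and arguments.

```hcl
# Hypothetical sketch only: the resource and attribute names below are
# assumptions, not taken from the RackN provider docs. The intent is to
# illustrate the pattern of "request machines from a ready pool and run
# them through a provisioning workflow".
variable "drp_password" {
  sensitive = true
}

provider "drp" {
  # Endpoint and credentials of your on-prem Digital Rebar Provision service
  endpoint = "https://drp.example.internal:8092"
  username = "rocketskates"
  password = var.drp_password
}

resource "drp_machine" "web" {
  count = 3

  # Assumed attributes: claim a machine from the "terraform ready" pool
  # and drive it through an OS install workflow.
  pool     = "terraform-ready"
  workflow = "ubuntu-base-install"
}
```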
For full disclosure: I work for RackN, which fosters and supports the Digital Rebar Provision service and capability.
There is an open-source Terraform Redfish provider currently being developed by Dell EMC that allows for provisioning, deployment and update of x86 servers out-of-band (via a BMC such as iDRAC) using standard Redfish REST APIs. For more details on Redfish, please refer to the DMTF Redfish specification here. At present it supports the following provider resources and data sources (a hedged configuration sketch follows the list):
Resources:
resource_redfish_bios
resource_redfish_power
resource_redfish_storage_volume
resource_simple_update
resource_redfish_virtual_media
Data Sources:
data_source_redfish_bios
data_source_redfish_storage
data_source_redfish_virtual_media
data_source_redfish_firmware_inventory
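As a rough idea of what using this looks like, the sketch below reboots a server out-of-band through its BMC. The redfish_server connection block and the desired_power_action attribute are based on my recollection of the provider's published examples and should be treated as assumptions; check the current provider documentation for the exact schema.

```hcl
# Hedged sketch: act on a server's BMC (e.g. iDRAC) out-of-band via the
# Redfish provider. Attribute names are assumptions from memory of the
# provider examples; verify against the provider docs before use.
terraform {
  required_providers {
    redfish = {
      source = "dell/redfish"
    }
  }
}

provider "redfish" {}

variable "bmc_password" {
  sensitive = true
}

resource "redfish_power" "reboot" {
  redfish_server {
    endpoint     = "https://192.168.0.100" # BMC (iDRAC) address
    user         = "root"
    password     = var.bmc_password
    ssl_insecure = true
  }

  # Assumed value; the provider maps this onto a Redfish ComputerSystem
  # reset action.
  desired_power_action = "ForceRestart"
}
```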
The question is a bit vague, but:
If you mean that you want to write Infrastructure-as-Code for your own on-premises servers, the answer is NO. Refer to Martin Atkins' answer.
If you mean that you want to SSH into your on-premises servers and run Terraform commands (plan, apply, destroy, etc.), the answer is YES.
Download the binary suitable for your server's operating system from here.
I need to walk the resources in an Azure subscription and determine the dependencies of those resources, i.e. this Logic App connects to or is triggered by that Service Bus topic, that API connects to this SQL Server, and so on.
I realise I could use the dependsOn attribute in the ARM template, but that may not be a true representation of all the resources in the subscription being parsed.
Does anyone know of a tool that can build a dependency graph? Or does anyone know whether Azure PowerShell provides enough information to help me build a dependency graph of my own?
There is nothing built into Azure itself, but there are a few options that may help depending on your requirements.
There is an open source ARM visualizer at http://armviz.io/designer. You can import an ARM template and then it will create the diagram of your resources. There is a walkthrough available at https://blogs.msdn.microsoft.com/azureedu/2016/03/09/how-can-i-map-my-existing-azure-arm-resources-visually/.
There is another more fully featured resource visualizer that covers multiple cloud platforms at https://www.cloudockit.com/, but it is not free to use.
There are some other similar visualization tools that you might be interested in for helping visualize and manage your applications and infrastructure rather than just your Azure resources.
Application Map in Application Insights - https://learn.microsoft.com/en-us/azure/application-insights/app-insights-app-map. It provides a more application-centric architecture view rather than an infrastructure view, and provides additional runtime insights such as performance and errors.
Service Map in Operations Management Suite - https://learn.microsoft.com/en-us/azure/operations-management-suite/operations-management-suite-walkthrough-servicemap. This provides a more machine-oriented infrastructure view showing the different connections and dependencies that a VM has.
You can also use Azure Resource Graph for this, but there isn't a list or built-in way to automatically detect all of the dependencies so you would have to build out this logic yourself. There is a starter sample at https://learn.microsoft.com/en-us/azure/governance/resource-graph/samples/advanced?tabs=azure-cli#list-virtual-machines-with-their-network-interface-and-public-ip.
If a malicious user tampers with files deployed to App Services and plants a virus, is there a way to detect that? On a virtual machine, for example, I could install antivirus software; can App Services be protected in the same way?
http://stackoverflow.com/questions/38387004/antimalware-for-azure-app-services
I am looking at this URL for reference, and I understand that using Tinfoil Security would meet the requirements. However, Tinfoil Security cannot be used because my license is through a Japanese CSP.
https://www.microsoft.com/en-us/TrustCenter/Security/ThreatManagement
I also saw this URL, but my English is not strong, so my understanding may be incomplete; therefore I need some details. Does the "Azure Cloud Services" in "Microsoft Antimalware for Azure Cloud Services and Virtual Machines" include App Service? I thought that only Cloud Services were covered. For example: https://azure.microsoft.com/en-us/services/cloud-services/
Currently I check whether file sizes and timestamps have changed using an App Service WebJob, but please let me know if there is anything that can be covered by functionality Microsoft provides as a service.
Azure App Service uses the Anti-malware solution used by Azure Cloud Services and Virtual Machines.
This is mentioned here: App Service Security
This further points to the following article: Microsoft Antimalware for Azure Cloud Services and Virtual Machines
For extended scenarios, Tinfoil was provided as an additional option. If that is not available to you, then using Azure Cloud Services (Web Roles) is more in line with your requirement.
As part of our development process we are required to certify our drivers against the Microsoft HLK/HCK test suites. As our testing infrastructure exists in Azure, I need a method to enable secure boot via ARM template (or other method) on the Azure Marketplace based VMs.
I have scoured the interwebs for references to this process, but was unable to find anything.
Is there an option anywhere in the latest ARM versions that would allow me to secure boot enable my Server 2016-Datacenter Azure VMs?
https://argonsys.com/learn-microsoft-cloud/library/secure-boot-on-virtual-machines/
This could be enabled at the guest OS level if the PowerShell from that link runs successfully inside the guest OS. If not, the setting may exist in the ARM schema on GitHub before it is documented. Failing that, Microsoft may need to intervene via a low-severity support request.
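For reference, Azure has since added Trusted Launch (Generation 2) VMs, which expose secure boot as a first-class setting (securityProfile.uefiSettings.secureBootEnabled in an ARM template). Since the rest of this thread is Terraform-centric, here is a hedged HCL sketch using the azurerm provider's secure_boot_enabled and vtpm_enabled arguments; the image SKU, VM size, region, and the assumption that the chosen image supports Trusted Launch all need to be confirmed for your subscription.

```hcl
# Hedged sketch: a Windows VM with UEFI secure boot enabled via Trusted
# Launch, using the azurerm provider. Names, size and image SKU are
# placeholders; confirm Trusted Launch support for the image/size/region.
provider "azurerm" {
  features {}
}

variable "admin_password" {
  sensitive = true
}

resource "azurerm_resource_group" "hlk" {
  name     = "rg-hlk-test"
  location = "eastus"
}

resource "azurerm_virtual_network" "hlk" {
  name                = "vnet-hlk"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.hlk.location
  resource_group_name = azurerm_resource_group.hlk.name
}

resource "azurerm_subnet" "hlk" {
  name                 = "default"
  resource_group_name  = azurerm_resource_group.hlk.name
  virtual_network_name = azurerm_virtual_network.hlk.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_network_interface" "hlk" {
  name                = "nic-hlk-01"
  location            = azurerm_resource_group.hlk.location
  resource_group_name = azurerm_resource_group.hlk.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.hlk.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_windows_virtual_machine" "hlk_host" {
  name                  = "hlk-host-01"
  resource_group_name   = azurerm_resource_group.hlk.name
  location              = azurerm_resource_group.hlk.location
  size                  = "Standard_D4s_v4"
  admin_username        = "azureadmin"
  admin_password        = var.admin_password
  network_interface_ids = [azurerm_network_interface.hlk.id]

  # Trusted Launch settings: UEFI secure boot plus virtual TPM.
  secure_boot_enabled = true
  vtpm_enabled        = true

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "StandardSSD_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2016-datacenter-gensecond" # Gen2 SKU name is an assumption
    version   = "latest"
  }
}
```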
I understand that both Azure Service Fabric and Azure Container Service can be used to host microservices in containers.
In what scenarios is it practical and cost-effective to use one over the other? What are some strong use cases for the Azure Service Fabric and Azure Container Service models of hosting?
I read this comparison but did not find it comprehensive.
Update: A comparison table like the one in this diagram would help keep the points "sticky" and memorable while deciding which option to use.
Acronyms used in the table - AF - Azure Functions, ASF - Azure Service Fabric, ASE - App Service Environment, ACS - Azure Container Service, VMSS - Virtual Machine Scale Set
The “rank” should not be misconstrued as good or bad
Besides the link you pasted for "Choosing between Azure Container Service, Azure Service Fabric and Azure Functions", the following is what I have found out.
Azure Service Fabric (ASF) is more of a PaaS offering while Azure Container Service (ACS) is more like an IaaS offering.
ASF gives you its own specific programming model, which, if you follow it, lets you take advantage of ASF features. That is why there is an ASF SDK for C#/Java you need to use. However, ASF additionally allows guest executables and orchestrating Docker containers (not sure how heavily those will be leveraged compared to ACS, or whether they will be on par).
At the moment ASF is Windows-only (an ASF-on-Linux preview became available around Feb 2017), which smells of vendor lock-in.
ASF offers you the Actor model, which is good for IoT solutions (maybe quicker to implement than DIY on ACS).
ACS in this sense is more open; it provides only a container-based model and heavily relies on and supports the Docker ecosystem. And once it's a container, it's pretty much technology-agnostic.
This could also be the reason for Microsoft's push for Windows Nano, which is the basis for Windows-based (server-level) containers (my opinion). So with ACS you can have either Windows or Linux containers, or both.
ACS also allows you to use the well-known open-source container orchestrators, including Docker Swarm and DC/OS (Mesos).
ASF, meanwhile, provides its own orchestration. In other words, ASF provides a more integrated, easier-to-use, feature-rich model, whereas ACS gives you much more openness and flexibility.
Microsoft folks at a conference also mentioned that ASF could be considered more of a Microsoft-oriented shop's choice, while ACS is more oriented towards open-source technologies.
[Feb 2019 Update]
It's a difficult comparison as Azure Service Fabric also exposes an application framework. It's pretty opinionated about the way applications should be built, which doesn't necessarily fit well with notions of 12-factor, cloud-native container apps.
This is an ever-moving feast, but there are a growing number of container runtimes in Azure:
Azure Kubernetes Service is the container orchestrator that replaced ACS. It seems to be moving very much in a PaaS direction.
Azure Container Instances are useful for small jobs and burst scale
Azure Batch is optimised for large, repetitive compute jobs
Azure Service Fabric is an IaaS offering geared more around lifting and shifting Windows applications to the cloud
Azure Service Fabric Mesh is the new kid on the block - a PaaS service for Service Fabric apps.
All in all, if you're starting with containers then I would give Service Fabric a miss and head for Kubernetes. You can run containers in Service Fabric, but you can be made to feel like a second-class citizen. IMHO, OFC.
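To tie this back to the Terraform theme of the earlier answers, a minimal sketch of standing up an AKS cluster with the azurerm provider is below. The resource group, node size, and node count are placeholders, and a production cluster needs more thought around networking, identity, and upgrades.

```hcl
# Minimal sketch: a basic AKS cluster via the azurerm provider.
# Names, node size and count are placeholders.
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "aks" {
  name     = "rg-aks-demo"
  location = "westeurope"
}

resource "azurerm_kubernetes_cluster" "demo" {
  name                = "aks-demo"
  location            = azurerm_resource_group.aks.location
  resource_group_name = azurerm_resource_group.aks.name
  dns_prefix          = "aksdemo"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}

# Kubeconfig for connecting with kubectl once the cluster is up.
output "kube_config" {
  value     = azurerm_kubernetes_cluster.demo.kube_config_raw
  sensitive = true
}
```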
Gross oversimplification: if you're a Linux person, ACS will probably match what you want better. If you are a Windows dev writing Windows code, ASF will probably serve you better.
This is a question for Azure experts, in particular around the Windows VMs available in Azure.
Do they make any changes to the base build? Hardening and security standards? Or are they standard builds fresh out of the box?
Any information on this would be greatly appreciated.
Yes. Public and up-to-date information about security measures (compliance, some technical details, etc.) can be found on the Azure Trust Center.
However, I do not think Microsoft reveals all of the internal implementation details, but a lot of work is done around isolation of the hypervisor, root OS, and guest VMs. Also, the Azure Fabric Controller is the "brain" that secures and isolates customer deployments and manages the commands sent to the host OS/hypervisor, and the host OS is a configuration-hardened version of Windows Server.
Some basic information can be found here:
https://technet.microsoft.com/en-us/cloud/gg663906.aspx
Azure Fabric Controller: https://azure.microsoft.com/en-us/documentation/videos/fabric-controller-internals-building-and-updating-high-availability-apps/
And I recommend following Mark Russinovich, Azure CTO, as his videos are some of the most revealing of internal details that I have ever seen.
You might wanna check out the CIS hardened Images in the Azure Marketplace: https://www.cisecurity.org/cis-hardened-images-now-in-microsoft-azure-marketplace/
There you can choose between two levels of hardening depending on your workload, and there are multiple Windows Server versions and even some Linux distributions available. If you want to harden the VMs yourself, I would check out the Dev-Sec project on GitHub: https://github.com/dev-sec
There you can customize the hardening to your needs if you have an automation tool in place such as Chef, Puppet, etc.