I am trying to create a resource group in Azure using Ansible. However, I am getting the following error:
ERROR! no action detected in task
The error appears to have been in '/home/alam/azure/rg.yml': line 6, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
tasks:
- azure_rm_resourcegroup:
^ here
Here is my yml playbook:
- name: Test the inventory script
  hosts: azure
  connection: local
  gather_facts: no
  tasks:
    - name: "Create a resource group"
      azure_rm_resourcegroup:
        location: westus
        name: Testing
        state: present
        tags:
          delete: never
          testing: testing
Command:
ansible-playbook -i ./ansible/contrib/inventory/azure_rm.py rg.yml
Upgrade Ansible to at least version 2.1 (better yet to the latest one). The docs are clear on that requirement:
azure_rm_resourcegroup - Manage Azure resource groups.
New in version 2.1.
If you use an older version, the module name will not be recognised and Ansible will throw an error: "no action detected in task."
Upgrading to 2.2 has resolved the issue. However, to create the resources, hosts should not be azure; change it to localhost.
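You can check the installed version with ansible --version. Putting both fixes together, a sketch of the playbook (reusing the task from the question, and assuming Ansible >= 2.1 with Azure credentials already configured):

- name: Create a resource group
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: "Create a resource group"
      azure_rm_resourcegroup:
        location: westus
        name: Testing
        state: present
        tags:
          delete: never
          testing: testing

Since azure_rm_resourcegroup calls the Azure API from the control machine, it does not need to target the hosts returned by the azure_rm.py dynamic inventory.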
When I try to add my k8s cluster in an Azure VM, it shows an error like:
error: resource mapping not found for name: "cattle-admin-binding" namespace: "cattle-system" from "STDIN": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first
Here is the output of the command I executed:
root@kubeadm-master:~# curl --insecure -sfL https://104.211.32.151:8443/v3/import/lqkbhj6gwg9xcb5j8pnqcmxhtdg6928wmb7fj2n9zv95dbxsjq8vn9.yaml | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created
namespace/cattle-system created
serviceaccount/cattle created
secret/cattle-credentials-e558be7 created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
deployment.apps/cattle-cluster-agent created
daemonset.apps/cattle-node-agent created
error: resource mapping not found for name: "cattle-admin-binding" namespace: "cattle-system" from "STDIN": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first
I was also facing the same issue, so I changed the API version for the cattle-admin-binding from beta to stable as below:
Old value:
apiVersion: rbac.authorization.k8s.io/v1beta1
Changed to:
apiVersion: rbac.authorization.k8s.io/v1
Though I ran into some other issues later, the above error was gone.
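In practice, since the manifest is piped straight from the Rancher server, save it to a file first, edit the apiVersion there, and then kubectl apply the edited file. The changed part of the cattle-admin-binding object looks like this (roleRef and subjects stay exactly as generated by Rancher and are omitted here):

apiVersion: rbac.authorization.k8s.io/v1   # changed from rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: cattle-admin-binding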
I tried to create a task in an Ansible playbook using the win_powershell module, but every time I run the playbook the task hangs. Does anyone have a solution?
I'm pretty new to Ansible.
Here is the YAML file:
---
- name: notification
  hosts: windows
  become: false
  gather_facts: false
  tasks:
    - name: message
      ansible.windows.win_powershell:
        script: |
          Add-Type -AssemblyName PresentationFramework
          [System.Windows.MessageBox]::Show('message')
Perhaps one of the following modules would be more suitable than running a PowerShell command. The MessageBox call blocks until someone dismisses it, and the remote session Ansible runs the script in is non-interactive, so nothing can dismiss the dialog and the task never returns. You'd need to install the community.windows collection for either of these.
The community.windows.win_msg module.
Usage:
- name: Display message.
  community.windows.win_msg:
    msg: 'message'
Ansible also supports toast notifications via the community.windows.win_toast module.
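A minimal sketch using win_toast, assuming the community.windows collection is installed (only the msg parameter is shown; check the module documentation for the full parameter list):

- name: Display a toast notification.
  community.windows.win_toast:
    msg: 'message'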
I need to handle a YUM package installation/deployment process with different versions/packages for target environments (dev/prod/systest) using an Ansible playbook.
NOTE: I have gone through the group_vars and host_vars concepts but did not understand whether multiple packages with different versions can be handled for deployment in multiple environments based on input.
As you found out, this separation can be achieved by using group_vars and host_vars. These are loaded relative to the path of the inventory file.
Simple example tasks like the ones below will install different versions in the dev and prod environments.
Example playbook1.yml:
- hosts: appservers
  tasks:
    - name: install app-a
      yum:
        name: 'app-a-{{ app_a_version }}'
    - name: install app-b
      yum:
        name: 'app-b-{{ app_b_version }}'
Consider the example directory structure separating each environment's inventory:
dev/hosts
prod/hosts
systest/hosts
Each inventory file will contain hosts/groups for that environment.
Dev environment:
Example dev/hosts:
[appservers]
appserver1.dev
appserver2.dev
Then we can have variables specific to this environment in dev/group_vars/appservers.yml:
---
app_a_version: 1.1
app_b_version: 5.5
This will install app-a-1.1 and app-b-5.5 when run as:
ansible-playbook playbook1.yml -i dev/hosts
Prod environment:
Example prod/hosts:
[appservers]
appserver1.prod
appserver2.prod
And variables defined in prod/group_vars/appservers.yml:
app_a_version: 1.0
app_b_version: 5.0
But in prod it will install app-a-1.0 and app-b-5.0 when run as:
ansible-playbook playbook1.yml -i prod/hosts
host_vars works in a similar way, and can be used to provide variables specific to each host in the inventory rather than to groups in the inventory.
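For example, a hypothetical file dev/host_vars/appserver1.dev.yml would pin a different version for that single host, taking precedence over the group_vars value:

---
# dev/host_vars/appserver1.dev.yml (hypothetical per-host override)
app_a_version: 1.2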
I'm trying to install Istio with automatic sidecar injection into Kubernetes. My environment consists of three masters and two nodes and was built on Azure using the Azure Container Service marketplace product.
Following the documentation located here, I have so far enabled RBAC and DynamicAdmissionControl. I accomplished this by modifying /etc/kubernetes/istio-initializer.yaml on the Kubernetes master, adding the content outlined in red, and then restarting the master with the reboot command.
The next step in the documentation is to apply the yaml using kubectl. I assume that the documentation intends for the user to clone the Istio repository and cd into it before this step, but that is not mentioned.
git clone https://github.com/istio/istio.git
cd istio
kubectl apply -f install/kubernetes/istio-initializer.yaml
After which the following error occurs:
user@hostname:~/istio$ kubectl apply -f install/kubernetes/istio-initializer.yaml
configmap "istio-inject" configured
serviceaccount "istio-initializer-service-account" configured
error: error validating "install/kubernetes/istio-initializer.yaml": error validating data: found invalid field initializers for v1.ObjectMeta; if you choose to ignore these errors, turn validation off with --validate=false
If I attempt to execute kubectl apply with the mentioned flag, validate=false, then this error is generated instead:
user@hostname:~/istio$ kubectl apply -f install/kubernetes/istio-initializer.yaml --validate=false
configmap "istio-inject" configured
serviceaccount "istio-initializer-service-account" configured
deployment "istio-initializer" configured
error: unable to recognize "install/kubernetes/istio-initializer.yaml": no matches for admissionregistration.k8s.io/, Kind=InitializerConfiguration
I'm not sure where to go from here. The problem appears to be related to the admissionregistration.k8s.io/v1alpha1 block in the yaml but I'm unsure what specifically is incorrect in this block.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: istio-sidecar
initializers:
  - name: sidecar.initializer.istio.io
    rules:
      - apiGroups:
          - "*"
        apiVersions:
          - "*"
        resources:
          - deployments
          - statefulsets
          - jobs
          - daemonsets
Installed version of Kubernetes:
user@hostname:~/istio$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
I suspect this is a versioning mismatch. As a follow-up question, is it possible to deploy a version of Kubernetes >= 1.7.4 to Azure using ACS?
I'm fairly new to working with Kubernetes so if anyone could help I would greatly appreciate it. Thank you for your time.
This seems to be a versioning problem, as the alpha feature is supported for k8s version >= 1.7, as mentioned here (https://kubernetes.io/docs/admin/extensible-admission-controllers/#what-are-initializers):
1.7 introduces two alpha features, Initializers and External Admission
Webhooks, that address these limitations. These features allow admission
controllers to be developed out-of-tree and configured at runtime.
And it is possible to deploy a version of Kubernetes >= 1.7.4 to Azure. I am not sure about the version deployed through the portal, but if you use acs-engine to generate the ARM template, it is possible to deploy a cluster with version 1.7.5.
You can refer here for the procedure: https://github.com/Azure/acs-engine. Basically it involves three steps. First, create the JSON cluster definition file by referring to the clusterDefinition section; to use version 1.7.5, set the attribute "orchestratorRelease" to "1.7" and enable RBAC by setting the attribute "enableRbac" to true. Second, use acs-engine (version >= 0.6.0) to parse the JSON file into an ARM template (azuredeploy.json and azuredeploy.parameters.json should be created). Lastly, use the "New-AzureRmResourceGroupDeployment" command in PowerShell to deploy the cluster to Azure.
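For reference, a sketch of the relevant part of such a cluster definition JSON (attribute names as documented in the acs-engine repository; the required masterProfile, agentPoolProfiles, linuxProfile and servicePrincipalProfile sections are omitted here):

{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.7",
      "kubernetesConfig": {
        "enableRbac": true
      }
    }
  }
}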
Hope this helps :)
I am using azure 0.11.1 and have also tried version 1.0.1, but when I execute the playbook I get the same error, mentioned below. The playbook is also shown below:
azure_vm.yml:
---
- local_action:
    module: "azure"
    name: 'vm_ubuntu1'
    role_size: Small
    image: '5112500ae3b842c8b9c604889f8753c3__OpenLogic-CentOS-67-20150815'
    password: "admin12345#"
    location: 'East US 2'
    user: admin
    wait: yes
    subscription_id: 'xxxxxxxxxxxxxx'
    management_cert_path: '/ansible-pbook/xxxx.pem'
    storage_account: 'storageacc01'
    endpoints: '22,8080,80'
  register: azure_vm
Error:
root@xxxxx:/ansible-pbook# ansible-playbook azure_vm.yml
ERROR: password is not a legal parameter of an Ansible Play
Please suggest a solution.
The correct format for a task is something like this:
- local_action: azure
    name='vm_ubuntu1'
    role_size=Small
    image='5112500ae3b842c8b9c604889f8753c3__OpenLogic-CentOS-67-20150815'
    password="admin12345#"
    location='East US 2'
    user=admin
    wait=yes
    subscription_id='xxxxxxxxxxxxxx'
    management_cert_path='/ansible-pbook/xxxx.pem'
    storage_account='storageacc01'
    endpoints='22,8080,80'
  register: azure_vm
All the parameters passed to the module should be in the key=value format, while attributes of the task/action itself (like register, tags, ignore_errors, etc.) are in the attribute: value format.
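The "not a legal parameter of an Ansible Play" error also indicates the task sits at the top level of the playbook file, where Ansible expects a play. Whichever argument style you use, the task needs to live under a play's tasks: section; a minimal sketch (assuming local execution) could look like:

---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - local_action: azure name='vm_ubuntu1' role_size=Small location='East US 2'
        user=admin password="admin12345#" subscription_id='xxxxxxxxxxxxxx'
        management_cert_path='/ansible-pbook/xxxx.pem' storage_account='storageacc01'
        image='5112500ae3b842c8b9c604889f8753c3__OpenLogic-CentOS-67-20150815'
        endpoints='22,8080,80' wait=yes
      register: azure_vm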