I am trying to create a new service for one of my deployments, named node-js-deployment, in a GCE-hosted Kubernetes cluster.
I followed the documentation for create_namespaced_service.
This is the service data:
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "node-js-service"
    },
    "spec": {
        "selector": {
            "app": "node-js"
        },
        "ports": [
            {
                "protocol": "TCP",
                "port": 80,
                "targetPort": 8000
            }
        ]
    }
}
This is the Python code to create the service:
import kubernetes.client
from kubernetes.client.rest import ApiException
from pprint import pprint

api_instance = kubernetes.client.CoreV1Api()
namespace = 'default'
body = kubernetes.client.V1Service()  # V1Service
# Creating metadata
metadata = kubernetes.client.V1ObjectMeta()
metadata.name = "node-js-service"
# Creating spec
spec = kubernetes.client.V1ServiceSpec()
# Creating port object
ports = kubernetes.client.V1ServicePort()
ports.protocol = 'TCP'
ports.target_port = 8000
ports.port = 80
spec.ports = ports
spec.selector = {"app": "node-js"}
body.spec = spec
try:
    api_response = api_instance.create_namespaced_service(namespace, body, pretty='true')
    pprint(api_response)
except ApiException as e:
    print("Exception when calling CoreV1Api->create_namespaced_service: %s\n" % e)
Error:
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Tue, 21 Feb 2017 03:54:55 GMT', 'Content-Length': '227'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Service in version \"v1\" cannot be handled as a Service: only encoded map or array can be decoded into a struct","reason":"BadRequest","code":400}
But the service is created successfully if I pass the JSON directly. Not sure what I am doing wrong.
Any help is greatly appreciated, thank you.
From reading your code, it seems that you missed assigning the metadata to body.metadata. You also missed that the ports field of V1ServiceSpec is supposed to be a list, but you used a single V1ServicePort, so without testing I assume this should work:
api_instance = kubernetes.client.CoreV1Api()
namespace = 'default'
body = kubernetes.client.V1Service()  # V1Service
# Creating metadata
metadata = kubernetes.client.V1ObjectMeta()
metadata.name = "node-js-service"
body.metadata = metadata
# Creating spec
spec = kubernetes.client.V1ServiceSpec()
# Creating port object
port = kubernetes.client.V1ServicePort()
port.protocol = 'TCP'
port.target_port = 8000
port.port = 80
spec.ports = [port]
spec.selector = {"app": "node-js"}
body.spec = spec
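The create call itself is then unchanged from your snippet:

try:
    api_response = api_instance.create_namespaced_service(namespace, body, pretty='true')
    pprint(api_response)
except ApiException as e:
    print("Exception when calling CoreV1Api->create_namespaced_service: %s\n" % e)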
The definition could also be loaded from JSON/YAML directly, as shown in two of the examples in the official repo; see exec.py and create_deployment.py.
Your solution could then look like:
api_instance = kubernetes.client.CoreV1Api()
namespace = 'default'
manifest = {
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "node-js-service"
    },
    "spec": {
        "selector": {
            "app": "node-js"
        },
        "ports": [
            {
                "protocol": "TCP",
                "port": 80,
                "targetPort": 8000
            }
        ]
    }
}
try:
    api_response = api_instance.create_namespaced_service(namespace, manifest, pretty='true')
    pprint(api_response)
except ApiException as e:
    print("Exception when calling CoreV1Api->create_namespaced_service: %s\n" % e)
I have an existing azure-pipelines.yml file in my branch. I want to invoke this file via the Azure REST API and have Azure create the CI pipeline. I need to do it from Python code.
I have tried something like the code below, but I am getting an error related to HTTP 203: a "203 Non-Authoritative Information" response is returned when attempting to perform any action (GET/POST/etc.) through the Azure DevOps API.
The main focus is to create pipelines by code. Any existing/working examples would be helpful.
import json
import requests

api_url = "https://dev.azure.com/DevOps/Ops/_apis/pipelines?api-version=6.0-preview.1"
json_data = {
    "folder": "/",
    "name": "My Pipeline",
    "configuration": {
        "type": "yaml",
        "path": "/Boot/{{ project_name }}/pipelines/azure-pipelines.yaml",
        "repository": {
            "name": "Boot",
            "type": "azureReposGit"
        }
    }
}
headers = {"Content-Type": "application/json"}
response = requests.post(api_url, data=json.dumps(json_data), headers=headers)
#print(response.json())
print(response.status_code)
Here is a Python demo for you:
import requests
import json

def create_pipeline_basedon_yaml(Organization, Project, Repository, Yaml_File, Pipeline_Folder, Pipeline_Name, Personal_Access_Token):
    ########## get repo id ##########
    url_repoapi = "https://dev.azure.com/"+Organization+"/"+Project+"/_apis/git/repositories/"+Repository+"?api-version=4.1"
    payload_repoapi = {}
    headers_repoapi = {
        # Personal_Access_Token must already be the base64-encoded form of ":<PAT>"
        'Authorization': 'Basic '+Personal_Access_Token,
    }
    response_repoapi = requests.request("GET", url_repoapi, headers=headers_repoapi, data=payload_repoapi)
    repo_id = response_repoapi.json()['id']
    ########## create pipeline ##########
    url_pipelineapi = "https://dev.azure.com/"+Organization+"/"+Project+"/_apis/pipelines?api-version=6.0-preview.1"
    payload_pipelineapi = json.dumps({
        "configuration": {
            "path": Yaml_File,
            "repository": {
                "id": repo_id,
                "type": "azureReposGit"
            },
            "type": "yaml"
        },
        "folder": Pipeline_Folder,
        "name": Pipeline_Name
    })
    headers_pipelineapi = {
        'Authorization': 'Basic '+Personal_Access_Token,
        'Content-Type': 'application/json'}
    requests.request("POST", url_pipelineapi, headers=headers_pipelineapi, data=payload_pipelineapi)
Organization = "xxx"
Project = "xxx"
Repository = "xxx"
Yaml_File = "xxx.yml"
Pipeline_Folder = "test_folder"
Pipeline_Name = "Pipeline_basedon_yaml"
Personal_Access_Token = "xxx"
create_pipeline_basedon_yaml(Organization, Project, Repository, Yaml_File, Pipeline_Folder, Pipeline_Name, Personal_Access_Token)
I can successfully create the pipeline based on the specified yaml file.
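Note that the Authorization header above expects the PAT already base64-encoded in the ":<PAT>" form that Azure DevOps uses for Basic auth. A small helper (hypothetical name encode_pat) could produce it:

import base64

def encode_pat(raw_pat):
    # Azure DevOps Basic auth uses base64(":<PAT>"), i.e. an empty username.
    return base64.b64encode((":" + raw_pat).encode("ascii")).decode("ascii")

Personal_Access_Token = encode_pat("xxx")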
I have this local map variable (some AWS ECS task definition configurations I read from .json files):
tasks = {
  "service1" = {
    task_definition = {
      "cpu": 128,
      "environment": [
        {
          "name": "DB_HOST",
          "value": "X"
        }
      ],
      "essential": true,
      "healthCheck": {}
    }
  }
  "service2" = {
    "task_definition" = ...
  }
}
I want to change the DB_HOST value based on another module's output.
Worth noting: DB_HOST won't appear in every service, so it should be changed where present and added where missing.
Something like this:
tasks.x.task_definition.environment[x].value = module.example.db_host
-> DB_HOST = module.example.db_host
I didn't manage to do it when looping through the map keys.
Thanks in advance!
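One way to express this is a for expression that rebuilds each task definition, dropping any existing DB_HOST entry and appending the new one. A minimal, untested sketch (assuming the tasks map above and that module.example.db_host is a plain string):

locals {
  tasks_patched = {
    for svc, cfg in local.tasks : svc => merge(cfg, {
      task_definition = merge(cfg.task_definition, {
        # Drop any existing DB_HOST entry, then append the new value,
        # so it is replaced where present and added where missing.
        environment = concat(
          [for env in try(cfg.task_definition.environment, []) : env if env.name != "DB_HOST"],
          [{ name = "DB_HOST", value = module.example.db_host }]
        )
      })
    })
  }
}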
I am trying to create a Synapse workspace linked service using Terraform and am running into a constant snag with the required "type_properties_json" field.
When I try to establish a linked service to an SFTP resource type, I can do so through the portal no problem; however, trying to do so with Terraform constantly produces errors. I am using the JSON code format referenced here, but the "type_properties_json" field keeps erroring out; I believe it is expecting a string and I am instead providing a map[string] type.
The error I keep receiving during the terraform apply is json: cannot unmarshal string into Go value of type map[string]interface {}
My specific code looks like the following:
resource "azurerm_synapse_linked_service" "linked-service" {
synapse_workspace_id = azurerm_synapse_workspace.synapse.id
name = "name"
type = "Sftp"
type_properties_json = <<JSON
{
"host": "x.x.com",
"port": 22,
"skipHostKeyValidation": false,
"hostKeyFingerprint": "ssh-rsa 2048 xx:00:00:00:xx:00:x0:0x:0x:0x:0x:00:00:x0:x0:00",
"authenticationType": "Basic",
"userName": "whatever_name,
"password": "randompassw"
}
JSON
depends_on = [azurerm_synapse_firewall_rule.allow]
}
Running out of hope here and am now crowdsourcing to see if anyone else has ever run into this problem!
This is because of the password parameter you are passing. As per this Microsoft documentation, it should be passed as below:
"password": {
"type": "SecureString",
"value": "<value>"
}
Instead of
"password": <value>
I tested the same in my environment using your code and faced the exact same issue.
So, I used the below code, applying the solution mentioned above:
resource "azurerm_synapse_linked_service" "example" {
name = "SftpLinkedService"
synapse_workspace_id = azurerm_synapse_workspace.example.id
type = "Sftp"
type_properties_json = <<TYPE
{
"host": "xxx.xx.x.x",
"port": 22,
"skipHostKeyValidation": false,
"hostKeyFingerprint": "<SSH-publicKey>",
"authenticationType": "Basic",
"userName": "adminuser",
"password": {
"type": "SecureString",
"value": "<Value>"
}
}
TYPE
depends_on = [
azurerm_synapse_firewall_rule.example,
azurerm_synapse_firewall_rule.example1
]
}
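As an alternative sketch, the same payload could be built with Terraform's jsonencode from an HCL object, which avoids hand-written quoting mistakes like the missing quote above (values are placeholders, as before):

resource "azurerm_synapse_linked_service" "example" {
  name                 = "SftpLinkedService"
  synapse_workspace_id = azurerm_synapse_workspace.example.id
  type                 = "Sftp"
  type_properties_json = jsonencode({
    host                  = "xxx.xx.x.x"
    port                  = 22
    skipHostKeyValidation = false
    hostKeyFingerprint    = "<SSH-publicKey>"
    authenticationType    = "Basic"
    userName              = "adminuser"
    # The password must still be a SecureString object, per the fix above.
    password = {
      type  = "SecureString"
      value = "<Value>"
    }
  })
}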
I am trying to provision an Azure dashboard using a Terraform module and a template file, but I get an error message:
Call to function "templatefile" failed:
src/main/terraform/environment/modules/someservice/resources/dashboards/infrastructure-dashboard.tpl:41,39-55:
Invalid template interpolation value; Cannot include the given value in a
string template: string required., and 25 other diagnostic(s).
The error is caused by the code in the template: it does not accept the variable declared in the dashboard properties. The code in the template causing the error:
"clusterName": "${k8s_cluster_name}",
Terraform code:
resource "azurerm_dashboard" "infra-dashboard" {
name = "${upper(terraform.workspace)}-Infrastructure"
resource_group_name = azurerm_resource_group.rgp-dashboards.name
location = azurerm_resource_group.rgp-dashboards.location
tags = {
source = "terraform"
}
dashboard_properties = templatefile("${path.module}/resources/dashboards/infrastructure-dashboard.tpl",
{
md_content = var.infra_dashboard_md_content,
dashboard_title = var.infra_dashboard_title,
dashboard_subtitle = var.infra_dashboard_subtitle,
sub_id = data.azurerm_subscription.current.subscription_id,
rgp_k8s = azurerm_resource_group.seervice-k8s.name,
k8s_cluster_name = azurerm_kubernetes_cluster.service-k8s.*.name,
rgp_service_env_global_name = azurerm_resource_group.service-env-global.name,
log_analyt_wrkspc = local.log_analyt_wrkspc
})
}
Snippet from the template where the error occurs:
"1": {
"position": {
"x": 0,
"y": 4,
"colSpan": 9,
"rowSpan": 4
},
"metadata": {
"inputs": [
{
"name": "queryParams",
"value": {
"metricQueryId": "node-count",
"clusterName": "${k8s_cluster_name}",
"clusterResourceId": "/subscriptions/${sub_id}/resourceGroups/${rgp_k8s}/providers/Microsoft.ContainerService/managedClusters/${k8s_cluster_name}",
"workspaceResourceId": "/subscriptions/${sub_id}/resourcegroups/${rgp_service_env_global_name}/providers/microsoft.operationalinsights/workspaces/${log_analyt_wrkspc}",
"timeRange": {
"options": {},
"relative": {
"duration": 21600000
}
},
"cpuFilterSelection": "total",
"memoryFilterSelection": "total_memoryrss"
}
}
As luk2302 said, azurerm_kubernetes_cluster.service-k8s.*.name is not a string; it is a tuple, which is why templatefile rejects it with "string required". For example:
join(",", azurerm_kubernetes_cluster.service-k8s.*.name)
will join all entries with a ",". Replace the separator with whatever you need.
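If the splat is only there because the cluster uses count and there is exactly one instance, an alternative sketch is to pass the single name through instead of joining:

# Terraform >= 0.15: one() returns the single element (errors if there is more than one)
k8s_cluster_name = one(azurerm_kubernetes_cluster.service-k8s.*.name)
# Older versions: index the first element directly
k8s_cluster_name = azurerm_kubernetes_cluster.service-k8s[0].name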
Wondering if anyone has tackled this. I need to be able to generate a list of egress CIDR blocks that is currently available for listing over an API. Sample output is the following:
[
  {
    "description": "blahnet-public-acl",
    "metadata": {
      "broadcast": "192.168.1.191",
      "cidr": "192.168.1.128/26",
      "ip": "192.168.1.128",
      "ip_range": {
        "start": "192.168.1.128",
        "end": "192.168.1.191"
      },
      "netmask": "255.255.255.192",
      "network": "192.168.1.128",
      "prefix": "26",
      "size": "64"
    }
  },
  {
    "description": "blahnet-public-acl",
    "metadata": {
      "broadcast": "192.168.160.127",
      "cidr": "192.168.160.0/25",
      "ip": "192.168.160.0",
      "ip_range": {
        "start": "192.168.160.0",
        "end": "192.168.160.127"
      },
      "netmask": "255.255.255.128",
      "network": "192.168.160.0",
      "prefix": "25",
      "size": "128"
    }
  }
]
So, I need to convert it for use in an Azure Firewall rule:
###############################################################################
# Firewall Rules - Allow Access To TEST VMs
###############################################################################
resource "azurerm_firewall_network_rule_collection" "azure-firewall-azure-test-access" {
for_each = local.egress_ips
name = "azure-firewall-azure-test-rule"
azure_firewall_name = azurerm_firewall.public_to_test.name
resource_group_name = var.resource_group_name
priority = 105
action = "Allow"
rule {
name = "test-access"
source_addresses = local.egress_ips[each.key]
destination_ports = ["43043"]
destination_addresses = ["172.16.0.*"]
protocols = [ "TCP"]
}
}
So, the bottom line is that the allowed IP addresses have to be a list of strings for the "source_addresses" parameter, such as this:
["192.168.44.0/24","192.168.7.0/27","192.168.196.0/24","192.168.229.0/24","192.168.138.0/25",]
I configured a data_sources.tf file:
data "http" "allowed_networks_v1" {
  url = "https://testapiserver.com/api/allowed/networks/v1"
}
...and in locals.tf, I need to configure:
locals {
  allowed_networks_json = jsondecode(data.http.allowed_networks_v1.body)
  egress_ips            = ...
}
...and that's where I am stuck. How can I parse that data in the locals.tf file so I can reference it from within Terraform?
Thanks a metric ton!
I'm assuming that the list of strings you are referring to is the set of metadata.cidr values; we can extract those with a for expression in a local, and also apply distinct just in case we get duplicates.
Here is some sample code:
data "http" "allowed_networks_v1" {
url = "https://raw.githack.com/heldersepu/hs-scripts/master/json/networks.json"
}
locals {
allowed_networks_json = jsondecode(data.http.allowed_networks_v1.body)
distinct_cidrs = distinct(flatten([
for key, value in local.allowed_networks_json : [
value.metadata.cidr
]
]))
}
output "data" {
value = local.distinct_cidrs
}
and here is the output of a plan on that:
terraform plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:

Terraform will perform the following actions:

Plan: 0 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + data = [
      + "192.168.1.128/26",
      + "192.168.160.0/25",
    ]
Here is the code for your second sample:
data "http" "allowed_networks_v1" {
url = "https://raw.githack.com/akamalov/testfile/master/networks.json"
}
locals {
allowed_networks_json = jsondecode(data.http.allowed_networks_v1.body)
distinct_cidrs = distinct(flatten([
for key, value in local.allowed_networks_json.egress_nat_ranges : [
value.metadata.cidr
]
]))
}
output "data" {
value = local.distinct_cidrs
}
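To wire that into the firewall rule from the question, a sketch (assuming a single rule collection that takes the whole list, rather than for_each over per-key entries):

resource "azurerm_firewall_network_rule_collection" "azure-firewall-azure-test-access" {
  name                = "azure-firewall-azure-test-rule"
  azure_firewall_name = azurerm_firewall.public_to_test.name
  resource_group_name = var.resource_group_name
  priority            = 105
  action              = "Allow"

  rule {
    name                  = "test-access"
    # All distinct CIDRs fetched from the API, as a list of strings.
    source_addresses      = local.distinct_cidrs
    destination_ports     = ["43043"]
    destination_addresses = ["172.16.0.*"]
    protocols             = ["TCP"]
  }
}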