How to delete a node taint using Python's Kubernetes library - python-3.x

Set the taint like this:
v3.patch_node('nodename',
              {"spec": {"taints": [{"effect": "NoSchedule", "key": "test", "value": "1", "tolerationSeconds": "300"}]}})
However, how do I remove the taint?

This was pretty non-intuitive to me, but here's how I accomplished this.
def taint_node(context, node_name):
    kube_client = setup_kube_client(context)
    taint_patch = {"spec": {"taints": [{"effect": "NoSchedule", "key": "test", "value": "True"}]}}
    return kube_client.patch_node(node_name, taint_patch)

def untaint_node(context, node_name):
    kube_client = setup_kube_client(context)
    remove_taint_patch = {"spec": {"taints": []}}
    return kube_client.patch_node(node_name, remove_taint_patch)
That worked for me, but it removes ALL taints, which may not be what you want.
I tried the following:
def untaint_node(context, node_name):
    kube_client = setup_kube_client(context)
    remove_taint_patch = {"spec": {"taints": [{"effect": "NoSchedule-", "key": "test", "value": "True"}]}}
    return kube_client.patch_node(node_name, remove_taint_patch)
but encountered server side validation preventing it (because the effect isn't in the collection of supported values):
kubernetes.client.exceptions.ApiException: (422)
Reason: Unprocessable Entity
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'bfbad6e1-f37c-4090-898b-b2b9c5500425', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '7c028f53-f0a4-46bd-b400-68641158da78', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'ef92e801-ce03-4abb-a607-20921bf82547', 'Date': 'Sat, 18 Sep 2021 02:45:37 GMT', 'Content-Length': '759'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Node \"aks-agentpool-33938206-vmss000000\" is invalid: metadata.taints[0].effect: Unsupported value: \"NoSchedule-\": supported values: \"NoSchedule\", \"PreferNoSchedule\", \"NoExecute\"","reason":"Invalid","details":{"name":"aks-agentpool-33938206-vmss000000","kind":"Node","causes":[{"reason":"FieldValueNotSupported","message":"Unsupported value: \"NoSchedule-\": supported values: \"NoSchedule\", \"PreferNoSchedule\", \"NoExecute\"","field":"metadata.taints[0].effect"},{"reason":"FieldValueNotSupported","message":"Unsupported value: \"NoSchedule-\": supported values: \"NoSchedule\", \"PreferNoSchedule\", \"NoExecute\"","field":"metadata.taints[0].effect"}]},"code":422}
Finally, if you need to remove a specific taint, you can always shell out to kubectl (though that's kinda cheating, huh?):
import os
import subprocess

def untaint_node_with_cmd(context, node_name):
    cmd_env = os.environ.copy()
    child = subprocess.Popen(['kubectl', 'taint', 'nodes', node_name, 'test=True:NoSchedule-', '--context', context], env=cmd_env)
    exit_code = child.wait()
    return exit_code
Sadly, it doesn't look like this issue has gotten much love in the k8s python client repo. https://github.com/kubernetes-client/python/issues/161

A better way is to untaint only a particular taint. Done this way, other taints will not get removed; only the particular taint is dropped.
def untaint_node(context, node_name, taint_key):
    kube_client = setup_kube_client(context)
    node = kube_client.list_node(field_selector="metadata.name=" + node_name).items[0]
    taints = node.spec.taints or []
    filtered_taints = [taint for taint in taints if taint.key != taint_key]
    body = {"spec": {"taints": filtered_taints}}
    return kube_client.patch_node(node_name, body)
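If you'd rather not replace the whole taints list, the same removal can be expressed as a JSON patch so the server deletes only the matching entries. This is a sketch under assumptions: `build_remove_taint_patch` and `remove_taint` are hypothetical helpers, and recent versions of the Python client select the `application/json-patch+json` content type when the patch body is a list (verify against your client version):

```python
def build_remove_taint_patch(taints, taint_key):
    """Build a JSON-patch body that removes every taint whose key matches.

    Indices are emitted in reverse order so earlier removals do not shift
    the paths of later entries. `taints` may hold dicts or V1Taint objects.
    """
    key_of = lambda t: t["key"] if isinstance(t, dict) else t.key
    return [
        {"op": "remove", "path": "/spec/taints/%d" % i}
        for i in reversed(range(len(taints)))
        if key_of(taints[i]) == taint_key
    ]

def remove_taint(kube_client, node_name, taint_key):
    # Cluster access: requires the kubernetes package and a reachable cluster.
    taints = kube_client.read_node(node_name).spec.taints or []
    patch = build_remove_taint_patch(taints, taint_key)
    # A list body is what makes the client send application/json-patch+json.
    return kube_client.patch_node(node_name, patch) if patch else None
```

The pure helper can be unit-tested without a cluster, which also documents the path format the API server expects.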

There's nothing special here; it's a standard update or patch call on the Node object.

Client libraries just talk to the kube-apiserver, and the apiserver validates the request body, so there is no need for a dedicated taint-removal call in the Python client library.
I think you can do it by calling
v3.patch_node('cn-shanghai.10.10.10.249',
              {"spec": {"taints": [{"effect": "NoSchedule-", "key": "test", "value": "1", "tolerationSeconds": "300"}]}})
An example can be found in python-client examples repository.
from kubernetes import client, config

def main():
    config.load_kube_config()
    api_instance = client.CoreV1Api()
    body = {
        "metadata": {
            "labels": {
                "foo": "bar",
                "baz": None}
        }
    }
    # Listing the cluster nodes
    node_list = api_instance.list_node()
    print("%s\t\t%s" % ("NAME", "LABELS"))
    # Patching the node labels
    for node in node_list.items:
        api_response = api_instance.patch_node(node.metadata.name, body)
        print("%s\t%s" % (node.metadata.name, node.metadata.labels))

if __name__ == '__main__':
    main()
Reference: https://github.com/kubernetes-client/python/blob/c3f1a1c61efc608a4fe7f103ed103582c77bc30a/examples/node_labels.py
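As an aside, the labels example can delete the `baz` label by sending None because labels form a map, and merge-patch semantics drop map keys set to null; taints are a list, which merge patches replace wholesale, and that is why a single taint cannot be deleted the same way. A minimal sketch of JSON Merge Patch (RFC 7386) semantics, which Kubernetes' default strategic merge patch follows for plain maps and for lists without a merge key:

```python
def merge_patch(target, patch):
    """JSON Merge Patch (RFC 7386): a null value deletes a map key,
    while any non-map value (including a list) replaces the original."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null means "delete this key"
        else:
            result[key] = merge_patch(result.get(key), value)
    return result
```

So `{"labels": {"baz": None}}` removes one label, but `{"taints": [...]}` always replaces the entire taints list.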

Related

Adding to a database creates a new database within page in notion

Here's my current code:
import json
import requests

def createPage(database_id, page_id, headers, url):
    newPageData = {
        "parent": {
            "database_id": database_id,
            "page_id": page_id,
        },
        "properties": {
            "Name": {"title": {"text": "HI THERE"}},
        },
    }
    data = json.dumps(newPageData)
    res = requests.request("POST", url, headers=headers, data=data)
    print(res.status_code)
    print(res.text)

database_id = "ea28de8e9cca4f62b4c4da3522869d03"
page_id = "697fd88570b3420aaa928fa28d0bf230"
url = "https://api.notion.com/v1/databases/"
key = "KEY"
payload = {}
headers = {
    "Authorization": f"Bearer {key}",
    "accept": "application/json",
    "Notion-Version": "2021-05-11",
    "content-type": "application/json",
}
createPage(database_id, page_id, headers, url)
But every time I run this, I keep getting new databases within the page (screenshots of the page before and after running the script showed a new database appearing on each run). I would like the script to add a page to the existing database instead. How can that be achieved?
It looks like you're calling the API URL that creates a new database, not the one that creates a new page.
This URL, https://api.notion.com/v1/databases/, is for creating new databases, not for creating pages.
In order to create a new page within a database, use the following URL:
https://api.notion.com/v1/pages
There you'll need to provide the previously created database id, among other identifiers.
More detailed documentation can be found here
https://developers.notion.com/reference/post-page
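Putting that together, a minimal sketch of the corrected call might look like this. `build_create_page_payload` is a hypothetical helper, the title-property shape follows Notion's page-creation reference, and stdlib urllib.request is used so no extra dependency is needed:

```python
import json
import urllib.request

def build_create_page_payload(database_id, title):
    # The parent is the database itself, so a new page (row) is created
    # inside it rather than a new database.
    return {
        "parent": {"database_id": database_id},
        "properties": {
            # A title property holds a list of rich-text objects.
            "Name": {"title": [{"text": {"content": title}}]},
        },
    }

def create_page(database_id, title, key):
    # Live call: needs a real integration token shared with the database.
    payload = build_create_page_payload(database_id, title)
    req = urllib.request.Request(
        "https://api.notion.com/v1/pages",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {key}",
            "Notion-Version": "2021-05-11",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as res:
        return json.load(res)
```

Note the endpoint is /v1/pages and the parent no longer carries a page_id; the database id alone identifies where the page goes.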

AWS Lambda Function Issue with Slack Webhook

I'm using an AWS Lambda function to send alerts to our Slack channel. But, due to some unknown issue, I'm not getting the Slack alert, and I'm not getting any kind of error message from the Lambda function either. The logs show that the function ran successfully without any error, but I do not receive any alert.
code:
import json, sys, csv, os
import requests

def lambda_handler(event, context):
    def Send2Slack(message):
        if __name__ == '__main__':
            print('inside slack function')
            url = "webhook_URL"
            title = (f"New Incoming Message")
            slack_data = {
                "username": "abc",
                "channel": "xyz",
                "attachments": [
                    {
                        "color": "#ECB22E",
                        "fields": [
                            {
                                "title": title,
                                "value": message,
                                "short": "false",
                            }
                        ]
                    }
                ]
            }
            byte_length = str(sys.getsizeof(slack_data))
            headers = {'Content-Type': "application/json", 'Content-Length': byte_length}
            response = requests.post(url, data=json.dumps(slack_data), headers=headers)
            if response.status_code != 200:
                raise Exception(response.status_code, response.text)

    output = "Hello Slack "
    Send2Slack(output)
Please let me know where I'm going wrong and help me fix this issue.

I was able to fix this issue.
def Send2Slack(message):
    if __name__ == '__main__':
Once I removed if __name__ == '__main__': from the Send2Slack function, it worked. (Inside Lambda, __name__ is the module name rather than '__main__', so nothing under that guard ever ran.) Otherwise, I was not able to get into the function.
Thanks for all your help.
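For reference, a hedged sketch of the handler with the guard removed. The webhook URL and field values are placeholders from the question; stdlib urllib.request stands in for requests, which is typically not bundled with the Lambda Python runtime, and `short` is sent as a boolean as Slack's attachment fields expect:

```python
import json
import urllib.request

WEBHOOK_URL = "webhook_URL"  # placeholder: the real Slack webhook URL

def build_slack_payload(message, title="New Incoming Message"):
    # Same attachment structure as the original code.
    return {
        "username": "abc",
        "channel": "xyz",
        "attachments": [
            {
                "color": "#ECB22E",
                "fields": [{"title": title, "value": message, "short": False}],
            }
        ],
    }

def send_to_slack(message):
    # No __name__ guard here: inside Lambda, __name__ is the module name,
    # not "__main__", so a guarded body would never run.
    data = json.dumps(build_slack_payload(message)).encode()
    req = urllib.request.Request(
        WEBHOOK_URL, data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as response:
        if response.status != 200:
            raise Exception(response.status, response.read())

def lambda_handler(event, context):
    send_to_slack("Hello Slack")
```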

moto testing for ecs

I have a python method that lists certain ECS Fargate services based on some tags and some names using boto3 get_paginator.
def list_service_name(environment,
                      resource_owner_name,
                      ecs_client):
    list_of_service = list()
    cluster_name = "my cluster name " + environment
    target_string = "-somedummy"
    resource_owner_tag = resource_owner_name
    service_paginator = ecs_client.get_paginator('list_services')
    for page in service_paginator.paginate(cluster=cluster_name,
                                           launchType='FARGATE'):
        for service in page['serviceArns']:
            response = ecs_client.list_tags_for_resource(resourceArn=service)
            for tags in response['tags']:
                if tags['key'] == 'ResourceOwner' and \
                        tags['value'] == resource_owner_tag and \
                        service.endswith(target_string):
                    list_of_service.append(service)
    return list_of_service
Now I would like to test this using moto.
Hence I have created a conftest.py where I define all the moto mock connections to services like ECS. I have also created a test_main.py file, shown below, where I create dummy ECS Fargate services. But for some reason, when I assert on the outcome of the main method in the test file, the service list comes back empty, whereas I would expect to see test-service-for-successful. Is there something I'm missing, or is pagination still not available in moto?
import pytest

from my_code.main import *

@pytest.fixture
def env_name():
    return "local"

@pytest.fixture
def cluster_name(env_name):
    return "my dummy" + env_name + "cluster_name"

@pytest.fixture
def successful_service_name():
    return "test-service-for-successful"

@pytest.fixture
def un_successful_service_name():
    return "test-service-for-un-successful"

@pytest.fixture
def resource_owner():
    return "dummy_tag"

@pytest.fixture
def test_create_service(ecs_client,
                        cluster_name,
                        successful_service_name,
                        un_successful_service_name,
                        resource_owner):
    _ = ecs_client.create_cluster(clusterName=cluster_name)
    _ = ecs_client.register_task_definition(
        family="test_ecs_task",
        containerDefinitions=[
            {
                "name": "hello_world",
                "image": "docker/hello-world:latest",
                "cpu": 1024,
                "memory": 400,
                "essential": True,
                "environment": [
                    {"name": "environment", "value": "local"}
                ],
                "logConfiguration": {"logDriver": "json-file"},
            }
        ],
    )
    ecs_client.create_service(
        cluster=cluster_name,
        serviceName=successful_service_name,
        taskDefinition="test_ecs_task",
        desiredCount=0,
        launchType="FARGATE",
        tags=[{"key": "resource_owner", "value": resource_owner}]
    )
    ecs_client.create_service(
        cluster=cluster_name,
        serviceName=un_successful_service_name,
        taskDefinition="test_ecs_task",
        desiredCount=0,
        launchType="FARGATE",
        tags=[{"key": "resource_owner", "value": resource_owner}]
    )
    yield

def test_list_service_name(env_name,
                           resource_owner,
                           ecs_client):
    objects = list_service_name(env_name,
                                resource_owner,
                                ecs_client)
    # here objects is []
    # whereas I should see successful_service_name

Kubernetes client-python creating a service error

I am trying to create a new service for one of my deployments, named node-js-deployment, in a GCE-hosted Kubernetes cluster.
I followed the documentation for create_namespaced_service.
This is the service data:
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "node-js-service"
    },
    "spec": {
        "selector": {
            "app": "node-js"
        },
        "ports": [
            {
                "protocol": "TCP",
                "port": 80,
                "targetPort": 8000
            }
        ]
    }
}
This is the Python function to create the service
api_instance = kubernetes.client.CoreV1Api()
namespace = 'default'
body = kubernetes.client.V1Service()  # V1Service

# Creating metadata
metadata = kubernetes.client.V1ObjectMeta()
metadata.name = "node-js-service"

# Creating spec
spec = kubernetes.client.V1ServiceSpec()

# Creating port object
ports = kubernetes.client.V1ServicePort()
ports.protocol = 'TCP'
ports.target_port = 8000
ports.port = 80
spec.ports = ports
spec.selector = {"app": "node-js"}
body.spec = spec

try:
    api_response = api_instance.create_namespaced_service(namespace, body, pretty=pretty)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling CoreV1Api->create_namespaced_service: %s\n" % e)
Error:
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Tue, 21 Feb 2017 03:54:55 GMT', 'Content-Length': '227'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Service in version \"v1\" cannot be handled as a Service: only encoded map or array can be decoded into a struct","reason":"BadRequest","code":400}
But the service is being created if I am passing JSON. Not sure what I am doing wrong.
Any help is greatly appreciated, thank you.
From reading your code, it seems that you missed assigning the metadata to body.metadata. You also missed that the ports field of the V1ServiceSpec is supposed to be a list, but you used a single V1ServicePort, so without testing I assume this should work:
api_instance = kubernetes.client.CoreV1Api()
namespace = 'default'
body = kubernetes.client.V1Service()  # V1Service
# Creating Meta Data
metadata = kubernetes.client.V1ObjectMeta()
metadata.name = "node-js-service"
body.metadata = metadata
# Creating spec
spec = kubernetes.client.V1ServiceSpec()
# Creating Port object
port = kubernetes.client.V1ServicePort()
port.protocol = 'TCP'
port.target_port = 8000
port.port = 80
spec.ports = [ port ]
spec.selector = {"app": "node-js"}
body.spec = spec
The definition could also be loaded from json / yaml directly, as shown in two of the examples within the official repo - see exec.py and create_deployment.py.
Your solution could then look like:
api_instance = kubernetes.client.CoreV1Api()
namespace = 'default'
manifest = {
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "node-js-service"
    },
    "spec": {
        "selector": {
            "app": "node-js"
        },
        "ports": [
            {
                "protocol": "TCP",
                "port": 80,
                "targetPort": 8000
            }
        ]
    }
}

try:
    api_response = api_instance.create_namespaced_service(namespace, manifest, pretty='true')
    pprint(api_response)
except ApiException as e:
    print("Exception when calling CoreV1Api->create_namespaced_service: %s\n" % e)

get this JSON representation of your neo4j objects

I want to get data from this array of JSON objects:
[
    {
        "outgoing_relationships": "http://myserver:7474/db/data/node/4/relationships/out",
        "data": {
            "family": "3",
            "batch": "/var/www/utils/batches/d32740d8-b4ad-49c7-8ec8-0d54fcb7d239.resync",
            "name": "rahul",
            "command": "add",
            "type": "document"
        },
        "traverse": "http://myserver:7474/db/data/node/4/traverse/{returnType}",
        "all_typed_relationships": "http://myserver:7474/db/data/node/4/relationships/all/{-list|&|types}",
        "property": "http://myserver:7474/db/data/node/4/properties/{key}",
        "self": "http://myserver:7474/db/data/node/4",
        "properties": "http://myserver:7474/db/data/node/4/properties",
        "outgoing_typed_relationships": "http://myserver:7474/db/data/node/4/relationships/out/{-list|&|types}",
        "incoming_relationships": "http://myserver:7474/db/data/node/4/relationships/in",
        "extensions": {},
        "create_relationship": "http://myserver:7474/db/data/node/4/relationships",
        "paged_traverse": "http://myserver:7474/db/data/node/4/paged/traverse/{returnType}{?pageSize,leaseTime}",
        "all_relationships": "http://myserver:7474/db/data/node/4/relationships/all",
        "incoming_typed_relationships": "http://myserver:7474/db/data/node/4/relationships/in/{-list|&|types}"
    }
]
What I tried is:
def messages = [];
for (i in families) {
    messages?.add(i);
}
How can I get families.data.name into the messages array?
Here is what I tried:
def messages = [];
for (i in families) {
    def map = new groovy.json.JsonSlurper().parseText(i);
    def msg = map*.data.name;
    messages?.add(i);
}
return messages;
and get this error :
javax.script.ScriptException: groovy.lang.MissingMethodException: No signature of method: groovy.json.JsonSlurper.parseText() is applicable for argument types: (com.tinkerpop.blueprints.pgm.impls.neo4j.Neo4jVertex) values: [v[4]]\nPossible solutions: parseText(java.lang.String), parse(java.io.Reader)
Or use Groovy's native JSON parsing:
def families = new groovy.json.JsonSlurper().parseText( jsonAsString )
def messages = families*.data.name
Since you edited the question to give us the information we needed, you can try:
def messages = [];
families.each { i ->
    def map = new groovy.json.JsonSlurper().parseText( i.toString() )
    messages.addAll( map*.data.name )
}
messages
Though it should be said that the toString() method in com.tinkerpop.blueprints.pgm.impls.neo4j.Neo4jVertex makes no guarantees to be valid JSON... You should probably be using the getProperty( name ) function of Neo4jVertex rather than relying on a side-effect of toString()
What are you doing to generate the first bit of text? (You state it is JSON but make no mention of how it's created.)
Use JSON-lib.
GJson.enhanceClasses()
def families = json_string as JSONArray
def messages = families.collect {it.data.name}
If you are using Groovy 1.8, you don't need JSON-lib anymore as a JsonSlurper is included in the GDK.
import groovy.json.JsonSlurper
def families = new JsonSlurper().parseText(json_string)
def messages = families.collect { it.data.name }