What is Chef attribute precedence?

I have a default attribute (e.g. :port) defined in my environment file.
When I define the same attribute in my role, I expect Chef to pick up the attribute from the role file, but that's not happening.
For example, the value of "port" in the environment file is 8080, and the value of "port" in the role file is 9090.
I am expecting the value of port to be 9090, but it actually comes back as 8080.
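Roughly, the two definitions look like this (file names and exact nesting are simplified for illustration):

# environments/my_env.rb (simplified)
name "my_env"
default_attributes(
  "port" => 8080
)

# roles/my_role.rb (simplified)
name "my_role"
default_attributes(
  "port" => 9090
)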
What am I doing wrong?

Related

Wrong connection port despite Kubernetes deployments/services ports specified

It might take a while to explain what I'm trying to do but bear with me please.
I have the following infrastructure specified:
I have a job called questo-server-deployment (I know, confusing but this was the only way to access the deployment without using ingress on minikube)
This is how the parts should talk to one another:
And here you can find the entire Kubernetes/Terraform config file for the above setup
I have 2 endpoints exposed from the node.js app (questo-server-deployment)
I'm making the requests using 10.97.189.215 which is the questo-server-service external IP address (as you can see in the first picture)
So I have 2 endpoints:
health - which simply returns 200 OK from the node.js app - and this part is fine confirming the node app is working as expected.
dynamodb - which should be able to send a request to the questo-dynamodb-deployment (pod) and get a response back, but it can't.
When I print env vars I'm getting the following:
➜ kubectl -n minikube-local-ns exec questo-server-deployment--1-7ptnz -- printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=questo-server-deployment--1-7ptnz
DB_DOCKER_URL=questo-dynamodb-service
DB_REGION=local
DB_SECRET_ACCESS_KEY=local
DB_TABLE_NAME=Questo
DB_ACCESS_KEY=local
QUESTO_SERVER_SERVICE_PORT_4000_TCP=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_PORT=8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PORT=8000
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
QUESTO_SERVER_SERVICE_SERVICE_HOST=10.97.189.215
QUESTO_SERVER_SERVICE_PORT=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PROTO=tcp
QUESTO_SERVER_SERVICE_PORT_4000_TCP_ADDR=10.97.189.215
KUBERNETES_PORT_443_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_ADDR=10.107.45.125
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
QUESTO_SERVER_SERVICE_SERVICE_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_HOST=10.107.45.125
QUESTO_DYNAMODB_SERVICE_PORT=tcp://10.107.45.125:8000
KUBERNETES_SERVICE_PORT_HTTPS=443
NODE_VERSION=12.22.7
YARN_VERSION=1.22.15
HOME=/root
So it looks like the configuration is aware of the dynamodb address and port:
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
You'll also notice in the above env variables that I specified:
DB_DOCKER_URL=questo-dynamodb-service
This is supposed to be the questo-dynamodb-service url:port, which I'm assigning to the config in the ConfigMap and which is then used in the questo-server-deployment (job).
Also, when I log:
kubectl logs -f questo-server-deployment--1-7ptnz -n minikube-local-ns
I'm getting results indicating that the app (node.js) tried to connect to the db (dynamodb), but on the wrong port: 443 instead of 8000.
The DB_DOCKER_URL should contain the full address (with port) to the questo-dynamodb-service
What am I doing wrong here?
Edit ----
I've explicitly assigned port 8000 to DB_DOCKER_URL as suggested in the answer, but now I'm getting a different error.
It seems to me there is some kind of default behaviour in Kubernetes where it tries to communicate between pods using https?
Any ideas what needs to be done here?
How about specifying the port in the ConfigMap:
...
data = {
  DB_DOCKER_URL = "${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
  ...
Otherwise it may default to 443.
Answering my own question in case anyone has an equally brilliant idea of running local dynamodb in a minikube cluster.
The issue was not only with the port, but also with the protocol, so the final answer to the question is to modify the ConfigMap as follows:
data = {
  DB_DOCKER_URL = "http://${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
  ...
}
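For context, the surrounding Terraform resource looks roughly like this (the resource name, metadata and the other keys are illustrative; only the DB_DOCKER_URL line matters):

# Hypothetical sketch of the ConfigMap resource around the data block above
resource "kubernetes_config_map" "questo_server_config" {
  metadata {
    name      = "questo-server-config"
    namespace = "minikube-local-ns"
  }

  data = {
    DB_DOCKER_URL = "http://${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
    DB_TABLE_NAME = "Questo"
    DB_REGION     = "local"
  }
}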
As a side note:
Also, when you are running various scripts to create a dynamodb table in your amazon/dynamodb-local container, make sure you use the same region both for creating the table, like so:
#!/bin/bash
aws dynamodb create-table \
  --cli-input-json file://questo_db_definition.json \
  --endpoint-url http://questo-dynamodb-service:8000 \
  --region local
And the same region when querying the data.
Even though this is just a local copy, where you can type anything you want as the value of your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (and actually the AWS_REGION as well), the region has to match.
If you query the db with a different region from the one it was created with, you get the Cannot do operations on a non-existent table error.
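For example, a query sketch that keeps the endpoint and region consistent with the create-table script above (the table name is taken from the DB_TABLE_NAME value shown earlier):

#!/bin/bash
# Scan the table using the same endpoint and region it was created with
aws dynamodb scan \
  --table-name Questo \
  --endpoint-url http://questo-dynamodb-service:8000 \
  --region local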

Terraform cyclic dependency issue when referencing IP addresses to generate config file

I am trying to set up an AWS environment with 2 EC2 instances in a VPC that are configured to run a piece of software that requires a config file containing the IP address of the other EC2 instance. To do this, I am creating the config file from a template that is rendered into the user data used to start each instance, like this:
data "template_file" "init_relay" {
template = file("${path.module}/initRelay.tpl")
vars = {
port = var.node_communication_port
ip = module.block-producing-node.private_ip[0]
self_ip = module.relay-node.public_ip
}
}
module "relay-node" {
source = "terraform-aws-modules/ec2-instance/aws"
name = "relay-node"
ami = var.node_ami
key_name = "aws-keys"
user_data = data.template_file.init_relay.rendered
instance_type = var.instance_type
subnet_id = module.vpc.public_subnets[0]
vpc_security_group_ids = [module.relay_node_sg.this_security_group_id]
associate_public_ip_address = true
monitoring = true
root_block_device = [
{
volume_type = "gp2"
volume_size = 35
},
]
tags = {
Name = "Relay Node"
Environment = var.environment_tag
Version = var.pool_version
}
}
data "template_file" "init_block_producer" {
template = "${file("${path.module}/initBlockProducer.tpl")}"
vars = {
port = var.node_communication_port
ip = module.relay-node.private_ip
self_ip = module.block-producing-node.private_ip
}
}
module "block-producing-node" {
source = "terraform-aws-modules/ec2-instance/aws"
name = "block-producing-node"
ami = var.node_ami
key_name = "aws-keys"
user_data = data.template_file.init_block_producer.rendered
instance_type = var.instance_type
subnet_id = module.vpc.public_subnets[0]
vpc_security_group_ids = [module.block_producing_node_sg.this_security_group_id]
associate_public_ip_address = true
monitoring = true
root_block_device = [
{
volume_type = "gp2"
volume_size = 35
},
]
tags = {
Name = "Block Producing Node"
Environment = var.environment_tag
Version = var.pool_version
}
}
but that gives me a cyclic dependency error:
» terraform apply
Error: Cycle: module.relay-node.output.public_ip, module.block-producing-node.output.private_ip, data.template_file.init_relay, module.relay-node.var.user_data, module.relay-node.aws_instance.this, module.relay-node.output.private_ip, data.template_file.init_block_producer, module.block-producing-node.var.user_data, module.block-producing-node.aws_instance.this
It makes sense to me why I am getting this error: in order to generate the config file for one EC2 instance, the other instance already needs to exist and have an IP address assigned to it. But I don't know how to work around this.
How do I reference the IP address of the other EC2 in the template file in a way that doesn't cause a cyclic dependency issue?
Generally-speaking, the user data of an EC2 instance cannot contain any of the IP addresses of the instance because the user data is submitted as part of launching the instance and cannot be changed after the instance is launched, and the IP address (unless you specify an explicit one when launching) is also assigned during instance launch, as part of creating the implied main network interface.
If you have only a single instance and it needs to know its own IP address then the easiest answer is for some software installed in your instance to ask the operating system which IP address has been assigned to the main network interface. The operating system already knows the IP address as part of configuring the interface using DHCP, and so there's no need to also pass it in via user data.
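For example, a rough sketch of asking the operating system directly (assuming the primary interface is named eth0, which varies by AMI):

# Read the address DHCP assigned to the primary network interface
ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1

# Or, on most Linux distributions, take the first address reported by:
hostname -I | awk '{print $1}'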
A more common problem, though, is when you have a set of instances that all need to talk to each other, such as to form some sort of cluster, and so they need the IP addresses of their fellows in addition to their own IP addresses. In that situation, there are broadly-speaking two approaches:
Arrange for Terraform to publish the IP addresses somewhere that will allow the software running in the instances to retrieve them after the instance has booted.
For example, you could publish the list in AWS SSM Parameter Store using aws_ssm_parameter and then have the software in your instance retrieve it from there, or you could assign all of your instances into a VPC security group and then have the software in your instance query the VPC API to enumerate the IP addresses of all of the network interfaces that belong to that security group.
All variants of this strategy have the problem that the software in your instances may start up before the IP address data is available or before it's complete. Therefore it's usually necessary to periodically poll whatever data source is providing the IP addresses in case new addresses appear. On the other hand, that capability also lends itself well to autoscaling systems where Terraform is not directly managing the instances.
This is the technique used by ElasticSearch EC2 Discovery, for example, looking for network interfaces belonging to a particular security group, or carrying specific tags, etc.
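A minimal sketch of the SSM Parameter Store variant described above (the parameter name and resource addresses are hypothetical):

# Publish the cluster's private IP addresses where the instances can read them after boot
resource "aws_ssm_parameter" "cluster_ips" {
  name  = "/example/cluster/private-ips"
  type  = "StringList"
  value = join(",", aws_instance.example[*].private_ip)
}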
Reserve IP addresses for your instances ahead of creating them so that the addresses will be known before the instance is created.
When we create an aws_instance without saying anything about network interfaces, the EC2 system implicitly creates a primary network interface and chooses a free IP address from whatever subnet the instance is bound to. However, you have the option to create your own network interfaces that are managed separately from the instances they are attached to, which both allows you to reserve a private IP address without creating an instance and allows a particular network interface to be detached from one instance and then connected to another, preserving the reserved IP address.
aws_network_interface is the AWS provider resource type for creating an independently-managed network interface. For example:
resource "aws_network_interface" "example" {
subnet_id = aws_subnet.example.id
}
The aws_network_interface resource type has a private_ips attribute whose first element is equivalent to the private_ip attribute on an aws_instance, so you can refer to aws_network_interface.example.private_ips[0] to get the IP address that was assigned to the network interface when it was created, even though it's not yet attached to any EC2 instance.
When you declare the aws_instance you can include a network_interface block to ask EC2 to attach the pre-existing network interface instead of creating a new one:
resource "aws_instance" "example" {
# ...
user_data = templatefile("${path.module}/user_data.tmpl", {
private_ip = aws_network_interface.example.private_ips[0]
})
network_interface {
device_index = 0 # primary interface
network_interface_id = aws_network_interface.example.id
}
}
Because the network interface is now a separate resource, you can use its attributes as part of the instance configuration. I showed only a single network interface and a single instance above in order to focus on the question as stated, but you could also use resource for_each or count on both resources to create a set of instances and then use aws_network_interface.example[*].private_ips[0] to pass all of the IP addresses into your user_data template.
A caveat with this approach is that because the network interfaces and instances are separate it is likely that a future change will cause an instance to be replaced without also replacing its associated network interface. That will mean that a new instance will be assigned the same IP address as an old one that was already a member of the cluster, which may be confusing to a system that uses IP addresses to uniquely identify cluster members. Whether that is important and what you'd need to do to accommodate it will depend on what software you are using to form the cluster.
This approach is also not really suitable for use with an autoscaling system, because it requires the number of assigned IP addresses to grow and shrink in accordance with the current number of instances, and for the existing instances to somehow become aware when another instance joins or leaves the cluster.
Your template is dependent on your module and your module on your template - that is causing the cycle.
ip = module.block-producing-node.private_ip[0]
and
user_data = data.template_file.init_block_producer.rendered

Puppet - Could not find environment 'none'

We have been seeing the following errors for puppet-server and puppet-agent:
Jun 22 19:26:30 node puppet-agent[12345]: Local environment: "production" doesn't match server specified environment "none", restarting agent run with environment "none"
Jun 22 19:44:55 node INFO [puppet-server] Puppet Not Found: Could not find environment 'none
The configuration was verified a couple of times and it looks fine. The production environment exists.
Has anyone experienced a similar issue?
We have enabled debug logging for the puppet server, however it doesn't seem to point us to the root cause.
What part of the code could be related to what we see here?
Regards
The master is overriding the agent's requested environment with a different one, but the environment the master chooses is either empty or explicitly "none", and, either way, no such environment is actually known to it. This is a problem with the external node classifier the master is using. Check the master's external_nodes setting if you're uncertain which ENC is in play, and see the Puppet ENC documentation for a summary of Puppet's expectations for such a program.
If the ENC emits an environment attribute for the node in question, then the value of that attribute must be the name of an existing environment ('production', for example). If you want to let the agent choose then the ENC should avoid emitting any environment attribute at all.
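For illustration, a minimal (hypothetical) ENC script that always emits an environment which actually exists; omit the environment key entirely if the agent should pick its own:

#!/bin/sh
# Trivial ENC sketch: $1 is the node name (ignored here); the output must be YAML.
# "environment" must name an environment that exists on the master.
cat <<EOF
---
classes: {}
environment: production
EOF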

BizTalk 2016 SharePoint Adapter - Force Use of Port 443

I'm trying to integrate a BizTalk 2016 FP3 application with a SharePoint 2013 site that is available only over port 443 / https.
I would like to use a dynamic send port, the new(ish) adapter and the CSOM.
I have an orchestration with a logical one-way send port called "SendToSp". Within the orchestration I have an expression shape containing the following:
SendToSp(Microsoft.XLANGs.BaseTypes.Address) = "wss://collaboration.xxx.co.uk/sites/HousingICTSolution/Technical/Lists/BizTalkTestList/"
Following this, there's a construct message shape containing an assignment shape where the message to send is created and the context properties are assigned as follows:
msgNvpToSp(xxx.Integration.Common.Schemas.PropertySchema.FormType) = "DynamicSharePointSend";
msgNvpToSp(WSS.ConfigPropertiesXml) = "<ConfigPropertiesXml><PropertyName1>Title</PropertyName1><PropertySource1>This comes from received xml msg</PropertySource1></ConfigPropertiesXml>";
msgNvpToSp(WSS.ConfigAdapterWSPort) = 443;
msgNvpToSp(WSS.ConfigOverwrite) = "no";
msgNvpToSp(WSS.ConfigUseClientOM) = "yes";
My problem is, when BizTalk sends the message I get a "Transmission Failure" with the following description:
[Microsoft.SharePoint.Client.ClientRequestException] Cannot contact site at the specified URL http://collaboration.xxx.co.uk:80/.
This error was triggered by the Windows SharePoint Services receive location or send port with URI wss://collaboration.xxx.co.uk:80/sites/HousingICTSolution/Technical/Lists/BizTalkTestList/.
Windows SharePoint Services adapter event ID: 12310
If I check the context properties of the suspended message, I can see that the value of the "OutboundTransportLocation" property includes port 443.
Any ideas why it's insisting on sending on port 80 even when I've told it to use 443?
In the address, you have to put "wsss://collaboration.xxx.co.uk/sites/HousingICTSolution/Technical/Lists/BizTalkTestList/", then https and port 443 will be used.
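In other words, the expression shape from the question becomes (only the scheme changes):

SendToSp(Microsoft.XLANGs.BaseTypes.Address) = "wsss://collaboration.xxx.co.uk/sites/HousingICTSolution/Technical/Lists/BizTalkTestList/"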

Zabbix API 3.4 - unable to create host because of circular requirement

I'm trying to add zabbix server support for a service that is written in Python. This service should send metrics to a zabbix server in active mode, i.e. the service connects to the server periodically, not the other way around. (The service can be operated behind firewalls, so the only option is to use active mode.)
In the host.create API call, I'm required to give the interfaces for the host. Here is the documentation for that: https://www.zabbix.com/documentation/3.4/manual/api/reference/host/create - the interfaces parameter is required. If I try to give an empty list:
zapi = ZabbixAPI(cfg.url)
zapi.login(cfg.user, cfg.password)  # I'm using an administrator user here!
host = zapi.host.create(
    host=cfg.host_name,
    description=cfg.host_description,
    inventory_mode=1,  # auto host inventory population
    status=0,          # monitored host
    groups=[host_group_id],
    interfaces=[],     # active agent, no interface???
)
Then I get this error:
pyzabbix.ZabbixAPIException: ('Error -32500: Application error., No permissions to referred object or it does not exist!', -32500)
I can create hosts using the same user and the zabbix web interface, so I guess the problem is with the interfaces. So I have tried to create an interface first. However, the hostinterface.create method requires a hostid parameter.
See here: https://www.zabbix.com/documentation/3.4/manual/api/reference/hostinterface/create - I must give a hostid.
This is a catch-22: in order to create a host, I need to have a host interface. But to create a host interface, I need to have a host.
What am I missing? Maybe I was wrong and the host.create API call was rejected because of a different reason. How can I figure out what it was?
The host.create API will create the host interface as well; you need to populate interfaces[] with the correct fields according to the documentation.
For instance, add this before calling the api:
interfaces = []
interfaces.append({
    'type': 2,
    'main': 1,
    'useip': 1,
    'ip': '1.2.3.4',
    'dns': "",
    'port': '161',
})
Then pass it to the host.create API.
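A sketch of that call, reusing the zapi session and variables from the question (the groups argument is kept exactly as in the question):

# Same call as in the question, now with the interfaces list attached
host = zapi.host.create(
    host=cfg.host_name,
    description=cfg.host_description,
    status=0,                # monitored host
    groups=[host_group_id],
    interfaces=interfaces,
)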
The referenced documentation doesn't show it explicitly, but in Zabbix a host needs to have:
- One or more interfaces (active hosts need one too)
- One or more host groups
So for your code to work, you will need to change it to something like this:
zapi = ZabbixAPI(cfg.url)
zapi.login(cfg.user, cfg.password)  # I'm using an administrator user here!
host = zapi.host.create(
    host=cfg.host_name,
    description=cfg.host_description,
    inventory_mode=1,  # auto host inventory population
    status=0,          # monitored host
    groups=[host_group_id],
    interfaces=[{
        "type": "1",
        "main": "1",
        "useip": "1",
        "ip": "127.0.0.1",
        "dns": "mydns",  # can be blank
        "port": "10051",
    }],
)
In your case it is an "active host", but in Zabbix the Active/Passive concept applies to items, not to hosts. So it's possible (and not very unusual) to have hosts with both passive and active items at the same time.
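For illustration, once the host exists you could attach an active-agent item to it with item.create; in the Zabbix 3.4 API, item type 7 is "Zabbix agent (active)" (the key and interval below are just examples):

# Attach an active-agent item to the newly created host
item = zapi.item.create(
    hostid=host["hostids"][0],  # host.create returns the ids of the new hosts
    name="Agent ping",
    key_="agent.ping",
    type=7,        # 7 = Zabbix agent (active)
    value_type=3,  # 3 = numeric unsigned
    delay="60",
)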
