We have been seeing the following errors from puppet-server and puppet-agent:
Jun 22 19:26:30 node puppet-agent[12345]: Local environment: "production" doesn't match server specified environment "none", restarting agent run with environment "none"
Jun 22 19:44:55 node INFO [puppet-server] Puppet Not Found: Could not find environment 'none
The configuration has been verified a couple of times and it looks fine; the production environment exists.
Has anyone experienced a similar issue?
We have enabled debug logging for the Puppet server, but it doesn't seem to point us to the root cause.
What part of the code could be related to what we see here?
Regards
The master is overriding the agent's requested environment with a different one, but the environment the master chooses is either empty or explicitly "none", and either way, no such environment is actually known to it. This points to a problem with the external node classifier (ENC) the master is using. Check the master's external_nodes setting if you're uncertain which ENC is in play, and see the Puppet documentation for a summary of Puppet's expectations for such a program.
If the ENC emits an environment attribute for the node in question, then the value of that attribute must be the name of an existing environment ('production', for example). If you want to let the agent choose, the ENC should avoid emitting any environment attribute at all.
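For illustration, here is a minimal ENC sketch in Python (the class and environment names are placeholders, not taken from the question). Puppet runs the configured ENC with the node's certname as its first argument and expects a YAML hash on stdout:

#!/usr/bin/env python3
# Minimal ENC sketch. A real classifier would branch on the certname;
# this one returns the same answer for every node.
import sys

certname = sys.argv[1]  # node being classified
print("classes:")
print("  profile::base: {}")  # 'profile::base' is a placeholder class
# Emit an environment that actually exists -- or omit this key entirely
# to let the agent's own configured environment win.
print("environment: production")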
I want to perform a local compile with Puppet (version 6.16) for testing purposes, using the following command:
puppet catalog compile testhost.int.test.com --environmentpath="xxx" --environment="ccc" --modulepath="ttt" --manifest="hhh" --vardir="eee"
It went well until I hit one custom module which calls the following Puppet function:
Puppet::FileServing::Content.indirection.find(file, :environment => compiler.environment)
The error is as below:
Error: Request to https://test.int.test.com:8140/puppet/v3 failed after 0.002 seconds: The private key is missing from '/etc/puppetlabs/puppet/ssl/private_keys/testhost.int.test.com.pem'
which suggests it tries to connect to the Puppet master to query the file.
The thing is, I only want to perform a local compile that does NOT talk to the Puppet master for any information.
Is there any workaround to make it look only at the local environment instead of checking with the Puppet master?
By the way, the following method seems fine; it checks the local environment rather than the master side:
Puppet::Parser::Files.find_file(file, compiler.environment)
I'm relatively new to Puppet. Thanks in advance.
I expect to be able to run the puppet catalog compile command purely locally, without talking to the Puppet master, as a sanity check for anything we might be missing during the compile phase before we push anything to the production branch.
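One possible direction, sketched here without having been verified on 6.16 (whether it fits depends on what the custom function does with the content): have the function resolve the file against the local environment, as the working method above already does, and read it from disk instead of asking the master.

# Hypothetical local-only variant of the custom function's lookup:
# resolve against the local environment, then read the file directly.
path = Puppet::Parser::Files.find_file(file, compiler.environment)
raise Puppet::ParseError, "file not found: #{file}" unless path
content = File.read(path)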
I use Apache Airflow for daily ETL jobs. I installed it in Azure Kubernetes Service using the provided Helm chart. It had been running fine for half a year, but recently I became unable to access the logs in the webserver (this always used to work fine).
I'm getting the following error:
*** Log file does not exist: /opt/airflow/logs/dag_id=analytics_etl/run_id=manual__2022-09-26T09:25:50.010763+00:00/task_id=copy_device_table/attempt=18.log
*** Fetching from: http://airflow-worker-0.airflow-worker.default.svc.cluster.local:8793/dag_id=analytics_etl/run_id=manual__2022-09-26T09:25:50.010763+00:00/task_id=copy_device_table/attempt=18.log
*** !!!! Please make sure that all your Airflow components (e.g. schedulers, webservers and workers) have the same 'secret_key' configured in 'webserver' section and time is synchronized on all your machines (for example with ntpd) !!!!!
****** See more at https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#secret-key
****** Failed to fetch log file from worker. Client error '403 FORBIDDEN' for url 'http://airflow-worker-0.airflow-worker.default.svc.cluster.local:8793/dag_id=analytics_etl/run_id=manual__2022-09-26T09:25:50.010763+00:00/task_id=copy_device_table/attempt=18.log'
For more information check: https://httpstatuses.com/403
What have I tried:
I've made sure that the log file exists (I can exec into the airflow-worker-0 pod and read the file on command line in the location specified in the error).
I've rolled back my deployment to an earlier commit from when I know for sure it was still working, but it made no difference.
I was using webserverSecretKeySecretName in the values.yaml configuration. I changed the secret to which that name was pointing (deleted it and created a new one, as described here: https://airflow.apache.org/docs/helm-chart/stable/production-guide.html#webserver-secret-key; the documented command is quoted after this list), but it didn't work (no difference, same error).
I changed the config to use a webserverSecretKey instead (in plain text), no difference.
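For reference, the production guide linked above generates and stores the key roughly like this (the secret name is just an example):

kubectl create secret generic my-webserver-secret --from-literal="webserver-secret-key=$(python3 -c 'import secrets; print(secrets.token_hex(16))')"

with values.yaml then pointing at it via webserverSecretKeySecretName: my-webserver-secret.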
My thoughts/observations:
The error states that the log file doesn't exist, but that's not true. It probably just can't access it.
The time is the same in all pods (I double-checked by exec-ing into them and typing date on the command line).
The webserver secret is the same in the worker, the scheduler, and the webserver (I double-checked by exec-ing into them and finding the corresponding env variable).
Any ideas?
Turns out this was a known bug in the latest Airflow release (2.4.0) deployed by the official Helm chart, reported here:
https://github.com/apache/airflow/discussions/26490
It should be resolved in version 2.4.1, which should be available in the next couple of days.
I have recently set up a VM on Azure to use as my build agent.
When the agent starts, its name is calculated from the Azure instance name (_myservername), and the name I provide in the buildAgent.properties file is ignored completely.
This is particularly problematic when I have a second agent for which the same name is chosen, which results in a name conflict.
Looking at teamcity-agent.log I can see the following lines:
[2016-07-14 15:33:04,745] WARN - ds.azure.AzurePropertiesReader - Unable to set self port. Azure integration will experience problems
[2016-07-14 15:33:04,745] INFO - ds.azure.AzurePropertiesReader - Added alternative address is set to
[2016-07-14 15:33:04,745] INFO - ds.azure.AzurePropertiesReader - Instance name and agent name are set to _myservername
...
The questions are:
Why does the name I provide via the config file not take precedence over any other place it reads the name from? Should it?
How can I force a name on it?
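For reference, the agent name is normally set in buildAgent.properties like this (the value is just an example):

name=my-build-agent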
OK, I came to find the answer to this and will share it here in case it's useful for the humans of the future!
The issue was caused by the Azure plugin, which was setting a configuration parameter on the agent called instance name.
https://github.com/JetBrains/teamcity-azure-plugin/issues/17
The issue is fixed in the latest version of the plugin, so upgrading it solved my problem. :)
We're using Puppet + Foreman to monitor changes in the environment by checking custom facts. For example, whenever a custom fact equals 'true', Puppet applies a Notify resource whose message is sent to the agent log. Puppet includes this message in the agent report, and Foreman shows it in the UI.
The problem is that whenever a message is emitted, Foreman considers the action "Applied" and the node status changes to "Active" (blue icon).
We want to keep the node at "No Changes" (green) while still showing the Notify message.
Is that possible in some way? Maybe define a new custom resource type?
Here is the puppet code:
class mymodule::myclass::mysubclass {
  if $::fact023 == 'fail' {
    notify { 'mynotify1':
      message  => "WARNING: Node ${::fqdn} failed fact023",
      loglevel => hiera('warnings_loglevel'),
    }
  }
}
See screenshot of Foreman here
Update:
I'll refine the question: is there a way to use the Notify resource without causing Puppet to report that the node has changed? Meaning, just print the message to the client log (so that the message is still visible in the report) but without Puppet classifying the event as an applied configuration?
The reason is that when Puppet triggers the Notify resource, Foreman flags the node as active (changed).
UPDATE #2
I'm thinking about changing the Foreman report file so that the UI ignores Notify events: the node's status would remain unchanged but the message would still show in the report. Can someone point me in the right direction? Thanks!
UPDATE #3
The problem was fixed after switching from the "Notify" resource type to the custom type "echo" that someone published on Puppet Forge. Thanks!
It's not completely clear what you are trying to accomplish. One option would be to use the notice function instead of a resource. Functions execute on the Puppet master, so the log will end up in the Puppet master's logs instead of the agent report. That also means it will not count as an applied resource, and the node should appear to be stable.
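For example, a sketch of the class above rewritten with notice() (note that notice() logs at a fixed level, so the hiera-driven loglevel from the original code does not carry over):

class mymodule::myclass::mysubclass {
  if $::fact023 == 'fail' {
    # Logged during compilation on the master; no resource is applied,
    # so the agent's report shows no changes.
    notice("WARNING: Node ${::fqdn} failed fact023")
  }
}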
I enabled the "Log command assistance commands" option in WebSphere > console preferences.
The documentation says the following:
Specifies whether to log all the command assistance wsadmin data to a file. This file is saved to ${LOG_ROOT}/server/commandAssistanceJythonCommands_user name.log:
server is the server process where the console runs, such as dmgr, server1, adminagent, or jobmgr.
user name is the administrative console user name.
When you manage a profile using an administrative agent, the command assistance log is put in the location of the profile that the administrative agent is managing. The ${LOG_ROOT} variable defines the profile location.
I am not able to find the default value of LOG_ROOT.
The actual value of LOG_ROOT depends on the values of other variables. The variables are defined in AdminConsole -> Environment -> WebSphere Variables. Because variables exist at different scopes (cell, node, cluster, server), finding the actual value can be a bit tricky. The most reliable approach is to use wsadmin and the AdminOperations.expandVariable operation.
For ND environment:
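# Find the AdminOperations MBean on the dmgr process, then ask it to
# expand ${LOG_ROOT}; the log file name below is just an example.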
adminOperations = AdminControl.queryNames('WebSphere:*,type=AdminOperations,process=dmgr').splitlines()[0]
print AdminControl.invoke(adminOperations, 'expandVariable', ['${LOG_ROOT}/commandAssistance_ssdimmanuel.log'])
For standalone WAS (assuming that the server name is 'server1'):
adminOperations = AdminControl.queryNames('WebSphere:*,type=AdminOperations,process=server1').splitlines()[0]
print AdminControl.invoke(adminOperations, 'expandVariable', ['${LOG_ROOT}/commandAssistance_ssdimmanuel.log'])
Advertisement mode
Using the WDR library (http://wdr.github.io/WDR/) you could do it in just one line:
For ND:
print getMBean1(type='AdminOperations', process='dmgr').expandVariable('${LOG_ROOT}/commandAssistance_ssdimmanuel.log')
For standalone WAS:
print getMBean1(type='AdminOperations', process='server1').expandVariable('${LOG_ROOT}/commandAssistance_ssdimmanuel.log')