I'm running a Linux LDAP environment with multiple servers on the domain. As we have added and removed users from our environment, I started getting these error messages:
Nov 9 05:07:25 ops1 nslcd[1377]: [35895e] lookup of user cn=Deleted1 User,ou=People,dc=company,dc=net failed: No such object
Nov 9 05:07:25 ops1 nslcd[1377]: [35895e] ldap_result() failed: No such object
Nov 9 05:07:25 ops1 nslcd[1377]: [35895e] lookup of user cn=Deleted2 User,ou=People,dc=company,dc=net failed: No such object
Nov 9 05:07:25 ops1 nslcd[1377]: [35895e] ldap_result() failed: No such object
The only users showing up are ones that I have deleted, and not just recently, either.
I found the answer to this in this bug:
http://tracker.clearfoundation.com/view.php?id=1752
Basically, I had a group that was still referencing those users as members even though I had deleted them. In my case, they were part of the uniqueMember list.
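In case it helps anyone else, here is a sketch of how the stale references can be found and removed with the standard OpenLDAP tools; the group DN and the admin bind DN below are placeholders for your own:

# find any group that still lists the deleted user's DN as a uniqueMember
ldapsearch -x -b "dc=company,dc=net" \
    "(uniqueMember=cn=Deleted1 User,ou=People,dc=company,dc=net)" dn

# remove the stale value from the offending group
# (cn=somegroup and cn=admin are placeholders)
ldapmodify -x -D "cn=admin,dc=company,dc=net" -W <<'EOF'
dn: cn=somegroup,ou=Groups,dc=company,dc=net
changetype: modify
delete: uniqueMember
uniqueMember: cn=Deleted1 User,ou=People,dc=company,dc=net
EOF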
I am building a SLURM multi-cluster setup, with a slurmdbd hosted on-premises and a slurmctld node in Oracle Cloud. The slurmctld is able to connect to the slurmdbd, but receives this error message when I try to connect to the database in any way:
sacct: error: slurm_persist_conn_open: Something happened with the receiving/processing of the persistent connection init message to <IP_ADDRESS>: Failed to unpack SLURM_PERSIST_INIT message
sacct: error: slurmdbd: Sending PersistInit msg: No error
JobID JobName Partition Account AllocCPUS State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
sacct: error: slurm_persist_conn_open: Something happened with the receiving/processing of the persistent connection init message to <IP_ADDRESS>: Failed to unpack SLURM_PERSIST_INIT message
sacct: error: slurmdbd: Sending PersistInit msg: No error
sacct: error: slurmdbd: DBD_GET_JOBS_COND failure: Unspecified error
Looking at the /var/log/slurm/slurmdbd.log file on my slurmdbd host, I see that it records this error:
[2022-03-11T08:29:47.541] error: Munge decode failed: Invalid credential
[2022-03-11T08:29:47.541] auth/munge: _print_cred: ENCODED: Wed Dec 31 19:00:00 1969
[2022-03-11T08:29:47.541] auth/munge: _print_cred: DECODED: Wed Dec 31 19:00:00 1969
[2022-03-11T08:29:47.541] error: slurm_unpack_received_msg: auth_g_verify: REQUEST_PERSIST_INIT has authentication error: Unspecified error
[2022-03-11T08:29:47.541] error: slurm_unpack_received_msg: Protocol authentication error
[2022-03-11T08:29:47.551] error: CONN:10 Failed to unpack SLURM_PERSIST_INIT message
To ensure that my credentials are valid, I have copied the slurmdbd's MUNGE key to the slurmctld via SCP, ensured that the UIDs and GIDs of the slurm and munge users are identical on all nodes, and made sure that the clocks are all in sync. When I munge and unmunge locally on either server, the credential decodes successfully. However, when I test the credential from one server to the other with echo foo | ssh user@server munge | unmunge, I get unmunge: error: invalid credential. What could I be doing wrong to still receive this response? What should I do to make sure that my credential is valid?
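For reference, here is roughly how I have been comparing the keys and testing across the two hosts (the host name is a placeholder and the key path is the default one); my understanding is that the key must be owned by munge, mode 0400, and munged restarted after copying:

# compare the key on both hosts
sudo cksum /etc/munge/munge.key
ssh <slurmctld-host> sudo cksum /etc/munge/munge.key

# fix ownership/permissions and restart munged after copying the key
sudo chown munge:munge /etc/munge/munge.key
sudo chmod 0400 /etc/munge/munge.key
sudo systemctl restart munge

# cross-host test: encode locally, decode remotely
munge -n | ssh <slurmctld-host> unmunge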
The OPC Publisher marketplace image runs successfully as a standalone container (albeit with server connection problems), but I am not able to deploy it as an IoT Edge module, especially after changing the container create options.
Background: on my host laptop I was never able to get the module up, so I created an Ubuntu VM. When I deployed the edge module in the VM with the default container create options, the module did show up in the iotedge module list as "running". I wanted to set the --op option to control the publishing rate, so I changed it in the create options using the portal's "Set modules" tab. Since there is no update button, I used the Create button to "recreate" the modules. After this the module did not show up.
Since then the OPC Publisher module has not appeared on the edge VM. I am following the Microsoft tutorial.
Following is the command:
sudo docker run -v /iiotedge:/appdata mcr.microsoft.com/iotedge/opc-publisher:latest --aa --pf=/appdata/publishednodes.json --c="HostName=<iot hub name>.azure-devices.net;DeviceId=iothubowner;SharedAccessKey=<hub primary key>" --dc="HostName=<edge device id/name>.azure-devices.net;DeviceId=<edge device id/name>;SharedAccessKey=<edge primary key>" --op=10000
Container create options:
{
  "Hostname": "opcpublisher",
  "Cmd": [
    "--pf=/appdata/publishednodes.json",
    "--aa",
    "--op=10000"
  ],
  "HostConfig": {
    "Binds": [
      "/iiotedge:/appdata"
    ]
  }
}
I have not specified the connection strings explicitly, since Microsoft's documentation states that the runtime will pass them automatically.
The relevant iotedge journalctl logs are below.
Oct 06 19:36:05 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:05Z [INFO] - Pulling image mcr.microsoft.com/iotedge/opc-publisher:latest...
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [INFO] - Successfully pulled image mcr.microsoft.com/iotedge/opc-publisher:latest
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [INFO] - Creating module OPCPublisher...
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [INFO] - Starting new listener for module OPCPublisher
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [ERR!] - Internal server error: Could not create module OPCPublisher
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: caused by: Could not get module OPCPublisher
The logs from iotedge itself are not very useful, but here they are anyway.
~$ iotedge logs OPCPublisher
A module runtime error occurred
I have also tried docker container prune just to be sure, but it did not help.
Also, strangely, when I try to restart the module from the troubleshoot page in the Azure portal, it throws the error "module not found in the current environment".
Can someone please help me out in troubleshooting this problem? I will be glad to share more details if required.
I raised a support query in the Azure portal. After I sent support bundles and tried various suggestions (removing the DNS configuration, changing the bind path to a location that does not require sudo, etc.), the team zeroed in on an edge runtime version mismatch.
After re-reading the documentation, I uninstalled the earlier iotedge package and installed aziot-edge instead, and the problem was solved!
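For anyone else landing here, the switch on Ubuntu looked roughly like the sketch below; treat the exact package names and the config import step as what worked for my 1.1-to-1.2 migration rather than a universal recipe:

# remove the old 1.1-era runtime and install the 1.2+ runtime
sudo apt-get remove iotedge libiothsm-std
sudo apt-get update
sudo apt-get install aziot-edge

# migrate the old config.yaml into /etc/aziot/config.toml and apply it
sudo iotedge config import
sudo iotedge config apply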
The team has raised a GitHub issue for public tracking here:
https://github.com/Azure/Industrial-IoT/issues/1425
@asergaz also pointed in the right direction, but I did not notice it since that answer came a bit later.
I am running Odoo 11 with Docker. When adding the addons path I can create the database successfully, but if I add any path to my module, this error occurs:
KeyError: ('ir.model.data', <function IrModelData.xmlid_lookup at 0x7fba098f1d90>, 'web.assets_frontend')
ValueError: External ID not found in the system: web.assets_frontend
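For anyone hitting the same traceback: one common cause, as far as I know, is an addons_path override that no longer includes the directory containing the stock web module (which provides web.assets_frontend). A sketch of a config that keeps both, assuming the layout of the official odoo:11 image (adjust the paths to your setup):

# /etc/odoo/odoo.conf inside the container
[options]
addons_path = /usr/lib/python3/dist-packages/odoo/addons,/mnt/extra-addons

with the custom modules mounted at the second entry, for example: docker run -v $(pwd)/addons:/mnt/extra-addons odoo:11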
I followed this guide:
https://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Clusters_from_Scratch/
I stayed with the active/passive DRBD filesystem sharing. I had to reboot my cluster, and now I am getting the following error:
Current DC: rbx-1 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Tue Nov 28 17:01:14 2017
Last change: Tue Nov 28 16:40:09 2017 by root via cibadmin on rbx-1
2 nodes configured
5 resources configured
Node rbx-2: UNCLEAN (offline)
Online: [ rbx-1 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started rbx-1
WebSite (ocf::heartbeat:apache): Stopped
Master/Slave Set: WebDataClone [WebData]
WebData (ocf::linbit:drbd): FAILED rbx-1 (blocked)
Stopped: [ rbx-2 ]
WebFS (ocf::heartbeat:Filesystem): Stopped
Failed Actions:
* WebData_stop_0 on rbx-1 'invalid parameter' (2): call=20, status=complete, exitreason='none',
last-rc-change='Tue Nov 28 16:27:58 2017', queued=0ms, exec=3ms
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
Any ideas?
Also does anyone have any recommended guides for submitting jobs?
This post is relatively old at this point, but I'll leave this here for others who stumble upon the same issue.
This problem has to do with the DRBD resource agent (integration script) that Pacemaker uses. If it is broken, missing, or has incorrect permissions, you can get an error like this. On CentOS 7 that script is located at /usr/lib/ocf/resource.d/linbit/drbd.
Note: this is specific to the guide mentioned by the OP, but it may still help you:
Section 7.1 has a big "IMPORTANT" block that talks about replacing the Pacemaker integration script due to a bug. If you run the command it gives you there, you actually replace the script with a 404 error page, which obviously doesn't work and causes the error above. You can fix this by restoring the original script, either by reinstalling DRBD...
yum remove -y kmod-drbd84 drbd84-utils
yum install -y kmod-drbd84 drbd84-utils
...or by finding just the drbd resource agent elsewhere and putting it back at /usr/lib/ocf/resource.d/linbit/drbd. Make sure its permissions are correct and that it is executable.
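Either way it is worth sanity-checking the agent afterwards. A quick sketch (same path as above; adjust if your provider directory differs):

# the agent should be an executable shell script, not a saved-down error page
file /usr/lib/ocf/resource.d/linbit/drbd
head -n 1 /usr/lib/ocf/resource.d/linbit/drbd   # expect a shell shebang, not "<html>"
chmod 755 /usr/lib/ocf/resource.d/linbit/drbd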
Hope that helps!
I'm running Puppet 2.7.26 because that's what the Red Hat package provides.
I'm trying to serve files that are NOT stored within any Puppet module. The files are maintained in another location on the Puppet server, and that is where I need to serve them from.
I have this in my /etc/puppet/fileserver.conf
[files]
path /var/www/cobbler/pub
allow *
And then I have a class file like this:
class etchostfile
(
  $hostfile  /* declare that this class has one parameter */
)
{
  File {
    owner => 'root',
    group => 'root',
    mode  => '0644',
  }

  file { $hostfile :
    ensure => file,
    source => "puppet:///files/hosts-${hostfile}.txt",
    path   => '/root/hosts',
  }
}
But when my node calls
class { 'etchostfile' :
hostfile => foo,
}
I get this error
err: /Stage[main]/Etchostfile/File[foo]: Could not evaluate: Error 400
on SERVER: Not authorized to call find on
/file_metadata/files/hosts-foo.txt with {:links=>"manage"} Could not
retrieve file metadata for puppet:///files/hosts-foo.txt: Error 400 on
SERVER: Not authorized to call find on
/file_metadata/files/hosts-foo.txt with {:links=>"manage"} at
/etc/puppet/modules/etchostfile/manifests/init.pp:27
This post
https://viewsby.wordpress.com/2013/04/05/puppet-error-400-on-server-not-authorized-to-call-find/
indicates that this is all I need to do. But I must be missing something.
UPDATE
When I run the master in debug mode, I get no error.
The master responds thusly:
info: access[^/catalog/([^/]+)$]: allowing 'method' find
info: access[^/catalog/([^/]+)$]: allowing $1 access
info: access[^/node/([^/]+)$]: allowing 'method' find
info: access[^/node/([^/]+)$]: allowing $1 access
info: access[/certificate_revocation_list/ca]: allowing 'method' find
info: access[/certificate_revocation_list/ca]: allowing * access
info: access[^/report/([^/]+)$]: allowing 'method' save
info: access[^/report/([^/]+)$]: allowing $1 access
info: access[/file]: allowing * access
info: access[/certificate/ca]: adding authentication any
info: access[/certificate/ca]: allowing 'method' find
info: access[/certificate/ca]: allowing * access
info: access[/certificate/]: adding authentication any
info: access[/certificate/]: allowing 'method' find
info: access[/certificate/]: allowing * access
info: access[/certificate_request]: adding authentication any
info: access[/certificate_request]: allowing 'method' find
info: access[/certificate_request]: allowing 'method' save
info: access[/certificate_request]: allowing * access
info: access[/]: adding authentication any
info: Inserting default '/status' (auth true) ACL because none were found in '/etc/puppet/auth.conf'
info: Expiring the node cache of agent.redacted.com
info: Not using expired node for agent.redacted.com from cache; expired at Thu Aug 13 14:18:48 +0000 2015
info: Caching node for agent.redacted.com
debug: importing '/etc/puppet/modules/etchostfile/manifests/init.pp' in environment production
debug: Automatically imported etchostfile from etchostfile into production
debug: File[foo]: Adding default for selrange
debug: File[foo]: Adding default for group
debug: File[foo]: Adding default for seluser
debug: File[foo]: Adding default for selrole
debug: File[foo]: Adding default for owner
debug: File[foo]: Adding default for mode
debug: File[foo]: Adding default for seltype
notice: Compiled catalog for agent.redacted.com in environment production in 0.11 seconds
info: mount[files]: allowing * access
debug: Received report to process from agent.redacted.com
debug: Processing report from agent.redacted.com with processor Puppet::Reports::Store
and the agent responds thusly:
info: Caching catalog for agent.redacted.com
info: Applying configuration version '1439475588'
notice: /Stage[main]/Etchostfile/File[foo]/ensure: defined content as '{md5}75125a96a68a0ff0d42f91f10dca8336'
notice: Finished catalog run in 0.42 seconds
and the file is properly installed/updated.
So it works when the master is in debug mode, but it errors when the master is in standard (?) mode. I can go back and forth, in and out of debug mode at will: it works every time in debug mode and fails every time in standard mode.
UPDATE 2
Running puppetmasterd from the command line, and everything works.
Running service puppetmaster start or /etc/init.d/puppetmaster start from the command line, and it fails. So at least I'm getting closer.
/etc/sysconfig/puppetmaster is entirely commented out. So as of now, I do not see any difference between just starting puppetmasterd and using the service script.
UPDATE 3
I think it's an SELinux problem.
With SELinux "enforcing" on the master, service puppetmaster restart, and I get the error.
I change SELinux to "Permissive" on the master, and I still get the error.
But now that SELinux is set to Permissive, if I service puppetmaster restart, my files get served properly.
But now that it's working, I set SELinux to Enforcing, and I get a different error:
err: /Stage[main]/Etchostfile/File[foo]: Could not evaluate: Could not
retrieve information from environment production source(s)
puppet:///files/hosts-foo.txt at
/etc/puppet/modules/etchostfile/manifests/init.pp:27
Then I do a service puppetmaster restart and I'm back to the original error.
So the situation changes depending on
how I started the service (puppetmasterd or service)
what SELinux was set to when I started the service
what SELinux is set to when the agent runs.
The closer I get, the more confused I get.
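A way to narrow this down, assuming auditd is running on the master, is to compare the SELinux domain the master ends up in under each start method and to watch for AVC denials while reproducing the failure:

# which SELinux domain is the master process actually running in?
ps -eZ | grep puppetmasterd

# any recent AVC denials mentioning ruby while the agent run fails?
ausearch -m avc -ts recent | grep -i ruby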
UPDATE 4
I think I found it. Once I started looking at SELinux, I found the policy changes I needed to make (allowing ruby/puppet to access cobbler files) and now it appears to be working...
This turned out to be an SELinux problem. I eventually found this error message:
SELinux is preventing /usr/bin/ruby from read access
on the file /var/www/cobbler/pub/hosts-foo.txt .
which led me to the audit2allow rules I needed to apply to allow puppet to access my cobbler files.
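Roughly, this was the standard audit2allow workflow (the module name and the grep filter below are just examples):

# turn the logged denials into a local policy module and load it
grep ruby /var/log/audit/audit.log | audit2allow -M puppet_cobbler
semodule -i puppet_cobbler.pp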
I was getting this error with Puppet Server on Ubuntu 20.
Error: /Stage[main]/Dvod_tocr/File[/install/wine-data.tar.gz]: Could not evaluate: Could not retrieve file metadata for puppet:///extra_files/wine-data.tar.gz: Error 500 on SERVER: Server Error: Not authorized to call find on /file_metadata/extra_files/wine-data.tar.gz with {:rest=>"extra_files/wine-data.tar.gz", :links=>"manage", :checksum_type=>"sha256", :source_permissions=>"ignore"}
My fileserver.conf file was in the wrong location. The correct location for this Puppet version on Ubuntu 20 is /etc/puppetlabs/puppet/fileserver.conf.
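For reference, a minimal mount definition there might look like the sketch below; the path is a placeholder for wherever the files actually live on the server:

# /etc/puppetlabs/puppet/fileserver.conf
[extra_files]
    path /opt/puppet/extra_files

followed by a Puppet Server restart (sudo systemctl restart puppetserver) so the change is picked up. On older versions you would also add an allow line like the one in the fileserver.conf shown earlier in this thread; as far as I know, newer versions handle that authorization in auth.conf instead.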