About ready to pull out my hair - I have done this a few times with success, but now suddenly I am overlooking something.
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class profiles::base for <fqdn> on node <fqdn>
The path looks OK:
[root@adm-01 ~]# cat /etc/puppet/modules/profiles/manifests/base.pp
class profiles::base {
  include '::ntp'
}
The site manifest as well:
[root@adm-01 ~]# cat /etc/puppet/environments/production/manifests/site.pp
node default {
}
node adm-01 {
  notify { "Test": }
  include profiles::base
}
I have tried profiles::base as well as ::profiles::base
Environment looks sound:
[root@adm-01 ~]# puppet master --configprint modulepath
/etc/puppet/environments/production/modules:/etc/puppet/environments/common:/etc/puppet/modules:/usr/share/puppet/modules
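That modulepath is exactly where the compiler's autoloader searches: profiles::base must resolve to profiles/manifests/base.pp under one of those entries. A minimal sketch of that lookup, using a throwaway directory in place of the real paths (all names here are illustrative only):

```shell
# Simulate the autoloader lookup: class profiles::base resolves to
# <entry>/profiles/manifests/base.pp for each modulepath entry in order.
tmp=$(mktemp -d)
mkdir -p "$tmp/modules/profiles/manifests"
printf 'class profiles::base { include ::ntp }\n' > "$tmp/modules/profiles/manifests/base.pp"

# Stand-in for the real colon-separated modulepath:
modulepath="$tmp/envmodules:$tmp/modules"

found=""
IFS=:
for dir in $modulepath; do
  f="$dir/profiles/manifests/base.pp"
  [ -f "$f" ] && found="$f"
done
unset IFS
echo "found: $found"
```

If no entry contains that file, or the puppetmaster process cannot read it, the result is exactly a "Could not find class" error like the one above.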
If I omit the base module, it does return the notify test.
Sure I am missing something glaringly obvious....
Thanks guys. On deeper investigation I saw the daemon was silently complaining about a cert. Still weird that changing the path fixed it, but I uninstalled all things Foreman, re-installed, and now it works. Very odd that the agent did not complain, though; I found the references only in the production log.
beforeAll(async () => {
  mongo = new MongoMemoryServer();
  const mongoURI = await mongo.getConnectionString();
  await mongoose.connect(mongoURI, {
    useNewUrlParser: true,
    useUnifiedTopology: true
  });
});
For some reason mongodb-memory-server doesn't work, and it seems to be because it's downloading MongoDB. Wasn't MongoDB supposed to be included with the package? What is the package downloading? How do we prevent mongodb-memory-server from downloading it every time I use it? Is there a way to make it work as intended?
$ npm run test
> auth#1.0.0 test C:\Users\admin\Desktop\projects\react-node-docker-kubernetes-app-two\auth
> jest --watchAll --no-cache
2020-06-06T03:12:45.207Z MongoMS:MongoMemoryServer Called MongoMemoryServer.ensureInstance() method:
2020-06-06T03:12:45.207Z MongoMS:MongoMemoryServer - no running instance, call `start()` command
2020-06-06T03:12:45.207Z MongoMS:MongoMemoryServer Called MongoMemoryServer.start() method
2020-06-06T03:12:45.214Z MongoMS:MongoMemoryServer Starting MongoDB instance with following options: {"port":51830,"dbName":"b67a9bfd-d8af-4d7f-85c7-c2fd37832f59","ip":"127.0.0.1","storageEngine":"ephemeralForTest","dbPath":"C:\\Users\\admin\\AppData\\Local\\Temp\\mongo-mem-205304KB93HW36L9ZD","tmpDir":{"name":"C:\\Users\\admin\\AppData\\Local\\Temp\\mongo-mem-205304KB93HW36L9ZD"},"uri":"mongodb://127.0.0.1:51830/b67a9bfd-d8af-4d7f-85c7-c2fd37832f59?"}
2020-06-06T03:12:45.217Z MongoMS:MongoBinary MongoBinary options: {"downloadDir":"C:\\Users\\admin\\Desktop\\projects\\react-node-docker-kubernetes-app-two\\auth\\node_modules\\.cache\\mongodb-memory-server\\mongodb-binaries","platform":"win32","arch":"ia32","version":"4.0.14"}
2020-06-06T03:12:45.233Z MongoMS:MongoBinaryDownloadUrl Using "mongodb-win32-i386-2008plus-ssl-4.0.14.zip" as the Archive String
2020-06-06T03:12:45.233Z MongoMS:MongoBinaryDownloadUrl Using "https://fastdl.mongodb.org" as the mirror
2020-06-06T03:12:45.235Z MongoMS:MongoBinaryDownload Downloading: "https://fastdl.mongodb.org/win32/mongodb-win32-i386-2008plus-ssl-4.0.14.zip"
2020-06-06T03:14:45.508Z MongoMS:MongoMemoryServer Called MongoMemoryServer.stop() method
2020-06-06T03:14:45.508Z MongoMS:MongoMemoryServer Called MongoMemoryServer.ensureInstance() method:
FAIL src/test/__test___/Routes.test.ts
● Test suite failed to run
Error: Status Code is 403 (MongoDB's 404)
This means that the requested version-platform combination dosnt exist
at ClientRequest.<anonymous> (node_modules/mongodb-memory-server-core/src/util/MongoBinaryDownload.ts:321:17)
Test Suites: 1 failed, 1 total
Tests: 0 total
Snapshots: 0 total
Time: 127.136s
Ran all test suites.
It seems you have the same issue I had:
https://github.com/nodkz/mongodb-memory-server/issues/316
Specify the binary version in package.json, e.g.:
"config": {
  "mongodbMemoryServer": {
    "version": "latest"
  }
},
I hope it helps.
For me, "latest" (as in the accepted answer) did not work; the current latest version, "4.4.1", did:
"config": {
  "mongodbMemoryServer": {
    "version": "4.4.1"
  }
}
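As an alternative to the package.json "config" block, mongodb-memory-server can also be configured through MONGOMS_* environment variables (MONGOMS_VERSION, MONGOMS_ARCH, MONGOMS_DEBUG, and so on), which is convenient in CI. A sketch, assuming a reasonably recent version of the package:

```shell
# Pin the MongoDB binary version (and enable verbose logging) via the
# environment instead of package.json; mongodb-memory-server reads these
# when deciding which binary to download.
export MONGOMS_VERSION=4.4.1
export MONGOMS_DEBUG=1
# then run the tests as usual:
# npm run test
```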
For anyone getting the dreaded

Error: Status Code is 403 (MongoDB's 404)
This means that the requested version-platform combination doesn't exist
I found an easy fix.
In the package.json file we need to add an "arch" to the mongo memory server config:
"config": {
  "mongodbMemoryServer": {
    "debug": "1",
    "arch": "x64"
  }
},
The error occurs because the URL that mongo memory server builds to download a MongoDB binary is wrong or inaccessible.
By adding "debug" we get a console log of the mongo memory server process, and the download should now succeed because we changed the arch to one that worked for me. **You might need to change the arch depending on your system.
Without adding the arch I was able to see why it was crashing in the console log here:
MongoMS:MongoBinaryDownloadUrl Using "mongodb-win32-i386-2008plus-ssl-latest.zip" as the Archive String +0ms
MongoMS:MongoBinaryDownloadUrl Using "https://fastdl.mongodb.org" as the mirror +1ms
MongoMS:MongoBinaryDownload Downloading: "https://fastdl.mongodb.org/win32/mongodb-win32-i386-2008plus-ssl-latest.zip" +0ms
MongoMS:MongoMemoryServer Called MongoMemoryServer.stop() method +2s
MongoMS:MongoMemoryServer Called MongoMemoryServer.ensureInstance() method: +0ms
If you notice, it is trying to download "https://fastdl.mongodb.org/win32/mongodb-win32-i386-2008plus-ssl-latest.zip" - if you visit the link you will see it is a BROKEN LINK, and that is the reason mongo memory server is failing to download.
For some reason mongo memory server was defaulting to the i386 arch, which didn't work in my case because the link was broken/inaccessible when I visited it. *Normally a download should start right away when visiting a link like that.
I was able to configure the correct arch manually in the package.json file. Once I did that, it started downloading the mongo binary and ran all my tests with no problem. You will even notice a console log of the download displaying the correct download link.
You can find your system arch by going to the command prompt and typing
WINDOWS
SET Processor
MAC
uname -a
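On the unix side, uname -m prints just the machine architecture, which is a little more direct than uname -a. A quick sketch (the Windows command only works under cmd.exe, so it is shown as a comment):

```shell
# Windows (cmd.exe):  SET Processor   -> prints PROCESSOR_ARCHITECTURE etc.
# macOS/Linux:
os_arch=$(uname -m)   # machine hardware name, e.g. x86_64 or arm64
echo "OS arch: $os_arch"
```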
** EDIT **
The reason I was running into this was that I was running a 32-bit version of Node.js on a 64-bit Windows machine. After installing a 64-bit version of Node.js, I no longer have to specify the arch type in the package.json file.
You can find your Node.js architecture by typing in your terminal:
node -p "process.arch"
Status 403 usually means that your IP is blocked by the server (for example, your country may be under sanctions, like Iran, Syria, etc.).
One workaround is to change your DNS to that of a VPN provider.
On Linux, just type:
sudo nano /etc/resolv.conf
and enter your DNS server on the nameserver line.
Try this version: mongodb-memory-server@6.5.1
I found a solution for this problem that worked for me.
I just set execute permission on the mongod binary that mongo-memory uses, which is saved in the .cache path of your computer or in the node_modules folder.
Just locate the mongod file and make it executable with chmod +x mongod.
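A sketch of that fix. The real binary lives under the cache directory shown in the debug log earlier (node_modules/.cache/mongodb-memory-server/mongodb-binaries); here a throwaway directory stands in for it so the commands are safe to try:

```shell
# Stand-ins for the cache directory and the downloaded mongod binary;
# point BINDIR at the real cache path on your machine instead.
BINDIR=$(mktemp -d)
touch "$BINDIR/mongod"

# Find the mongod binary and add the execute bit (the actual fix):
find "$BINDIR" -name mongod -exec chmod +x {} \;
ls -l "$BINDIR/mongod"
```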
I wrote a simple module to install a package (BioPerl) on a Ubuntu VM. The whole init.pp file is here:
https://gist.github.com/anonymous/17b4c31bf7309aff14dfdcd378e44f40
The problem is it doesn't work, and it gives me no feedback to let me know why. There are 3 simple steps in the module. I checked, and it didn't do any of them. Here are the first 2:
Step 1: Download an archive and save it to /usr/local/lib
exec { 'bioperl-download':
  command => "sudo /usr/bin/wget --no-check-certificate -O ${archive_path} ${package_uri}",
  require => Package['wget']
}
Step 2: Extract the archive
exec { 'bioperl-extract':
  command => "sudo /usr/bin/tar zxvf ${archive_path} --directory ${install_path}; sudo rm ${archive_path}",
  require => Exec['bioperl-download']
}
Pretty simple. But I have no idea where the problem is because I can't see what it's doing. The provisioner is set to verbose mode, and here are the output lines for my module:
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-download]/returns: executed successfully
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-extract]/returns: executed successfully
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-path]/returns: executed successfully
So all I know is that it executed these three steps successfully. It doesn't tell me whether the steps did their job properly or not. I know that it didn't download the archive to /usr/local/lib, and that it didn't add an environment variable file to /usr/profile.d. Maybe the variables containing the directories are wrong. Maybe the variable containing the archive's download URI is wrong. How can I find these things out?
UPDATE:
It turns out the module does work. But to improve the module (since I want to upload it to forge.puppetlabs.com), I tried implementing the changes suggested by Matt. Here's the new code:
file { 'bioperl-download':
  path   => "${archive_path}",
  source => "http://cpan.metacpan.org/authors/id/C/CJ/CJFIELDS/${archive_name}",
  ensure => "present"
}
exec { 'bioperl-extract':
  command => "sudo /bin/tar zxvf ${archive_name}",
  cwd     => "${bioperl_target_dir}",
  require => File['bioperl-download']
}
A problem: it gives me an error telling me that the source cannot be http://. I see in the docs that they do indeed allow http:// sources for the file resource. Maybe I'm using an older version of Puppet?
I want to try out the puppet-archive module, but I'm not sure how to declare it as a required dependency - that is, how to make sure it's installed first. Do I need my module to download it from GitHub and save it to the modules directory? Or is there a way to let Puppet install it automatically? I added it as a dependency to the metadata.json file, but that doesn't install it. I know I can just have my module download the package, but I was wondering what the best practice is.
The initial problem you describe is acceptance testing. Verifying that the Puppet resources and code you wrote actually resulted in the desired end state is normally accomplished with Serverspec: http://serverspec.org/. For example, you can write a Puppet module to deploy an application, but you only know that Puppet did what you told it to, not that the application actually deployed successfully. Note that Serverspec is also what people generally use to solve this problem for Ansible and Chef.
You can write a Serverspec test similar to the following to help test your module's end state:
describe file('/usr/local/lib/bioperl.tar.gz') do
  it { expect(subject).to be_file }
end

describe file('/usr/profile.d/env_file') do
  it { expect(subject).to be_file }
  its(:content) { is_expected.to match(/env stuff/) }
end
However, your problem also seems to deal with debugging why your acceptance tests failed. For that, you need unit testing. This is normally solved with RSpec-Puppet: http://rspec-puppet.com/. I would show you how to write some tests for your situation, but I don't think you should be writing your Puppet module the way that you did, so it would render the unit tests irrelevant.
Instead, consider using a file resource with the source attribute and a HTTP URI to grab the tarball instead of an exec with wget: https://docs.puppet.com/puppet/latest/type.html#file-attribute-source. Also, you might want to consider using the Puppet archive module to assist you: https://forge.puppet.com/puppet/archive.
If you have questions on how to use these tools to provide unit and acceptance testing, or have questions on how to refactor your module, then don't hesitate to write followup questions on StackOverflow and we can help you.
I am starting to use Puppet to manage many servers. The problem is that whenever I try to use a class - New Relic, for example:
node 'mynode' {
  class { 'newrelic::server::linux':
    newrelic_license_key => '***',
  }
}
It fails, and returns the following error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Puppet::Parser::AST::Resource failed with error ArgumentError: Could not find declared class newrelic::server::linux at /etc/puppet/manifests/site.pp:3 on node mynode
I have installed fsalum-newrelic on the master, and everything works fine when using files, packages, services etc. What am I doing wrong?
The catalog compiler will look for class newrelic::server::linux at newrelic/manifests/server/linux.pp relative to each directory in your module path. (Note: newrelic, NOT fsalum-newrelic.) Make certain that you indeed did install the module such that such a file exists in your modulepath, and make sure that it is readable by the puppetmaster process.
Note, too, that "readable by the puppetmaster process" means more than just the ownership and permissions of the file itself. It also involves ownership and permissions of all the directories in the path to that file, and possibly other forms of access control, such as ACLs and SELinux context and policy.
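The Forge-slug-vs-directory-name point is worth demonstrating: the autoloader keys off the directory name, so a module installed into a directory named after its Forge slug (fsalum-newrelic) is invisible. A throwaway sketch of the lookup:

```shell
# The compiler maps newrelic::server::linux to
# <moduledir>/newrelic/manifests/server/linux.pp; the Forge slug
# (fsalum-newrelic) never appears on disk.
tmp=$(mktemp -d)
mkdir -p "$tmp/newrelic/manifests/server"
touch "$tmp/newrelic/manifests/server/linux.pp"

good="$tmp/newrelic/manifests/server/linux.pp"
bad="$tmp/fsalum-newrelic/manifests/server/linux.pp"
[ -f "$good" ] && echo "found under newrelic/"
[ -f "$bad" ] || echo "nothing under fsalum-newrelic/ (as expected)"
```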
Find out where you are actually installing the new Puppet Forge modules, perhaps using a Unix utility like "locate".
Then look in /etc/puppet/puppet.conf at the "basemodulepath" and check that the install location is in the path.
Here is my basemodulepath
basemodulepath = $confdir/environments/production/modules:$confdir/environments/production/local_modules:/etc/puppet/modules
The external modules I am using are either in /etc/puppet/modules or in /etc/puppet/environments/production/modules
I'm trying to figure out a masterless environment with Puppet. I'm using this link to install the newest version of Puppet on Ubuntu.
I'm using this repository https://github.com/szymonrychu/puppet-masterless and running the script modules/os/files/puppet.sh.
It downloads the current Puppet repository to the /opt/puppet directory and then runs the code in it. (It sets a cron job pointing to the script, so it will run every hour.)
After the first run the hiera env is prepared (hiera.yaml) and deployed. From that point the code should start connecting to the hiera database, but that's not happening.
Most probably there is an issue in modules/os/files/hiera.yaml or in manifests/site.pp, but after several days of struggling I can't get it to work.
Ok! I know what was broken :)
First, the missing part in common.yaml:
(...)
classes:
  - os
os::version: 'ugabuga'
(...)
Second, the mistake in modules/os/manifests/init.pp:
class os (
  $version = 'v0.0.0'
){ (...) }
instead of:
class os {
  $version = 'v0.0.0'
  (...)
}
and finally, the code should be included in manifests/site.pp like this:
node default {
  hiera_include('classes')
  include os
}
And that's it! But it wasn't trivial - at least for me. The documentation isn't very specific about this case, and there are no complete examples.
My Puppet master and agent are on the same machine. The master node.pp file contains this:
node 'pear.myserver.com' {
  include ntp
}
The ntp.pp file contains this:
class ntp {
  package { "ntp":
    ensure => installed
  }
  service { "ntp":
    ensure => running,
  }
}
The /etc/hosts file contains the line:
96.124.119.41 pear.myserver.com pear
I was able to successfully launch puppetmaster, but when I execute this, ntp doesn't get installed (it is not already installed; I checked).
puppet agent --test --server='pear.myserver.com'
It just reports this:
info: Caching catalog for pear.myserver.com
info: Applying configuration version '1387782253'
notice: Finished catalog run in 0.01 seconds
I don't know what else I could have missed. Can you please help? Note that I replaced the actual server name with 'myserver' for security reasons.
I was following this tutorial: http://bitfieldconsulting.com/puppet-tutorial
$ puppet agent --test
This will fetch the compiled catalog from the Puppet master (compiled from /etc/puppetlabs/puppet/manifests/site.pp) and apply it locally.
$ puppet apply /etc/puppet/modules/ntp/manifests/ntp.pp
This will compile and apply the manifest locally, without contacting a master.