I am new to Puppet.
I have downloaded a ganglia repo on my puppet master.
How do I install this repo on all the nodes?
Master: Ubuntu 14.04
Agent: Ubuntu 12.04
I want to make the puppetmaster server the server for required packages, so that packages can be installed on nodes without internet connectivity.
I would use something like reprepro on your master to set up the apt repository for ganglia, then use the puppetlabs-apt module to add the apt repo on your master to all your agents.
There's a reprepro module on the forge that you can probably use to set that up.
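If you're managing modules with the stock CLI, both can be pulled from the Forge on the master; the search is just a starting point for picking whichever reprepro module suits you:
puppet module install puppetlabs-apt
puppet module search reprepro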
So, if your master is called puppet-master.example.com, and you set it up as an apt server, you could add some code like this to all your agents:
apt::source { 'ganglia-mirror':
  location    => 'http://puppet-master.example.com/aptserver',
  release     => 'dist',
  repos       => 'ganglia',
  include_src => false,
}
I'd recommend reading through the documentation on both modules; with those two pieces you should be able to set up a Ganglia APT mirror on your Puppet master.
I have a Node.js app which needs to be pushed to Cloud Foundry. The oracle binary download is blocked by a firewall, so npm install fails to download the node oracledb dependency. I have manually installed it under the local node_modules folder. Now when I push my app to CF, it again tries to download the node oracledb dependency, which is already present in the local node_modules folder.
My question is: how can I indicate this in package.json or package-lock.json so that CF does not download node oracledb with every push? I want it to use only the bundled dependency.
P.S. Adding a proxy won't work here, as this platform-specific binary is hosted on AWS S3 on the internet and is blocked by our org.
For offline environments, you need to "vendor" your dependencies. "Vendoring" means that you download them in advance and cf push both your app and the dependencies. When you do this, the buildpack won't need to download anything because it all exists already.
The process for Node.js apps is here -> https://docs.cloudfoundry.org/buildpacks/node/index.html#vendoring
For non-native code, this is easy, but for native code there is a complication. To vendor your dependencies, you need to make sure that the architecture of your local machine matches that of the target (i.e. your Cloud Foundry stack). If the architecture doesn't match, the binaries won't run on CF and the buildpack will need to try to download and build those resources for you (this will fail in an offline environment).
At the time of writing, there are two stacks available for Cloud Foundry. The most commonly used is cflinuxfs2. This is basically Ubuntu Trusty 14.04. There is also cflinuxfs3 which is basically Ubuntu Bionic 18.04. As I'm writing this, the latter is pretty new and might not be available in all environments. There are also Windows stacks, but that's not relevant here because the Node.js buildpack only runs on the Linux stacks. You can run cf stacks to see which stacks are available in your environment.
To select the stack you want, run cf push -s <stack>; however, that's not usually necessary, as most environments will default to using one of the Linux stacks.
To bring this back to vendoring your Node.js dependencies, you need to perform the local vendoring operations in an environment that matches the stack. If you're running Windows or macOS, that means using a VM or a Docker image. You have a few options:
1. The stacks, also called rootfs, are available as Docker images. You can work in one by running docker run -w /app -v "$PWD":/app -it cloudfoundry/cflinuxfs2 bash (or the same command with cloudfoundry/cflinuxfs3). That will give you a shell in a matching container where you can run the vendoring process.
2. Do the same thing, but use the base Ubuntu Trusty 14.04 or Ubuntu Bionic 18.04 image. These are basically the same as the cflinuxfsX images; they just come with the stock set of packages. If you need to apt install dev packages so that your native code builds, that is OK.
3. Create an Ubuntu Trusty 14.04 or Ubuntu Bionic 18.04 VM. This is the same as the previous option, but you're using a VM instead of Docker.
Once you've properly vendored your dependencies using the correct architecture, you should be able to cf push your app and the buildpack will run and not need to download anything from the Internet.
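As a rough sketch of that flow for the cflinuxfs2 stack (the app name is a placeholder, and npm install stands in for whatever your project's vendoring steps actually are):

docker run -w /app -v "$PWD":/app -it cloudfoundry/cflinuxfs2 bash
# inside the container: vendor all dependencies, including native builds
npm install
exit
# back on the host: push the app with node_modules included
cf push my-app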
After much research and experimentation, I was able to achieve this without a Docker image.
In package.json:
"dependencies": {
"crypto": "^1.0.1",
"express": "^4.16.3",
"morgan": "^1.9.0",
"nan": "^2.11.0",
"oracledb": "file:oracledb_build",
"typeorm": "^0.2.7"
}
If we give the relative file location in the project from which npm should pick up the oracledb dependency, instead of going out to the internet, it solves this problem.
If we instead specify "oracledb": "^2.3.0", npm always goes to the internet to download the platform-specific binary, even if you manually copy oracledb into node_modules and provide a binary with matching architecture. I have observed this behavior with oracledb 2.3.0.
My problem was resolved when I provided oracledb 2.0.15 locally.
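For reference, the local-folder approach looks roughly like this; oracledb_build matches the file: path in the package.json above, and the source path of the prebuilt module is just an example:

# copy a prebuilt oracledb (built for a matching architecture) into the project
cp -r /path/to/prebuilt/oracledb ./oracledb_build
# npm now resolves "file:oracledb_build" locally instead of downloading
npm install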
I'm using R10K to manage my configuration files.
I want to install a Puppet module on my master server using a Puppetfile.
I go to the branch and add the following to the Puppetfile:
mod 'puppetlabs-certregen', '0.2.0'
I then run puppet agent -t on the server. It seems the command is successful, in that the commands in my manifest are run, but when I run puppet certregen healthcheck the module doesn't seem to be installed.
What's the correct way to use the Puppetfile to install a module?
The Puppetfile is similar to a Ruby Gemfile or a Python requirements.txt: it lists dependencies which are then installed by a separate tool.
For Puppetfiles, this is r10k.
It's documented here: https://puppet.com/docs/pe/2018.1/puppetfile.html
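So the missing step after editing the Puppetfile is an r10k deploy; roughly, on the master (the environment name production is an assumption, substitute your branch):

r10k deploy environment production --puppetfile
puppet agent -t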
You can also directly download the module with the command line:
puppet module install puppetlabs-certregen
Notice: Downloading from https://forgeapi.puppet.com ...
Notice: Installing -- do not interrupt ...
/Users/petersouter/.puppetlabs/etc/code/modules
└─┬ puppetlabs-certregen (v0.2.0)
└── puppetlabs-stdlib (v4.17.1)
Note, however, that r10k and puppet module install don't play well together:
Restriction: If you are using Code Manager or r10k, do not install, update, or uninstall modules with the puppet module command. With code management, you must install modules with a Puppetfile. Code management purges modules that were installed with the puppet module command. See the Puppetfile documentation for instructions.
Scenario:
Bootstrapping a container to a Chef server in the same way as we bootstrap Azure VMs.
Steps to Reproduce:
1. Install chef-client using knife bootstrap
2. Run some recipe/role to install or configure the container
Expected Result:
Installation of software such as Java and Python, or tools such as Jenkins and Tomcat
Actual Result:
Error: SSH connection timeout when the knife bootstrap command is run on the local workstation
Platform Details
CentOS 7.1 (Azure VM)
Docker container: CentOS 6.4
This is not how either Docker or knife bootstrap works. Containers are not tiny VMs and should not be treated as such. If you want to use Chef code to build Docker image files, Packer can do this. Using chef-client inside containers at runtime for production operations is strongly discouraged.
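As a rough illustration of the Packer route (a sketch, not a tested template; the base image, cookbook path, run list, and repository name are all placeholders):

{
  "builders": [{
    "type": "docker",
    "image": "centos:6.4",
    "commit": true
  }],
  "provisioners": [{
    "type": "chef-solo",
    "cookbook_paths": ["cookbooks"],
    "run_list": ["recipe[java]"]
  }],
  "post-processors": [{
    "type": "docker-tag",
    "repository": "myorg/centos-java",
    "tag": "latest"
  }]
}

Packer runs the Chef run list inside a throwaway container and commits the result as an image, so chef-client never needs to run in the production container.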
I have successfully set up a two-node Cassandra cluster using Docker containers on two separate machines. When I try to administer the cluster using OpsCenter it fails because the DataStax Agents are not installed.
Automatic installation of the agents via OpsCenter fails.
I open up a bash shell in the Cassandra Docker container and try to install the agent manually, but that fails, too. It appears that the agent installer is expecting sudo support, which is not present in the container.
So, I'm wondering what the "right way" to install the agent into a docker container would be. Anyone done this? Any thoughts?
I have installed CoreOS via the VMware image file. Does anyone know how to install Deis? I have read through the documentation, and most of it covers how to install Deis on other systems.
You can move forward setting up Deis by exporting FLEETCTL_TUNNEL and issuing a make run like the documentation suggests, but you'll be missing some of the provisioning steps that Deis performs as part of the cloud-init script. You'll likely run into trouble.
The recommended path is to install Vagrant and issue a vagrant up in the project root to use the Deis project Vagrantfile. This sets up networking and executes the project cloud-init script.
Vagrant should detect that you have VMware installed and not VirtualBox, and will provision appropriately.
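Concretely, the recommended path looks roughly like this (the clone URL assumes the public deis/deis GitHub repo, and <vm-ip> is the address Vagrant assigns the machine):

git clone https://github.com/deis/deis.git
cd deis
# uses the project Vagrantfile: sets up networking and runs the cloud-init script
vagrant up
# then point the tooling at the VM and start the platform
export FLEETCTL_TUNNEL=<vm-ip>
make run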