I created the following Puppet configuration structure:
puppet
│ ├── data
│ │ └── common.yaml
│ ├── hiera.yaml
│ ├── manifests
│ │ └── site.pp
│ ├── modules
│ │ ├── accessories
│ │ │ └── manifests
│ │ │ └── init.pp
│ │ ├── nginx
│ │ │ ├── manifests
│ │ │ │ ├── config.pp
│ │ │ │ ├── init.pp
│ │ │ │ └── install.pp
│ │ │ └── templates
│ │ │ └── vhost_site.erb
│ │ ├── php
│ │ │ ├── manifests
│ │ │ │ ├── config.pp
│ │ │ │ ├── init.pp
│ │ │ │ └── install.pp
│ │ │ └── templates
│ │ │ ├── php.ini.erb
│ │ │ └── www.conf.erb
│ │ └── site
│ │ └── manifests
│ │ ├── database.pp
│ │ ├── init.pp
│ │ └── webserver.pp
│ └── Puppetfile
Right now I have just one server, so I sometimes update it manually by running:
sudo puppet apply --hiera_config=hiera.yaml --modulepath=./modules manifests/site.pp
At this point I need to use some external modules, so, for example, I added a Puppetfile with the following lines:
forge "http://forge.puppetlabs.com"
mod 'puppetlabs-mysql', '3.10.0'
...and of course it didn't work.
I tried to find a way to configure this in the command settings for 'apply' (the Configuration Reference), but without success.
Is it possible to have standalone Puppet resolve the Puppetfile automatically, or can modules only be installed with 'puppet module install'?
Puppetfiles are not interpreted or read by the puppet server or client code. They're there to help other tools effectively deploy the proper puppet modules.
In your case, in order to take advantage of the Puppetfile you've written, you would need to install and configure r10k. The Puppet Enterprise documentation covers the basics, and the r10k GitHub page is another great resource.
Once installed and configured, r10k will read your Puppetfile and download and install the defined entries. In your case, it would install version 3.10.0 of puppetlabs-mysql into your modules directory, after which you can run Puppet and take advantage of the newly installed modules.
In summary, Puppetfiles are not used by the client, they're used by code deployment software (r10k) to download and build the proper modules for the puppet server or agent to consume. Your options are to configure r10k to provision the modules as defined in the Puppetfile, or download the modules manually and eliminate the need for the Puppetfile.
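As a minimal sketch (assuming r10k is installed as a Ruby gem and run from the directory containing your Puppetfile; /path/to/puppet is a placeholder for your project directory):
gem install r10k
cd /path/to/puppet                            # the directory containing the Puppetfile
r10k puppetfile install --moduledir ./modules # resolves the Puppetfile entries into ./modules
sudo puppet apply --hiera_config=hiera.yaml --modulepath=./modules manifests/site.pp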
I have the following problem:
I created a Django app (app1) and then installed it in another one (app2). Now I'm trying to internationalize the site, but I want to be able to use the installed app's translations, and I cannot even compile them.
Some useful information:
APP 1
.
├── MANIFEST.in
├── app1
│ ├── admin.py
│ ├── apps.py
│ ├── forms.py
│ ├── __init__.py
│ ├── locale/
│ │ ├── en-us
│ │ │ └── LC_MESSAGES
│ │ │ └── django.po
│ │ ├── es
│ │ │ └── LC_MESSAGES
│ │ │ └── django.po
│ │ └── pr
│ │ └── LC_MESSAGES
│ │ └── django.po
│ ├── migrations/
│ ├── models.py
│ ├── settings.py
│ ├── static/
│ ├── templates
│ ├── tests.py
│ ├── urls.py
│ ├── utils.py
│ └── views.py
└── setup.py
APP 2 (the one that has APP 1 installed)
├── app2/
│ ├── locale/
│ │ ├── en-us/
│ │ │ └── LC_MESSAGES
│ │ │ ├── django.mo
│ │ │ └── django.po
│ │ ├── es/
│ │ │ └── LC_MESSAGES
│ │ │ ├── django.mo
│ │ │ └── django.po
│ │ └── pr/
│ │ └── LC_MESSAGES
│ │ ├── django.mo
│ │ └── django.po
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'app1.apps.App1SiteConfig',
    'app2.apps.App2SiteConfig',
]

LANGUAGE_CODE = 'es'

LANGUAGES = (
    ('en-us', _('English')),
    ('es', _('Spanish')),
    ('pt', _('Portuguese'))
)

LOCALE_PATHS = (
    os.path.join(BASE_DIR, "app2", "locale"),
)
Basically, the desired TODOs are:
compile the .mo files for App1
use the App1 translations (it has its own templates and models, so ideally they would be used there)
What I don't want:
to compile the App1 .mo files from App2's django-admin and then translate them there.
Thanks in advance!
Well... I finally managed to solve this on my own.
With the virtualenv activated, I moved to the App1 directory and executed django-admin compilemessages. The .mo files were created in the right path (app1/locale/<lang>/LC_MESSAGES/django.mo) and they can be used from the site.
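For reference, a minimal sketch of those steps (the virtualenv and project paths are placeholders, not from the question):
source /path/to/virtualenv/bin/activate   # activate the project's virtualenv
cd /path/to/APP1/app1                     # the app directory containing locale/
django-admin compilemessages              # writes locale/<lang>/LC_MESSAGES/django.mo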
I have made a Terraform script that can successfully spin up 2 DigitalOcean droplets as nodes and install a Kubernetes master on one and a worker on the other.
For this, it uses a bash shell environment variable that is defined as:
export DO_ACCESS_TOKEN="..."
export TF_VAR_DO_ACCESS_TOKEN=$DO_ACCESS_TOKEN
It can then be used in the script:
provider "digitalocean" {
version = "~> 1.0"
token = "${var.DO_ACCESS_TOKEN}"
}
Now, having all these files in one directory is getting a bit messy, so I'm trying to implement this as modules.
I thus have a provider module offering access to my DigitalOcean account, a droplet module spinning up a droplet with a given name, a Kubernetes master module and a Kubernetes worker module.
I can run the terraform init command.
But when running the terraform plan command, it asks me for the provider token (which it rightfully did not do before I implemented modules):
$ terraform plan
provider.digitalocean.token
  The token key for API operations.

  Enter a value:
It seems that it cannot find the token defined in the bash shell environment.
I have the following modules:
.
├── digitalocean
│ ├── droplet
│ │ ├── create-ssh-key-certificate.sh
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ └── vars.tf
│ └── provider
│ ├── main.tf
│ └── vars.tf
└── kubernetes
├── master
│ ├── configure-cluster.sh
│ ├── configure-user.sh
│ ├── create-namespace.sh
│ ├── create-role-binding-deployment-manager.yml
│ ├── create-role-deployment-manager.yml
│ ├── kubernetes-bootstrap.sh
│ ├── main.tf
│ ├── outputs.tf
│ └── vars.tf
└── worker
├── kubernetes-bootstrap.sh
├── main.tf
├── outputs.tf
└── vars.tf
In my project directory, I have a vars.tf file:
$ cat vars.tf
variable "DO_ACCESS_TOKEN" {}
variable "SSH_PUBLIC_KEY" {}
variable "SSH_PRIVATE_KEY" {}
variable "SSH_FINGERPRINT" {}
and I have a provider.tf file:
$ cat provider.tf
module "digitalocean" {
source = "/home/stephane/dev/terraform/modules/digitalocean/provider"
DO_ACCESS_TOKEN = "${var.DO_ACCESS_TOKEN}"
}
And it calls the digitalocean provider module defined as:
$ cat digitalocean/provider/vars.tf
variable "DO_ACCESS_TOKEN" {}
$ cat digitalocean/provider/main.tf
provider "digitalocean" {
version = "~> 1.0"
token = "${var.DO_ACCESS_TOKEN}"
}
UPDATE: The provided solution led me to organize my project like:
.
├── env
│ ├── dev
│ │ ├── backend.tf -> /home/stephane/dev/terraform/utils/backend.tf
│ │ ├── digital-ocean.tf -> /home/stephane/dev/terraform/providers/digital-ocean.tf
│ │ ├── kubernetes-master.tf -> /home/stephane/dev/terraform/stacks/kubernetes-master.tf
│ │ ├── kubernetes-worker-1.tf -> /home/stephane/dev/terraform/stacks/kubernetes-worker-1.tf
│ │ ├── outputs.tf -> /home/stephane/dev/terraform/stacks/outputs.tf
│ │ ├── terraform.tfplan
│ │ ├── terraform.tfstate
│ │ ├── terraform.tfstate.backup
│ │ ├── terraform.tfvars
│ │ └── vars.tf -> /home/stephane/dev/terraform/utils/vars.tf
│ ├── production
│ └── staging
└── README.md
With a custom library of providers, stacks and modules, layered like:
.
├── modules
│ ├── digitalocean
│ │ └── droplet
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ ├── scripts
│ │ │ └── create-ssh-key-and-csr.sh
│ │ └── vars.tf
│ └── kubernetes
│ ├── master
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ ├── scripts
│ │ │ ├── configure-cluster.sh
│ │ │ ├── configure-user.sh
│ │ │ ├── create-namespace.sh
│ │ │ ├── create-role-binding-deployment-manager.yml
│ │ │ ├── create-role-deployment-manager.yml
│ │ │ ├── kubernetes-bootstrap.sh
│ │ │ └── sign-ssh-csr.sh
│ │ └── vars.tf
│ └── worker
│ ├── main.tf
│ ├── outputs.tf
│ ├── scripts
│ │ └── kubernetes-bootstrap.sh -> /home/stephane/dev/terraform/modules/kubernetes/master/scripts/kubernetes-bootstrap.sh
│ └── vars.tf
├── providers
│ └── digital-ocean.tf
├── stacks
│ ├── kubernetes-master.tf
│ ├── kubernetes-worker-1.tf
│ └── outputs.tf
└── utils
├── backend.tf
└── vars.tf
The simplest option you have here is to not define the provider at all and just use the DIGITALOCEAN_TOKEN environment variable, as mentioned in the Digital Ocean provider docs.
This will always use the latest version of the Digital Ocean provider but otherwise will be functionally the same as what you're currently doing.
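As a sketch, reusing the token you already export in your shell (variable names taken from the question):
export DIGITALOCEAN_TOKEN="$DO_ACCESS_TOKEN"   # read directly by the Digital Ocean provider
terraform plan                                 # no provider block or token variable needed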
However, if you do want to define the provider block (so that you can pin the provider version, define a partial state configuration, or set the required Terraform version), then you need to make sure the provider-defining files are in the directory you are applying from or in a sourced module. If you're using partial state configuration, they must be in the directory itself, not in a module, because state configuration happens before module fetching.
I normally achieve this by simply symlinking my provider file everywhere that I want to apply my Terraform code (so everywhere that isn't just a module).
As an example you might have a directory structure that looks something like this:
.
├── modules
│ └── kubernetes
│ ├── master
│ │ ├── main.tf
│ │ ├── output.tf
│ │ └── variables.tf
│ └── worker
│ ├── main.tf
│ ├── output.tf
│ └── variables.tf
├── production
│ ├── digital-ocean.tf -> ../providers/digital-ocean.tf
│ ├── kubernetes-master.tf -> ../stacks/kubernetes-master.tf
│ ├── kubernetes-worker.tf -> ../stacks/kubernetes-worker.tf
│ └── terraform.tfvars
├── providers
│ └── digital-ocean.tf
├── stacks
│ ├── kubernetes-master.tf
│ └── kubernetes-worker.tf
└── staging
├── digital-ocean.tf -> ../providers/digital-ocean.tf
├── kubernetes-master.tf -> ../stacks/kubernetes-master.tf
├── kubernetes-worker.tf -> ../stacks/kubernetes-worker.tf
└── terraform.tfvars
This layout has two "locations" where you would perform Terraform actions (e.g. plan/apply): staging and production (given as an example of keeping things as similar as possible, with slight variations between environments). These directories contain only symlinked files, other than the terraform.tfvars file, which allows you to vary only a few constrained things, keeping your staging and production environments the same.
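For example, the staging directory above could be wired up like this (a sketch, run once when creating the environment):
cd staging
ln -s ../providers/digital-ocean.tf .     # shared provider configuration
ln -s ../stacks/kubernetes-master.tf .    # shared stack definitions
ln -s ../stacks/kubernetes-worker.tf .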
The symlinked provider file would contain any provider-specific configuration (for AWS this would normally include the region things should be created in; for Digital Ocean it is probably just pinning the version of the provider that should be used). It could also contain a partial Terraform state configuration, to minimise what you need to pass when running terraform init, or even just set the required Terraform version. An example might look something like this:
provider "digitalocean" {
version = "~> 1.0"
}
terraform {
required_version = "=0.11.10"
backend "s3" {
region = "eu-west-1"
encrypt = true
kms_key_id = "alias/terraform-state"
dynamodb_table = "terraform-locks"
}
}
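With a partial backend configuration like the one above, you supply the remaining settings at init time, for example (the bucket and key names are placeholders):
terraform init \
  -backend-config="bucket=my-terraform-state" \
  -backend-config="key=staging/terraform.tfstate"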
I'm beginning with React, Node.js and Express.js. I have seen many tutorials, but I'm not sure of the correct way to start a project.
I have seen two ways: the first being express <project_name> and the second being npm init.
Which is correct? And if there isn't a correct way, why would you initialize them differently, when npm init includes Express eventually (in the tutorials)?
Thanks
npm init is a good way to start; as you know, it creates a package.json file in your project directory where you can store your project dependencies.
After this, run the following commands:
npm install --save-dev webpack
npm install --save-dev babel
npm install --save-dev babel-loader
npm install babel-core
npm install babel-preset-env
npm install babel-preset-react
or, as a single-line command:
npm install --save-dev webpack babel babel-loader babel-core babel-preset-env babel-preset-react
The first command installs webpack (you will still need to create a webpack.config.js file yourself to configure it). The remaining commands set up Babel for your project, with babel-loader letting webpack run your code through Babel.
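As a minimal sketch of the webpack.config.js you would write at this point (the entry and output paths are assumptions; the presets match the Babel packages installed above):
// webpack.config.js
const path = require('path');

module.exports = {
  entry: './src/index.js',                    // application entry point
  output: {
    path: path.resolve(__dirname, 'build'),   // bundled output directory
    filename: 'bundle.js'
  },
  module: {
    rules: [
      {
        test: /\.js$/,                        // run .js files through Babel
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: { presets: ['env', 'react'] }
        }
      }
    ]
  }
};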
Now it's time to create the project structure, which looks like:
projectFolder/
├── package.json
├── public
│ ├── favicon.ico
│ └── index.html
├── README.md
└── src
├── App.css
├── App.js
├── App.test.js
├── index.css
├── index.js
└── logo.png
This is the very basic project structure; it doesn't include any server-side structure.
A full structure looks like:
react/
├── CHANGELOG.md
├── CONTRIBUTING.md
├── docs
│ ├── data-fetching.md
│ ├── getting-started.md
│ ├── how-to-configure-text-editors.md
│ ├── react-style-guide.md
│ ├── README.md
│ └── recipes/
├── LICENSE.txt
├── node_modules/
├── package.json
├── README.md
├── src
│ ├── actions
│ ├── client.js
│ ├── components
│ │ ├── App
│ │ │ ├── App.js
│ │ │ ├── App.scss
│ │ │ ├── package.json
│ │ │ └── __tests__
│ │ │ └── App-test.js
│ │ ├── ContentPage
│ │ │ ├── ContentPage.js
│ │ │ ├── ContentPage.scss
│ │ │ └── package.json
│ │ ├── ErrorPage
│ │ │ ├── ErrorPage.js
│ │ │ ├── ErrorPage.scss
│ │ │ └── package.json
│ │ ├── Feedback
│ │ │ ├── Feedback.js
│ │ │ ├── Feedback.scss
│ │ │ └── package.json
│ │ ├── Footer
│ │ │ ├── Footer.js
│ │ │ ├── Footer.scss
│ │ │ └── package.json
│ │ ├── Header
│ │ │ ├── Header.js
│ │ │ ├── Header.scss
│ │ │ ├── logo-small#2x.png
│ │ │ ├── logo-small.png
│ │ │ └── package.json
│ │ ├── Link
│ │ │ ├── Link.js
│ │ │ └── package.json
│ │ ├── Navigation
│ │ │ ├── Navigation.js
│ │ │ ├── Navigation.scss
│ │ │ └── package.json
│ │ ├── NotFoundPage
│ │ │ ├── NotFoundPage.js
│ │ │ ├── NotFoundPage.scss
│ │ │ └── package.json
│ │ ├── TextBox
│ │ │ ├── package.json
│ │ │ ├── TextBox.js
│ │ │ └── TextBox.scss
│ │ ├── variables.scss
│ │ └── withViewport.js
│ ├── config.js
│ ├── constants
│ │ └── ActionTypes.js
│ ├── content
│ │ ├── about.jade
│ │ ├── index.jade
│ │ └── privacy.jade
│ ├── core
│ │ ├── db.js
│ │ ├── DOMUtils.js
│ │ ├── fetch
│ │ │ ├── fetch.client.js
│ │ │ ├── fetch.server.js
│ │ │ └── package.json
│ │ ├── Location.js
│ │ └── passport.js
│ ├── data
│ │ ├── queries
│ │ │ ├── content.js
│ │ │ ├── me.js
│ │ │ └── news.js
│ │ ├── schema.js
│ │ └── types
│ │ ├── ContentType.js
│ │ ├── NewsItemType.js
│ │ └── UserType.js
│ ├── public
│ │ ├── apple-touch-icon.png
│ │ ├── browserconfig.xml
│ │ ├── crossdomain.xml
│ │ ├── favicon.ico
│ │ ├── humans.txt
│ │ ├── robots.txt
│ │ ├── tile.png
│ │ └── tile-wide.png
│ ├── routes
│ │ ├── contact
│ │ │ ├── Contact.js
│ │ │ ├── Contact.scss
│ │ │ └── index.js
│ │ ├── home
│ │ │ ├── Home.js
│ │ │ ├── Home.scss
│ │ │ └── index.js
│ │ ├── login
│ │ │ ├── index.js
│ │ │ ├── Login.js
│ │ │ └── Login.scss
│ │ └── register
│ │ ├── index.js
│ │ ├── Register.js
│ │ └── Register.scss
│ ├── routes.js
│ ├── server.js
│ ├── stores
│ └── views
│ ├── error.jade
│ └── index.jade
├── test
│ └── stubs
│ └── SCSSStub.js
└── tools
├── build.js
├── bundle.js
├── clean.js
├── copy.js
├── deploy.js
├── lib
│ ├── fetch.js
│ └── fs.js
├── README.md
├── run.js
├── runServer.js
├── start.js
└── webpack.config.js
Created using yeoman generator-react-fullstack
Alternatively, you can do all of this by simply using a Yeoman React generator, but be careful with generators: they can sometimes complicate your deployment process.
I'm following Swaroop's Byte of Vim and have reached the chapter on personal information management, where it says to install the Viki plugin. The instructions are as follows (I have no real idea of what is going on, but):
Download multvals.vim [2] and store as $vimfiles/plugin/multvals.vim
Download genutils.zip [3] and unzip this file to $vimfiles
Download Viki.zip [4] and unzip this file to $vimfiles (make sure all the folders and files under the 'Viki' folder name are stored directly in the $vimfiles folder)
After this I open a new text file in Vim and run the command
:set filetype=viki
but I get a whole slew of errors. I've tried clearing out my ~/.vim folder and reinstalling everything, along with tlib this time, as specified on the Viki vimscript page, and extracted the version 4.0 viki.vba instead of using the version 4.08 zip file. But I'm still getting errors about non-existent functions:
Error detected while processing home/user/.vim/ftplugin/viki.vim:
line 100
E117: Unknown function: tlib#balloon#Register
I don't really know what's going on, and am quite a new Vim user, so please be patient. Right now my ~/.vim directory tree looks like this:
.vim
├── autoload
│ ├── genutils.vim
│ ├── tlib
│ │ ├── eval.vim
│ │ ├── list.vim
│ │ ├── notify.vim
│ │ ├── persistent.vim
│ │ ├── progressbar.vim
│ │ ├── TestChild.vim
│ │ └── vim.vim
│ ├── tlib.vim
│ ├── viki
│ │ ├── enc_latin1.vim
│ │ └── enc_utf-8.vim
│ ├── viki_anyword.vim
│ ├── viki_latex.vim
│ ├── viki_viki.vim
│ └── viki.vim
├── colors
│ └── molokai.vim
├── compiler
│ └── deplate.vim
├── doc
│ ├── tlib.txt
│ └── viki.txt
├── ftplugin
│ ├── bib
│ │ └── viki.vim
│ └── viki.vim
├── plugin
│ ├── 02tlib.vim
│ ├── genutils.vim
│ ├── multvals.vim
│ └── viki.vim
└── test
└── tlib.vim
Any help is much appreciated.
The info is outdated. You need to install current versions of tlib and viki from:
https://github.com/tomtom/viki_vim
https://github.com/tomtom/tlib_vim
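For example, with Vim 8's native package support you could install both straight from GitHub (the pack/plugins/start path is just a conventional choice):
git clone https://github.com/tomtom/tlib_vim ~/.vim/pack/plugins/start/tlib_vim
git clone https://github.com/tomtom/viki_vim ~/.vim/pack/plugins/start/viki_vim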
I have created a directory structure with an executable file. Following is the output of tree:
program-5
├── debian
│ ├── DEBIAN
│ │ ├── changelog
│ │ ├── compat
│ │ ├── control
│ │ ├── copyright
│ │ ├── docs
│ │ ├── emacsen-install.ex
│ │ ├── emacsen-remove.ex
│ │ ├── emacsen-startup.ex
│ │ ├── init.d.ex
│ │ ├── manpage.1.ex
│ │ ├── manpage.sgml.ex
│ │ ├── manpage.xml.ex
│ │ ├── menu.ex
│ │ ├── postinst.ex
│ │ ├── postrm.ex
│ │ ├── preinst.ex
│ │ ├── prerm.ex
│ │ ├── program.cron.d.ex
│ │ ├── program.debhelper.log
│ │ ├── program.default.ex
│ │ ├── program.doc-base.EX
│ │ ├── README.Debian
│ │ ├── README.source
│ │ ├── rules
│ │ └── watch.ex
│ └── usr
│ └── local
│ └── include
│ └── myprog
│ ├── file.txt
└── program *(executable)*
This, however, is not working for "file.txt". I want this file to go into /usr/local/include/myprog/, but that is not happening. It gives me the error:
(Reading database ...
(Reading database ... 5%
...
(Reading database ... 100%
(Reading database ... 204105 files and directories currently installed.)
Unpacking program-v5 (from .../program-5_1.4.2_i386.deb) ...
dpkg: error processing /tmp/program-5/debian/program-5_1.4.2_i386.deb (--install):
trying to overwrite '/usr/local/include/myprog/file.txt', which is also in package program2 20120329-1
dpkg-deb (subprocess): data: internal gzip write error: Broken pipe
dpkg-deb: error: subprocess <decompress> returned error exit status 2
dpkg-deb (subprocess): failed in write on buffer copy for failed to write to pipe in copy: Broken pipe
Errors were encountered while processing:
/tmp/program-5/debian/program-5_1.4.2_i386.deb
Can anyone offer any advice?
The error is pretty clear: you are trying to install program-v5, and it attempts to overwrite a file already present and owned by the package program2.
So you need to either
manually uninstall program2 before installing program-v5, or
add the required Conflicts:, Provides:, Replaces: fields in debian/control (see the docs and the sketch below).
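As a sketch, the relevant fields in debian/control might look like this (package names taken from the question; only add the fields your situation actually requires):
Package: program-5
Conflicts: program2
Replaces: program2
Provides: program2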
Lastly, for packages, /usr is a more natural choice than /usr/local.
From the error message:
trying to overwrite '/usr/local/include/myprog/file.txt', which is
also in package program2
It looks like you have a package program2 already installed on your system that has already installed the file /usr/local/include/myprog/file.txt.
You should first uninstall that package: dpkg -r program2