I want to install Python 3.x using pyenv with Ansible:
- name: install pyenv
  git: >
    repo=https://github.com/pyenv/pyenv.git
    dest=/home/www/.pyenv
    accept_hostkey=yes
  become: yes
  become_user: www

- name: enable pyenv
  shell: |
    echo 'export PYENV_ROOT="/home/www/.pyenv"' >> /home/www/.bashrc
    echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> /home/www/.bashrc
    echo 'eval "$(pyenv init -)"' >> /home/www/.bashrc

- name: install python
  shell: pyenv install 3.4.3
How can I install Python 3.x with Ansible?
Here is what worked well for me to get any version of Python installed with Ansible and make it an alternative installation. I first ran configure and make by hand, then compressed the result, since the build takes a while, and re-distributed the file via a mirror so that make altinstall can run on its own. Here is the recipe:
---
# Check the alt python3 version
- name: check alt python version
  shell: /usr/local/bin/python3.6 --version
  register: python3_version
  ignore_errors: yes  # If not installed
  tags:
    - python-alt

# Stuff I did manually to compile everything first by hand.
# Python3 alt-install - steps to create the binary:
#   wget https://www.python.org/ftp/python/3.6.4/Python-3.6.4.tgz
#   tar xf Python-3.6.4.tgz
#   mv Python-3.6.4 Python-3.6.4-binary && cd Python-3.6.4-binary
#   ./configure --prefix=/usr/local --enable-optimizations
#   cd .. && tar -zcvf Python-3.6.4-binary.tar.gz Python-3.6.4-binary  (upload to mirror servers)
#   make && sudo make altinstall UNINST=1
- name: download and unpack alternative python3
  unarchive:
    src: http://www.yourmirror.com/centos/python/Python-3.6.4-binary.tar.gz
    dest: /tmp
    remote_src: yes
    keep_newer: yes
  when: python3_version['stderr'] != 'Python 3.6.4'
  tags:
    - python-alt
# It's possible to install (instead of altinstall) python3 here
- name: make install alt python3
  make:
    chdir: /tmp/Python-3.6.4-binary
    target: altinstall
    params:
      UNINST: 1  # Replace an existing install
  when: python3_version['stderr'] != 'Python 3.6.4'
  become: yes
  tags:
    - python-alt

- name: download get-pip.py
  get_url:
    url: https://bootstrap.pypa.io/get-pip.py
    dest: /tmp/get-pip.py
    mode: 0664
  tags:
    - python-alt

- name: install pip for python3
  shell: /usr/local/bin/python3.6 /tmp/get-pip.py
  become: yes
  tags:
    - python-alt

# We need virtualenv installed under py3 for the virtualenv command to work
- name: install virtualenv under python3
  pip:
    name: virtualenv
    executable: /usr/local/bin/pip3.6
  become: True
  tags:
    - python-alt
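Once virtualenv is available under the alternate interpreter, later tasks can target it directly with Ansible's pip module. A minimal sketch, where the venv path and package name are illustrative:

- name: create a py3.6 virtualenv and install a package into it
  pip:
    name: requests                                 # illustrative package
    virtualenv: /opt/venvs/myapp                   # illustrative venv path
    virtualenv_command: /usr/local/bin/virtualenv
    virtualenv_python: /usr/local/bin/python3.6
  become: yes
  tags:
    - python-alt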
If you want to compile everything on your server, you could run the following before the altinstall step, downloading the source package instead of the pre-compiled tar. I don't recommend doing it this way, because the build eats resources and you don't want to be doing that in prod. Using Python 2.7.14 as an example:
---
- debug:
    var: python2_version
  tags:
    - python_alt

# Configure has to run before the build
- name: configure target command
  command: ./configure --prefix=/usr/local --enable-optimizations
  args:
    chdir: /tmp/Python-2.7.14-binary
  when: python2_version['stderr'] != alt_python_version
  tags:
    - python_alt

# Build the default target
- make:
    chdir: /tmp/Python-2.7.14-binary
  when: python2_version['stderr'] != 'Python 2.7.14'
  tags:
    - python_alt
Rather than using the shell module to set environment variables on the remote host, Ansible has the environment keyword, which can set environment variables per task or even per play.
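For example, here is a minimal sketch of both scopes (the paths are illustrative):

- hosts: all
  environment:                    # play level: applies to every task in the play
    PYENV_ROOT: /home/www/.pyenv
  tasks:
    - name: check pyenv version
      command: /home/www/.pyenv/bin/pyenv --version
      environment:                # task level: applies to this task only
        PATH: "/home/www/.pyenv/bin:/usr/local/bin:/usr/bin:/bin"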
Assuming the www user already exists, I managed to get this working with some more specific path setting:
- name: enable pyenv and install python
  shell: /home/www/.pyenv/bin/pyenv init - && /home/www/.pyenv/bin/pyenv install 3.4.3
  args:
    chdir: /home/www
  environment:
    PYENV_ROOT: /home/www/.pyenv
    # Ansible does not expand $PATH here, so build the PATH from gathered facts
    PATH: "/home/www/.pyenv/bin:{{ ansible_env.PATH }}"
  become: yes
  become_user: www
You will need to run the playbook with:
ansible-playbook --ask-become-pass <playbook-name>
and supply the password for the www user on request.
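To keep the install idempotent on repeat runs, you can also guard the task with creates. A small sketch, assuming pyenv's default versions directory:

- name: install python 3.4.3 via pyenv (skipped if already present)
  shell: /home/www/.pyenv/bin/pyenv install 3.4.3
  args:
    creates: /home/www/.pyenv/versions/3.4.3  # pyenv's default install location
  become: yes
  become_user: www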
If that doesn't work, you might have to post the whole playbook here for us to look at :)
I am trying to write a gitlab CI file as follows:
image: ubuntu:latest

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  AWS_DEFAULT_REGION: eu-central-1
  S3_BUCKET: $BUCKET_TRIAL

stages:
  - deploy

.before_script_template: &before_script_definition
  stage: deploy
  before_script:
    - apt-get -y update
    - apt-get -y install python3-pip python3.7 zip
    - python3.7 -m pip install --upgrade pip
    - python3.7 -V
    - pip3.7 install virtualenv

.after_script_template: &after_script_definition
  after_script:
    # Upload package to S3
    # Install AWS CLI
    - pip install awscli --upgrade  # --user
    - export PATH=$PATH:~/.local/bin  # Add to PATH
    # Configure AWS connection
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - aws configure set default.region $AWS_DEFAULT_REGION
    - aws sts get-caller-identity --output text --query 'Account'  # current account
    - aws s3 cp ~/forlambda/archive.zip $BUCKET_TRIAL/${LAMBDA_NAME}-deployment.zip

monatliche_strom:
  variables:
    LAMBDA_NAME: monthly_strom
  before_script: *before_script_definition
  script:
    - mv some.py ~
    - mv requirements.txt ~
    # Move submodules
    - mv submodule1/submodule1 ~
    - mv submodule1/submodule2/submodule2 ~
    # Setup virtual environment
    - mkdir ~/forlambda
    - cd ~/forlambda
    - virtualenv -p python3 venv
    - source venv/bin/activate
    - pip3.7 install -r ~/requirements.txt -t ~/forlambda/venv/lib/python3.7/site-packages/
    # Package environment and dependencies
    - cd ~/forlambda/venv/lib/python3.7/site-packages/
    - zip -r9 ~/forlambda/archive.zip .
    - cd ~
    - zip -g ~/forlambda/archive.zip some.py
    - zip -r ~/forlambda/archive.zip submodule1/*
    - zip -r ~/forlambda/archive.zip submodule2/*
  after_script: *after_script_definition
When I run it in the gitlab CI lint, it gives me the following error:
jobs:monatliche_strom:before_script config should be an array
containing strings and arrays of strings
jobs:monatliche_strom:after_script config should be an array
containing strings and arrays of strings
I am fairly new to GitLab CI, so can someone please tell me what mistake I am making?
Try this:
image: ubuntu:latest

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  AWS_DEFAULT_REGION: eu-central-1
  S3_BUCKET: $BUCKET_TRIAL

stages:
  - deploy

.before_script_template: &before_script_definition
  stage: deploy
  before_script:
    - apt-get -y update
    - apt-get -y install python3-pip python3.7 zip
    - python3.7 -m pip install --upgrade pip
    - python3.7 -V
    - pip3.7 install virtualenv

.after_script_template: &after_script_definition
  after_script:
    # Upload package to S3
    # Install AWS CLI
    - pip install awscli --upgrade  # --user
    - export PATH=$PATH:~/.local/bin  # Add to PATH
    # Configure AWS connection
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - aws configure set default.region $AWS_DEFAULT_REGION
    - aws sts get-caller-identity --output text --query 'Account'  # current account
    - aws s3 cp ~/forlambda/archive.zip $BUCKET_TRIAL/${LAMBDA_NAME}-deployment.zip

monatliche_strom:
  variables:
    LAMBDA_NAME: monthly_strom
  <<: *before_script_definition
  script:
    - mv some.py ~
    - mv requirements.txt ~
    # Move submodules
    - mv submodule1/submodule1 ~
    - mv submodule1/submodule2/submodule2 ~
    # Setup virtual environment
    - mkdir ~/forlambda
    - cd ~/forlambda
    - virtualenv -p python3 venv
    - source venv/bin/activate
    - pip3.7 install -r ~/requirements.txt -t ~/forlambda/venv/lib/python3.7/site-packages/
    # Package environment and dependencies
    - cd ~/forlambda/venv/lib/python3.7/site-packages/
    - zip -r9 ~/forlambda/archive.zip .
    - cd ~
    - zip -g ~/forlambda/archive.zip some.py
    - zip -r ~/forlambda/archive.zip submodule1/*
    - zip -r ~/forlambda/archive.zip submodule2/*
  <<: *after_script_definition
Since you already declared before_script and after_script inside the anchors, you have to use << to merge the given hash into the current job.
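To illustrate: a plain alias (*anchor) replaces the value of a single key, whereas <<: *anchor merges the anchor's keys into the current mapping. A small sketch with illustrative names:

.defaults: &defaults
  stage: deploy
  before_script:
    - echo "setup"

my_job:
  <<: *defaults        # my_job now has stage: deploy and the before_script above
  script:
    - echo "run"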
I'm looking for a way to install a given version of Node via Ansible and nvm. The installation of nvm works as expected: if I connect as the root user I can run nvm install 8.11.3, but the same command doesn't work through Ansible, and I don't understand why.
---
- name: Install nvm
  git: repo=https://github.com/creationix/nvm.git dest=~/.nvm version=v0.33.11
  tags: nvm

- name: Source nvm in ~/.{{ item }}
  lineinfile: >
    dest=~/.{{ item }}
    line="source ~/.nvm/nvm.sh"
    create=yes
  tags: nvm
  with_items:
    - bashrc
    - profile

- name: Install node and set version
  become: yes
  become_user: root
  shell: nvm install 8.11.3
...
Error log:
TASK [node : Install node and set version] *************************************************************************************
fatal: [51.15.128.164]: FAILED! => {"changed": true, "cmd": "nvm install 8.11.3", "delta": "0:00:00.005883", "end": "2018-12-03 15:05:10.394433", "msg": "non-zero return code", "rc": 127, "start": "2018-12-03 15:05:10.388550", "stderr": "/bin/sh: 1: nvm: not found", "stderr_lines": ["/bin/sh: 1: nvm: not found"], "stdout": "", "stdout_lines": []}
to retry, use: --limit .../.../ansible/stater-debian/playbook.retry
It's OK now; here's the configuration that works:
- name: Install node and set version
  become: yes
  become_user: root
  shell: "source /root/.nvm/nvm.sh && nvm install 8.11.3"
  args:
    executable: /bin/bash
I think the clue you need is in the output:
"/bin/sh: 1: nvm: not found"
To run a command without giving its full path (i.e. nvm rather than /the/dir/nvm/is/installed/in/nvm), the directory that contains the command must be in the $PATH environment variable of the shell that runs it.
In this case that directory is evidently not on the PATH of the shell Ansible spawns, unlike the shell your interactive commands run in. Change:
- name: Install node and set version
  become: yes
  become_user: root
  shell: nvm install 8.11.3
to
- name: Install node and set version
  become: yes
  become_user: root
  shell: /full/path/to/nvm install 8.11.3
If you don't know what to put in place of '/full/path/to', try either:
which nvm
or
find / -name nvm
I will just post under here, because there are hundreds of these posts.
- name: Install node
  become: true
  become_user: root
  shell: "source /root/.nvm/nvm.sh && nvm install {{ personal_node_version }} && nvm alias default {{ personal_node_version }}"
  args:
    executable: /bin/bash
worked for me.
This worked for me on Ubuntu 20.04 using nvm version 0.39.1:
- name: Install NVM
  shell: >
    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
  args:
    creates: "/root/.nvm/nvm.sh"

- name: Install Node Versions
  shell: ". /root/.bashrc && nvm install {{ item }}"
  with_items:
    - 'v10.24.1'
    - 'v16.17.0'
    - '--lts'
    - 'node'
Based on all the posts found on Stack Overflow and tweaked a little for my own needs, this solution worked perfectly for me, both for installing NVM (the easy part) and for creating a loop that lets you install one or many Node versions as needed:
# Test if nvm has already been installed by the desired user
- stat:
    path: /home/yournonrootuser/.nvm
  register: nvm_path

- name: Setup NodeVersionManager and install node version
  become: yes
  # Execute config files such as .profile (Ansible uses non-interactive login shells)
  become_flags: -i
  become_user: yournonrootuser
  block:
    - name: Install nvm
      shell: >
        curl -o- https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
      args:
        executable: /bin/bash
        chdir: "$HOME"
        creates: "$HOME/.nvm/nvm.sh"
    - name: Setup .profile of yournonrootuser
      lineinfile:
        path: ~/.profile
        # This will make sure Node is on the user's PATH
        line: source ~/.nvm/nvm.sh
        create: yes
      become_flags: -i
  when: nvm_path.stat.exists == false

# If we got here, we already know node version manager is installed
- name: installing node versions using loop
  command: sudo -iu yournonrootuser nvm install {{ item }}
  args:
    chdir: "$HOME"
    creates: "$HOME/.nvm/versions/node/v{{ item }}"
  loop:
    - 14.18.3
I'm installing ruby from source and using template to export path. My code looks like this:
- name: clone rbenv
  git: repo=git://github.com/sstephenson/rbenv.git dest=/usr/local/rbenv
  become: yes

- template: src=templates/rbenv.sh.j2 dest=/etc/profile.d/rbenv.sh
  become: true

- name: clone ruby-build repo
  git: repo=git://github.com/sstephenson/ruby-build.git dest=~/ruby-build

- name: Install ruby-build
  shell: ./ruby-build/install.sh
  become: yes

- name: install jruby
  shell: . /etc/profile.d/rbenv.sh && rbenv install jruby-9.0.5.0
  become: yes
I want to use the rbenv command. This works, but this way I have to source the profile with every command. Is there any way to source the profile once, in an Ansible config file or somewhere similar, and use it across the whole project without sourcing it again?
Either add the path in .bashrc, or:
- name: install jruby
  shell: . /etc/profile.d/rbenv.sh && rbenv install jruby-9.0.5.0
  become: yes
  args:
    executable: /bin/bash -l
/bin/bash -l behaves as a login shell, so it reads the profile files before running the command.
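Alternatively, instead of sourcing the profile in every task, you could set the variables once with Ansible's environment keyword at the play level. A sketch, assuming rbenv lives under /usr/local/rbenv as in the playbook above:

- hosts: all
  environment:
    RBENV_ROOT: /usr/local/rbenv
    # Put rbenv's bin and shims dirs ahead of the usual system paths
    PATH: "/usr/local/rbenv/bin:/usr/local/rbenv/shims:/usr/local/bin:/usr/bin:/bin"
  tasks:
    - name: install jruby
      shell: rbenv install jruby-9.0.5.0
      become: yes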
I am using apt-get to install pure-ftpd on Ubuntu Server 14.04.4:
sudo apt-get install pure-ftpd
sudo pure-uploadscript -B -r /home/john/hello.sh
The hello.sh file exists and is able to run:
#!/bin/sh
echo "hello"
Then I use FileZilla to upload a file. The upload succeeds, but the script is never called. Please help.
official doc
If you install the pure-ftpd server via apt-get, it may not include the feature you want to use. I checked the /var/run folder and some files were missing there. I compiled the code with --with-uploadscript, and it's working now.
I also had to compile from source; fortunately the install is not too heavy. It may be worth uploading the compiled files from your system to your mirror and just downloading them and running make install. On the other hand, this works as well:
- name: install pure-ftpd from source
  block:
    - name: create required pure-ftpd dirs
      become: yes
      file:
        path: /etc/pure-ftpd
        state: directory
        owner: root
        mode: 0755

    - name: install deps for building pureftpd
      apt: pkg={{ item }} state=present
      with_items:
        - libssl-dev
        - libpam0g-dev

    - name: download and unpack pure-ftpd source
      unarchive:
        src: http://download.pureftpd.org/pub/pure-ftpd/releases/pure-ftpd-1.0.49.tar.gz
        dest: /usr/local/src/
        remote_src: yes
        keep_newer: yes
      register: source_unpack

    - name: configuring pure-ftpd source with custom modules
      command: "./configure --prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --libexecdir=/usr/libexec
        --datadir=/usr/share --sysconfdir=/etc --sharedstatedir=/usr/com --localstatedir=/var --libdir=/usr/lib64
        --includedir=/usr/include --infodir=/usr/share/info --mandir=/usr/share/man --with-virtualchroot --with-everything
        --with-uploadscript --with-tls --with-pam"
      args:
        chdir: /usr/local/src/pure-ftpd-1.0.49
      when: source_unpack|changed
      register: pure_ftpd_configure

    - name: make and install pure-ftpd
      become: yes
      shell: make && make install
      args:
        chdir: /usr/local/src/pure-ftpd-1.0.49
      when: pure_ftpd_configure|changed
  when: stat_result.stat.exists == False
  tags:
    - ftp
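Once the custom build is installed, the upload script still has to be registered with pure-uploadscript, mirroring the manual command from the question. A sketch, assuming the script path from above and that pure-ftpd itself was started with upload-script support:

- name: register the upload script with pure-ftpd
  become: yes
  command: pure-uploadscript -B -r /home/john/hello.sh
  tags:
    - ftp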
I have the following configuration as .gitlab-ci.yml, but I found that after the build stage passes successfully (which creates a virtualenv called venv), the test stage seems to get a brand-new environment (there's no venv directory at all). So I wonder whether I should put the setup script in before_script, so that it runs in each phase (build/test/deploy). Is that the right way to do it?
before_script:
  - uname -r

types:
  - build
  - test
  - deploy

job_install:
  type: build
  script:
    - apt-get update
    - apt-get install -y libncurses5-dev
    - apt-get install -y libxml2-dev libxslt1-dev
    - apt-get install -y python-dev libffi-dev libssl-dev
    - apt-get install -y python-virtualenv
    - apt-get install -y python-pip
    - virtualenv --no-site-packages venv
    - source venv/bin/activate
    - pip install -q -r requirements.txt
    - ls -al
  only:
    - master

job_test:
  type: test
  script:
    - ls -al
    - source venv/bin/activate
    - cp crawler/settings.sample.py crawler/settings.py
    - cd crawler
    - py.test -s -v
  only:
    - master
GitLab CI jobs are supposed to be independent, because they can run on different runners; this is not a bug. There are two ways to pass files between stages:
The right way: using artifacts.
The wrong way: using cache with a cache-key "hack"; this still requires the same runner.
So yes, the way GitLab intends it is to have everything your job depends on in before_script.
Artifacts example:
artifacts:
  when: on_success
  expire_in: 1 mos
  paths:
    - some_project_files/
Cache example:
cache:
  key: "$CI_BUILD_REF_NAME"
  untracked: true
  paths:
    - node_modules/
    - src/bower_components/
For a correct running environment I suggest using Docker with an image that already contains your apt-get dependencies, and using artifacts to pass job results between jobs. Note that artifacts are also uploaded to the GitLab web interface, where you can download them, so if they are quite heavy, use a short expire_in time to remove them after all jobs are done.
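Applied to the question above, here is a hedged sketch of handing the build output to the test stage via artifacts (whether a virtualenv survives being moved between runners depends on your setup, which is why a Docker image with the dependencies baked in is the cleaner fix):

job_install:
  type: build
  script:
    - virtualenv --no-site-packages venv
    - source venv/bin/activate
    - pip install -q -r requirements.txt
  artifacts:
    expire_in: 1 hour   # keep it short; the venv is only needed by the next stage
    paths:
      - venv/

job_test:
  type: test
  script:
    - source venv/bin/activate   # venv/ was restored from the build job's artifacts
    - py.test -s -v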