Setting up Cloudberry Backup for Linux for OracleCloud

I'm trying to add an account for OracleCloud, and I'm not sure I'm setting some of the required parameters correctly. My current command looks like this:
cbb addAccount -st OracleCloud -d OracleCloud -un "storage-a229571:" -kv no -c oracle-data-storagea-1 -ep https://a229571.storage.oraclecloud.com/v1/storage-a229571 -reg "US Commercial 2 us2" -ak https://us2.storage.oraclecloud.com/auth/v1.0
I get this response when I run it:
Can't validate account
Code: get openstack token v1. Message: Can't get token
Code: get openstack token v1. Message: Can't get token
Code: Can't get work url. Message: Can't get work url
Has anyone been able to use Cloudberry Backup for Linux against the OracleCloud?

This is due to your -kv flag, which you set to "no". You need 2 or 3, which tells Cloudberry Backup to use Keystone version 2 or 3 (consult your Oracle reps). Check this page
Below is the full list of flags you may want to use:
[root@localhost]# cbb addAccount -d Display_Name -st OracleCloud -h
CloudBerry Backup Command Line Interface started
addAccount -d Display_Name -st OracleCloud <-un UserName> <-ak ApiKey> <-c BackupContainer> <-ep Endpoint> [-useInternalUrl yes/no(default)] [-reg Region] [-bp BackupPrefix] <-kv no> | <-kv 2 <-tn TenantName | -ti TenantID> > | <-kv 3 [-us < yes | no > <-pn project_name> | <-pi project_id>] <-dn DomainName | -di DomainID> >
-d Display_Name : Display name.
-un UserName : User name.
-ak ApiKey : Api key.
-reg Region : Region. Optional.
-c BackupContainer : Backup container.
-ep Endpoint : Auth endpoint.
-bp BackupPrefix : Backup prefix to differentiate between backups from different computers. Optional, by default is computer name.
-kv Keystone Version: Keystone version. Possible values 2, 3, no.
-pn project_name : Project name. Use only with keystone version 3.
-pi project_id : Project id. Use only with keystone version 3.
-tn TenantName : Tenant name. Use only with keystone version 2.
-ti TenantID : Tenant id. Use only with keystone version 2.
-us Use_scope : Use scope. Use only with keystone version 3. Possible values yes, no.
-dn DomainName : Domain name. Use only with keystone version 3.
-di DomainID : Domain id. Use only with keystone version 3.
-useInternalUrl : Use internal url. Optional, by default is no. Possible values yes, no.

OK, I finally had time to work on this again. This is what I figured out:
cbb addAccount -st OracleCloud \
-d <Some Name> \
-un "storage-<identity domain>:<Oracle Cloud User Name>" \
-ak "<Password for -un>" \
-c "<An existing Container>" \
-ep <Authentication Endpoint> \
-kv no
-ep is usually https://<data center>.storage.oraclecloud.com/auth/v1.0
-reg is optional, and I could not get cbb to work when I included it.


How to get Gitlab runner registration token from command line?

I'm trying to deploy a GitLab instance and runners with Terraform. The script creates both GitLab and the runners without any problem, but I don't know how to register the runners automatically after creation.
Is there any way to get the registration token from the command line? If so, I can register them just by calling an external data source from Terraform.
The projects API endpoint response contains the runners_token key. You can use this to automatically fetch the runner tokens for any project.
You can then use that in a few ways. One way would be to have your runner registration script fetch the runner token itself such as with this example:
curl --fail --silent --header "Private-Token: ${GITLAB_API_TOKEN}" "https://$GITLAB_URL/api/v4/projects/${PROJECT}"
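From that response, the token itself can be extracted with jq. A small sketch against a trimmed, made-up sample of the API response (in real use you would pipe the curl call above into jq instead of echo):

```shell
# Trimmed, made-up sample of a projects API response
response='{"id": 30, "name": "example", "runners_token": "tok-abc123"}'

# Pull out the runner token field with jq
runner_token=$(echo "$response" | jq -r '.runners_token')
echo "$runner_token"   # -> tok-abc123
```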
Or you could use the Gitlab Terraform provider's gitlab_project data source to fetch this from whatever is running Terraform and then inject it into the thing that runs the registration script such as a templated file:
data "gitlab_project" "example" {
id = 30
}
locals {
runner_config = {
runner_token = data.gitlab_project.example.runners_token
}
}
output "example" {
value = templatefile("${path.module}/register-runners.sh.tpl", local.runner_config)
}
Yes, you can.
The command has to be run on the server hosting your Gitlab instance. The line below will output the current shared runner token.
sudo gitlab-rails runner -e production "puts Gitlab::CurrentSettings.current_application_settings.runners_registration_token"
As others have mentioned, there is no API endpoint that currently allows this (there has been discussion about this for quite some time here). However, I find this solution satisfactory for my needs.
Credits for this answer go to MxNxPx. This script used to work (for me) two days ago:
GITUSER="root"
GITURL="http://127.0.0.1"
GITROOTPWD="mysupersecretgitlabrootuserpassword"
# 1. curl for the login page to get a session cookie and the sources with the auth tokens
body_header=$(curl -k -c gitlab-cookies.txt -i "${GITURL}/users/sign_in" -sS)
# grep the auth token for the user login form
# not sure whether another token on the page would work, too - there are 3 of them
csrf_token=$(echo $body_header | perl -ne 'print "$1\n" if /new_user.*?authenticity_token"[[:blank:]]value="(.+?)"/' | sed -n 1p)
# 2. send login credentials with curl, using cookies and token from previous request
curl -sS -k -b gitlab-cookies.txt -c gitlab-cookies.txt "${GITURL}/users/sign_in" \
--data "user[login]=${GITUSER}&user[password]=${GITROOTPWD}" \
--data-urlencode "authenticity_token=${csrf_token}" -o /dev/null
# 3. send curl GET request to gitlab runners page to get registration token
body_header=$(curl -sS -k -H 'user-agent: curl' -b gitlab-cookies.txt "${GITURL}/admin/runners" -o gitlab-header.txt)
reg_token=$(cat gitlab-header.txt | perl -ne 'print "$1\n" if /code id="registration_token">(.+?)</' | sed -n 1p)
echo $reg_token
However, as of today it stopped working. I noticed the second body_header variable is empty. Upon inspecting the gitlab-header.txt file, I noticed it contained:
You are being redirected.
Whereas I would expect it to be signed in at that point, with a gitlab-header.txt file that contains the respective runner registration token. I expect I am doing something wrong, however, perhaps there has been an update to the gitlab/gitlab-ce:latest package such that a change to the script is required.
Disclaimer: I am involved in creating that code.
Here is a horrible but working Python boilerplate code that gets the runner token and exports it to a parent repository: https://github.com/a-t-0/get-gitlab-runner-registration-token.
Independent usage
It requires a few manual steps to set up, and then gets the GitLab runner registration token automatically from the CLI. It requires Conda and Python, however, and downloads a browser controller, so it is most likely wiser to look a bit more closely into the curl commands instead.
Integrated in parent [bash] repository
First install the conda environment, then activate it. After that, you can execute the function below automatically from the CLI (if you put that function in a file at path parent_repo/src/get_gitlab_server_runner_token.sh, assuming you have the credentials etc as specified in the Readme), with:
cd parent_repo
source src/get_gitlab_server_runner_token.sh && get_registration_token_with_python
This bash function gets the token:
get_registration_token_with_python() {
# delete the runner registration token file if it exists
if [ -f "$RUNNER_REGISTRATION_TOKEN_FILEPATH" ] ; then
rm "$RUNNER_REGISTRATION_TOKEN_FILEPATH"
fi
git clone https://github.com/a-t-0/get-gitlab-runner-registration-token.git &&
set +e
cd get-gitlab-runner-registration-token && python -m code.project1.src
cd ..
}
And here is a BATS test that verifies the token is retrieved:
#!./test/libs/bats/bin/bats
load 'libs/bats-support/load'
load 'libs/bats-assert/load'
load 'libs/bats-file/load'
source src/get_gitlab_server_runner_token.sh
source src/hardcoded_variables.txt
@test "Checking if the gitlab runner registration token is obtained correctly." {
get_registration_token_with_python
actual_result=$(cat $RUNNER_REGISTRATION_TOKEN_FILEPATH)
EXPECTED_OUTPUT="somecode"
assert_file_exist $RUNNER_REGISTRATION_TOKEN_FILEPATH
assert_equal ${#actual_result} 20
}

Fork project into group for a new contributor (API)

In case there is a better approach that I'm not thinking of, I would like to explain my requirement first (the question starts after the divider line).
Given I have the following GitLab group structure:
- main-group
- sub-group-project-templates
- project-template-1
- sub-group-projects
In main-group/sub-group-project-templates there are multiple projects that function as starter project for a task.
If there is someone who wants to do such a task I want to have a fork of the corresponding project template (e.g. project-template-1) in the group sub-group-projects with the name of the person as project name.
Given the name of the person is John Doe who wants to start the task of project-template-1 there should be a new project main-group/sub-group-projects/john_doe_project-template-1.
John Doe should only be able to see (read / write) the repository john_doe_project-template-1 and NOT the rest of the project (other forks, ...).
My solution would be to fork the template project to the sub group and then add a new contributor.
But I already fail at the first step (forking the project into the sub group with a new name).
My first shot at it was looking at:
POST http://gitlab.com/api/v4/projects/project_id/fork
But I don't know how to define the target directory and set the name of the new fork.
The following isn't working:
POST http://gitlab.com/api/v4/projects/project_id/fork?namespace=main-group%2Fsub-group-projects%2Fjohn_doe_project-template-1
"message": "404 Target Namespace Not Found"
Is something like this even possible or how can I achieve this?
Supposing you have the following configuration, with project project-template-1 in subgroup sub-group-project-templates:
It can be done in multiple API requests, but there is some work in progress on the following features:
(1) forking a project with a namespace in a subgroup doesn't work
(2) choosing a custom name when forking is not implemented
But these 2 issues have workarounds, e.g. just perform an additional request to get the namespace id (1) and perform an edit-project request after forking (2).
The steps are the following:
get namespace id for the subgroup sub-group-projects
fork project main-group/sub-group-project-templates/project-template-1 to namespace with id $namespace_id & get the project id of the created project
rename project with id $project_id from project-template-1 to : johndoe_project-template-1
get user id for user johndoe
add project member for project johndoe_project-template-1 : add user with id $user_id
Note that to add a member, you need the user id, which I assume you want to search for in the 4th step, but you may not need this step if you already have it.
Here is a bash script performing all these steps; it uses curl and jq:
#!/bin/bash
set -e
gitlab_host=gitlab.your.host
access_token=YOUR_ACCESS_TOKEN
username=johndoe
project=project-template-1
user_access=30
group=main-group
namespace_src=sub-group-project-templates
namespace_dest=sub-group-projects
new_project_name=${username}_${project}
project_src=$group/$namespace_src/$project
encoded_project=$(echo $project_src | sed 's/\//%2F/g')
echo "1 - get namespace id for $namespace_dest"
#https://docs.gitlab.com/ce/api/namespaces.html
namespace_id=$(curl -s "https://$gitlab_host/api/v4/namespaces?search=$namespace_dest" \
-H "Private-Token: $access_token" | jq -r '.[0].id')
echo "2 - fork project $project_src to namespace with id $namespace_id"
#https://docs.gitlab.com/ce/api/projects.html#fork-project
project_id=$(curl -s -X POST "https://$gitlab_host/api/v4/projects/$encoded_project/fork?namespace=$namespace_id" \
-H "Private-Token: $access_token" | jq '.id')
if [ -z $project_id ]; then
echo "fork failed"
exit 1
fi
echo "3 - rename project with id $project_id from $project to : $new_project_name"
#https://docs.gitlab.com/ce/api/projects.html#edit-project
curl -s -X PUT "https://$gitlab_host/api/v4/projects/$project_id" \
-H "Content-Type: application/json" \
-d "{\"path\": \"$new_project_name\",\"name\": \"$new_project_name\" }" \
-H "Private-Token: $access_token" > /dev/null
echo "4 - get user id for : $username"
#https://docs.gitlab.com/ce/api/users.html
user_id=$(curl -s "https://$gitlab_host/api/v4/users?username=$username" \
-H "Private-Token: $access_token" | \
jq -r --arg u "$username" '.[] | select(.username == $u) | .id')
echo "5 - edit project member : add user with id $user_id"
#https://docs.gitlab.com/ce/api/members.html#add-a-member-to-a-group-or-project
curl -s "https://$gitlab_host/api/v4/projects/$project_id/members" \
-H "Private-Token: $access_token" \
-d "user_id=$user_id&access_level=$user_access" > /dev/null
In the 2nd step (fork project), the source project path is URL-encoded, so you may want to use a proper way to encode it (in this case we only have to replace / with %2F).
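As an aside, if you want a more general encoder than the sed substitution, jq's @uri filter percent-encodes all reserved URI characters, not just the slash. A small sketch, independent of the script above:

```shell
# Percent-encode an arbitrary project path with jq's @uri filter
project_src="main-group/sub-group-project-templates/project-template-1"
encoded_project=$(jq -rn --arg p "$project_src" '$p | @uri')
echo "$encoded_project"   # -> main-group%2Fsub-group-project-templates%2Fproject-template-1
```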

CouchDb data migration from 1.4 to 2.0

I am updating CouchDB from 1.4 to 2.0 on my Windows 8 system.
I have taken a backup of my data and view files from /var/lib/couchdb and uninstalled 1.4.
I installed 2.0 successfully and it's running.
Now I have copied all the data to /var/lib/couchdb and the /data folder, but Futon is not showing any database.
I created a new database "test" and it's accessible in Futon, but I could not find it in the /data dir.
Configuration:
default.ini:
[couchdb]
uuid =
database_dir = ./data
view_index_dir = ./data
Also, I want to understand: will the upgrade require re-indexing?
You might want to look at the local port of the node you copied the data into: when you just copy data files, it will likely work, but the databases appear on another port (5986 instead of 5984).
What this means is: when you copy the database file (those residing in the directory specified in /_config/couchdb/database_dir and ending with .couch; quoting https://blog.couchdb.org/2016/08/17/migrating-to-couchdb-2-0/ here) into the data directory of one of the nodes of the CouchDB 2.0 cluster (e.g., lib/node1/data), the database will appear in http://localhost:5986/_all_dbs (note 5986 instead of 5984: this is the so-called local port never intended for production use but helpful here).
As the local port is not a permanent solution, you can now start a replication from the local port to a clustered port (still quoting https://blog.couchdb.org/2016/08/17/migrating-to-couchdb-2-0/ - assuming you're dealing with a database named mydb resulting in a filename mydb.couch):
# create a clustered new mydb on CouchDB 2.0
curl -X PUT 'http://machine2:5984/mydb'
# replicate data (local 2 cluster)
curl -X POST 'http://machine2:5984/_replicate' -H 'Content-type: application/json' -d '{"source": "http://machine2:5986/mydb", "target": "http://machine2:5984/mydb"}'
# trigger re-build index(es) of somedoc with someview;
# do for all to speed up first use of application
curl -X GET 'http://machine2:5984/mydb/_design/somedoc/_view/someview?stale=update_after'
As an alternative, you could also replicate from the old CouchDB (still running) to the new one, as you can replicate between 1.x and 2.0 just as you could replicate between 1.x and 1.x.
Use this to migrate all databases residing in CouchDB's database_dir, e.g. /var/lib/couchdb:
# cd to database dir, where all .couch files reside
cd /var/lib/couchdb
# create new databases in the target instance
for i in ./*.couch; do curl -X PUT http://machine2:5986$( echo $i | grep -oP '[^.]+(?=\.couch)'); done
# one-time replication of each database from source to target instance
for i in ./*.couch; do curl -X POST http://machine1:5984/_replicate -H "Content-type: application/json" -d '{"source": "'"$( echo $i | grep -oP '[^./]+(?=\.couch)')"'", "target": "http://machine2:5986'$( echo $i | grep -oP '[^.]+(?=\.couch)')'"}'; done
If you are running both the source and the target CouchDB in Docker containers on the same Docker host, you might first check the Docker host IP that is mapped into the source container, in order to allow the source container to access the target container:
/sbin/ip route | awk '/default/ { print $3 }'

automatic docker login within a bash script

How can I preseed my credentials to docker login command within a script ?
I'm using a bash script that basically automates the whole process of setting up my custom VMs etc, but when I need to login to docker within the script to pull the images, I get the following error:
Username: FATA[0000] inappropriate ioctl for device
The command I was using is the following:
( echo "xxx"; echo "yyy"; echo "zzz" ) | docker login docker.somesite.org
Is this possible to achieve without copying over an existing .dockercfg file, and if so, how?
Many thanks.
Docker 18 and beyond
There's now an officially-documented way to do this:
cat ~/my_password.txt | docker login --username foo --password-stdin
Docker 1.11 through Docker 17
You can pass all the arguments on the command-line:
docker login --username=$DOCKER_USER --password=$DOCKER_PASS $DOCKER_HOST
If you don't specify DOCKER_HOST, you'll get the main Docker repo. If you leave out any of the arguments, you'll be prompted for that argument.
Older than 1.11
The same path as just above, except that you need to also pass an --email flag. The contents of this are not actually checked, so anything is fine:
docker login --username=$DOCKER_USER --password=$DOCKER_PASS $DOCKER_HOST --email whale@docker.com
To run the docker login command non-interactively, you can use the --password-stdin flag to provide a password through STDIN. Using STDIN prevents the password from ending up in the shell's history or log files.
$ echo $DOCKER_PASS | docker login -u$DOCKER_USER --password-stdin $DOCKER_HOST
When you log in to your private registry, Docker automatically creates a file, $HOME/.docker/config.json. The file holds the credentials, so you can save it and copy it to any host where you want to log in to the registry.
The file content looks like this:
{
"auths": {
"example.com": {
"auth": "xxxxxxxxxxxxxxxxxxxxxxx"
}
}
}
Add-on: if you want to log in to multiple Docker registries on one server, just add another auth entry, like this:
{
"auths": {
"example.com": {
"auth": "xxxxxxxxxxxxxxxxxxxxxxx"
},
"example1.com":{
"auth": "xxxxxxxxxxxxxxxxxxxxxxx"
}
}
}
Now you can push and pull images from both example.com and example1.com.
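The auth value is simply the base64-encoded user:password pair, so you can also generate an entry by hand if needed (the credentials below are hypothetical):

```shell
# The "auth" field is base64("username:password");
# -n keeps echo from appending a newline before encoding
echo -n 'myuser:mysecretpassword' | base64   # -> bXl1c2VyOm15c2VjcmV0cGFzc3dvcmQ=
```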
For any random passer-by who may stumble onto this looking for a way to use it against an OpenShift environment's container registry, you can use the following to provide the registry URI along with the credentials to log into it using an OpenShift token.
$ echo "$(oc whoami -t)" | docker login -u $USER --password-stdin \
$(oc get route docker-registry -n default --no-headers | awk '{print $2}')
Login Succeeded
The above does 3 things:
Passes token retrieved from Openshift oc whoami -t
Determines Openshift's registry URI
$(oc get route docker-registry -n default --no-headers | awk '{print $2}')
Logs into registry using $USER + token from above
I was having massive issues with this, just wanted to add that the environment variable DOCKER_HOST has special meaning to docker to define the daemon socket it connects to, causing it to fail login. There's a full list of the environment variables docker uses here: https://docs.docker.com/engine/reference/commandline/cli/
I changed my environment variables to something else, e.g. REG_, and it worked:
docker login --username $REG_USERNAME --password $REG_PASSWORD $REG_HOST
Note, if you're doing this in a gitlab runner, there's no need to use the --password-stdin flag if you're already using variable masking (you can, there's just no need).

openam - create a user with ssoadm

I have a new goal: to be able to create OpenAM users with ssoadm.
I have read the documentation of Openam
https://wikis.forgerock.org/confluence/display/openam/ssoadm-identity#ssoadm-identity-create-identity
However, I don't know how to create a user and then assign it a password. For now I can only create users through the OpenAM web console, but that is not desirable; I want to automate this.
Does somebody know how I can create a normal user with ssoadm?
./ssoadm create-identity ?
./ssoadm create-agent ?
UPDATE: I have continued with my investigation :) I think I'm closer than before
$ ./ssoadm create-identity -u amadmin -f /tmp/pwd.txt -e / -i Test -t User
Minimum password length is 8.
But where is the parameter for password?
Thanks!
To create a new user in the configured data stores you could execute the following ssoadm command:
$ openam/bin/ssoadm create-identity -e / -i helloworld -t User -u amadmin -f .pass -a givenName=Hello sn=World userPassword=changeit
Here you can see that I've defined the password as the userPassword attribute, which is data store dependent really. For my local OpenDJ this is perfectly legal, but if you are using a database or something else, then you'll have to adjust the command accordingly.
If you don't want to provide the attributes on the command line, then you could put all the values into a properties file, for example:
$ echo "givenName=Hello
sn=World
userPassword=changeit" > hello.txt
$ openam/bin/ssoadm create-identity -e / -i helloworld -t User -u amadmin -f .pass -D hello.txt
But I must say that using OpenAM for identity management is not recommended; you should use your data store's own tools to manage identities (i.e. use an LDAP client within your app, or simply use the ldap* CLI tools). You may find that OpenAM doesn't handle all the different identity-management-related tasks the way people would normally expect, so to prevent surprises use something else for identity management.
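To illustrate that LDAP route, here is a minimal sketch assuming an OpenDJ data store with the common ou=people layout; every DN, port, and credential below is an assumption you would adapt to your deployment:

```shell
# Write an LDIF entry for the new user (inetOrgPerson is the usual
# objectClass for people entries in an OpenDJ data store)
cat > helloworld.ldif <<'EOF'
dn: uid=helloworld,ou=people,dc=openam,dc=forgerock,dc=org
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
uid: helloworld
cn: Hello World
givenName: Hello
sn: World
userPassword: changeit
EOF

# Load it with the ldapmodify CLI shipped with OpenDJ
# (commented out here; adjust host, port, and bind credentials)
# ldapmodify -h localhost -p 1389 -D "cn=Directory Manager" -w password \
#   -a -f helloworld.ldif
```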
