Generating a default config SIMP file - rhel

I am on a RHEL 7.7 instance that uses SIMP. I am trying to generate a default configuration (YAML) file.
Directly from the SIMP docs:
You can use the --dry-run option to step through the questions without changing anything and then run simp config -a /root/.simp/simp_conf.yaml to apply the changes.
And further down:
If you want to understand what variables apply to your setup, run simp config --dry-run and examine the generated simp_conf.yaml file. That file will contain both the settings and their documentation.
I've tried doing so via:
simp config --dry-run
simp config --dry-run -o default_simp_config.yaml
simp config --dry-run -f -o default_simp_config.yaml
No file is generated as a result of any of these commands. What am I missing?
Info:
# simp version
5.1.0
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)

It looks like this is a bug in the output of simp config.
When running simp config --dry-run, you should see something like the following header:
================================================================================
`simp config` will take you through preparing your infrastructure for bootstrap
based on a pre-defined SIMP scenario you select. These preparations include
optional and required general system setup and required Puppet configuration.
All changes will be logged to
/root/.simp/simp_conf.log.20200128T140045
First, `simp config` will ensure you have a SIMP omni-environment in place.
Then, you will be prompted to enter setup information. Each prompt will be
prefaced by a detailed description of the information requested, along with the
OS value and/or recommended value for that item, if available.
At any time, you can exit `simp config` by entering <CTRL-C>. By default,
if you exit early, the configuration you entered will be saved to
/root/.simp/.simp_conf.yaml
The next time you run `simp config`, you will be given the option to continue
where you left off or to start all over.
================================================================================
Note that the save file is reported as /root/.simp/.simp_conf.yaml, whereas the referenced documentation specifies /root/.simp/simp_conf.yaml (no leading dot).
This appears to be a bug where a leading dot is prepended to the output file name in all cases; it has been submitted as SIMP-7533.
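Until that is fixed, the workaround is simply to look for the hidden file. A quick sketch (assuming, per the bug above, that the dry run writes the dotted file):
simp config --dry-run
ls -a /root/.simp/                  # note .simp_conf.yaml, not simp_conf.yaml
less /root/.simp/.simp_conf.yaml    # the settings plus their documentation
# Optionally copy it to the path the docs reference for a later `simp config -a`:
cp /root/.simp/.simp_conf.yaml /root/.simp/simp_conf.yaml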

gcloud app deploy does not remove previous versions

I am running a Node.js app on Google App Engine, using the following command to deploy my code:
gcloud app deploy --stop-previous-version
My desired behavior is for all instances running previous versions to be terminated, but they always seem to stick around. Is there something I'm missing?
I realize they are not receiving traffic, but I am still paying for them and they cause some background telemetry noise. Is there a better way of running this command?
The output of gcloud app instances list (screenshot omitted) shows two different versions running.
We accidentally blew through our free Google App Engine credit in less than 30 days because of an errant flexible instance that wasn't cleared by subsequent deployments. When we pinpointed it as the cause it had scaled up to four simultaneous instances that were basically idling away.
tl;dr: Use the --version flag when deploying to specify a version name. An existing version with the same name will be replaced the next time you deploy.
That led me down the rabbit hole that is --stop-previous-version. Here's what I've found out so far:
--stop-previous-version doesn't seem to be supported anymore. It's mentioned under Flags on the gcloud app deploy reference page, but if you look at the top of the page where all the flags are listed, it's nowhere to be found.
I tried deploying with that flag set to see what would happen but it seemingly had no effect. A new version was still created, and I still had to go in and manually delete the old instance.
There's an open Github issue on the gcloud-maven-plugin repo that specifically calls this out as an issue with that plugin but the issue has been seemingly ignored.
At this point our best bet is to add --version=staging (or whatever) to gcloud app deploy. The reference docs for that flag seem to indicate that it'll replace an existing instance that shares that "version":
--version=VERSION, -v VERSION
The version of the app that will be created or replaced by this deployment. If you do not specify a version, one will be generated for you.
(emphasis mine)
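So, for example, deploying with a fixed version name (staging is just the example name from above) should replace that version on each deploy rather than piling up new ones:
# Redeploys under the same version name replace it instead of creating a new one
gcloud app deploy --version=staging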
Additionally, Google's own reference documentation on app.yaml (the link's for the Python docs but it's still relevant) specifically calls out the --version flag as the "preferred" way to specify a version when deploying:
The recommended approach is to remove the version element from your app.yaml file and instead, use a command-line flag to specify your version ID
As far as I can tell, for Standard Environment with automatic scaling at least, it is normal for old versions to remain "serving", though they should hopefully have zero instances (even if your scaling configuration specifies a nonzero minimum). At least that's what I've seen. I think (I hope) that those old "serving" instances won't result in any charges, since billing is per instance.
I know most of the above answers are for Flexible Environment, but I thought I'd include this here for people who are wondering.
(And it would be great if someone from Google could confirm.)
I had the same problem as the OP. Using the flex environment (some of this also applies to the standard environment) with Docker (runtime: custom in app.yaml), I've finally solved this! I tried a lot of things and I'm not sure which one fixed it (or whether it was a combination), so I'll list the things I did here, the most likely solutions listed first.
SOLUTION 1) Ensure that cloud storage deletes old versions
What does cloud storage have to do with anything? (I hear you ask)
Well there's a little tooltip (Google Cloud Platform Web UI (GCP) > App Engine > Versions > Size) that when you hover over it says:
(Google App Engine) Flexible environment code is stored and billed from Google Cloud Storage ... yada yada yada
So based on this info and this answer I visited GCP > Cloud Storage > Browser and found my storage bucket AND a load of other storage buckets I didn't know existed. It turns out that some of the buckets store cached cloud functions code, some store cached docker images and some store other cached code/stuff (you can tell which is which by browsing the buckets).
So I added a deletion policy to all the buckets (except the cloud functions bucket) as follows:
Go to GCP > Cloud Storage > Browser and click the link (for the relevant bucket) in the Lifecycle Rules column > Click ADD A RULE > THEN:
For SELECT ACTION choose "Delete Object" and click continue
For SELECT OBJECT choose "Number of newer versions" and enter 1 in the input
Click CREATE
This will return you to the table view and you should now see the rule in the lifecycle rules column.
REPEAT this process for all relevant buckets (the relevant buckets were described earlier).
THEN delete the contents of the relevant buckets. WARNING: Some buckets warn you NOT to delete the bucket itself, only the contents!
Now re-deploy and your latest version should now get deployed and hopefully you will never have this problem again!
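For those who prefer the CLI, here is a rough equivalent of the rule above (an untested sketch; substitute your actual bucket name for YOUR_BUCKET):
# lifecycle.json: delete any object that has at least 1 newer version
cat > lifecycle.json <<'EOF'
{
  "rule": [
    { "action": { "type": "Delete" },
      "condition": { "numNewerVersions": 1 } }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://YOUR_BUCKET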
SOLUTION 2) Use deploy flags
I added these flags
gcloud app deploy --quiet --promote --stop-previous-version
This probably doesn't help since these flags seem to be the default but worth adding just in case.
Note that (I heard on the grapevine) you can also use the --no-cache flag, for the standard environment only, which might help; with flex, this flag caused the deployment to fail when I tried it.
SOLUTION 3)
This probably does not help at all, but I added:
COPY app.yaml .
to the Dockerfile
TIP 1)
This is probably more of a helpful / useful debug approach than a fix.
Visit GCP > App Engine > Versions
This shows all versions of your app (1 per deployment) and it also shows which version each instance is running (instances are configured in app.yaml).
Make sure all instances are running the latest version. This should happen by default. Probably worth deleting old versions.
You can determine your version from the gcloud app deploy logs (at the start of the logs) but it seems that the versions are listed by order of deployment anyway (most recent at top).
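If you prefer the CLI over the web UI, the same information (and the cleanup of old versions) is available there too (VERSION_ID is a placeholder):
gcloud app versions list               # every version and whether it is serving
gcloud app versions delete VERSION_ID  # remove an old version you no longer need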
TIP 2)
Visit GCP > App Engine > Instances
SSH into an instance. This is just a matter of clicking a few buttons in the console (screenshot omitted). Once you have SSH'd in, run:
docker exec -it gaeapp /bin/bash
Which will get you into the docker container running your code. Now you can browse around to make sure it has your latest code.
Well I think my answer is long enough now. If this helps, don't thank me, J-ES-US is the one you should thank ;) I belong to Him ^^
Google may have updated the documentation cited in @IAmKale's answer:
Note that if the version is running on an instance of an auto-scaled service, using --stop-previous-version will not work and the previous version will continue to run because auto-scaled service instances are always running.
Seems like that flag only works with manually scaled services.
This is a supplementary, optional answer to go with my other main answer.
I am now also auto-incrementing the version on each deploy, using a script.
My script contents are below.
Basically, the script auto increments version every time you deploy. I am using node.js so the script uses npm version to bump the version but this line could easily be tweaked to whatever language you use.
The script requires a clean git working directory for deployment.
The script assumes that when the version is bumped, this will result in file changes (e.g. changes to package.json version) that need pushing.
The script essentially tries to find your SSH key and if it finds it then it starts an SSH agent and uses your SSH key to git commit and git push the file changes. Else it just does a git commit without a push.
It then does a deploy using the --version flag ... --version="${deployVer}"
Thought this might help someone, especially since the top answer talks a lot about using the --version flag on a deploy.
#!/usr/bin/env bash
projectName="vehicle-damage-inspector-app-engine"

# Find SSH key
sshFile1=~/.ssh/id_ed25519
sshFile2=~/Desktop/.ssh/id_ed25519
sshFile3=~/.ssh/id_rsa
sshFile4=~/Desktop/.ssh/id_rsa
if [ -f "${sshFile1}" ]; then
  sshFile="${sshFile1}"
elif [ -f "${sshFile2}" ]; then
  sshFile="${sshFile2}"
elif [ -f "${sshFile3}" ]; then
  sshFile="${sshFile3}"
elif [ -f "${sshFile4}" ]; then
  sshFile="${sshFile4}"
fi

# If SSH key found then fire up SSH agent
if [ -n "${sshFile}" ]; then
  pub=$(cat "${sshFile}.pub")
  for i in ${pub}; do email="${i}"; done
  name="Auto Deploy ${projectName}"
  git config --global user.email "${email}"
  git config --global user.name "${name}"
  echo "Git SSH key = ${sshFile}"
  echo "Git email = ${email}"
  echo "Git name = ${name}"
  eval "$(ssh-agent -s)"
  ssh-add "${sshFile}" &>/dev/null
  sshKeyAdded=true
fi

# Bump version and git commit (and git push if SSH key added) and deploy
if [ -z "$(git status --porcelain)" ]; then
  echo "Working directory clean"
  echo "Bumping patch version"
  ver=$(npm version patch --no-git-tag-version)
  git add -A
  git commit -m "${projectName} version ${ver}"
  if [ -n "${sshKeyAdded}" ]; then
    echo ">>>>> Bumped patch version to ${ver} with git commit and git push"
    git push
  else
    echo ">>>>> Bumped patch version to ${ver} with git commit only, please git push manually"
  fi
  deployVer="${ver//"."/"-"}"
  gcloud app deploy --quiet --promote --stop-previous-version --version="${deployVer}"
else
  echo "Working directory unclean, please commit changes"
fi
For Node.js users: if you call the script deploy.sh, you should add
"deploy": "sh deploy.sh"
to the scripts section of your package.json, then deploy with npm run deploy.
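In context, the relevant part of package.json would look something like this:
{
  "scripts": {
    "deploy": "sh deploy.sh"
  }
}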

can't use different environment for puppet agent

I have an agent/master setup. I have created a new environment in /etc/puppetlabs/code/environments/ called master.
The content of environment.conf for the master directory environment is
modulepath = site:modules:$basemodulepath
manifest = manifests/site.pp
and when I try puppet agent -t --environment master I get the following (note the first Notice line):
Notice: Local environment: 'master' doesn't match server specified node environment 'production', switching agent to 'production'.
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for node1.localpuppet.com
Info: Applying configuration version '1490712072'
Notice: Applied catalog in 0.67 seconds
I am new to puppet. What changes do I need?
(Screenshot: PE Console Config)
This is a "really fun" quirk of Puppet Enterprise that showed up in the last couple of years. You have to specify the nodes in the PE Classifier that are allowed to specify their directory environment in the puppet.conf or in the puppet agent -t --environment arguments.
In the agent-specified environment tab in the Classifier (you see it at the bottom of your picture above), you can enable it for all nodes. Do this by adding a rule, selecting the name fact, using a regular expression (~), then using the regexp for matching all characters (.*). After you fill this out, the PE Classifier will give you a number of matching nodes. It should be all that are subscribed to your master. Remember to click in the bottom right to update your rules. Your nodes will now be able to use master instead of production from the config file or CLI arguments.
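Once that rule is in place, the agent can request the environment either on the command line (as you did) or persistently in its own puppet.conf, for example:
# /etc/puppetlabs/puppet/puppet.conf on the agent
[agent]
environment = master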
That being said, if you are doing this to avoid naming your default Git branch production in your control repository when working with Code Manager, you should really just rename the branch as that is much easier.
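If you do go the rename route, it is roughly the following (run in your control repository, assuming the default branch is currently master; your Git server's default-branch setting may also need updating):
git branch -m master production
git push -u origin production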

.gitconfig section duplicates every time I set a preference

I have added these two aliases to my .gitconfig file:
[alias]
    setmeld = config --global diff.external git-meld
    nomeld = config --global --unset diff.external
That way I can set and unset the visual diff tool meld.
When I issue:
git setmeld
...the following is added to my .gitconfig file:
[diff]
external = git-meld
When I issue:
git nomeld
...the external = git-meld setting is deleted from my .gitconfig file, but the [diff] section header stays.
If I later run git setmeld again my .gitconfig ends up having two [diff] section headers:
[diff]
[diff]
external = git-meld
If I unset and then set again the external git-meld diff tool I end up with this:
[diff]
[diff]
[diff]
external = git-meld
The problem is not caused by the aliases: the same happens if I issue the commands directly, as git config --global diff.external git-meld and git config --global --unset diff.external.
Can I avoid that weird behavior?
OS: Ubuntu 12.04.5 LTS
git version: 1.7.9.5
Git doesn't like empty sections.
Use the command
git config diff.dummy "dummy line"
to keep the [diff] section from ever becoming empty; then set/unset cycles will not add more duplicate sections.
You can edit the config to remove the extra [diff] lines, but be careful to save a copy of the config file (e.g. to config.save) first. If Git finds that the config file is corrupted, it will refuse further commands.
Your Git is very old (the current version is 2.8+) and you probably should update yours. The bug may be fixed in newer versions (I have not checked).
Meanwhile, though, this seems like a harmless bug: Git is leaving the section header in place even though the entire section is empty, then adding another section header even though there is already an existing section header. Given the way Git scans the file, though, you may repeat a section header (and a setting) as often as you like. For settings that are not cumulative,1 the last setting overrides the earlier ones.
As a rather hacky workaround, you can just set something else so that the section never becomes entirely empty. Any value will do: anything that Git never uses will just sit around unused, keeping the section not-empty.
There is in fact another diff setting you might want to set, though: diff.renameLimit sets the default size of the rename-detection queue for most git diff operations. In some versions, the "default default" is 500, 1000, and 2000 (it has been growing over time). As of the latest upcoming Git, the new default is 0, which means "unlimited" (which really means "use internal maximum"). I have kept mine set to 0 since very early days.
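In other words, that setting can double as the always-present entry that keeps [diff] from ever becoming empty:
git config --global diff.renameLimit 0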
1One example of a cumulative setting is the value(s) for remote.origin.fetch, assuming you have a [remote "origin"] section. Each fetch = ... value in this section is accumulated, and when running git fetch origin, Git runs each reference obtained from that remote through all the mappings to find its local name. If the mapping produces multiple names, Git complains of an error. (Usually there is only one setting anyway, so that there is only one possible output.)
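For reference, such a section with its single fetch mapping typically looks like this (the URL is a placeholder):
[remote "origin"]
    url = https://example.com/repo.git
    fetch = +refs/heads/*:refs/remotes/origin/*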

using gitlab-development-kit, spec tests fail

The date is/was 12/17/2014. I'm trying to run gitlab-ce tests from within gitlab-development-kit, and I'm hoping someone familiar with gitlab-ce development can help here. I want the tests to pass before I begin development. I'm not sure if this warrants a bug report; it may be my environment (CentOS 6.5, rvm 1.26.3, ruby 2.1.3p242).
I followed instructions on gitlab-development-kit to clone it, run make (to download latest gitlab + gitlab-shell).
I run bundle exec foreman start, redis and pgsql start.
Everything looks good: I ran gitlab and it worked fine in the development env. I reset everything by recloning, following the steps again, and then testing.
Within ./gitlab, I run "rake gitlab:test"; lots of passing, green tests.
Until the end, where I receive this:
...(many, and all, passing tests above here)...
Scenario: Navigate to project feed
✔ Given I sign in as a user # features/steps/shared/authentication.rb:7
✔ And I own a project # features/steps/shared/project.rb:5
✔ And I visit my project's home page # features/steps/shared/paths.rb:169
✔ Given I visit my project's files page # features/steps/shared/paths.rb:177
✔ Given I press "g" and "p" # features/steps/shared/shortcuts.rb:4
✔ Then the active main tab should be Home # features/steps/shared/project_tab.rb:7
/usr/local/rvm/gems/ruby-2.1.3/gems/actionview-4.1.1/lib/action_view/renderer/partial_renderer.rb:436:in `partial_path': 'nil' is not an ActiveModel-compatible object. It must implement :to_partial_path. (ActionView::Template::Error)
from /usr/local/rvm/gems/ruby-2.1.3/gems/actionview-4.1.1/lib/action_view/renderer/partial_renderer.rb:345:in `setup'
from /usr/local/rvm/gems/ruby-2.1.3/gems/actionview-4.1.1/lib/action_view/renderer/partial_renderer.rb:262:in `render'
from /usr/local/rvm/gems/ruby-2.1.3/gems/actionview-4.1.1/lib/action_view/renderer/renderer.rb:47:in `render_partial'
from /usr/local/rvm/gems/ruby-2.1.3/gems/actionview-4.1.1/lib/action_view/helpers/rendering_helper.rb:35:in `render'
from /usr/local/rvm/gems/ruby-2.1.3/gems/haml-4.0.5/lib/haml/helpers/action_view_mods.rb:10:in `block in render_with_haml'
from /usr/local/rvm/gems/ruby-2.1.3/gems/haml-4.0.5/lib/haml/helpers.rb:89:in `non_haml'
from /usr/local/rvm/gems/ruby-2.1.3/gems/haml-4.0.5/lib/haml/helpers/action_view_mods.rb:10:in `render_with_haml'
from /home/git/gitlab-development-kit/gitlab/app/views/projects/blob/_blob.html.haml:20:in `_app_views_projects_blob__blob_html_haml__1171767312904667641_107433960'
from /usr/local/rvm/gems/ruby-2.1.3/gems/actionview-4.1.1/lib/action_view/template.rb:145:in `block in render'
from /usr/local/rvm/gems/ruby-2.1.3/gems/activesupport-4.1.1/lib/active_support/notifications.rb:161:in `instrument'
from /usr/local/rvm/gems/ruby-2.1.3/gems/actionview-4.1.1/lib/action_view/template.rb:339:in `instrument'
from /usr/local/rvm/gems/ruby-2.1.3/gems/actionview-4.1.1/lib/action_view/template.rb:143:in `render'
from /usr/local/rvm/gems/ruby-2.1.3/gems/actionview-4.1.1/lib/action_view/renderer/partial_renderer.rb:306:in `render_partial'
...
When I inspect app/views/projects/blob/_blob.html.haml:20
I can see
%ul.blob-commit-info.bs-callout.bs-callout-info.hidden-xs
  - blob_commit = @repository.last_commit_for_path(@commit.id, @blob.path)
  = render blob_commit, project: @project
The error is complaining because blob_commit is nil, as returned by the line
@repository.last_commit_for_path(@commit.id, @blob.path)
This is a pure clone of everything; I haven't begun to make modifications yet. I waited a day to see if maybe the next update would fix things, but it hasn't. I don't want to start a feature branch if I already have failing tests.
I found out the answer to my problem.
CentOS 6.5 hasn't upgraded their git rpm to anything beyond 1.7.1
I added some debugging to app/models/repository.rb, in def commit(id = HEAD), and def last_commit_for_path(sha, path)
It appears that in last_commit_for_path, on gitlab-test_bare, the following command is run:
git rev-list --max-count 1 5937ac0a7beb003549fc5fd26fc247adbce4a52e -- CHANGELOG
which results in an exception
fatal: bad revision '1'
which causes the @repository.last_commit_for_path line above to assign nil to blob_commit.
It looks like --max-count 1, in my environment, must be --max-count=1.
git rev-list --max-count=1 5937ac0a7beb003549fc5fd26fc247adbce4a52e -- CHANGELOG
which results in a valid commit sha
913c66a37b4a45b9769037c55c2d238bd0942d2e
When I looked at my git version, I noticed it was extremely out of date (1.7 vs 2.2), so I updated from source, and the tests now pass since gitlab can execute --max-count 1 as a parameter to git.
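If you want to check whether your git is affected before updating, here's a quick test to run in any repository (a sketch; any tracked path will do in place of CHANGELOG):
git --version
git rev-list --max-count 1 HEAD -- CHANGELOG   # fatal: bad revision '1' on git 1.7.1
git rev-list --max-count=1 HEAD -- CHANGELOG   # works on old and new git alike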

puppet: "applying configuration version ", what does it refer to?

When I run
sudo puppet agent -t
after a long phase of catalog loading, I get a message:
info: Applying configuration version '1403590182'
What is that number 1403590182 referring to?
In fact I have noticed that if I run twice in a row sudo puppet agent -t, I get different configuration version numbers even if the modules have not changed!
How can I determine which version of each module is being applied to the node?
From the documentation on config_version:
How to determine the configuration version. By default, it will be the
time that the configuration is parsed, but you can provide a shell
script to override how the version is determined. The output of this
script will be added to every log message in the reports, allowing you
to correlate changes on your hosts to the source version on the
server.
Setting a global value for config_version in puppet.conf is not
allowed (but it can be overridden from the commandline). Please set a
per-environment value in environment.conf instead. For more info, see
https://puppet.com/docs/puppet/latest/environments_about.html
The time is represented as a Unix timestamp, so yours indicates 06/24/2014 @ 6:09am (and I just realised how old this Q was).
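You can verify the conversion yourself on the node (GNU date):
date -u -d @1403590182
# Tue Jun 24 06:09:42 UTC 2014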
If the manifests are git controlled the administrator can let the Puppet master know how to describe the version with the statement below in /etc/puppet/puppet.conf (on the Puppet master). One such statement goes in each environment section with the path adjusted to where the environment looks for manifests.
config_version = git --git-dir $confdir/modules/production/.git describe --all --long
If you use some other version control system, I'm sure there's an equivalent command to get an indication of the revision.
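On current Puppet layouts the same idea goes into each environment's environment.conf, per the docs quoted above; roughly (paths are illustrative):
# /etc/puppetlabs/code/environments/production/environment.conf
config_version = 'git --git-dir /etc/puppetlabs/code/environments/production/.git describe --all --long'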
