pre-commit hook yapf returns different results than running yapf on the command line

When running yapf over a file from the command line, my flags are the following:
-i --verbose --style "google"
When using the same flags as args for pre-commit, my pre-commit hook always returns "Pass".
This was tested against the same file for the same changes, so I would have expected similar results. If I exclude --style "google", my pre-commit hook will at least change the format of my file, but not to the style that I want.
Can someone tell me what I am doing wrong with the args?
Python file that contains an example:
def hello_world():
    print("hello world")
    if 5 == 5: print("goodbye world")
.pre-commit-config.yaml file:
- repo: https://github.com/pre-commit/pre-commit-hooks.git
  sha: v4.0.1
  hooks:
    - id: trailing-whitespace
    - id: end-of-file-fixer
- repo: https://github.com/google/yapf
  rev: v0.31.0
  hooks:
    - id: yapf
      name: "yapf"
On commit, my file changes, and pre-commit tells me yapf has changed it to the following:
def hello_world():
    print("hello world")
    if 5 == 5: print("goodbye world")
If I go back to the same python file and update my .pre-commit-config.yaml file to this:
- repo: https://github.com/pre-commit/pre-commit-hooks.git
  sha: v4.0.1
  hooks:
    - id: trailing-whitespace
    - id: end-of-file-fixer
- repo: https://github.com/google/yapf
  rev: v0.31.0
  hooks:
    - id: yapf
      name: "yapf"
      args: [--style "google" ]
Running a commit shows Pass instead of making any changes, even the ones from before.
Edit 1:
The .pre-commit-config.yaml file was updated to:
- repo: https://github.com/pre-commit/pre-commit-hooks.git
  sha: v4.0.1
  hooks:
    - id: trailing-whitespace
    - id: end-of-file-fixer
- repo: https://github.com/google/yapf
  rev: v0.31.0
  hooks:
    - id: yapf
      name: "yapf"
      args: [--style, google]
Running pre-commit run only shows Passed instead of reformatting. I've also tried pep8 and other arbitrary words in place of google; these all result in Passed. Maybe there is something on my end where the style arg is being ignored, causing all of yapf to fail?

This post is from a while ago, but for anyone else reading it in the future: I was able to control the yapf style in pre-commit with a .style.yapf in my parent directory, as outlined in the yapf documentation: https://github.com/google/yapf
This is the .style.yapf I used:
[style]
based_on_style = google
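For anyone wanting the full layout, here is a minimal sketch of the setup described above (assumption: the yapf hook entry needs no args once .style.yapf exists at the repository root):
# .pre-commit-config.yaml (yapf entry, no style args)
- repo: https://github.com/google/yapf
  rev: v0.31.0
  hooks:
    - id: yapf

# .style.yapf at the repository root
[style]
based_on_style = google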

Create a file in GitHub action

Inside a GitHub Action I'm using Anchore + grype to scan a container image, using the job below:
name: "CI"
on:
push:
pull_request:
branches:
- main
jobs:
image-analysis:
name: Analyze image
runs-on: ubuntu-18.04
needs: build
steps:
- name: Scan operator image
uses: anchore/scan-action#v3
id: scan
with:
image: "qserv/qserv-operator:2022.1.1-rc1"
acs-report-enable: true
In order to ignore a false-positive during image scan, I want to create the file $HOME/.grype.yaml (see content below) before launching the image scan:
ignore:
  # False positive, see https://github.com/anchore/grype/issues/558
  - vulnerability: CVE-2015-5237
    fix-state: unknown
    package:
      name: google.golang.org/protobuf
      version: v1.26.0
      type: go-module
      location: "/manager"
Could you please show me how to create this file inside a GitHub Action?
You could do something as simple as writing the file in a run step. Note the single quotes around the echo string, so the inner double quotes survive the shell:
- name: Create grype.yaml
  run: |
    echo '
    ignore:
      # False positive, see https://github.com/anchore/grype/issues/558
      - vulnerability: CVE-2015-5237
        fix-state: unknown
        package:
          name: google.golang.org/protobuf
          version: v1.26.0
          type: go-module
          location: "/manager"' > ~/grype.yaml
This one works and has been tested successfully on GitHub Actions:
name: "CI"
on:
push:
pull_request:
branches:
- main
jobs:
image-analysis:
name: Analyze image
runs-on: ubuntu-18.04
permissions:
security-events: write
needs: build
steps:
- name: Create grype configuration
run: |
cat <<EOF > $HOME/.grype.yaml
ignore:
# False positive, see https://github.com/anchore/grype/issues/558
- vulnerability: CVE-2015-5237
fix-state: unknown
package:
name: google.golang.org/protobuf
version: v1.26.0
type: go-module
location: "/manager"
EOF
- name: Scan operator image
uses: anchore/scan-action#v3
id: scan
with:
image: ""qserv/qserv-operator:2022.1.1-rc1""
acs-report-enable: true
fail-build: false

YAML syntax error (Ansible) using playbooks

I use ansible-module-vcloud, and I want to create VMs via Ansible. For example, I want to create the simplest possible playbook.
I have this code:
---
- name: vCloudDirectorAnsible
  hosts: localhost
  environment:
    env_user: admin
    env_password: admin
    env_host: vcloud.vmware.ru
    env_org: test
    env_api_version: 30.0
    env_verify_ssl_certs: false

- name: create catalog
  vcd_catalog:
    catalog_name: "test"
    catalog_description: "test_Descr"
    state: "present"
But I get this error:
ERROR! 'vcd_catalog' is not a valid attribute for a Play
The error appears to have been in '/root/ansible-module-vcloud-director/main.yml': line 14, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: create catalog
^ here
If I delete this part:
- name: create catalog
  vcd_catalog:
    catalog_name: "test"
    catalog_description: "test_Descr"
    state: "present"
My playbook runs and completes successfully.
How to fix this?
You missed the tasks keyword.
---
- name: vCloudDirectorAnsible
  hosts: localhost
  environment:
    env_user: admin
    env_password: admin
    env_host: vcloud.vmware.ru
    env_org: test
    env_api_version: 30.0
    env_verify_ssl_certs: false
  tasks:
    - name: create catalog
      vcd_catalog:
        catalog_name: "test"
        catalog_description: "test_Descr"
        state: "present"

Elixir postgrex with poolboy example on Windows fails with 'module DBConnection.Poolboy not available'

I am exploring using Elixir for fast Postgres data imports of mixed types (CSV, JSON). Being new to Elixir, I am following the example given in the YouTube video "Fast Import and Export with Elixir and Postgrex - Elixir Hex package showcase" (https://www.youtube.com/watch?v=YQyKRXCtq4s). The basic mix application works until the point where Poolboy is introduced, i.e. Postgrex successfully loads records into the database using a single connection.
When I try to follow the Poolboy configuration and test it by running
FastIoWithPostgrex.import("./data_with_ids.txt")
in iex or on the command line, I get the following error, whose cause I cannot determine (username and password removed):
** (UndefinedFunctionError) function DBConnection.Poolboy.child_spec/1 is undefined (module DBConnection.Poolboy is not available)
    DBConnection.Poolboy.child_spec({Postgrex.Protocol, [types: Postgrex.DefaultTypes, name: :pg, pool: DBConnection.Poolboy, pool_size: 4, hostname: "localhost", port: 9000, username: "XXXX", password: "XXXX", database: "ASDDataAnalytics-DEV"]})
    (db_connection) lib/db_connection.ex:383: DBConnection.start_link/2
    (fast_io_with_postgrex) lib/fast_io_with_postgrex.ex:8: FastIoWithPostgrex.import/1
I am running this on Windows 10, connecting to a PostgreSQL 10.x Server through a local SSH tunnel. Here is the lib/fast_io_with_postgrex.ex file:
defmodule FastIoWithPostgrex do
  @moduledoc """
  Documentation for FastIoWithPostgrex.
  """

  def import(filepath) do
    {:ok, pid} = Postgrex.start_link(name: :pg,
      pool: DBConnection.Poolboy,
      pool_size: 4,
      hostname: "localhost",
      port: 9000,
      username: "XXXX", password: "XXXX", database: "ASDDataAnalytics-DEV")

    File.stream!(filepath)
    |> Stream.map(fn line ->
      [id_str, word] = line |> String.trim() |> String.split("\t", trim: true, parts: 2)
      {id, ""} = Integer.parse(id_str)
      [id, word]
    end)
    |> Stream.chunk_every(10_000, 10_000, [])
    |> Task.async_stream(fn word_rows ->
      Enum.each(word_rows, fn word_sql_params ->
        Postgrex.transaction(:pg, fn conn ->
          IO.inspect Postgrex.query!(conn, "INSERT INTO asdda_dataload.words (id, word) VALUES ($1, $2)", word_sql_params)
          # IO.inspect Postgrex.query!(pid, "INSERT INTO asdda_dataload.words (id, word) VALUES ($1, $2)", word_sql_params)
        end, pool: DBConnection.Poolboy, pool_timeout: :infinity, timeout: :infinity)
      end)
    end, timeout: :infinity)
    |> Stream.run()
  end # def import(file)
end
Here is the mix.exs file:
defmodule FastIoWithPostgrex.MixProject do
  use Mix.Project

  def project do
    [
      app: :fast_io_with_postgrex,
      version: "0.1.0",
      elixir: "~> 1.7",
      start_permanent: Mix.env() == :prod,
      deps: deps()
    ]
  end

  # Run "mix help compile.app" to learn about applications.
  def application do
    [
      extra_applications: [:logger, :poolboy, :connection]
    ]
  end

  # Run "mix help deps" to learn about dependencies.
  defp deps do
    [
      # {:dep_from_hexpm, "~> 0.3.0"},
      # {:dep_from_git, git: "https://github.com/elixir-lang/my_dep.git", tag: "0.1.0"},
      {:postgrex, "~>0.14.1"},
      {:poolboy, "~>1.5.1"}
    ]
  end
end
Here is the config/config.exs file:
# This file is responsible for configuring your application
# and its dependencies with the aid of the Mix.Config module.
use Mix.Config
config :fast_io_with_postgrex, :postgrex,
  database: "ASDDataAnalytics-DEV",
  username: "XXXX",
  password: "XXXX",
  name: :pg,
  pool: DBConnection.Poolboy,
  pool_size: 4
# This configuration is loaded before any dependency and is restricted
# to this project. If another project depends on this project, this
# file won't be loaded nor affect the parent project. For this reason,
# if you want to provide default values for your application for
# 3rd-party users, it should be done in your "mix.exs" file.
# You can configure your application as:
#
# config :fast_io_with_postgrex, key: :value
#
# and access this configuration in your application as:
#
# Application.get_env(:fast_io_with_postgrex, :key)
#
# You can also configure a 3rd-party app:
#
# config :logger, level: :info
#
# It is also possible to import configuration files, relative to this
# directory. For example, you can emulate configuration per environment
# by uncommenting the line below and defining dev.exs, test.exs and such.
# Configuration from the imported file will override the ones defined
# here (which is why it is important to import them last).
#
# import_config "#{Mix.env()}.exs"
Any assistance with finding the cause of this error would be greatly appreciated!
I didn't want to go too deep into why this isn't working, but that example is a little old: the poolboy 1.5.1 you get pulled with deps.get is from 2015, and the example uses Elixir 1.4.
Also, if you look at Postgrex's mix.exs deps, you will notice your freshly installed lib (0.14) depends on elixir-ecto/db_connection 2.x.
The code you are referring to uses Postgrex 0.13.x, which depends on {:db_connection, "~> 1.1"}. So I would expect incompatibilities.
I would play with the versions of the libs you see in the example code's mix.lock file, and the Elixir version, if I wanted to see that working.
Maybe try lowering the Postgrex version first to something around that time (maybe between 0.12.2 and the locked version of the example).
Also, the version of Elixir might have some play here, check this.
Greetings!
have fun
EDIT:
You can use DBConnection.ConnectionPool instead of poolboy and get away with using the latest Postgrex and Elixir versions; not sure about the performance difference, but you can compare. Just do this:
In config/config.exs (check if you need passwords, etc.):
config :fast_io_with_postgrex, :postgrex,
  database: "fp",
  name: :pg,
  pool: DBConnection.ConnectionPool,
  pool_size: 4
And in lib/fast_io_with.....ex replace both Postgrex.start_link(... lines with:
{:ok, pid} =
  Application.get_env(:fast_io_with_postgrex, :postgrex)
  |> Postgrex.start_link()
That gives me:
mix run -e 'FastIoWithPostgrex.import("./data_with_ids.txt")'
1.76s user 0.69s system 106% cpu 2.294 total
on Postgrex 0.14.1 and Elixir 1.7.3
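For reference (not part of the original answer): with db_connection 2.x, DBConnection.ConnectionPool is already the default pool, so a minimal sketch under these versions can omit the pool: option entirely:
# assumes Postgrex >= 0.14, where the built-in pool is the default
{:ok, pid} =
  Postgrex.start_link(
    name: :pg,
    hostname: "localhost",
    port: 9000,
    database: "fp",
    pool_size: 4
  )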
Thank you! Using your advice, I got the original example to work by downgrading the dependency versions in the mix.exs file and adding a dependency on an earlier version of db_connection:
# Run "mix help deps" to learn about dependencies.
defp deps do
[
# {:dep_from_hexpm, "~> 0.3.0"},
# {:dep_from_git, git: "https://github.com/elixir-lang/my_dep.git", tag: "0.1.0"},
{:postgrex, "0.13.5"},
{:db_connection, "1.1.3"},
{:poolboy, "~>1.5.1"}
]
end
I will also try your suggestion of replacing Poolboy with the new pool manager in the later version of db_connection to see if that works as well.
I'm sure a lot of thought went into the architecture change; however, I must say there is very little out there on why Poolboy was once so popular, yet in the latest version of db_connection it's not even supported as a connection type.

How to create secured files in Puppet5 with Hiera?

I want to create an SSL certificate and try to secure this operation.
I am using Puppet 5.5.2 and the hiera-eyaml gem.
I created a simple manifest:
cat /etc/puppetlabs/code/environments/production/manifests/site.pp

package { 'tree':
  ensure => installed,
}
package { 'httpd':
  ensure => installed,
}
$filecrt = lookup('files')
create_resources('file', $filecrt)
Hiera config
---
version: 5
defaults:
  # The default value for "datadir" is "data" under the same directory as the hiera.yaml
  # file (this file)
  # When specifying a datadir, make sure the directory exists.
  # See https://puppet.com/docs/puppet/latest/environments_about.html for further details on environments.
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Secret data: per-node, per-datacenter, common"
    lookup_key: eyaml_lookup_key # eyaml backend
    paths:
      - "nodes/%{facts.fqdn}.eyaml"
      - "nodes/%{trusted.certname}.eyaml" # Include explicit file extension
      - "location/%{facts.whereami}.eyaml"
      - "common.eyaml"
    options:
      pkcs7_private_key: /etc/puppetlabs/puppet/eyaml/keys/private_key.pkcs7.pem
      pkcs7_public_key: /etc/puppetlabs/puppet/eyaml/keys/public_key.pkcs7.pem
  - name: "YAML hierarchy levels"
    paths:
      - "common.yaml"
      - "nodes/%{facts.fqdn}.yaml"
      - "nodes/%{::trusted.certname}.yaml"
And common.yaml
---
files:
'/etc/httpd/conf/server.crt':
ensure: present
mode: '0600'
owner: 'root'
group: 'root'
content: 'ENC[PKCS7,{LOT_OF_STRING_SKIPPED}+uaCmcHgDAzsPD51soM+AIkIlv0ANpUXzBpwM3tqQ3ysFtz81S0xuVbKvslK]'
But I have an error while applying the manifest:
Error: Evaluation Error: Error while evaluating a Function Call, create_resources(): second argument must be a hash (file: /etc/puppetlabs/code/environments/production/manifests/site.pp, line: 12, column: 1) on node test1.com
I really don't know what to do )
The problem appears to be that the indentation in common.yaml isn't right - currently, files will be null rather than a hash, which explains the error message. Also, the file should be called common.eyaml, otherwise the ENC string won't be decrypted. Try
---
files:
  '/etc/httpd/conf/server.crt':
    ensure: present
    mode: '0600'
    owner: 'root'
    group: 'root'
    content: 'ENC[PKCS7{LOTS_OF_STRING_SKIPPED}UXzBpwM3tqQ3ysFtz81S0xuVbKvslK]'
There is an online YAML parser at http://yaml-online-parser.appspot.com/ if you want to see the difference the indentation makes.
Found another solution.
It was a problem with lookup and hashes: when I have multiple lines in a Hiera hash, I must specify them (https://docs.puppet.com/puppet/4.5/function.html#lookup).
So I decided to look up only a 'content' value.
cat site.pp

$filecrt = lookup('files')
file { 'server.crt':
  ensure  => present,
  path    => '/etc/httpd/conf/server.crt',
  content => $filecrt,
  owner   => 'root',
  group   => 'root',
  mode    => '0600',
}
and Hiera
---
files: 'ENC[PKCS7{LOT_OF_STRING_SKIPPED}+uaCmcHgDAzsPD51soM+AIkIlv0ANpUXzBpwM3tqQ3ysFtz81S0xuVbKvslK]'
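As an aside, if you keep the original hash layout in common.eyaml, an arguably more idiomatic Puppet 5 alternative to create_resources is iterating over the hash with each and the attribute splat (a sketch, assuming the corrected common.eyaml from the first answer):
$filecrt = lookup('files')
# $filecrt is a hash of path => attribute-hash
$filecrt.each |$path, $attrs| {
  file { $path:
    * => $attrs,
  }
}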

How can I use the Jenkins Copy Artifacts Plugin from within the pipelines (jenkinsfile)?

I am trying to find an example of using the Jenkins Copy Artifacts Plugin from within Jenkins pipelines (workflows).
Can anyone point to a sample Groovy code that is using it?
With a declarative Jenkinsfile, you can use the following pipeline:
pipeline {
    agent any
    stages {
        stage('push artifact') {
            steps {
                sh 'mkdir archive'
                sh 'echo test > archive/test.txt'
                zip zipFile: 'test.zip', archive: false, dir: 'archive'
                archiveArtifacts artifacts: 'test.zip', fingerprint: true
            }
        }
        stage('pull artifact') {
            steps {
                copyArtifacts filter: 'test.zip', fingerprintArtifacts: true, projectName: env.JOB_NAME, selector: specific(env.BUILD_NUMBER)
                unzip zipFile: 'test.zip', dir: './archive_new'
                sh 'cat archive_new/test.txt'
            }
        }
    }
}
Before version 1.39 of CopyArtifact, you must replace the second stage with the following (thanks @Yeroc):
stage('pull artifact') {
    steps {
        step([$class: 'CopyArtifact',
              filter: 'test.zip',
              fingerprintArtifacts: true,
              projectName: '${JOB_NAME}',
              selector: [$class: 'SpecificBuildSelector', buildNumber: '${BUILD_NUMBER}']
        ])
        unzip zipFile: 'test.zip', dir: './archive_new'
        sh 'cat archive_new/test.txt'
    }
}
With CopyArtifact, I use '${JOB_NAME}' as the project name, which is the current running project.
The default selector used by CopyArtifact is the last successful build of the project, never the current one (because it's not yet successful, or not). With SpecificBuildSelector you can choose '${BUILD_NUMBER}', which contains the current running project's build number.
This pipeline works with parallel stages and can manage huge files (I'm using a 300MB file; it does not work with stash/unstash).
This pipeline works perfectly with my Jenkins 2.74, provided you have all the needed plugins.
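As a side note, the copyartifact plugin ships other build selectors besides specific(); for example (a sketch, assuming a job named webapp_build exists), pulling from the last successful build of another job:
// copy from the most recent successful build of 'webapp_build'
copyArtifacts projectName: 'webapp_build',
              filter: 'target/*.war',
              selector: lastSuccessful()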
If you are using agents on your controller and you want to copy artifacts between them, you can use stash/unstash, for example:
stage 'build'
node {
    git 'https://github.com/cloudbees/todo-api.git'
    stash includes: 'pom.xml', name: 'pom'
}

stage name: 'test', concurrency: 3
node {
    unstash 'pom'
    sh 'cat pom.xml'
}
You can see this example in this link:
https://dzone.com/refcardz/continuous-delivery-with-jenkins-workflow
If the builds are not running in the same pipeline, you can use the CopyArtifact plugin directly; here is an example: https://www.cloudbees.com/blog/copying-artifacts-between-builds-jenkins-workflow and example code:
node {
    // setup env..
    // copy the deployment unit from another Job...
    step([$class: 'CopyArtifact',
          projectName: 'webapp_build',
          filter: 'target/orders.war'])
    // deploy 'target/orders.war' to an app host
}
name = "/" + "${env.JOB_NAME}"
def archiveName = 'relNum'
try {
step($class: 'hudson.plugins.copyartifact.CopyArtifact', projectName: name, filter: archiveName)
} catch (none) {
echo 'No artifact to copy from ' + name + ' with name relNum'
writeFile file: archiveName, text: '3'
}
def content = readFile(archiveName).trim()
echo 'value archived: ' + content
Try that using the Copy Artifact plugin.
