Below is an attempt to run a Checkov scan on a Terraform plan file:
terraform init
terraform plan -out tf.plan
terraform show -json tf.plan > tf.json
checkov -f tf.json
Below is the beginning of what the output shows:
cloudformation scan results:
Passed checks: 0, Failed checks: 0, Skipped checks: 0, Parsing errors: 1
Error parsing file tf.json
terraform_plan scan results:
Passed checks: 32, Failed checks: 4, Skipped checks: 0
I am trying to remove the Parsing Error from file tf.json.
The JSON file is located in the link https://terraform-plan-file-1.s3.amazonaws.com/tf.json.
The closest thing to my error that I found was in the link below
https://issueexplorer.com/issue/bridgecrewio/checkov/1903
However, removing nulls manually does not seem like a good fix. Also, if I remove them, what should I replace them with?
If you're scanning a plan file, I think it makes sense to specify the framework flag like so:
checkov -f tfplan.json --framework terraform_plan
That should get rid of the parsing error. It occurs because Checkov also tries to scan the JSON plan file as CloudFormation JSON and fails.
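For example, with the file names from the question above, the full sequence would look something like this (a sketch; only the --framework flag is new relative to the original commands):
terraform init
terraform plan -out tf.plan
terraform show -json tf.plan > tf.json
checkov -f tf.json --framework terraform_plan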
Related
Terraform v0.13.5
provider aws v3.7.0
Backend: AWS S3+DynamoDB
terraform plan was aborted, and now it cannot acquire the state lock. I'm trying to release it manually but get an error:
terraform force-unlock -force xxx-xxx-xx-dddd
Failed to unlock state: failed to retrieve lock info:
unexpected end of JSON input
The state file looks complete and passes json syntax validation successfully.
How to fix that?
Solution: double-check that you're in the correct Terraform workspace.
I had the same issue. In my case, there was more than one .tfstate file in the working directory, which was causing the problem.
1- Ensure you have only one .tfstate file in the working directory.
2- Ensure the .tfstate file is valid.
I had to switch to the right workspace and issue terraform force-unlock -force to resolve the issue.
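A sketch of that flow (the workspace name and lock ID are placeholders):
terraform workspace list                 # check which workspace you are currently in
terraform workspace select <name>        # switch to the workspace that owns the lock
terraform force-unlock -force <lock-id>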
kubectl create -f myfirstpod.yaml
error: error validating "myfirstpod.yaml": error validating data: ValidationError(Pod.spec.containers[0].ports[0]): invalid type for io.k8s.api.core.v1.ContainerPort: got "string", expected "map"; if you choose to ignore these errors, turn validation off with --validate=false
This is the error I ran into.
I searched a lot but could not find the answer to this.
I think that my YAML format is right.
This is an image of my YAML code:
You need a space after the colon. containerPort: 80, not containerPort:80. YAML is whitespace-sensitive.
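For reference, the ports section should look something like this (a minimal sketch; the pod name comes from the file name in the question, while the container name and image are placeholders since the original YAML was only posted as an image):
apiVersion: v1
kind: Pod
metadata:
  name: myfirstpod
spec:
  containers:
    - name: web           # placeholder container name
      image: nginx        # placeholder image
      ports:
        - containerPort: 80   # note the space after the colon, so this parses as a map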
I hope all is well with the pandemic going on. To cut to the chase, I'm trying to install Rancher on my Mac and I've been stuck at this point for the longest time. I believe I have everything downloaded properly. Is there something that I'm missing?
This is running on Red Hat Linux, by the way.
This is the command that I am trying to run: ./kubectl -n cattle-system apply -R -f ./rancher
And this is a look at my directory.
Thank you so much!
error validating "rancher/Chart.yaml": error validating data: kind not set; [![enter image description here][1]][1]if you choose to ignore these errors, turn validation off with --validate=false
error parsing rancher/templates/clusterRoleBinding.yaml: error converting YAML to JSON: yaml: line 6: did not find expected key
error parsing rancher/templates/deployment.yaml: error converting YAML to JSON: yaml: line 6: did not find expected key
error parsing rancher/templates/ingress.yaml: error converting YAML to JSON: yaml: line 6: did not find expected key
error parsing rancher/templates/issuer-letsEncrypt.yaml: json: line 0: invalid character '{' looking for beginning of object key string
error parsing rancher/templates/issuer-rancher.yaml: json: line 0: invalid character '{' looking for beginning of object key string
error parsing rancher/templates/service.yaml: error converting YAML to JSON: yaml: line 6: did not find expected key
error parsing rancher/templates/serviceAccount.yaml: error converting YAML to JSON: yaml: line 6: did not find expected key
error validating "rancher/values.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
You should follow the installation instructions for HA here: https://rancher.com/docs/rancher/v2.x/en/installation/k8s-install/ or for standalone mode here: https://rancher.com/docs/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/
I guess you have downloaded the Rancher Helm chart directory and are trying to install it using kubectl, which won't work.
Under the HA instructions, you will find the helm install command.
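For reference, the HA route boils down to something like the following (a sketch based on the Rancher docs linked above; the hostname is a placeholder, and depending on your certificate choice you may also need to install cert-manager first):
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com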
I am trying to get a backup of a subdirectory before deleting the parent directory by copying the subdirectory into a different location.
This is how I have done this:
exec { "install_path_exists":
command => "/bin/true",
onlyif => "/usr/bin/test -d ${install_path}",
path => ['/usr/bin','/usr/sbin','/bin','/sbin'],
}
file { "server_backup_dir" :
ensure => 'directory',
path => "${distribution_path}/backup/server",
recurse => true,
source => "file:///${install_path}/repository/deployment/server",
require => Exec["install_path_exists"],
}
The Exec checks whether the directory exists and returns true if so. The "server_backup_dir" File resource requires the "install_path_exists" Exec to have returned true, i.e. for the directory to exist.
When the directory does not exist and "install_path_exists" returns false, "server_backup_dir" executes anyway and produces the following error.
Error: /Stage[main]/Is/File[server_backup_dir]: Could not evaluate:
Could not retrieve information from environment production source(s)
file:////usr/local/{project_location}/repository/deployment/server
What is wrong with my approach, and how can I fix this? Thanks in advance.
I'll break this up into two parts, what is wrong, and how to fix it.
What is wrong with my approach ...
You are misunderstanding the 'require' line and the nature of relationships in Puppet, and also how Puppet uses the return code of the command executed in an Exec.
When you use any of the four so-called metaparameters for relationships in Puppet - those being: require, before, subscribe & notify - you tell Puppet that you want the application of one resource to be ordered in time in relation to another. (Additionally, the 'subscribe' and 'notify' respond to refresh events but that's not relevant here.)
So, when Puppet applies a catalog built from your code, it will firstly apply the Exec resource, i.e. execute the /bin/true command, if and only if the install path exists; and then it will secondly manage the server_backup_dir File resource. Note also that it will apply the File resource irrespective of whether the Exec command actually was executed; the only guarantee being that /bin/true will never be run after the File resource.
Furthermore, the return code of the command in the Exec works differently from what you're expecting. An exit status of 0, which is what /bin/true returns, simply tells Puppet to continue applying the configuration; compare that to an Exec command returning a non-zero exit status, which causes Puppet to report that Exec as failed with an error.
Here's a simple demonstration of that:
▶ puppet apply -e "exec { '/usr/bin/false': }"
Notice: Compiled catalog for alexs-macbook-pro.local in environment production in 0.08 seconds
Error: '/usr/bin/false' returned 1 instead of one of [0]
Error: /Stage[main]/Main/Exec[/usr/bin/false]/returns: change from 'notrun' to ['0'] failed: '/usr/bin/false' returned 1 instead of one of [0]
Notice: Applied catalog in 0.02 seconds
For more info, read the Puppet documentation on relationships and ordering carefully. It generally takes a bit of time to get your head around relationships and ordering in Puppet.
how can I fix this?
You would normally use a custom fact like this:
# install_path.rb
Facter.add('install_path') do
  setcode do
    # Return a boolean rather than the command's output, so the fact can be used in an if.
    File.directory?('/my/install/path')
  end
end
And then in your manifests:
if $facts['install_path'] {
  file { "server_backup_dir":
    ensure  => 'directory',
    path    => "${distribution_path}/backup/server",
    recurse => true,
    source  => "file:///my/install/path/repository/deployment/server",
  }
}
Consult the docs for more info on writing and including custom facts in your code base.
Note:
I notice at the end that you reuse $install_path in the source parameter. If your requirement is to have a map of install paths to distribution paths, you can also build a structured fact. Without knowing exactly what you're trying to do, however, I can't be sure how you would write that piece.
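Purely to illustrate the structured-fact idea (a hypothetical sketch; the fact name and paths are made up and would need to match your actual layout):
# paths.rb
Facter.add('paths') do
  setcode do
    {
      'install_path'      => '/usr/local/myproject',
      'distribution_path' => '/opt/myproject/dist',
    }
  end
end
You could then read these in your manifests as $facts['paths']['install_path'] and so on.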
I'm running test cases with nosetests in Jenkins. In general, there will be around 100 test cases, and I want to mark the build unstable when fewer than 20 of them fail. If more than 20 test cases fail, the build should be marked failed.
The command I ran:
nosetests test.py --tc-file config.yml --tc-format yaml
First of all, I tried to just change the status of the build to Unstable but it still failed.
The groovy script I used:
manager.addWarningBadge("Thou shalt not use deprecated methods.")
manager.createSummary("warning.gif").appendText("<h1>You have been warned!</h1>", false, false, false, "red")
manager.buildUnstable()
The first two lines of code are executed, but the job is still marked as Failed.
Is there anything wrong with my Jenkins config? Or does the Groovy Postbuild plugin not work with nosetests?
This is the console output:
FAILED (failures=2)
Build step 'Execute shell' marked build as failure
Build step 'Groovy Postbuild' marked build as failure
Finished: FAILURE
As DevD outlined, FAILED is a more significant build state than UNSTABLE. This means calling manager.buildUnstable() or manager.build.setResult(hudson.model.Result.UNSTABLE) after a step failed will still leave the build result FAILED.
However, you can override a failed build result state to be UNSTABLE by using reflection:
manager.build.@result = hudson.model.Result.UNSTABLE
The example below iterates over the build log lines looking for particular regexes. If a match is found, it changes (downgrades) the build status, adds badges, and appends to the build summary.
errpattern = ~/TIMEOUT - Batch \w+ did not complete within \d+ minutes.*/;
pattern = ~/INSERT COMPLETE - Batch of \d+ records was inserted to.*/;
manager.build.logFile.eachLine { line ->
    errmatcher = errpattern.matcher(line)
    matcher = pattern.matcher(line)
    if (errmatcher.find()) {
        // warning message
        String errMatchStr = errmatcher.group(0) // line matched
        manager.addWarningBadge(errMatchStr);
        manager.createSummary("warning.gif").appendText("<h4>${errMatchStr}</h4>", false, false, false, "red");
        manager.buildUnstable();
        // explicitly set the build result via direct field access
        manager.build.@result = hudson.model.Result.UNSTABLE
    } else if (matcher.find()) {
        // ok
        String matchStr = matcher.group(0) // line matched
        manager.addInfoBadge(matchStr);
        manager.createSummary("clipboard.gif").appendText("<h4>${matchStr}</h4>", false, false, false, "black");
    }
}
Note: this iterates over every line, so it assumes that these matches are unique, or that you want a badge & summary appended for every matched line!
Post-build result is:
Build step 'Execute Groovy script' marked build as failure
Archiving artifacts
Build step 'Groovy Postbuild' changed build result to UNSTABLE
Email was triggered for: Unstable
Actually, this is the intended way for it to work.
Precedence:
FAILED -> UNSTABLE -> SUCCESS
Using Groovy Postbuild we can change a lower result (SUCCESS) to one of higher precedence (FAILED/UNSTABLE), but not vice versa.
As a workaround, after your nosetests command, add "exit 0" to the execute-shell step so the shell step always leaves the build at the lowest precedence; then let your Groovy Postbuild script decide the final status based on the test results, as sketched below. This is actually a tweak; I will explore more and update you on this.
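A minimal sketch of that workaround, reusing the nosetests command from the question (the || true guards against the errexit behaviour of Jenkins' default shell steps, so a test failure cannot fail the step on its own):
nosetests test.py --tc-file config.yml --tc-format yaml || true
exit 0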