I'm using this module - https://registry.terraform.io/modules/cn-terraform/ecs-fargate-scheduled-task/aws/latest
I've managed to build the scheduled task with everything except the command override on the container.
I cannot set the command override at the task definition level because multiple scheduled tasks share the same task definition, so the override needs to happen at the scheduled task level, where it is unique per scheduled task.
I don't see anything that helps in the module's documentation, so I'm wondering if there is another way I could do this, for example by querying for the scheduled task once it's created and using a different module to set the command override.
If you look at the Terraform documentation for aws_cloudwatch_event_target, there is an example in there for an ECS scheduled task with command override. Notice how they are passing the override via the input parameter to the event target.
Now if you look at the source code for the module you are using, you will see they are passing anything you set in the event_target_input variable to the input parameter of the aws_cloudwatch_event_target resource.
So you need to pass the override as a JSON string (I would copy the example JSON string in the Terraform docs and then modify it to your needs) as event_target_input in your module declaration.
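For example, a minimal sketch of what the module declaration might look like (the module's other arguments are omitted, and the container name and command are illustrative placeholders; only event_target_input is the relevant part):

module "cleanup_scheduled_task" {
  source = "cn-terraform/ecs-fargate-scheduled-task/aws"

  # ... the arguments you already have (name prefix, cluster, task definition, schedule, etc.)

  # Passed straight through to the `input` parameter of aws_cloudwatch_event_target,
  # so this command override applies only to this scheduled task.
  event_target_input = jsonencode({
    containerOverrides = [
      {
        name    = "app"                    # container name from your task definition
        command = ["python", "cleanup.py"] # the override unique to this schedule
      }
    ]
  })
}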
I want to perform a Puppet (version 6.16) local compile for testing purposes using the following command:
puppet catalog compile testhost.int.test.com --environmentpath="xxx" --environment="ccc" --modulepath="ttt" --manifest="hhh" --vardir="eee"
It went well until I hit one custom module that calls the following Puppet function:
Puppet::FileServing::Content.indirection.find(file, :environment => compiler.environment)
The error is as follows:
Error: Request to https://test.int.test.com:8140/puppet/v3 failed after 0.002 seconds: The private key is missing from '/etc/puppetlabs/puppet/ssl/private_keys/testhost.int.test.com.pem'
which suggests it is trying to connect to the Puppet master to query the file.
The thing is, I only want to perform a local compile and do not want to talk to the Puppet master for any information.
Is there any workaround to make it look only at the local environment instead of checking with the Puppet master?
By the way, the following method seems OK; it checks the local environment rather than the master side:
Puppet::Parser::Files.find_file(file, compiler.environment)
I'm relatively new to Puppet. Thanks in advance.
I expect to be able to run the puppet catalog compile command purely locally, without talking to the Puppet master, as a sanity check to see whether we are missing anything during the compile phase before we push to the production branch.
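For reference, a rough sketch (not the original module's code; the function name and file layout are assumptions) of how a custom function could use that local lookup instead of the file-serving indirection that triggers the HTTPS request to the master:

# lib/puppet/functions/mymodule/read_module_file.rb (hypothetical)
Puppet::Functions.create_function(:'mymodule::read_module_file') do
  dispatch :read_module_file do
    param 'String', :file            # e.g. 'mymodule/files/example.txt'
    return_type 'Optional[String]'
  end

  def read_module_file(file)
    env  = closure_scope.compiler.environment
    # Resolves against the local modulepath only; no call to the master.
    path = Puppet::Parser::Files.find_file(file, env)
    path ? File.read(path) : nil
  end
end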
service { 'cron':
  ensure => 'running',
  enable => 'true',
}
Error:
change from 'running' to 'stopped' failed: Systemd stop for cron failed.
Drop this
service { 'crond':
  ensure => 'running',
  enable => 'true',
}
into a file on a server; let's call the file crontest.pp. Then, as root, run puppet apply crontest.pp and you should see cron start.
Also, if you're trying to debug this sort of thing, a good starting place is puppet resource, in this case puppet resource service, which should show you a list of all your services. Look through that to find the one relating to cron; it gives you the Puppet code for its current state, so you can copy that directly into a class file. Just ignore the provider => line, as the Puppet resource abstraction layer will take care of that.
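For illustration, the relevant entry in the puppet resource service output on a Red Hat-style system might look roughly like this (attribute values vary by platform):

service { 'crond':
  ensure   => 'running',
  enable   => 'true',
  provider => 'systemd',   # drop this line when copying into a class
}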
I am trying to include in my CI/CD pipeline an update of the script_location parameter, and only this parameter. AWS is asking me to include required parameters such as RoleArn. How can I update only the part of the job configuration I want to change?
This is what I am trying to use:
aws glue update-job --job-name <job_name> --job-update Command="{ScriptLocation=s3://<s3_path_to_script>}"
This is what happens:
An error occurred (InvalidInputException) when calling the UpdateJob operation: Command name should not be null or empty.
If I add the default command name, glueetl, this is what happens:
An error occurred (InvalidInputException) when calling the UpdateJob operation: Role should not be null or empty.
An easy way to update a Glue job or trigger via the CLI is the --cli-input-json option. To get the correct JSON structure you can run aws glue update-job --generate-cli-skeleton, which returns a complete skeleton into which you can insert your changes.
EX:
{"JobName":"","JobUpdate":{"Description":"","LogUri":"","Role":"","ExecutionProperty":{"MaxConcurrentRuns":0},"Command":{"Name":"","ScriptLocation":"","PythonVersion":""},"DefaultArguments":{"KeyName":""},"NonOverridableArguments":{"KeyName":""},"Connections":{"Connections":[""]},"MaxRetries":0,"AllocatedCapacity":0,"Timeout":0,"MaxCapacity":null,"WorkerType":"G.1X","NumberOfWorkers":0,"SecurityConfiguration":"","NotificationProperty":{"NotifyDelayAfter":0},"GlueVersion":""}}
Here, just fill in the name of the job and change the options you need.
After this, you have to transform your JSON into a single line and pass it to the command wrapped in single quotes:
aws glue update-job --cli-input-json '<one-line-json>'
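For instance, a hypothetical filled-in call might look like this (the job name, role, and script path are placeholders; note that the update replaces the whole JobUpdate, so fields you leave out fall back to their defaults):

aws glue update-job --cli-input-json '{"JobName":"my-job","JobUpdate":{"Role":"MyGlueRole","Command":{"Name":"glueetl","ScriptLocation":"s3://my-bucket/scripts/my_job.py","PythonVersion":"3"},"GlueVersion":"2.0"}}'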
I hope this helps someone with this problem too.
Ref:
https://docs.aws.amazon.com/cli/latest/reference/glue/update-job.html
https://w3percentagecalculator.com/json-to-one-line-converter/
I don't know whether you've solved this problem, but I managed using this command:
aws glue update-job --job-name <gluejobname> --job-update Role=myRoleNameBB,Command="{Name=<someupdatename>,ScriptLocation=<local_filename.py>}"
You don't need the ARN of the role, just the role name. The example above assumes that you have a role named myRoleNameBB and that it has access to AWS Glue.
Note: I used a local file on my laptop. Also, the "Name" in the "Command" part is compulsory.
When I ran it, I got this output:
{
    "JobName": "<gluejobname>"
}
Based on what I have found, there is no way to update just part of the job using the update-job API.
I ran into the same issue and provided the role to get past this error. The command worked, but the update-job API actually resets other parameters to their defaults, such as the type of application, job language, class, timeout, max capacity, etc.
So if your pre-existing job is a Spark application in Scala, it will fail, as AWS defaults to Python shell and Python as the job language as part of the update-job API. The API provides no way to set the job language to Scala or to set a main class (required in the case of Scala), although it does let you set the application type to Spark application.
If you do not want to specify the role to the update-job API, one approach is to copy the new script to the same name and location that your pre-existing ETL job uses, and then trigger your ETL with the start-job-run API as part of the CI process, as sketched below.
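A rough sketch of that first approach (the bucket and key are placeholders for wherever the existing job's ScriptLocation already points):

# Overwrite the script at the location the existing job already references
aws s3 cp my_etl.py s3://my-bucket/scripts/my_etl.py
# Then kick off a run; the job picks up the new script without any update-job call
aws glue start-job-run --job-name <job-name>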
The second approach is to run your ETL directly and force it to use the latest script in the start-job-run call:
aws glue start-job-run --job-name <job-name> --arguments=scriptLocation="<path to your latest script>"
The only caveat with the second approach is that when you look in the console, the ETL job will still reference the old script location. The above command just forces this particular run to use the latest script, which you can confirm by looking in the History tab of the Glue ETL console.
We're using Puppet + Foreman to monitor changes in the environment by checking custom facts. For example, whenever a custom fact equals 'true', Puppet calls the notify resource with a message sent to the agent log. Puppet includes this message in the agent report and Foreman shows it in the UI.
The problem is that whenever a message is emitted, Foreman considers this action as "Applied" and the node status changes to "Active" (blue icon).
We want to keep the node at "No Changes" (green) while still showing the notify message.
Is that possible in some way? Maybe by defining a new custom resource type?
Here is the Puppet code:
class mymodule::myclass::mysubclass {
  if $::fact023 == 'fail' {
    notify { 'mynotify1':
      message  => "WARNING: Node ${::fqdn} failed fact023",
      loglevel => hiera('warnings_loglevel'),
    }
  }
}
See screenshot of Foreman here
Update:
I'll refine the question: is there a way to use the notify resource without causing Puppet to report that the node has changed? Meaning, just print the message to the client log (so the message is still visible in the report) without Puppet classifying the event as an applied configuration?
The reason is that when Puppet triggers the notify resource, Foreman flags the node as active (changed).
UPDATE #2
I'm thinking about changing the Foreman report file so that the UI will ignore notify events, keeping the node's status unchanged while still showing the message in the report. Can someone point me in the right direction? Thanks!
UPDATE #3
Problem fixed after switching from the notify resource type to the custom type "echo" from Puppet Forge. Thanks!
It's not completely clear what you are trying to accomplish. One option would be to use the notice function instead of a resource. Functions execute on the Puppet master, so the log will end up in the master's logs instead of the agent report. That also means it will not count as an applied resource, and the node should appear to be stable.
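For illustration, a minimal sketch of that option applied to the class from the question (same class and fact names):

class mymodule::myclass::mysubclass {
  if $::fact023 == 'fail' {
    # Logged on the Puppet master at compile time; no resource is applied,
    # so the node should still report "No Changes" in Foreman.
    notice("WARNING: Node ${::fqdn} failed fact023")
  }
}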